Implementing an XML Data Provider for Oxite, Part III

In my previous post, I detailed the outline of how I would implement the XML Data Provider for Oxite.

After some refactoring and additional coding, the class structure looks like this:

[Class diagram of the XML data provider classes]

The XmlTableBase and XmlTable classes are reusable for generic, LINQ-friendly, XML table-like storage of classes.

Most of the “tables” in the OxiteXmlContext use the XmlTable class directly. Classes which inherit from the NamedEntity base class in Oxite are handled by the OxiteEntityTable class, which adds default behavior on top of XmlTable. Classes which require additional logic when serializing/deserializing entities inherit from either XmlTable or OxiteEntityTable, as shown in the class diagram above.
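
To give an idea of the shape of these classes, here is a heavily simplified sketch of XmlTable<TEntity>, pieced together from the snippets in this series; the real implementation also deals with locking and uses the richer reflection serializer shown below (namespaces: System.Xml.Linq, System.Reflection, System.Linq).

public class XmlTable<TEntity> : IEnumerable<TEntity>
    where TEntity : class, new()
{
    private readonly string mFileName;
    private readonly string mKeyName;
    private readonly XDocument mDocument;

    public XmlTable(string baseDirectory, string keyName)
    {
        mKeyName = keyName;

        // One XDocument (and one file) per "table"
        mFileName = Path.Combine(baseDirectory,
            typeof(TEntity).Name + ".xml");
        mDocument = File.Exists(mFileName)
            ? XDocument.Load(mFileName)
            : new XDocument(new XElement("entities"));
    }

    // Serializes an entity; descendants override this to append
    // custom child elements
    protected virtual XElement ProjectEntity(TEntity entity)
    {
        XElement element = new XElement("entity");
        foreach (PropertyInfo property in
            typeof(TEntity).GetProperties())
        {
            if (!property.CanRead ||
                property.GetIndexParameters().Length > 0)
                continue;

            // Only simple values become attributes; complex
            // children are left to descendant overrides
            object value = property.GetValue(entity, null);
            if (value != null && (value is string || value is Uri ||
                value.GetType().IsValueType))
                element.Add(new XAttribute(property.Name, value));
        }
        return element;
    }

    // Deserializes an entity; descendants override this to read
    // their custom child elements back
    protected virtual TEntity ProjectEntity(XElement element)
    {
        TEntity entity = new TEntity();
        foreach (XAttribute attribute in element.Attributes())
            SetPropertyValue(entity, attribute.Name.LocalName,
                attribute.Value);
        foreach (XElement child in element.Elements())
            SetPropertyValue(entity, child.Name.LocalName,
                child.Value);
        return entity;
    }

    public TEntity GetEntity(Guid id)
    {
        XElement element = mDocument.Root.Elements().FirstOrDefault(
            e => (string)e.Attribute(mKeyName) == id.ToString());
        return element == null ? null : ProjectEntity(element);
    }

    public void Save(TEntity entity)
    {
        string key = typeof(TEntity).GetProperty(mKeyName)
            .GetValue(entity, null).ToString();

        // Replace any existing element with the same key, then
        // rewrite the file
        mDocument.Root.Elements().Where(
            e => (string)e.Attribute(mKeyName) == key).Remove();
        mDocument.Root.Add(ProjectEntity(entity));
        mDocument.Save(mFileName);
    }

    public IEnumerator<TEntity> GetEnumerator()
    {
        return mDocument.Root.Elements()
            .Select(e => ProjectEntity(e)).GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}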

I have opted not to use the standard XML serialization classes for this, because I wanted to reuse the existing classes in the Oxite model, which aren’t decorated with any serialization attributes. This allows the Oxite model to change somewhat without requiring any maintenance of the XML Data Provider.

The other option would have been to implement a layer on top of the Oxite model decorated with XML serialization attributes, and although perhaps a better practice, it just seemed too redundant at this point.

Therefore I have written custom serialization and deserialization methods that use generics and reflection to achieve storage and retrieval.

The serialization/deserialization methods recognize all the standard value type codes in .NET, as well as Enums, Nullable<T>, Guid and the Uri types.

Basically, the deserialization process iterates all attributes and child elements of the entity element and invokes this method:

private static void SetPropertyValue(object obj,
    string propertyName, object value)
{
    if (obj == null || value == null)
        return;

    PropertyInfo propertyInfo = obj.GetType().GetProperty(
        propertyName, BINDING_FLAGS);

    // Skip non-existing properties. Will be handled with
    // custom deserialization by descendant classes
    if (propertyInfo == null)
        return;

    Type propertyType = propertyInfo.PropertyType;

    // If nullable generic type, use the generic
    // parameter instead
    Type underlyingType = Nullable.GetUnderlyingType(propertyType);
    if (underlyingType != null)
        propertyType = underlyingType;

    if (Type.GetTypeCode(propertyType) == TypeCode.Object)
    {
        if (propertyType == typeof(Guid))
            value = new Guid(value.ToString());
        else if (propertyType == typeof(Uri))
            value = new Uri(value.ToString());
        else // Skip on unknown types
            return;
    }

    if (propertyType.IsEnum)
        value = Enum.Parse(propertyType, value.ToString());

    propertyInfo.SetValue(obj, Convert.ChangeType(value,
        propertyType, CultureInfo.InvariantCulture), null);
}
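
To make that concrete, here is how the method behaves for a couple of property types (the Sample class is made up for illustration):

public class Sample
{
    public Guid ID { get; set; }
    public DateTime? Published { get; set; }
}

Sample sample = new Sample();

// Guid goes through the TypeCode.Object branch
SetPropertyValue(sample, "ID",
    "0f8fad5b-d9cb-469f-a165-70867728950e");

// DateTime? is unwrapped to DateTime before the conversion
SetPropertyValue(sample, "Published", "2009-06-01T12:00:00");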

Most of the XML Data Provider is now complete. I opted for simplicity over completeness in this initial version, deferring caching in favor of a more straightforward thread-safety mechanism that locks the class on operations that modify files. Currently this has the weakness of locking all file operations of the same type, regardless of whether they conflict: two comments being saved at the same time for separate posts will be queued, even though they are written to separate files.
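
In code, the mechanism amounts to little more than this (a sketch with illustrative names):

// One process-wide gate for writes: every modifying operation
// takes this lock, so concurrent saves queue up even when they
// target different files
private static readonly object mWriteLock = new object();

protected void WriteDocument(string fileName, XDocument document)
{
    lock (mWriteLock)
    {
        document.Save(fileName);
    }
}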

At the moment I feel it’s more important to work out the kinks in the repository implementations and get to a point where everything works, before I polish the whole thing to a fancy shine.

Implementing an XML Data Provider for Oxite, Part II

In my previous post, I tackled the concept of an XML Data Provider for Oxite and outlined one possible structure for storing the data.

What I have done so far is write some generic Linq to XML classes that do most of the heavy lifting of storing and retrieving objects using reflection, and use those to implement the repositories.

I have then implemented a class called OxiteXmlContext which aims to resemble a traditional Linq to SQL data context. This class has properties like Tags, Sites, Languages, Phrases and so on. These “tables” wrap up all the functionality of storing, retrieving and removing objects in an XDocument (one for each table).
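
A sketch of the context’s shape, using the tables mentioned above (directory resolution is simplified and the member layout is my own guess):

public class OxiteXmlContext
{
    public OxiteXmlContext(Site site)
    {
        // Each "table" wraps one XDocument/file on disk; how the
        // data directory is resolved from the site is simplified
        string baseDirectory = Path.Combine("App_Data", "oxite");

        Tags = new XmlTable<Tag>(baseDirectory, "ID");
        Languages = new XmlTable<Language>(baseDirectory, "ID");
        Plugins = new OxitePluginTable(baseDirectory);
    }

    public XmlTable<Tag> Tags { get; private set; }
    public XmlTable<Language> Languages { get; private set; }
    public OxitePluginTable Plugins { get; private set; }

    // ...Sites, Phrases, PostHeaders and the rest follow the
    // same pattern
}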

This has allowed me to reuse some of the query logic from the Linq to SQL provider, but in many cases I have been able to greatly simplify or remove the queries entirely, because the XML provider has no second layer of classes on top of the native Oxite model and thus requires no projection.

The larger stores, such as posts, pages and comments, where it isn’t feasible to read everything into memory and sort it out afterwards, have been split into one folder per post/page. Each folder contains separate files for the body, comments and trackbacks, to avoid having to rewrite the post body when a new comment is added. The relations between posts, pages, tags and areas are indexed in separate files along with creation and publish dates, status and so on. These indexes might grow a bit once the post count reaches a thousand or so; hopefully, an author diligent enough to write thousands of posts will also be enough of an enthusiast to invest in SQL storage instead.

As you can see by this class diagram, the hierarchy is quite straightforward.

[Class diagram of the XML storage classes]

If you look closely, you’ll see a property on the OxiteXmlContext called PostHeaders. This property is actually the index and translates to a class called PostHeader. PostHeader contains, as the name indicates, header fields for the post, such as Id, Title, Status, Publish date, Tags and so on. It also contains a method called GetPost, which fetches the post from disk by reading the body.xml, comments.xml and trackbacks.xml files in the post’s folder.

Saving of the post is done either in whole by calling the Save method on OxiteXmlContext or by saving individual parts of it by calling the SaveBody, SaveComments or SaveTrackbacks method.
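
Pieced together from that description, PostHeader looks roughly like this (property and helper names are approximations):

public class PostHeader
{
    public Guid ID { get; set; }
    public string Title { get; set; }
    public string Status { get; set; }
    public DateTime PublishedDate { get; set; }
    public IList<string> Tags { get; set; }

    // Fetches the full post by combining the separate files in the
    // post's folder; reading the index alone never touches the
    // (potentially large) body
    public Post GetPost(string baseDirectory)
    {
        string folder = Path.Combine(baseDirectory, ID.ToString());

        XDocument body = XDocument.Load(
            Path.Combine(folder, "body.xml"));
        XDocument comments = XDocument.Load(
            Path.Combine(folder, "comments.xml"));
        XDocument trackbacks = XDocument.Load(
            Path.Combine(folder, "trackbacks.xml"));

        return ProjectPost(body, comments, trackbacks);
    }

    private Post ProjectPost(XDocument body, XDocument comments,
        XDocument trackbacks)
    {
        // Deserialization of the three documents omitted for brevity
        throw new NotImplementedException();
    }
}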

Less complex types, such as Site and Plugin which also contain child objects, but don’t deserve having them stored in separate files like comments for posts, instead inherit from the XmlTable class and override the serialization methods to store/retrieve these sub items from the XDocument directly.

public class OxitePluginTable : XmlTable<Plugin>
{
    public OxitePluginTable(string baseDirectory)
        : base(baseDirectory, "ID")
    {
    }

    protected override XElement ProjectEntity(Plugin plugin)
    {
        XElement element = base.ProjectEntity(plugin);

        element.Add(new XElement("settings",
            from r in plugin.Settings.AllKeys
            select new XElement(r, plugin.Settings[r])));

        return element;
    }

    protected override Plugin ProjectEntity(XElement element)
    {
        Plugin plugin = base.ProjectEntity(element);
        plugin.Settings = new NameValueCollection();

        element.Element("settings").Elements().ToList().ForEach(e =>
            plugin.Settings.Add(e.Name.ToString(), e.Value));

        return plugin;
    }
}

As you can see from the PluginRepository, the code is strikingly similar to its Linq to SQL counterpart, yet not quite the same.

public class PluginRepository : IPluginRepository
{
    private OxiteXmlContext mContext;

    public PluginRepository(Site site)
    {
        mContext = new OxiteXmlContext(site);
    }

    public IList<IPlugin> GetPlugins()
    {
        var query = from p in mContext.Plugins
                    orderby p.Category, p.Name
                    select p;

        return query.Cast<IPlugin>().ToList();   
    }

    public IPlugin GetPlugin(Guid pluginID)
    {
        return mContext.Plugins.GetEntity(pluginID);
    }

    public bool GetPluginExists(Guid pluginID)
    {
        return mContext.Plugins.GetEntity(pluginID) != null;
    }

    public void Save(IPlugin plugin)
    {            
        mContext.Plugins.Save(projectPlugin(plugin));
    }

    private Plugin projectPlugin(IPlugin p)
    {
        return new Plugin
        {
            Category = p.Category,
            Enabled = p.Enabled,
            ID = p.ID,
            Name = p.Name,
            Settings = p.Settings
        };
    }

    public void Save(IPlugin plugin, NameValueCollection settings)
    {
        Plugin p = projectPlugin(plugin);
        p.Settings = settings;
        mContext.Plugins.Save(p);
    }

    public NameValueCollection GetPluginSettings(Guid pluginID)
    {
        return GetPlugin(pluginID).Settings;
    }

    public void SaveSetting(Guid pluginID, string name, string value)
    {
        Plugin plugin = mContext.Plugins.GetEntity(pluginID);
        plugin.Settings[name] = value;
        mContext.Plugins.Save(plugin);
    }
}

What’s next?

IPageRepository

The page repository still remains to be implemented.

Caching

I aim to add a simple per-process in-memory cache of entities to avoid having to fetch them from disk each time they are requested. This not only helps performance each time a page is served, but also on a lower level, since the Linq queries can result in multiple hits on the same entity, which would result in the same entity being deserialized multiple times.
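
Something along these lines (a sketch of the idea; deliberately not thread-safe yet, see the next point):

public static class EntityCache<TEntity> where TEntity : class
{
    private static readonly Dictionary<Guid, TEntity> mEntities =
        new Dictionary<Guid, TEntity>();

    public static TEntity GetOrLoad(Guid id,
        Func<Guid, TEntity> load)
    {
        TEntity entity;
        if (!mEntities.TryGetValue(id, out entity))
        {
            // Deserialize from disk only on the first request;
            // repeated hits from the same Linq query are served
            // from memory
            entity = load(id);
            mEntities[id] = entity;
        }
        return entity;
    }
}

A repository would then go through the cache, for example EntityCache<Plugin>.GetOrLoad(pluginID, id => mContext.Plugins.GetEntity(id)), instead of hitting the table directly.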

Thread-safety

None of the classes are currently thread-safe, which they need to be for Oxite to serve more than one visitor at a time, lest two instances try to write to the same file at once! I have purposefully skipped this part, as I will implement it as part of the caching mechanism and refactor the repositories to go through the cache, which in turn will serialize requests to the store.

I would really have liked to use the Concurrent classes from .NET 4.0, but I’m going to take a stab in the dark here and guess that’s not an option until next year. Which means no SpinLock either. ReaderWriterLockSlim it is!
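
The cache access will then be guarded roughly like this (a sketch of the mechanism, not the final design):

public static class SynchronizedEntityCache<TEntity>
    where TEntity : class
{
    private static readonly ReaderWriterLockSlim mLock =
        new ReaderWriterLockSlim();
    private static readonly Dictionary<Guid, TEntity> mEntities =
        new Dictionary<Guid, TEntity>();

    public static TEntity Get(Guid id)
    {
        // Many readers may look up entities concurrently
        mLock.EnterReadLock();
        try
        {
            TEntity entity;
            mEntities.TryGetValue(id, out entity);
            return entity;
        }
        finally
        {
            mLock.ExitReadLock();
        }
    }

    public static void Put(Guid id, TEntity entity)
    {
        // Writers get exclusive access
        mLock.EnterWriteLock();
        try
        {
            mEntities[id] = entity;
        }
        finally
        {
            mLock.ExitWriteLock();
        }
    }
}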

Refactoring

I will refactor XmlEntityTable to inherit from XmlTable instead of XmlTableBase and reuse what is implemented on XmlTable.

I will try to tidy up the XML storage classes so they can be reused in other projects that need XML database-like storage.

Another real world application of MGrammar (Oslo)

I have received a ton of emails and questions following my previous post on MGrammar (which is a part of Oslo). Some of those have been people asking advice on how they can adopt MGrammar as a basis for a rule engine for more generic purposes than the one I provided.

So, I decided to sit down, download the latest Oslo SDK and write a new rule engine based on a DSL implemented in MGrammar.

Basically, what I wanted to demonstrate was how you can create a domain-specific language in which your business analysts, end-users and grease-monkeys can express business rules in natural-like language, which you can then feed through MGrammar and parse into something relatively coherent.

Consider the following typical business rules at Northwind Traders Co.:

  • Only supervisors may place orders with a discount higher than 7%.
  • The total price of a sales order must exceed the total cost.
  • Even though the designers of the software that Northwind uses left the Credit Rating field on customer records optional, Northwind now wants it to be mandatory when adding new customers, as part of their new and improved risk management strategy.

Now, these rules can be expressed in many ways. Imagine that you have written a shrink-wrapped sales system which you sell from your website. You really don’t want your 2 411 customers to call you on the phone while you’re busy playing the beta of Mass Effect 2 just because they want some optional field to suddenly become a required field, or because they want to restrict the discount level for sales staff on Mondays if there’s a full moon and the employee has been with the company for less than six months and wears jeans to work. Yes, I know! Customers really do get the strangest and often misguided “requirements” into their heads.

And let’s be honest, you couldn’t possibly have imagined that your newest customer would [one month later] want to restrict the discount level that sales employees that wear jeans to work are allowed to give on Mondays when there’s a full moon! Yet, there they are, at your doorstep, in their tastefully chalk-streaked $2000 suits, pulling you away from pruning your blog and monitoring the Google analysis charts, wanting this very feature implemented!

If only you had supplied the sales software with a simple rules engine so that customers could take care of these things on their own!

Constraining business objects using a natural-like language

Using a domain specific language, the above bulleted rules could be expressed like this:

RuleSet Northwind
  // Max discount of 7%
  Add rule for Order that requires Discount to be 
    less than or equal to "0.07" unless 
    SalesPerson.Supervisor is true

  // Price must be gt cost
  Add rule for Order that requires TotalPrice to be 
    greater than TotalCost

  // Credit rating is now a required field
  Add rule for Customer that requires CreditRating 
    to not be empty
End RuleSet

It’s enough like plain English that whoever writes these rules won’t need to learn another language and syntax, only some basic rules. That, and somehow be provided with a list of the available fields. You could provide a user interface that adds the “Add rule for” text at the beginning of each new line as the user presses Enter, and a ListBox that shows the business objects such as Customer, Order and Product, which the user can double-click to insert into the text at the current cursor position. Well, you get the idea.

Now, in your code, you may have business objects (Linq to SQL or Entities or whatever), like this:

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string CreditRating { get; set; }
}

You could then implement an extension method called IsValid that will take care of all the heavy lifting for you (yes, very heavy…):

public static class RuleSet
{
    public static bool IsValid(this object value)
    {
        return evaluate(value);
    }
}

And invoke it like this:

// Read business rules from file or blob field in database
RuleSet.Add(resourceStream);

// Create a customer
Customer c = new Customer
{
    Id = 10,
    Name = "Contoso",
    CreditRating = null // <- Will cause validation to fail!
};

// Will print out: Customer valid: False
Console.WriteLine("Customer valid: {0}", c.IsValid());

c.CreditRating = "Excellent";

// Will print out: Customer valid: True
Console.WriteLine("Customer valid: {0}", c.IsValid());

What I do, is feed the business rules along with the DSL specification through MGrammar and project the result into a class library that uses reflection to validate the rules.

The MGrammar needed to parse and understand the above business rules is quite simple, actually:

module ObjectConstraints
{
    export ObjectRules;

    language ObjectRules
    {
        // Basic tokens
        token Whitespace = (' ' | '\r' | '\n');
        token Digit = ('0'..'9');
        token Identifier = 
            ('A'..'Z' | 'a'..'z' | '.' | '_')+;
        token Linebreak = '\n' | '\r' | '\r\n';

        // Keywords  
        @{Classification["Keyword"]}
        token RuleSet = 'RuleSet';
        @{Classification["Keyword"]}
        token End = 'End' | 'end';
        @{Classification["Keyword"]}
        token AddRuleFor = 'Add rule for';
        @{Classification["Keyword"]}
        token ThatRequires = 'that requires';
        @{Classification["Keyword"]} 
        token ToBe = 'to be';
        @{Classification["Keyword"]} 
        token ToNotBe = 'to not be';
        @{Classification["Keyword"]} 
        token Is = 'is';
        @{Classification["Keyword"]} 
        token IsNot = 'is not';
        @{Classification["Keyword"]} 
        token When = 'when';
        @{Classification["Keyword"]} 
        token Unless = 'unless';

        // Operators
        @{Classification["Keyword"]} 
        token EqualTo = 'equal to';
        @{Classification["Keyword"]} 
        token GreaterThan = 'greater than';
        @{Classification["Keyword"]} 
        token GreaterThanOrEqualTo = 'greater than or equal to';
        @{Classification["Keyword"]} 
        token LessThan = 'less than';
        @{Classification["Keyword"]} 
        token LessThanOrEqualTo = 'less than or equal to';
        @{Classification["Keyword"]} 
        token Empty = 'empty';
        @{Classification["Keyword"]} 
        token True = 'true';
        @{Classification["Keyword"]} 
        token False = 'false';

        // Comments        
        @{Classification["Comment"]}  
        token CommentToken = CommentDelimited | CommentLine;  
        token CommentDelimited = 
          "/*" CommentDelimitedContent* "*/";  
        token CommentDelimitedContent = ^('*') | '*'  ^('/');  

        token CommentLine = "//" CommentLineContent*;  
        token CommentLineContent  
          = ^(  
               '\u000A' // New Line
            |  '\u000D' // Carriage Return
            |  '\u0085' // Next Line
            |  '\u2028' // Line Separator
            |  '\u2029' // Paragraph Separator
            );  

        // Quoted values
        token QuoteDelimited = 
          '"' c:QuoteDelimitedContent* '"' => c;  
        token QuoteDelimitedContent = ^('"');              

        interleave Skippable = Whitespace | Comment;  
        interleave Comment = CommentToken;

        // Main syntax        
        syntax Main = RuleSet name:Identifier rules:Rule* End RuleSet 
            => { Name => name, Rules => rules };

        // Single rule syntax
        syntax Rule = AddRuleFor className:Identifier ThatRequires 
                expression:Expression1 condition:Condition?
            => { ClassName => className, Expression => expression, 
                    Condition => condition };

        syntax Expression1 = propertyName:Identifier inverse:ToBeOrNotToBe 
            operation:Operation => 
            { PropertyName => propertyName, Inverse => inverse, 
              Comparison => operation };

        syntax Expression2 = propertyName:Identifier inverse:IsOrIsNot 
            operation:Operation => 
            { PropertyName => propertyName, Inverse => inverse, 
              Comparison => operation };

        syntax Operation = 
                  EqualTo value:Value               => 
                        { Operator => "Eq", Value => value }
                | GreaterThanOrEqualTo value:Value  => 
                        { Operator => "GtEq", Value => value }
                | GreaterThan value:Value           => 
                        { Operator => "Gt", Value => value }
                | LessThanOrEqualTo value:Value     => 
                        { Operator => "LtEq", Value => value }
                | LessThan value:Value              => 
                        { Operator => "Lt", Value => value }
                | Empty                             => 
                        { Operator => "IsEmpty" }
                | True                              => 
                        { Operator => "IsTrue" }
                | False                             => 
                        { Operator => "IsFalse" };

        syntax Condition = c1:ConditionWhen => c1 | c2:ConditionUnless => c2;
        syntax ConditionWhen = When expression:Expression2 => expression;
        syntax ConditionUnless = Unless expression:Expression2 => expression;

        syntax ToBeOrNotToBe = ToNotBe => true | ToBe => false;
        syntax IsOrIsNot = IsNot => true | Is => false;

        syntax Value = value1:Identifier => Identifier { value1 }
                     | value2:QuoteDelimited => Literal { value2 };
    }   
}

That’s it!

The class library that evaluates the business rules is also quite simple. The core method is Evaluate which is invoked by the IsValid extension method:

public bool Evaluate(object value)
{
    if (value == null)
        return false;

    if (PropertyInfo == null)
        return false;

    object propertyValue = PropertyInfo.GetValue(value, null);
    Type propertyType = PropertyInfo.PropertyType;

    bool result = false;
    switch (Operator)
    {
        case RuleOperator.Eq: result = (compare(propertyValue, 
            getValue(value)) == 0); break;
        case RuleOperator.Gt: result = (compare(propertyValue, 
            getValue(value)) > 0); break;
        case RuleOperator.GtEq: result = (compare(propertyValue, 
            getValue(value)) >= 0); break;
        case RuleOperator.IsEmpty: result = 
            (compare(propertyValue, null) == 0); break;
        case RuleOperator.IsFalse: result =
            (compare(propertyValue, false) == 0); break;
        case RuleOperator.IsTrue: result = 
            (compare(propertyValue, true) == 0); break;
        case RuleOperator.Lt: result = 
            (compare(propertyValue, getValue(value)) < 0); break;
        case RuleOperator.LtEq: result = 
            (compare(propertyValue, getValue(value)) <= 0); break;
        default: return false;
    }
    return result ^ Inverse; 
}
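
The compare and getValue helpers aren’t shown here; getValue resolves the rule’s right-hand side (a quoted literal or another property on the same object), and compare essentially normalizes that value to the property’s type and falls back on IComparable. A simplified version of compare, to show the idea:

private static int compare(object left, object right)
{
    // Null (or an empty string) on the left matches the
    // IsEmpty case
    bool leftEmpty = left == null ||
        (left is string && ((string)left).Length == 0);
    if (leftEmpty)
        return right == null ? 0 : -1;
    if (right == null)
        return 1;

    // Bring the right-hand side to the property's type, then
    // compare; this is how "0.07" compares correctly against
    // a decimal property
    object converted = Convert.ChangeType(right, left.GetType(),
        CultureInfo.InvariantCulture);
    return ((IComparable)left).CompareTo(converted);
}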

In fact, instead of projecting the result into an expression tree-like class library and using reflection to validate the object, you could actually emit real IL code!

– I’ll leave that part as an exercise for the reader :)

Well, for now, here’s a natural-language-like business rules engine that validates objects using reflection.

The source code is for Visual Studio 2010 Beta 1 with the May 2009 CTP of Oslo.

Download source code:

Disclaimer: The source code has some limitations. It will not follow property paths (such as SalesPerson.Manager.IsSupervisor). It’s limited to simple literals and property names and the operators listed above. It is intended as a source of inspiration and not a third-party library that you can download and hook into your production code.

Implementing an XML Data Provider for Oxite, Part I

In my previous post, I gave some thoughts about Oxite. Never one to sit idle, I proceeded to tackle the first item on the list.

Oxite comes with an SQL data provider, which is great. But I wanted an XML data provider, so the content could be stored like in dasBlog.

As an experiment, I created an XSD schema from the database that ships with Oxite, and implemented a class called OxiteDbContext, which serves as a cache.

public static class OxiteDbContext
{
    private static OxiteDb mDatabase = new OxiteDb();
    private static string mFilename = 
      HttpContext.Current.Server.MapPath(
      "~/App_Data/oxite.xml");

    static OxiteDbContext()
    {
        mDatabase.ReadXml(mFilename);
    }

    public static void Save()
    {
        lock (mDatabase)
        {
            mDatabase.AcceptChanges();
            mDatabase.WriteXml(mFilename);
        }
    }

    public static OxiteDb Data
    {
        get
        {
            return mDatabase;
        }
    }        
}

I then copy/pasted the code from the SQL provider and re-mapped the LINQ to SQL code to the new typed DataSet, like so:

public Area GetArea(string areaName)
{
    return (from a in OxiteDbContext.Data.oxite_Area
            where a.SiteID == siteID && string.Compare(
            a.AreaName, areaName, true) == 0
            select ProjectArea(a)).FirstOrDefault();
}

It’s not exactly impressive: it stores the entire dataset in a single XML file, and whenever it saves anything, it rewrites the entire file.

Imagine that you have 200 MB of posts on your blog; someone adds a comment, and the blog rewrites 200 MB to disk. Not exactly “best practice”, but it will suffice as a first experiment.

A better approach would be to make more use of the disk structure, and split the schema into separate XML files, to minimize disk access.

One plausible structure could be like this:

[Proposed directory structure, with separate folders and files per post]

Much better!

Stay tuned for the next part in the series!

As always, source code included:

This thing called Oxite

I’ve been fiddling around with Oxite recently. Although I find it highly interesting and a lot of fun, it being based on ASP.NET MVC and all, it’s not yet mature enough to be an out-of-the-box blogging engine.

None of the shortcomings are show-stoppers as such, and it’s not intended to compete with the major actors out there—yet.

Most of the issues with Oxite are little things, like the lack of ready-made themes, plug-ins and statistics, and the inability to post images and videos to the blog via the MetaWeblog API, which instead requires the user to mess around with FTP settings.

Despite the shortcomings, I’m quite excited. I’ve already picked it apart and put most of it back together (hence the lack of blog posts the past couple of weeks).

The first thing I did was create my own theme, tweak the styles and add menus and such. This required a whole bunch of CSS coding and some actual C# plumbing.

[Screenshot of my custom Oxite theme]

What struck me as a bit quaint was that even though you can add your own content pages (applause!), those pages don’t automatically appear in any menus. You actually need to hand-edit the C# code to add links to them.

One thing that I really like about dasBlog is that it runs without a relational database engine, such as Microsoft SQL Server or MySQL. It operates purely on disk-based XML files. That right there just cut off 50% of the hosting fee.

Luckily, Oxite has a lot of extension points. Adding a data provider based on XML files instead of Microsoft SQL Server shouldn’t be too much of a problem.

Deploying ASP.NET MVC applications on an Apache web server

I struggled a bit to get the live demo up and running for my previous article on ASP.NET MVC because my hosting provider runs on the Apache web platform, which wasn’t all too keen on the MVC URL rewriting.

One of the problems is that on Apache (as on IIS 6), the document being served has to have a particular file extension for the mod_aspdotnet module to be invoked at all. Since the URL scheme in ASP.NET MVC has no extension by default, e.g. www.wheresmymovie.net/home/about, Apache generates a 404 before the ASP.NET module is ever invoked, and thus before any URL rewriting occurs.

If you have direct access to the server configuration, you can add a wildcard pattern to the AddHandler directive in the httpd.conf file. This has the disadvantage of serving all content (including images and style sheets) through the ASP.NET module. Not to mention that it requires direct access to the httpd.conf file, which you might not have in a shared hosting environment.

A better solution is to modify the routes in the Global.asax.cs file, and add an extension that the module will intercept, so that the links will look like this:

http://www.wheresmymovie.net/home.aspx/about

The .aspx extension on the controller will do the trick, causing Apache to invoke mod_aspdotnet to serve the page, and then the URL rewriting will kick in.

Here’s a copy of the URL route from Global.asax.cs running on the live sample which is hosted on an Apache web server:

public static void RegisterRoutes(RouteCollection routes)
{
  routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

  routes.MapRoute(
    "Default.aspx",
    "{controller}.aspx/{action}/{id}",
    new { controller = "Home", action = "Index", id = "" },
    new { controller = @"[^.]*" }
  );

  routes.MapRoute(
    "Default",
    "{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = "" },
    new { controller = @"[^.]*" }
  );
}

Big thanks to bia securities, which is where I got the above code snippet.

Note that the “Default.aspx” route, which adds the .aspx extension to the controller comes before the “Default” route. This is so that calls to Html.ActionLink will generate links conforming to this scheme, since it will pick the first in the list.
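
For example, a link rendered from a view like this:

<%= Html.ActionLink("About us", "About", "Home") %>

will point to /Home.aspx/About rather than /Home/About, because routing uses the first matching route when constructing outgoing URLs.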

This is basically the same thing that dasBlog does. If you have a look at the address bar above (this blog runs on dasBlog), you’ll notice that the .aspx extension is there, at the end of the URL routing scheme, which is actually something like {year}/{month}/{day}/{title}.aspx.

If you want pretty URLs and you don’t want the .aspx extension in there, you can either change it to some other extension, like .mvc, and add it to the AddHandler directive in the httpd.conf file. That, or upgrade to IIS7.

For more information about deploying ASP.NET MVC, check out this great article over at asp.net.

ASP.NET MVC RULES SUPREME!

Seldom do I use all-caps for titles. This time, it’s merited. I’ve been circling around ASP.NET MVC like a suspicious lion for months. Since I don’t work much with web projects, this has slipped down on my priority list in favor of technologies more relevant to my current projects. But since it recently hit official release, I made some time last weekend and got around to it.

The web is a fantastic tool. Plain and simple. With all the new technologies that have been elbowing their way into our lives to make them simpler, easier, faster and richer, the web has never before been such a valuable resource as it is today. One of those technologies that I really like is Silverlight. Another, is ASP.NET MVC.

I cannot lavish enough praise upon this new technology. Because I’d run out of disk space.

There are many levels of programming. You can code in machine-code, for the ultimate control over the computer, or you can point and click for a bit more productivity at the expense of control. ASP.NET MVC has poked up from nowhere right smack in the middle of the two.

To quote The Gu:

MVC is a framework methodology that divides an application’s implementation into three component roles: models, views, and controllers.

  • “Models” in a MVC based application are the components of the application that are responsible for maintaining state.  Often this state is persisted inside a database (for example: we might have a Product class that is used to represent order data from the Products table inside SQL).
  • “Views” in a MVC based application are the components responsible for displaying the application’s user interface.  Typically this UI is created off of the model data (for example: we might create an Product “Edit” view that surfaces textboxes, dropdowns and checkboxes based on the current state of a Product object).
  • “Controllers” in a MVC based application are the components responsible for handling end user interaction, manipulating the model, and ultimately choosing a view to render to display UI.  In a MVC application the view is only about displaying information – it is the controller that handles and responds to user input and interaction.
One of the benefits of using a MVC methodology is that it helps enforce a clean separation of concerns between the models, views and controllers within an application.  Maintaining a clean separation of concerns makes the testing of applications much easier, since the contract between different application components are more clearly defined and articulated.

I’m a big fan of productivity. And ASP.NET MVC increases productivity like few other web technologies I’ve ever seen. It allows me to do complex things in very little time. It’s also very extensible and comes with full source code, so if I run into a wall, I can simply rewrite the wall to conveniently have a gate in it.

The first thing I did (after I downloaded the ASP.NET MVC kit), was to have a look at a video tutorial. Normally, I’m one of those who gleefully tears off the wrapping and plugs things in and starts pushing buttons without a single thought to the manual. But I was tired and wanted a 5 minute crash course in what it is, and what it does.

I highly recommend you watch that very same video, the “Creating a movie database tutorial” on www.asp.net. They say that a picture says more than a thousand words, and a movie at maybe 20 frames per second for twelve minutes and four seconds must then (obviously) say more than 14,480,000 words. And since I would not think of bogging you down with 14.5 million words, I’ll simply leave you to watch that fantastic introduction to ASP.NET MVC and wait right here while you go have your socks knocked off.

Right, so after I had watched the tutorial (which I highly recommend you do before proceeding any further. Go on, I’ll still be here when you’re done!), I instantly fired up Visual Studio and created my very first MVC web application.

I started to build my own movie database like the tutorial, but quickly got a better idea and opted for a tiny bit of originality: a movie site ranking database, i.e. a site with a ranked list of online movie sites. I also found the ASP.NET MVC design gallery and downloaded a nice enough looking theme to use.

The whole thing took about an hour.  Yep, that’s it, an hour. And that includes the time spent designing the database, going back to the tutorial and Googling for solutions, tweaking the style sheet as well as creating a few icons and graphical elements. And this was my first time using ASP.NET MVC! Imagine what you could be doing nine hours from now, after you have made nine complete and functional websites just like this one! Why … there’s no end to the possibilities!

[Screenshot of the finished movie ranking site]

The ASP.NET MVC application I made is complete with user registration, a little news feed (no RSS support), a list of sites with description, commenting, ratings and ranking based on their total scores.

[Screenshot of the commenting feature]

It has a functional administrative back-end to create, update and delete sites, score categories, news and studios (which sites can be affiliated with). The create/edit movie site views have some interesting master-multiple-detail relations with synchronization to remove unselected items and add newly selected items in a many-to-many relationship (the checkboxes and the scores).

[Screenshot of the administrative back-end]

When a site is affiliated with a list of studios, their logos appear beneath the site description like this:

[Screenshot of studio logos beneath a site description]

All that, in an hour!

And it’s all so very easy. Each action you can take on the website, such as creating a new movie site or adding a comment, is a method on a controller class. Here, for instance, is the code that routes the visitor from the root URL ~/ to a view that renders the list of movie sites, sorted by total score:

public class HomeController : Controller
{
    private MovieSitesDbEntities mEntities = 
        new MovieSitesDbEntities();

    //
    // GET: /        
    public ActionResult Index()
    {
        var sites = mEntities.SiteSet.
                Include("Scores").
                Include("Scores.ScoreCategory").
                Include("Studios").
                Include("Comments").
            OrderByDescending(s => 
                s.Scores.Sum(score => score.Score)).ToList();

        ViewData.Model = sites;
        return View();
    }
}

And /Views/Home/Index.aspx, which renders that particular view, contains simple ASP.NET syntax like this:

<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" 
    runat="server">

<h1>Movie Site Rankings</h1>

<% int rankNo = 1;
   foreach (var item in Model) { %>
  <div class="rating">
    <div class="siteDescription">
      <h2>#<%= rankNo++%> 
      <a href="<%= item.URI %>"><%= item.Name %></a></h2>
      <p>
        <img class="float-left" 
          src="/Content/Screenshots/<%= item.ScreenshotURI %>" 
          alt="Site screenshot" />
        <%= item.Description %>
      </p>
...

 

I’m impressed.

Working with Certificates – Part IV, Custom X509 validation

In my previous article, I showed how you could embed a certificate as a managed resource in your application.

Today, I will demonstrate how you can implement your own custom certificate validation for WCF.

There are many scenarios where you would want to implement your own certificate validation mechanics. For instance, it might not be enough that the certificate is just trusted, it might be required to have a specific subject name, or issuer.

To implement your own custom X509 certificate validation, you inherit from the class X509CertificateValidator in the System.IdentityModel.Selectors namespace, like this:

/// <summary>
/// Implements the validator for X509 certificates.
/// </summary>
internal class MyX509Validator : X509CertificateValidator
{
  /// <summary>
  /// Validates a certificate.
  /// </summary>
  /// <param name="certificate">The certificate to
  /// validate.</param>
  public override void Validate(X509Certificate2 certificate)
  {
      // validate argument
      if (certificate == null)
        throw new ArgumentNullException("certificate");

      // check if the subject name of the certificate matches
      if (certificate.SubjectName.Name != "CN=Tempus")
        throw new SecurityTokenValidationException(
          "Certificate does not have the expected subject name");
  }
}

Then, you hook it up on the proxy instance like this:

SampleClient client = new SampleClient();
client.ClientCredentials.ServiceCertificate.Authentication.
  CertificateValidationMode =
    X509CertificateValidationMode.Custom;
client.ClientCredentials.ServiceCertificate.Authentication.
  CustomCertificateValidator = new MyX509Validator();

// Make a service call
client.Foo("bar");

You can, of course, hook it up in the .config file instead, like this:

<behaviors>
  <endpointBehaviors>
    <behavior name="CustomX509">
      <clientCredentials>
        <serviceCertificate>
          <authentication certificateValidationMode="Custom"
            customCertificateValidatorType=
            "SampleConsumer.MyX509Validator, SampleConsumer"/>
        </serviceCertificate>
      </clientCredentials>
    </behavior>
  </endpointBehaviors>
</behaviors>

That’s all there is to it!

Working with Certificates – Part III, Embedding the certificate as a resource and providing authorization for custom username and password authentication

In my previous article, I showed how to implement custom username and password validation for WCF and how to use a certificate to encrypt the communication (including the username and password).

In this article, I will extend that sample, embed the certificate as a resource in the service library, and create an IPrincipal implementation from the authenticated username and password.

Embedding the certificate in the service library instead of hooking it up in the configuration file is quite easy.

The first thing you need to do is add the certificate to your project and set the Build Action to Embedded Resource.

Then, we’ll implement a behavior extension for WCF, like this:

public class EmbeddedCertificateAttribute : Attribute,
    IServiceBehavior
{
    private string mResourceName;
    private SecureString mCertificatePassword;

    public EmbeddedCertificateAttribute(string resourceName,
        string certificatePassword)
        : base()
    {
        this.mResourceName = resourceName;

        // SecureString is not very secure, but provides better
        // security than no protection at all.
        // For more information, look at
        // http://www.hexadecimal.se/2009/02/14/NotSoProtectedMemory.aspx
        this.mCertificatePassword = new SecureString();
        foreach (char c in certificatePassword)
            mCertificatePassword.AppendChar(c);
        mCertificatePassword.MakeReadOnly();
    }

    public void AddBindingParameters(
        ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase,
        Collection<ServiceEndpoint> endpoints,
        BindingParameterCollection bindingParameters)
    {
        // Read the raw certificate data from an embedded
        // resource (loop, since Read may return fewer bytes
        // than requested)
        byte[] certData;
        using (Stream stream = Assembly.GetExecutingAssembly()
            .GetManifestResourceStream(mResourceName))
        {
            certData = new byte[stream.Length];
            int offset = 0;
            while (offset < certData.Length)
                offset += stream.Read(certData, offset,
                    certData.Length - offset);
        }

        // Decrypt the password, freeing the unmanaged copy
        // as soon as we're done with it
        IntPtr ptr = Marshal.SecureStringToBSTR(
            mCertificatePassword);
        string password;
        try
        {
            password = Marshal.PtrToStringUni(ptr);
        }
        finally
        {
            Marshal.ZeroFreeBSTR(ptr);
        }

        // Load the certificate into an X509 certificate
        // instance
        X509Certificate2 cert = new X509Certificate2(certData,
            password);

        // Add the certificate to our service credentials
        serviceHostBase.Credentials.ServiceCertificate.Certificate = cert;
    }

    public void ApplyDispatchBehavior(
        ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase)
    {
        // Do nothing
    }

    public void Validate(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase)
    {
        // Do nothing
    }
}

Then you just add the attribute we just created to your service implementation like this:

[EmbeddedCertificate("Hexadecimal.SampleService.SampleCertificate.p12", "")]
public class SecureService : ISecureService
{
    public string GetData(int value)
    {
        return string.Format("You entered: {0}", value);
    }

    public CompositeType GetDataUsingDataContract(
      CompositeType composite)
    {
        if (composite.BoolValue)
        {
            composite.StringValue += "Suffix";
        }
        return composite;
    }
}

Change the resource name and password for the certificate and that’s it!

Your certificate will now be embedded in the assembly, and extracted by the service extension and dynamically added to your service whenever it’s instantiated.

Check out my previous post Automating WCF headers and safe-guarding against unhandled exceptions in WCF services for more WCF behavior extension examples.

To add authorization to our sample, I’ll first create an IPrincipal implementation to hold the authentication details.

public class SamplePrincipal : IPrincipal
{
    /// <summary>
    /// Holds a typed reference to a SampleIdentity instance
    /// </summary>
    private SampleIdentity mIdentity;

    /// <summary>
    /// Initializes a new SamplePrincipal
    /// </summary>
    /// <param name="identity">Identity associated with the
    /// principal</param>
    public SamplePrincipal(SampleIdentity identity)
        : base()
    {
        this.mIdentity = identity;
    }

    /// <summary>
    /// Gets the Identity of the current principal
    /// </summary>
    public SampleIdentity SampleIdentity
    {
        get { return mIdentity; }
    }

    /// <summary>
    /// Gets the current SampleIdentity, if any
    /// </summary>
    public static SamplePrincipal Current
    {
        get
        {
            if (Thread.CurrentPrincipal is SamplePrincipal)
                return Thread.CurrentPrincipal as
                    SamplePrincipal;
            else
                return null;
        }
    }

    #region IPrincipal Members

    public IIdentity Identity
    {
        get {  return mIdentity; }
    }

    public bool IsInRole(string role)
    {
        return false;
    }

    #endregion
}

public class SampleIdentity : IIdentity
{
    /// <summary>
    /// Holds the name of the identity
    /// </summary>
    private string mName;

    /// <summary>
    /// Initializes a new SampleIdentity
    /// </summary>
    /// <param name="name">Name of the identity</param>
    public SampleIdentity(string name)
    {
        this.mName = name;
    }

    /// <summary>
    /// Gets the current SampleIdentity, if any
    /// </summary>
    public static SampleIdentity Current
    {
        get
        {
            SamplePrincipal principal =
                SamplePrincipal.Current;
            if (principal == null)
                return null;
            return principal.SampleIdentity;
        }
    }

    #region IIdentity Members

    public string AuthenticationType
    {
        get { return typeof(SampleAuthenticator).Name; }
    }

    public bool IsAuthenticated
    {
        get { return true; }
    }

    public string Name
    {
        get { return mName; }
    }

    #endregion
}

And this is where it gets interesting! WCF operates with claim sets, which are, as the name implies, bunches of claims made by either the client or the service. The service will be claiming to have a certificate that the service and client can use to talk privately, and the client will claim to have a valid username and password. They can claim all sorts of things, like being on the local subnet (zone security); in other words, all sorts of meaningful affiliations which make them more trustworthy to one another.
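
If you are curious what claims are actually being made, you can dump them from inside a service operation like this:

// Write out every claim in the caller's security context
foreach (ClaimSet claimSet in ServiceSecurityContext.Current
    .AuthorizationContext.ClaimSets)
{
    foreach (Claim claim in claimSet)
    {
        Console.WriteLine("{0}: {1}",
            claim.ClaimType, claim.Resource);
    }
}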

To provide authorization, we need to implement the IAuthorizationPolicy interface, sort through all the claims made by the user, and find the one that holds the username used to authenticate the user with the authenticator class we implemented in my previous article.

The code is not overly complex, but it has the potential to do a lot of complex things. We could, for instance, operate on more than just the username here; we could also restrict access based on whether the user is located on the local subnet or out on the internet, so that an administrator might be allowed to perform administrative tasks only when using the client on the subnet, not from the internet.

/// <summary>
/// A custom authorizor implementation that creates a
/// SamplePrincipal instance if the authorization succeeds.
/// </summary>
public class SampleAuthorizor : IAuthorizationPolicy
{
    /// <summary>
    /// Extracts identity claims from an EvaluationContext
    /// </summary>
    /// <param name="evaluationContext">EvaluationContext to
    /// process</param>
    /// <returns>A list of IIdentity claims made by the
    /// client</returns>
    private IList<IIdentity> getIdentities(
        EvaluationContext evaluationContext)
    {
        object obj;
        if (evaluationContext.Properties.TryGetValue(
            "Identities", out obj) && obj != null)
            return obj as IList<IIdentity>;
        return null;
    }

    /// <summary>
    /// Evaluates if a user meets the requirements for this
    /// policy
    /// </summary>
    /// <param name="evaluationContext">
    /// Claim set to evaluate</param>
    /// <param name="state">Custom state</param>
    /// <returns>Returns true if successful, otherwise
    /// false</returns>
    public bool Evaluate(EvaluationContext evaluationContext,
        ref object state)
    {
        // Extract identity claims made by client
        var identities = getIdentities(evaluationContext);

        // Fault if no identity claims were made by the client
        if (identities == null)
            throw new SecurityTokenException(
                "No identity claims were made.");

        // Iterate the identity claims made by client
        foreach (var identity in identities)
        {
            // If this claim was authenticated using our
            // sample authenticator...
            if (identity.AuthenticationType ==
                typeof(SampleAuthenticator).Name)
            {
                // Then create a new SamplePrincipal instance
                // holding the authentication data.
                SamplePrincipal principal =
                    new SamplePrincipal(
                    new SampleIdentity(identity.Name));

                // Associate it with the context
                evaluationContext.Properties["Principal"] =
                  principal;

                // And return success
                return true;
            }
        }
        // Failure, unable to authorize the user
        return false;
    }

    /// <summary>
    /// Get a claim set representing the issuer of the
    /// authorization policy
    /// </summary>
    public ClaimSet Issuer
    {
        get
        {
            // System specific, as opposed to Windows based
            return ClaimSet.System;
        }
    }

    /// <summary>
    /// Gets the Id of this authorization component
    /// </summary>
    public string Id
    {
        get
        {
            return "SampleAuthorizor";
        }
    }
}
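
One piece the code doesn’t show is how WCF learns about the policy: it has to be registered under the service behavior, along these lines (using the namespace and assembly naming from this sample):

<serviceAuthorization principalPermissionMode="Custom">
  <authorizationPolicies>
    <add policyType=
      "Hexadecimal.SampleService.SampleAuthorizor, SampleService" />
  </authorizationPolicies>
</serviceAuthorization>

The principalPermissionMode="Custom" setting is what makes WCF pick up the Principal we stored in the evaluation context and assign it to Thread.CurrentPrincipal.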

That’s it!

We can now go ahead and use declarative security with PrincipalPermission and all that nifty out-of-the-box functionality we have in .NET.

Let’s try it out by modifying the service implementation.

public string GetData(int value)
{
    // Return current username to the caller
    string currentUserName;

    SampleIdentity identity = SampleIdentity.Current;
    if (identity != null)
        currentUserName = identity.Name;
    else
        currentUserName = "You are not logged in.";

    return string.Format("User name: {0}", currentUserName);
}

// Require that the user be in the role Manager to call
// this method.
[PrincipalPermission(SecurityAction.Demand, Role="Manager")]
public CompositeType GetDataUsingDataContract(
    CompositeType composite)
{
    if (composite.BoolValue)
    {
        composite.StringValue += "Suffix";
    }
    return composite;
}

And make a few calls from the client:

// Make a service call
Console.WriteLine(proxy.GetData(15));

try
{
    // This will cause an exception, since we're not in
    // the 'Manager' role.
    proxy.GetDataUsingDataContract(new CompositeType());
}
catch (Exception error)
{
    Console.WriteLine("The service call faulted with the " +
      "following error: {0}", error.Message);
}

Running the sample now will yield this output:


[Console output of the sample run]

Download source: Certificates, Part III.zip (30 kb)

Working with Certificates – Part II, Securing a WCF service using custom username and password authentication

One of the many common security scenarios when programming WCF, is using custom username and password authentication.

The custom authentication part is no big hassle in itself, but as a security precaution, WCF refuses to send usernames and passwords in clear-text, instead requiring that the communication be encrypted in some way.

Attempting to use custom authentication without first encrypting the communication will cause an exception, usually this one:

System.InvalidOperationException: The service certificate is not provided. Specify a service certificate in ServiceCredentials.

There are several ways to secure communications in WCF, which basically boil down to using either Windows-specific security (Windows username and password), federated security involving a third party (such as Windows Live ID), or certificates.

See this article on MSDN for more information.

Since we want to use a custom username and password, that rules out Windows-specific security, which obviously has its own authentication mechanism (either Kerberos or NTLM). You could implement your own federated security provider, but most who have opted for this solution have given up halfway due to the amount of code you need to write. Don’t get me wrong: federated security is awesome, when you need federated security and nothing else will do.

But since we just want a simple username and password validation, we’ll go for the certificate solution.

You can do this in two ways: either hook your service up to use SSL, via HTTPS or a TCP endpoint, or load the certificate into your service and use message-level security.

For simplicity, we’ll use message-level security over HTTP. This way we don’t need to involve HTTPS in our demonstration.

In my previous article, I briefly discussed how certificates work and how you can set up your own Root Certificate Authority and issue your own certificates, so we’ll skip the part where you create a certificate and jump right to the part where you start using it.

For this article, I’ve created a self-signed SSL server certificate with the subject name localhost. That will suffice to test the code locally. If you want to put the service on a different computer from the client, you’ll need to issue a certificate with a subject name equal to the DNS name of the computer where the service is hosted, such as MyOtherMachine or www.myserver.local.

Once you have created your certificate, you need to import it along with the private key, since the private key will be used to decrypt communications from the client. Any format that Windows understands will do, such as a Personal Information Exchange file (.pfx) or PKCS #12 (.p12).

Import the certificate along with the private key into the Trusted People store. To do this, just double-click the certificate file and follow the “Import Certificate” wizard until you are prompted to specify which store to put it in. Click the “Browse” button and select Trusted People. We want it in the Trusted People store so we can tell WCF to use peer trust (explained in the code below).

The first thing we’ll do is create a blank solution and add two projects to it, a WCF Service Library and a Console Application.

You should have something like this:

[Solution Explorer showing the service library and console client projects]

Next, add a service reference to the service from the client. Then, add some testing code to the client:

static void Main(string[] args)
{
    using (SecureServiceClient proxy =
        new SecureServiceClient())
    {
        Console.WriteLine(proxy.GetData(15));
    }

    Console.WriteLine();
    Console.Write("Press any key to exit. . .");
    Console.ReadKey();
}

Run it once and make sure it works without any security, before complicating things.

Then, we add the class that will validate the custom username and password to the service:

public class SampleAuthenticator : UserNamePasswordValidator
{
    public override void Validate(string userName,
        string password)
    {
        // Validate arguments
        if (userName == null)
            throw new ArgumentNullException("userName");
        if (password == null)
            throw new ArgumentNullException("password");

        // Validate username and password
        if (userName != "test1" || password != "1tset")
        {
            throw new SecurityTokenException(
                "Invalid username or password.");
        }
    }
}

And all we need to do now is configure the service to use a certificate, hook up the new validation class, and provide the username and password in the client.

First off is the app.config file for the service:

<services>
  <service behaviorConfiguration="SampleX509Behavior"
       name="Hexadecimal.SampleService.SecureService">
    <endpoint address="" binding="wsHttpBinding"
      bindingConfiguration="SampleX509Binding"
      contract="Hexadecimal.SampleService.ISecureService">
      <identity>
        <dns value="localhost" />
      </identity>
  </endpoint>
  <endpoint address="mex" binding="mexHttpBinding"
      contract="IMetadataExchange" />
    <host>
      <baseAddresses>
        <add baseAddress="http://localhost:8731/Design_Time_Addresses/hexadecimal/Samples/SecureService/" />
      </baseAddresses>
    </host>
  </service>
</services>
<bindings>
  <wsHttpBinding>
    <binding name="SampleX509Binding">
      <security mode="Message">
        <message clientCredentialType="UserName" />
      </security>
    </binding>
  </wsHttpBinding>
</bindings>
<behaviors>
    <serviceBehaviors>
      <behavior name="SampleX509Behavior">
        <serviceMetadata httpGetEnabled="True"/>
        <serviceDebug includeExceptionDetailInFaults="False" />
        <serviceCredentials>
          <serviceCertificate findValue="localhost"
            storeLocation="CurrentUser"
            storeName="TrustedPeople"
            x509FindType="FindBySubjectName" />
          <userNameAuthentication
            userNamePasswordValidationMode="Custom"
            customUserNamePasswordValidatorType="Hexadecimal.SampleService.SampleAuthenticator, SampleService" />
        </serviceCredentials>
      </behavior>
    </serviceBehaviors>
</behaviors>

As you can see, it’s fairly basic. The only additions are the binding, which specifies message security mode, and the behavior, which indicates that we want custom username and password validation, provides a reference to the class that performs the actual validation, and tells WCF how to find the certificate.

Basically, we have told WCF to look for our certificate in the Trusted People store of the current user (which the service process runs as), and to search for a certificate that has “localhost” as its subject name. You will run into exceptions if WCF finds more than one certificate that fulfills the search criteria, which can happen if you are experimenting a lot with certificates and importing them left and right.

Note that the subject name and the <dns> identity of the service must match! If they don’t, you will get a very non-descriptive exception on the client, since it’s the client that requires this, not the service (unless you turn off certificate validation on the client side).

Next, we refresh the service reference on the client, so that the client’s app.config file looks something like this:

(I tidied it up a bit for clarity)

<bindings>
  <wsHttpBinding>
    <binding name="SampleBinding">
      <security mode="Message">
        <message clientCredentialType="UserName"
                 negotiateServiceCredential="true"
                 algorithmSuite="Default"
                 establishSecurityContext="true" />
      </security>
    </binding>
  </wsHttpBinding>
</bindings>
<client>
  <endpoint address="http://localhost:8731/Design_Time_Addresses/hexadecimal/Samples/SecureService/"
            binding="wsHttpBinding"
            bindingConfiguration="SampleBinding"
            contract="SampleService.ISecureService"
            name="WSHttpBinding_ISecureService">
    <identity>
      <dns value="localhost" />
    </identity>
  </endpoint>
</client>

Ok, so we’ve told both the client and the service to use a custom username and password for authentication and a certificate to secure the confidentiality of the communications.

Next, we just need to add the actual username and password to the client, and tell WCF not to be so strict when checking our home-grown test certificate:

static void Main(string[] args)
{
    using (SecureServiceClient proxy = new SecureServiceClient())
    {
        // Provide authentication details
        proxy.ClientCredentials.UserName.UserName = "test1";
        proxy.ClientCredentials.UserName.Password = "1tset";

        // Using peer trust will make WCF accept the
        // certificate from the service if the certificate
        // is also installed on the client computer in
        // the Trusted People store (peer = trusted person)
        // Even if it isn't signed by a valid Root CA
        proxy.ClientCredentials.ServiceCertificate.Authentication.CertificateValidationMode =
            X509CertificateValidationMode.PeerTrust;

        // Do not go online and check if the SSL certificate
        // has been revoked since our demo certificate does
        // not have a valid CRL pointer.
        proxy.ClientCredentials.ServiceCertificate.Authentication.RevocationMode =
            X509RevocationMode.Offline;

        // Make a service call
        Console.WriteLine(proxy.GetData(15));
    }

    Console.WriteLine();
    Console.Write("Press any key to exit. . .");
    Console.ReadKey();
}

That’s it! Run it and see for yourself!

If you have a certificate that can be validated, you can remove the two lines of code that lower the certificate validation requirements of WCF.

Typically, this is what a service would look like when I am developing and testing, and then for actual release, I’ll reconfigure it to use a real certificate with stricter validation.

Download source: Certificates, Part II_1.zip (27 kb)