Themes and ASP.NET MVC

I wanted to easily add theme support to yonkly so that others can install it and change its look and feel as they please.  I also wanted it to be as easy as installing a theme in WordPress.

I created a Themes folder under the Content folder:
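
The layout looks something like this (the theme and file names here are just illustrative; each theme gets its own subfolder containing its stylesheet and images):

/Content
    /Themes
        /Default
            style.css
            /images
        /Dark
            style.css
            /images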

Then I referenced my CSS file in the master page using a helper method:

<%= ThemeHelper.GetCss() %>

I also use a helper method for images

<img src='<%=ThemeHelper.GetImageUrl("reply.png")%>' alt="reply" class="icon" />

But most of my images are set in the stylesheet, which makes it easier to manipulate different skins.

The helper methods above look at the theme defined in the config file (or database, or wherever you store your settings) and return the path to the correct resource.
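
Here is roughly what such a helper could look like.  This is a minimal sketch, assuming the theme name is stored in appSettings under a "Theme" key and that every theme folder contains a style.css; the key and file names are illustrative, not necessarily yonkly's actual ones.

using System.Configuration;
using System.Web;

public static class ThemeHelper
{
    // Assumed setting: <add key="Theme" value="Default" /> in web.config appSettings.
    private static string ThemeName
    {
        get { return ConfigurationManager.AppSettings["Theme"] ?? "Default"; }
    }

    // Returns a <link> tag pointing at the current theme's stylesheet.
    public static string GetCss()
    {
        var href = VirtualPathUtility.ToAbsolute("~/Content/Themes/" + ThemeName + "/style.css");
        return "<link href=\"" + href + "\" rel=\"stylesheet\" type=\"text/css\" />";
    }

    // Resolves an image file name to a URL inside the current theme's folder.
    public static string GetImageUrl(string fileName)
    {
        return VirtualPathUtility.ToAbsolute("~/Content/Themes/" + ThemeName + "/images/" + fileName);
    }
}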

Take a look at these live samples:

They are all using the same codebase but have different themes defined.  The trick is in making your HTML CSS-friendly by naming elements and assigning them classes, using divs and avoiding tables.  This allows you to create a stylesheet that radically changes the look of the site.  Think of the element ids and classes as an API to your view that the CSS can manipulate.

I also added a feature that lets you upload a theme folder as a zip file and have the application unzip it into the Themes folder.
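
A rough sketch of what that upload action could look like, assuming SharpZipLib handles the extraction (the controller, action and form field names are illustrative, not yonkly's actual code):

using System.IO;
using System.Web.Mvc;
using ICSharpCode.SharpZipLib.Zip;

public class ThemeController : Controller
{
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Upload()
    {
        // "themeZip" is the assumed name of the file input on the upload form.
        var upload = Request.Files["themeZip"];
        if (upload == null || upload.ContentLength == 0)
            return RedirectToAction("Index");

        // Save the upload to a temp file, then extract it into /Content/Themes.
        // Each zip is expected to contain a single top-level folder named after the theme.
        var zipPath = Path.Combine(Path.GetTempPath(), Path.GetFileName(upload.FileName));
        upload.SaveAs(zipPath);
        new FastZip().ExtractZip(zipPath, Server.MapPath("~/Content/Themes"), null);

        return RedirectToAction("Index");
    }
}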

It would be cool if we could define a "virtual folder" in the application, so I wouldn't have to use helper methods.  Imagine if you could just say /content/theme/logo.gif and it would just work: the theme folder doesn't really exist, but requests to it would be routed to the correct folder based on a setting.  I wonder if I can do that with the current routing mechanism in ASP.NET MVC!  Anyone?

Mocking and Dependency Injection in ASP.NET MVC

Here is the situation: my controller constructors take multiple interfaces as parameters.  I do this to use constructor injection, which allows me to inject the controllers with mocked objects in my unit tests.

For example, my AccountController takes IEmailService, IFormsAuthentication and MembershipProvider (abstract class) as parameters.
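
The constructor looks something like this (a sketch; the field names are mine):

public class AccountController : Controller
{
    private readonly IEmailService _emailService;
    private readonly IFormsAuthentication _formsAuth;
    private readonly MembershipProvider _membershipProvider;

    // Constructor injection: the dependencies come in as abstractions,
    // so tests can pass mocks instead of the real services.
    public AccountController(IEmailService emailService,
                             IFormsAuthentication formsAuth,
                             MembershipProvider membershipProvider)
    {
        _emailService = emailService;
        _formsAuth = formsAuth;
        _membershipProvider = membershipProvider;
    }
}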

During my testing, I want to mock the email, authentication and membership calls.  For example, when the user calls FormsAuthentication.Login, I don't really care whether the actual call succeeded, but rather that my login action behaves appropriately when FormsAuthentication.Login succeeds (or fails).  I just want to mock that call.

I started off creating a few tests, and they slowly grew into many.  There was a lot of repeated code in my unit tests, and to be a good citizen of the DRY universe, I needed to refactor.

For IoC, I initially started with StructureMap, but now I am using Ninject.

I created this module to bind my interfaces to mocked instances.  It looks like this:

internal class TestModule : StandardModule
{
    public override void Load()
    {
        Bind<IEmailService>()
            .ToConstant(MyMocks.MockEmailService.Object);
        
        Bind<IFormsAuthentication>()
            .ToConstant(MyMocks.MockFormsAuthentication.Object);
        
        Bind<MembershipProvider>()
            .ToConstant(MyMocks.MockMembershipProvider.Object);
        
        Bind<IContactListService>()
            .ToConstant(MyMocks.MockContactListService.Object);
    }
}

Notice that I bind the interfaces to actual instances and not classes.  These instances are declared in a global static class that will be accessed from my unit tests.  As you can tell from the name, they are all mocked objects (I am using Moq).  Here is how the MockEmailService looks (all the others are declared the same way):

internal static class MyMocks
{
    private static Mock<IEmailService> _mockEmailService;
    public static Mock<IEmailService> MockEmailService
    {
        get
        {
            _mockEmailService = _mockEmailService ?? new Mock<IEmailService>();
            return _mockEmailService;
        }
    }
}

So all this is good for setting up Ninject and creating my mocks.  Now I want to easily and generically create a controller so I can quickly write unit tests.  To do that, I created a TestControllerFactory class that creates a controller with all the appropriate dependencies injected.

   1: internal static class TestControllerFactory
   2: {
   3:     private static IKernel _kernel;
   4:     public static IKernel Kernel
   5:     {
   6:         get
   7:         {
   8:             if (_kernel == null)
   9:             {
  10:                 var modules = new IModule[] { new TestModule() };
  11:                 _kernel = new StandardKernel(modules);
  12:             }
  13:             return _kernel;
  14:         }
  15:         private set
  16:         {
  17:             _kernel = value;
  18:         }
  19:     }
  20:  
  21:     public static T GetControllerWithFakeContext<T>(string httpMethod) 
  22:         where T : Controller
  23:     {
  24:         var con = Kernel.Get<T>();
  25:         con.SetFakeControllerContext();
  26:         con.Request.SetHttpMethodResult(httpMethod);
  27:         return con;
  28:     }
  29:  
  30: }

In line #10, I use the TestModule class mentioned above to set up the Ninject kernel.  In lines #21 to #28, I create an instance of T, which must be a Controller, from the kernel, which automatically builds the controller with all the mocked objects injected.  In lines #25 and #26, I set a fake/mocked context and the HTTP method for the request (more info here).
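
Those two extension methods come from the fake-context helpers linked above; here is a rough sketch of what they do with Moq (the details may differ from the original helpers):

using System.Web;
using System.Web.Mvc;
using System.Web.Routing;
using Moq;

public static class FakeContextExtensions
{
    // Attach a mocked HttpContext to the controller so Request/Response/Session
    // can be used inside actions under test.
    public static void SetFakeControllerContext(this Controller controller)
    {
        var context = new Mock<HttpContextBase>();
        var request = new Mock<HttpRequestBase>();
        var response = new Mock<HttpResponseBase>();
        var session = new Mock<HttpSessionStateBase>();

        context.Expect(c => c.Request).Returns(request.Object);
        context.Expect(c => c.Response).Returns(response.Object);
        context.Expect(c => c.Session).Returns(session.Object);

        controller.ControllerContext =
            new ControllerContext(context.Object, new RouteData(), controller);
    }

    // Make Request.HttpMethod return the desired verb ("GET", "POST", ...).
    public static void SetHttpMethodResult(this HttpRequestBase request, string httpMethod)
    {
        Mock.Get(request).Expect(r => r.HttpMethod).Returns(httpMethod);
    }
}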

Now my unit tests are very clean and easy to set up.  Using MbUnit as my unit test framework, here is a unit test that exercises the reset password functionality.

   1: [Test]
   2: public void ResetPasswordQuestion_Should_Send_Email_On_Success()
   3: {
   4:     var newpassword = "newpassword";
   5:     MyMocks.MockMembershipProvider
   6:          .Expect(p => p.ResetPassword(username, answer))
   7:          .Returns(newpassword);
   8:     MyMocks.MockEmailService
   9:          .Expect(m => m.SendPasswordReset(username, newpassword));
  10:  
  11:     var ac = TestControllerFactory
  12:                 .GetControllerWithFakeContext<AccountController>("POST");
  13:  
  14:     var results = ac.ResetPasswordQuestion(username, question, answer);
  15:     //write some asserts in here to make sure things worked
  16:  
  17:     //verify all mocks
  18:     MyMocks.MockMembershipProvider.VerifyAll();
  19:     MyMocks.MockEmailService.VerifyAll();
  20: }

Line #5: I mock the ResetPassword call on the membership provider and tell it to return the new password.

Line #8: I mock the SendPasswordReset method on the email service.

Line #11: I get an instance of AccountController from the Ninject kernel.

I just write some asserts to make sure the expected results took place and that my mocks were properly exercised, and that's pretty much it.  No need for a working SMTP server to test this, no need for a database, no need for a real authentication mechanism, no need to implement the interfaces or write dummy methods.

I am like a kid in a candy store with all these things: mocking, dependency injection, inversion of control, unit testing…  I am loving it.

So what do you think?  Is this a good way to go about it?  Is there a better way and what is it?

Ninject: Killer IoC

In my previous post, The Best IoC Container, I decided to go with StructureMap as the framework of choice.  I received a comment telling me to check out Ninject, and a day or two later I saw Corey Gaudin's post on using Ninject with MVC, so I decided to try it out.

It wasn't too hard to get up and running in ASP.NET MVC.  Corey's post was a good starting point, but I was too lazy to type all his code in, so I wrote my own; it was still pretty easy to get Ninject working.

I basically created a NinjectControllerFactory class that inherits from DefaultControllerFactory and overrides GetControllerInstance.  The class looks like this:

public class NinjectControllerFactory : DefaultControllerFactory
{
    private IKernel _kernel;
    public NinjectControllerFactory(params IModule[] modules)
    {
        _kernel = new StandardKernel(modules);
    }

    protected override IController GetControllerInstance(Type controllerType)
    {
        return _kernel.Get(controllerType) as IController;
    }
}

 

Then in my Global.asax.cs file, I called this code to set up Ninject:

IModule[] modules = new IModule[] { new WebModule() };
ControllerBuilder.Current.SetControllerFactory(new NinjectControllerFactory(modules));

 

The WebModule module has my configuration and looks like this:

public class WebModule : StandardModule
{
    public override void Load()
    {
        Bind<IAppService>().To<AspAppService>();
        Bind<IEmailService>().To<EmailService>();
        Bind<IContactService>().To<ContactService>();
        Bind<IContactRepository>().To<SqlContactRepository>();
        Bind<IFormsAuthentication>().To<FormsAuthenticationWrapper>();
        Bind<MembershipProvider>().ToConstant(Membership.Provider);
    }
}

 

Of course, you can create as many modules as you want to configure your application and pass them to the controller factory.
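
For example (LoggingModule and RepositoryModule are made-up names, just to show the shape):

// NinjectControllerFactory takes params IModule[], so several modules can be passed at once.
ControllerBuilder.Current.SetControllerFactory(
    new NinjectControllerFactory(new WebModule(), new LoggingModule(), new RepositoryModule()));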

This code was working fine for me, and then Nate Kohari mentioned that there is a Ninject.Framework.Mvc extension that makes it easy to integrate Ninject into the MVC pipeline.  So I decided to download the code, build it and use it.  I initially had some issues because it referenced a different version of the core dll, so I had to rebuild that as well.

I changed my Global.asax.cs file to the following:

public class GlobalApplication : NinjectHttpApplication
{
    protected override void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapRoute(
            "Default",                                              // Route name
            "{controller}/{action}/{id}",                           // URL with parameters
            new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
        );
    }

    protected override IKernel CreateKernel()
    {
        IModule[] modules = new IModule[] { new AutoControllerModule(Assembly.GetExecutingAssembly()), new WebModule() };
        return new StandardKernel(modules);
    }
}

Initially I was getting errors when navigating to an injected controller; this was fixed by adding the AutoControllerModule to my modules and passing it the current assembly.

So far Ninject is looking great: the API is fluent and very discoverable, and the contextual binding is very slick.  I am going to stick with it for now and see how it goes.  The documentation seems pretty good, and Nate is very responsive in the Google Group.

The Best IoC Container?

As I previously mentioned in my post "The Best JavaScript Library", I am in the process of developing an application and writing a book.  I will be using ASP.NET MVC and a TDD approach for both.  As I did with the JavaScript framework selection, I decided to look around and evaluate my options for an Inversion of Control (IoC) container.

Naturally, my research led me to a post by Scott Hanselman (see it here), which lists some of the more popular IoC and dependency injection frameworks out there.

Spring.Net

I started off looking at Spring.Net and was very impressed by its features, samples/tutorials and documentation, but it felt like too much for this project and the learning curve seemed somewhat steep.  Its configuration syntax also looked very verbose.  If you want to learn more, there is a good article over here.

Castle Windsor

It looked easier to learn and use than Spring.Net, but its getting-started section was incomplete, even though Scott says it is well documented.  That was a little discouraging.

Autofac

I really liked its syntax, and it looked really easy to use and figure out, but the documentation was very limited.  It was just a bunch of wiki pages on Google Code.  They did, however, have instructions on how to integrate it with MVC.

StructureMap

I initially learned about StructureMap from a post by Phil Haack, and I kind of liked it right away.  It was easy to pick up and figure out, and Phil's example helped me get started quickly.  I checked their website, and it has an impressive list of features and is well documented.

PostSharp

PostSharp is really cool but is not an IoC container.  It is a policy injector and a really easy way to do Aspect Oriented Programming (AOP).  Rather than trying to explain what exactly it does and screwing it up, take a look at the "About PostSharp" page.  Even better, check out this "getting started" walkthrough – you will be very impressed.

Must Pick One

I know there are a lot more IoC containers out there (which I glanced over), but these were sufficient for me.  Initially, I thought about using Autofac, but when I started to actually use it and ran into some issues, the documentation was not helpful at all.

I have decided to go with StructureMap as my IoC Container and dependency injector.  I might also use PostSharp to implement logging and tracing as aspects – there is no cleaner way.

Helpful Links

For a good explanation of IoC containers and the Dependency Injection pattern, read this article by Martin Fowler.

This is also a good explanation that might help you understand IoC containers.

You should definitely take a look at Matthew Podwysocki's comparison of the different IoC containers out there and their different (or rather similar) configuration and syntax.

Books you must Read

Applying Domain-Driven Design and Patterns

And Martin Fowler's indispensable reference for software patterns, Patterns of Enterprise Application Architecture.

Validate my Choice

What do you think of my choice?  Does it really matter which one I go with?  Do you prefer a different IoC container and why?

Time to choose a unit testing framework… I love choices!!!


Database Schema Compare & Upgrade

I spent a few days playing with Ruby on Rails a while back.  During that learning experience, there was one feature I really liked: the database migration scripts that get generated for you automatically.  I always wished I had something like this in the Windows (ASP.NET) world.  It turns out there is something out there, and it is right there within Visual Studio.

When upgrading my production applications, I have always struggled with updating the production database schema to match the latest schema.  This has always been a manual, error-prone and time-consuming task, one that I always dreaded and postponed to the last minute.  It usually involved crossing my fingers, praying to the SQL gods and running a hand-made migration script against the production database.  I am not a DB guy, so you can imagine how much un-fun this was.

Long story short, you can do this with a few clicks in Visual Studio 2008. 

Start a new schema comparison

Select your source database (e.g. development database) and your target database (e.g. production database)

Click OK.  Visual Studio will compare the two schemas and display the results in a grid, showing which objects (tables, views, procs, etc.) have changed and the action you want to take for each.

Select any item that has changed and you will see the differences between source and target.

The last pane at the bottom contains the update (migration) script that will run against the target to make it identical to the source.  You can quickly scan it to make sure you are not wiping out your entire production database (not recommended).

You can also customize the update by using the drop-downs in the grid to adjust the generated script.

Once everything looks good to go, just hit the button "Write Updates" and you are done.

This has been a thorn in my side for a long time, and I am glad I discovered this.  I am actually kind of pissed off because I have always seen that menu and never really tried to click it.  Oh well!!!

Hmm…  What to do with all the time I just freed up???

NOTE: According to MSDN, this feature is only available in the Database Edition and Team Suite versions.  I am running Team Suite (click Help > About Microsoft Visual Studio to find out which version you have).

Unit Test Private Methods in Visual Studio

I am working on a feature that will let me import Twitter messages into yonkly and wanted to write a test for it.  The method is private, and I couldn't get the unit test to see it.  I also didn't want to use the private accessor class generated by Visual Studio, because I was mocking some functionality in the actual class and didn't really feel like re-mocking it on the private accessor.

So, after several minutes of googling, I found several solutions that I didn't like.  John Hann uses reflection to test private methods.  Tim Stall has a similar solution at The Code Project.  Andrew Stopford suggested that I not test private methods and instead use code coverage to make sure they are being exercised.  I was about to use the approach suggested by John Hann and Tim Stall to test my private methods, but then I accidentally (thanks to IntelliSense) discovered PrivateObject.

The PrivateObject class is part of the Team Test API.  From MSDN:

"Allows test code to call methods and properties on the code under test that would be inaccessible because they are not public."

It turned out to be pretty easy to test private methods and the code looked like this:

var myController = new TwitterController();
var po = new PrivateObject(myController);
var page = 1;
var count = 25;
po.Invoke("ImportTweets", new object[] { page, count });

 

The code above will call the private method ImportTweets and pass it two integer parameters.  This is the equivalent of calling

myController.ImportTweets(page, count);

 

Note that this is essentially what John Hann and Tim Stall suggested, but why write extra code when PrivateObject is already available to you?

Dynamically Build LINQ Queries Using Reflection

I was recently working on a project that had an option to export data from the database.  The export function simply exported two hard-coded columns, returned by a stored procedure, into a tab-delimited text file.

The sproc looked something like select id, name from mytable (I am oversimplifying, of course).  I was tasked with customizing the export process so that the user could select the columns/fields to be exported.  There were a couple of constraints, though:

  1. The stored procedure should not change
  2. The solution had to work within the existing export framework

The problem was that the existing framework used an Export function defined in a base class that all Exporters inherited.  The Export function expected a DataReader and simply exported whatever columns were in the reader.

One solution was to dynamically generate the SQL string in code and then execute it.  A better solution was to use Dynamic LINQ.  Wait, there is no such thing as dynamic LINQ, you say.  You are half right.  There is no dynamic LINQ query generation built into the framework, but SOME GENIUS created a bunch of extension methods to facilitate such a miraculous feat.

What is Dynamic LINQ? 

I hear you ask.  Let’s start by looking at regular (or static) LINQ, which looks like this:

var query = from person in people
            where person.City == "Arlington"
            select new
            {
                Id = person.Id,
                FirstName = person.FirstName
            };

This is static because there is no way to change the select so that it dynamically picks different properties of the person class based on criteria specified by the user.  Likewise, if you were creating an advanced search page and wanted the user to specify custom filter criteria, you would not be able to dynamically generate the where conditions.

Dynamic LINQ, lets you do all these things and more.  Let’s take a look at a sample dynamic LINQ query:

people.Where("city = @0", "Arlington")
      .Select("new (Id, FirstName, LastName, State)");
 
Note that the select and where are strings.  That means you can dynamically generate the strings based on user input.

So, I dynamically created the select using a collection of columns.  This is just a string collection that contains all the columns selected by the user to be exported.  The code looked like this:

string dynamicClass = string.Empty;
for (int i = 0; i < ColumnsToExport.Count; i++)
{
    string col = ColumnsToExport[i];
    if (i == 0)
    {
        dynamicClass = "new(" + col;
    }
    else
    {
        dynamicClass += "," + col;
    }
    if (i == ColumnsToExport.Count - 1)
    {
        dynamicClass += ")";
    }
}

All I have to do now is pass the dynamicClass string variable to the Select method.
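
Putting it together, the call ends up looking something like this (context is the LINQ to SQL DataContext, and "PersonDataViews" is a made-up name for the table of PersonData rows):

// dynamicClass was built above, e.g. "new(Id,FirstName,ZIP)".
// The dynamic Select returns a non-generic IQueryable, which is all the export below needs.
var query = context.PersonDataViews.Select(dynamicClass);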

Get Dynamic

To get dynamic LINQ in your project:

  1. Copy the Dynamic.cs file from C:\Program Files\Microsoft Visual Studio 9.0\Samples\1033\CSharpSamples\LinqSamples\DynamicQuery
  2. If you don’t have that folder, click Help > Samples in Visual Studio and follow the instructions to install the samples
  3. Add the file to your project and import the System.Linq.Dynamic namespace wherever you want to use Dynamic LINQ.
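
In code, that import is just:

using System.Linq.Dynamic;   // brings the string-based Where/Select/OrderBy extensions into scope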

Let’s Reflect

I forgot to mention a few minor details.  The data to be exported is coming from a view.  The view is defined as a LINQ to SQL entity.

Ok, now that that's out of the way: I needed to create a checkbox list of all the columns so the user could select the ones to export.  I knew this was doable using reflection but had to dig around for the right calls to make.  All I needed to do was loop through the properties of the LINQ to SQL entity class (the data class) for the view and bind them to the checkbox list control.  Here is the code:

   1: var props = typeof(PersonData).GetProperties();
   2: var columns = from prop in props
   3:               where prop.GetCustomAttributes(typeof(ExportableAttribute), false).Length > 0
   4:               select new ColumnCheckBoxItem
   5:               {
   6:                   DisplayName = (prop.GetCustomAttributes(typeof(ExportableAttribute), false)[0] as ExportableAttribute).DisplayName,
   7:                   PropertyName = prop.Name
   8:               };
   9:  
  10: clbColumns.DisplayMember = "DisplayName";
  11: clbColumns.ValueMember = "PropertyName";
  12: foreach (var column in columns)
  13: {
  14:     clbColumns.Items.Add(column);
  15: }

Let's look at this code in more detail.  In line 1, I get all the properties of my entity class PersonData.  Then I select only the properties that have the "Exportable" attribute, which lets me keep certain properties out of the checkbox list, such as timestamp and Guid columns.  I also wanted friendlier names: the zip code, for example, is defined as a property called ZIP, but I wanted it to show up in the checkbox list as "Zip Code", so I added a DisplayName property to my ExportableAttribute class, which lets me customize the display name.  The ZIP property looks like this:

[Column(Storage = "_ZIP", DbType = "NVarChar(100)")]
[Exportable(DisplayName = "Zip Code")]
public string ZIP
{
    get
    {
        return this._ZIP;
    }
    set
    {
        if ((this._ZIP != value))
        {
            this._ZIP = value;
        }
    }
}

 

In lines 4 to 8, I create a ColumnCheckBoxItem object that contains the display name and the actual property name, which correspond to DisplayMember and ValueMember in the checkbox list (lines 10 and 11), respectively.

Note that in line 6, I retrieve all the attributes of the property that are of type ExportableAttribute.  This returns an array, so I use the first element to retrieve the DisplayName.  There is no need to test for null since the where condition will ensure that only properties with the Exportable attribute are included.

You are also probably wondering why I didn't just set the checkbox list's DataSource to the columns collection.  Well, I did, and it didn't bind correctly.  I am not sure why, but the foreach loop worked and I didn't want to waste too much time.  When I bound the list using the DataSource property, the checkboxes contained text like {DisplayMember = "Zip Code", PropertyName = "ZIP"} instead of just Zip Code.  Does anyone know why?

Finally, the ExportableAttribute class looks like this:

public class ExportableAttribute : Attribute
{
    public string DisplayName;
}
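
One optional tightening, not required by anything above: you can restrict where the attribute may be applied.

// Optional: only allow [Exportable] on properties, and only once per property.
[AttributeUsage(AttributeTargets.Property, AllowMultiple = false)]
public class ExportableAttribute : Attribute
{
    public string DisplayName;
}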

 

The Export

Now that I have prompted the user to customize the export and dynamically built the LINQ query based on the user's selection, all I have to do is export it.  As I mentioned above, the Export method in the base class needs a DataReader, so I had to get one out of my LINQ query.  Here is how you get a DataReader from a LINQ query (note that this works with both static and dynamic queries):

// context is the LINQ to SQL DataContext; query is the (dynamic) query built above.
Database db = DatabaseFactory.CreateDatabase();
var cmd = context.GetCommand(query as IQueryable);
var myReader = db.ExecuteReader(cmd);