jones.busy

technical musings of a caffeine converter


Enabling external testing of .Net code with Fitnesse

Unit tests not only offer an excellent entry point into writing good code with a TDD approach but also act as a health check, in the form of automatic regression testing, catching subsequent changes that have an unintended impact elsewhere in the system. But the tests are only valid for what they cover, and often they are visible only to the dev team. CI and automated builds often come with tools to increase the visibility of these tests, but still offer only a signature-level view of what is being tested.

Whilst searching for a way to elevate our tests, I was pointed in the direction of Fitnesse: a wiki-based web tool that not only allows non-technical users to view tests but, to some extent, to specify and run them. It's not a wrapper around existing unit tests, although I found I could use existing unit tests to help drive the 'published' versions. You will have to write some code to enable the tests to be run, but I found it a healthy exercise: the code forces you to consider which variables are of interest in each scenario, and this can be done in collaboration with a tester, business analyst, or end user.

To get started, head over to the Fitnesse Download Page and download the fitnesse-standalone.jar file (yes, it runs on Java, but it is perfectly capable of running .net code).

Browse to the downloaded file's location in a command prompt (run as Admin). I already had the standard port 80 in use, so needed to specify a different port for the install:

java -jar fitnesse-standalone.jar -p 8080

It's worth then browsing to your local Fitnesse home page to ensure everything installed correctly (http://localhost:8080 in my case).

Next up, you need to install a runner for your framework. The runner is akin to a unit test runner such as NUnit and is responsible for loading in your .net dll and running the tests against it. There are two approaches to testing with Fitnesse – Slim and Fit – but the FitSharp runner manages both for .net testing, so head over to its download page, grab the right version for your framework and unzip to an appropriate directory (in my case, c:\apps\fitSharp).

Next up is to set up a new suite ready for testing in your Fitnesse wiki. You can either create it and then add a link to it or, as I prefer, add a link first then use the 'Create New' shortcut to do it for you. Click on the Edit menu to open up your home page 'source code'.

You’ll see some odd table notation defined by lots of these: ||

To add a new link into the table, paste the following in (change the names to suit your needs, but be wary of which element refers to which component):

| [[My Test Suite][.TestSuite]] | ''Test Suite Link'' |

Click Save, and you should see the following in your table:

image

If you click on the [?] you will be taken to a create page form. Just click Save in that form, then head back to the home page and see that 'My Test Suite' has now become a proper link to that new page. Navigate to your TestSuite page (you can change these names to suit your needs), click Tools from the menu, then Properties.

Select Suite in the top box under Page type, then click Save Properties

image

Now head over to your VS solution. Let’s assume you have the following setup

image

So, FitnesseExample is a class library containing some dtos and view models (it would obviously contain a lot more). FitnesseExample.Tests is my test project (a plain class library with Rhino Mocks and NUnit added through NuGet, but feel free to head down whichever testing route you prefer).

Given a view model as follows:

public class ExampleViewModel
{
    public DateTime BookingDate { get; set; }
    public int TicketNumber { get; set; }
    public decimal TicketPrice { get; set; }

    public decimal TotalAmount
    {
        get { return TicketNumber * TicketPrice; }
    }

    public bool CanCommit()
    {
        return BookingDate > DateTime.Today && TicketNumber > 0;
    }
}

I might create the following test:

[Test]
public void CannotCommitIfBookingDateIsNotInTheFuture()
{
    // arrange
    var vm = new ExampleViewModel();

    // act
    vm.BookingDate = DateTime.Today;
    vm.TicketNumber = 1;

    // assert
    Assert.IsFalse(vm.CanCommit());
}

For the purpose of Fitnesse, this is where you would most likely deviate from the traditional route of unit testing. The first exercise would be to define the test:

CannotCommitIfBookingDateIsNotInTheFuture

Then, given the following criteria:

“Bookings cannot be committed unless the booking date is in the future and at least one ticket has been selected”

Work out which variables should impact this decision

TicketNumber & BookingDate

And also which variable needs to be checked:

CanCommit()

I use the term ‘variable’ in a very loose sense as this conversation would not involve the code itself so variable could refer to a method’s return value, a property etc.

The next step is to create a Decision Table – a column/row based format for defining the test that goes a bit like this:

|TESTNAME|
|VARIABLE_IN_A|VARIABLE_IN_B|..|VARIABLE_OUT_A?|VARIABLE_OUT_B?|
|TEST_INPUT_A|TEST_INPUT_B|..|EXPECTED_OUTPUT_A|EXPECTED_OUTPUT_B|

It's a bit convoluted – especially to a non-technical mind – so you can use Excel to set it out instead if preferred. In the case described above, I would have the following:

image

Note that the output variables are defined by a suffixed question mark. Back in Visual Studio, you now have to write a fixture class that will allow these variables to be set, read and acted upon. Here’s the fixture class for the above test:

namespace FitnesseExample.Test.Fixtures
{
    using System;
    using System.Globalization;
    using FitnesseExample.ViewModels;

    public class ValidationMustBePassedBeforeCommitting
    {
        private readonly string[] dateFormats = { "dd/MM/yy", "dd/MM/yyyy" };
        private ExampleViewModel viewModel;

        public void Reset()
        {
            viewModel = new ExampleViewModel();
        }

        public void SetBookingDate(string bookingDate)
        {
            var date = DateTime.ParseExact(bookingDate, dateFormats, null, DateTimeStyles.None);
            viewModel.BookingDate = date;
        }

        public void SetTicketNumber(int number)
        {
            viewModel.TicketNumber = number;
        }

        public int TicketNumber()
        {
            return viewModel.TicketNumber;
        }

        public bool CanCommit()
        {
            return viewModel.CanCommit();
        }
    }
}

UPDATE: I have since discovered that you can actually use properties instead of the SetX() and X() approach for matching so, in the example above, a TicketNumber property with a getter and setter could replace the two methods.

It’s probably self-evident how it fits but here’s the breakdown:

1. The class is effectively the text from the first row, without spaces (keeping to Camel case convention)

2. Each input variable must have a matching void method prefixed with Set, so Booking Date needs SetBookingDate, etc.

3. For DateTimes, you can take in a string and convert based on one or more formats

4. For output variables, it’s simply a method that returns the expected value in Camel Case
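To make the decision-table semantics concrete, here's how the Slim runner effectively drives the fixture for each row – sketched in JavaScript purely for illustration (the real fixture is the C# class above): reset, set each input, then read each '?' column and compare it against the expected value.

```javascript
// Hypothetical JavaScript stand-in for the C# fixture, just to show the
// reset / set inputs / read "?" outputs cycle a decision table implies.
function makeBookingFixture(today) {
    var bookingDate, ticketNumber;
    return {
        reset: function () { bookingDate = null; ticketNumber = 0; },
        setBookingDate: function (d) { bookingDate = d; },
        setTicketNumber: function (n) { ticketNumber = n; },
        canCommit: function () { return bookingDate > today && ticketNumber > 0; }
    };
}

// One entry per table row: inputs plus the expected value of each "?" column.
function runDecisionTable(fixture, rows) {
    return rows.map(function (row) {
        fixture.reset();
        fixture.setBookingDate(row.bookingDate);
        fixture.setTicketNumber(row.ticketNumber);
        return fixture.canCommit() === row.canCommit;
    });
}
```

Each element of the returned array says whether that row would show green in Fitnesse.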

You need to build your test project and copy all dlls from its output target dir to a test directory – let’s assume we are going with “C:\Fitnesse\” for now.

Head over to this directory and create a new xml file, “example.config.xml”. Paste in the following text:

<?xml version="1.0" encoding="utf-8" ?>
<suiteConfig>
  <ApplicationUnderTest>
    <AddAssembly>c:\fitnesse\FitnesseExample.Test.dll</AddAssembly>
    <AddNamespace>FitnesseExample.Test.Fixtures</AddNamespace>
  </ApplicationUnderTest>
  <Settings>
    <Runner>fitSharp.Slim.Service.Runner</Runner>
  </Settings>
</suiteConfig>

What this tells the Fitnesse runner is which dll you are testing against, what sort of runner you are using, and any namespaces it should search to locate the fixture classes.

Back in your Fitnesse wiki, navigate to your Test Suite page, click Edit, and paste the following in:

!define TEST_SYSTEM {slim}
!define slim.timeout {60}
!define COMMAND_PATTERN {%m -c "c:\fitnesse\example.config.xml" %p}
!define TEST_RUNNER {c:\apps\fitSharp\Runner.exe}
!path c:\fitnesse\FitnesseExample.Test.dll

This tells the page where to find your config file, where to locate the runner executable and the path to your test dll (I did wonder why we have to set the path here as well as in the config, but if I left the path out, the tests did not run).

Underneath these definitions, paste the following:

[[Booking Tests][.TestSuite.BookingTests]]

Save the page and then click on the [?] next to the new link to create a page. Go back to your spreadsheet, select the columns and rows containing the test and copy them to your clipboard. Back on the test page, paste in the spreadsheet contents and then click the Spreadsheet to FitNesse button:

image

This converts the pasted text into a Decision Table. Click Save and you’ll see this now appear in your test page:

image

Now Click Tools –> Properties, and set the Page type to Test, and then click Save Properties

image

You should now see a Test button appear at the top of the page. Give it a click and see what happens. With any luck, it'll be something similar to below – the important thing for now is that the tests have run, regardless of whether they pass or not:

image

If you do not get this (and trust me, I spent a number of rounds before I got to this stage), check, check and check again on your paths, config files etc.

Ordering array of objects in Select Dropdown with Angular

As a newbie to angular and, in some sense, to javascript, when populating a select dropdown from an array of objects returned from my api, I fell into the trap of expecting the “| orderBy” filter to work as described. I’d already come a cropper with dynamically populated selects and trying to get the correct property to be shown so I probably should have been less surprised than I was.

Here’s the unordered select, populated from an array, ‘tenantTypes’ which itself is populated from an api call returning an array of a .net class, TenantType with the properties Id: long and Name: string.

<select ng-model="controller.registration.organisationType"
        ng-options="key as value.name for (key, value) in controller.tenantTypes"
        class="form-control">
    <option value="">Organisation Type...</option>
</select>

The ng-options statement took me some time to work out, but it worked as expected, so I hoped to be able to simply add the following:

| orderBy:'name'

Not happening. The ng-options statement needed some jiggery-pokery to work with the object array and so does the sorting. After looking around, I came across this excellent filter created by Justin Klemm:

http://justinklemm.com/angularjs-filter-ordering-objects-ngrepeat/

Looks bang on the money so I grabbed it and added the following in:

| orderObjectBy:'name'

Still not happening. Once I'd ruled out my usual issues (referencing the script, injecting it into the app module etc.), I started scratching my head. The only difference I could see was that this filter used the ng-repeat approach in its example. I knew I could have declared my select using <option ng-repeat… instead of the ng-options route, but I had expected them to work the same under the bonnet. Well, if they do, then ng-options does not play well with the custom filter, as I finally got my select ordering nicely by re-coding it to the following:

<select ng-model="controller.registration.organisationType" class="form-control">
    <option value="" ng-selected="true">Organisation Type...</option>
    <option ng-repeat="type in controller.tenantTypes | orderObjectBy:'name'"
            value="{{type.id}}">{{type.name}}</option>
</select>

Again, seasoned angular devs may be able to point out some glaring mistakes or assumptions I have made – and please do – but just in case anyone else has tried or is trying the same approach, this might at least save them a couple of hours of head scratching.
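For completeness, the idea behind that style of object-ordering filter can be sketched as a plain function (a simplified, hypothetical version – see Justin's post for the real thing): build an array from the object's values and sort it by the named property, since the built-in orderBy only understands arrays.

```javascript
// Simplified sketch of an "order object by" filter: object map in, sorted array out.
function orderObjectBy(items, field, reverse) {
    // Collect the object's values into an array the template can repeat over.
    var array = Object.keys(items).map(function (key) {
        return items[key];
    });
    // Sort by the named property.
    array.sort(function (a, b) {
        if (a[field] === b[field]) { return 0; }
        return a[field] < b[field] ? -1 : 1;
    });
    return reverse ? array.reverse() : array;
}

// Registered with Angular along these lines (module name 'app' assumed):
// app.filter('orderObjectBy', function () { return orderObjectBy; });
```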

Extending Angular $http service

Recently, I've been working with AngularJS, developing an Azure-targeted application with an Entity Framework (Code First) backend, a Web Api 2.0 middle tier and an AngularJs front-end. I like AngularJS for a number of reasons, one of which is that, having been a .Net/WPF developer for most of my time, with so many different options and approaches to consider in client-side web development, a framework that distils those options into one core approach is beneficial to me. I completely understand that this same reason could be seen as a negative, and I am well aware of the impending re-write of Angular due this year, but moving from the safety net of a strongly typed .net background to the javascript playground, AngularJS feels a little like having some stabilisers on my dev bike.

Using the $http service over the weekend, I stumbled across a couple of issues:

1. Calling a web api method with a simple string parameter:

public async Task<IHttpActionResult> PostRole([FromBody]string roleName)

The issue is that sending the string across as a plain variable causes it to be misinterpreted as a JSON literal, so instead of the value finding its way to the web api method, you get a null object. So, instead of calling the following:

return $http.post(serviceBase + 'api/admin/roles', roleName);

You need to do this:

return $http.post(serviceBase + 'api/admin/roles', "'" + roleName + "'");
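A caveat of my own (not from the original workaround): the manual quote concatenation breaks if the string itself contains a quote character. JSON.stringify produces a correctly escaped JSON string literal either way:

```javascript
// Safer alternative to manual quote concatenation for [FromBody]string posts:
// JSON.stringify escapes any embedded quotes for us.
function asJsonStringLiteral(value) {
    return JSON.stringify(String(value));
}

// return $http.post(serviceBase + 'api/admin/roles', asJsonStringLiteral(roleName));
```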

2. Calling a ‘get’ method from within IE:

public async Task<IHttpActionResult> Get()

For some unknown reason, no matter how hard I tried, I could not stop this request from being cached when testing in IE. In the end, the only approach that worked for me was to randomise the request as follows:

return $http.get(serviceBase + 'api/admin/roles?rnd=' + new Date().getTime());

It's not very pretty, is it? Firstly, I must ask anyone reading who has any suggestions or comments about what I've done or the issues I have faced to please get in touch, as it is perfectly feasible I have completely crossed my wires and there is another approach I should have taken. But for now, these are the issues I hit, Googled, and then worked around.

Now, having found a workaround, I realised that these sorts of calls could be a regular occurrence, and I don't want to have to remember to apply these fixes each time. My first thought was to see whether I could extend the $http service to wrap this functionality in a neat bundle for me. Coming from C# extension methods, I looked at the following options:

1. prototyping

2. behaviours

3. providers

I've heard about most of these approaches but never seen any in practice. I really wanted to stick to passing in $http and simply handle or extend its functionality, but could not get this to work with my limited javascript knowledge. In the end, I created a new service, extendedHttpService, which is actually a factory that returns an extended version of the $http service:

(function (ng, app) {
    "use strict";

    app.factory('extendedHttpService', ['$http', function ($http) {

        // GET with a cache-busting query parameter appended
        var forceGet = function (url) {
            return $http.get(url + "?rnd=" + new Date().getTime());
        };

        // POST a raw string wrapped in quotes so Web API binds it correctly
        var postString = function (url, str) {
            return $http.post(url, "\"" + str + "\"");
        };

        $http['forceGet'] = forceGet;
        $http['postString'] = postString;

        return $http;
    }]);
})(angular, app);

Allowing me to then call the extended functions as follows:

(function (ng, app) {
    "use strict";

    app.service('rolesService', ['extendedHttpService', 'appSettings',
        function (extendedHttpService, appSettings) {
            var serviceBase = appSettings.apiServiceBaseUri;

            this.getAllRoles = function () {
                return extendedHttpService.forceGet(serviceBase + "api/admin/roles/");
            };

            this.createRole = function (roleName) {
                return extendedHttpService.postString(serviceBase + 'api/admin/roles', roleName);
            };

            this.updateRole = function (role) {
                return extendedHttpService.put(serviceBase + 'api/admin/roles', role);
            };
        }]);
})(angular, app);

This is much neater, but I'd love to hear of other approaches that I could have used in this scenario.
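One alternative worth sketching (an assumption on my part – not something from the post): an AngularJS request interceptor can append the cache-busting parameter to every GET automatically, so callers never need to remember forceGet. The transform is just a function on the request config:

```javascript
// Sketch of an $http request interceptor that cache-busts GETs.
// The pure transform is separated out so it is easy to test in isolation.
function addCacheBuster(config) {
    if (config.method === 'GET') {
        // Pick the right separator in case the URL already has a query string.
        var separator = config.url.indexOf('?') === -1 ? '?' : '&';
        config.url += separator + 'rnd=' + new Date().getTime();
    }
    return config;
}

// Registration (module name 'app' assumed):
// app.factory('noCacheInterceptor', function () { return { request: addCacheBuster }; });
// app.config(['$httpProvider', function ($httpProvider) {
//     $httpProvider.interceptors.push('noCacheInterceptor');
// }]);
```

The trade-off is that every GET in the app gets the parameter, whereas forceGet keeps it opt-in.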

Separating responsibility client-side in MVC with RequireJs and RMP

In my previous post, I talked about splitting my Knockout view model from the rest of my JavaScript by applying the Revealing Module Pattern to my code. I mentioned at the end of the post that I could take it one step further: split the code into separate script files, then use something like RequireJs to organise the dependencies. It turned out to be a bit more fiddly than I expected, so here's my experience in the hope that it may point other newbies in the right direction (or indeed, prompt any JS veterans to point out my mistakes).

Firstly, I want to tackle the important issue of “why do it if it’s complicated?”. Granted, what I had before worked but, as with most of my development, my code is an evolutionary product and whilst I always aim for it to be complete, I always work under the expectation that either I am going to have to come back here to make further improvements or, more importantly, someone else might. With that in mind, whilst there may seem to be an unjustified overhead initially in spending time introducing another library and re-organising existing, working code, I do believe that it’s one worth paying. Secondly, it’s mostly only complicated because of the learning curve and that’s really a one-off that can benefit in the long term elsewhere.

I had already tidied up my 'viewmodel' script by separating the various areas of concern into three different modules: viewmodel, view and what I referred to as rendering, which was responsible for manipulating the view based on user interactions and start-up defaults. These modules were contained in a separate file, leaving only the following script inside the actual view:

@section Scripts
{
    <script src="~/scripts/app/page.details.js"></script>
    <script type="text/javascript">
        (function () {
            var viewModel = details.initialiseViewModel(ko.mapping.fromJS(@Html.Raw(Json.Encode(Model))));
            detailsView.initialiseView(viewModel);
        })();
    </script>
}

The next step was about separating out the modules into individual script files and managing the dependencies between them. Before I did this, I revisited the responsibility question of each module and decided that I wanted to make some changes.

I wanted the viewmodel module solely responsible for view data, computed data and commands, but clean of actual view components (ids, classes etc.) – those should be handled in the view module.

The rendering module was bugging me as it shared some of the view module's responsibility. What I was actually missing was a module that acts as a sort of controller, so I decided to clean up the rendering module, let the view module handle click events and manipulation of view model data, and introduce a dataService module solely responsible for conversing with a remote service. This did not need to be defined per page, though, so I created it under the RMP pattern within my scripts/apps folder, as I wanted it initialised on startup.

What I ended up with is shown below – basically MVVM without the Model, as we already have a domain model on the back end which is converted into DTOs for the front end, so I did not need another model. A Model can also act as the DAL, but I don't like that coupled approach; all of my data access goes through multiple layers, leaving my web app completely clean of the DA technology.

image

The view model module declares the properties and functions I want my view to be able to bind to. I see the functions acting like the Command pattern in WPF and Silverlight, so where I need to talk to the data service, I use the following knockout notation to go through the view model:

data-bind="click: deleteCompanyCommand"

Where I am only manipulating the view, I am using plain old event handling in my view module to control this:

$("#createCompanyBtn").on("click", initialiseCreateCompanyDialog);

Both the View and ViewModel modules are hooked up via the Require library with the following notation. Note I opted to make the View dependent on the ViewModel module:

ViewModel:

define(function () {

    var initialiseViewModel = function (data) {
        ...
    };

    return {
        initialiseViewModel: initialiseViewModel
    };
});

View:

define(['pageScripts/viewModel'], function (viewModel) {

    var vm,
        ...

        initialiseView = function (data) {
            vm = viewModel.initialiseViewModel(data);
            viewSubscriptions();
            wireEvents();
        };

    return {
        initialiseView: initialiseView
    };
});

Then, inside the html, I have the following script:

@section Scripts
{
    <script type="text/javascript">

        require.config({
            baseUrl: "/scripts",
            paths: {
                pageScripts: "views/index"
            }
        });

        require(['pageScripts/view'], function (view) {
            view.initialiseView(ko.mapping.fromJS(@Html.Raw(Json.Encode(Model))));
        });
    </script>
}

The config section sets up my base url and then provides me with an alias to easily refer to the scripts for that particular page – remember, I don't want these scripts loaded elsewhere, only on this page. The require call loads in the dependency for my View which, in turn, has already declared its dependency on the view model.

I did find that using explicit or relative paths inside the require call seemed to result in undefined dependencies further down the chain, but I would recommend setting up those path aliases for cleanliness in any case.

I also have the option of using the RequireJS optimiser to negate the impact of loading multiple resources, as it will combine and minify all my scripts into one resource for download. Very nice indeed :)
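As a sketch of what that might look like (the file names and output path here are my own illustration, not from this project), an r.js build profile for the page above could be:

```javascript
// Hypothetical r.js build profile (build.js), run with: node r.js -o build.js
// Traces pageScripts/view and its dependencies (the view model module),
// then combines and minifies them into a single file.
({
    baseUrl: "scripts",
    paths: {
        pageScripts: "views/index"
    },
    name: "pageScripts/view",
    out: "scripts/built/index.min.js"
})
```

The page would then point its require call at the built file instead of the individual modules.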

I've found this whole app an interesting challenge in determining which web technologies – in particular, third-party javascript/css packages – work well together. I certainly found that Twitter Bootstrap, with its attribute-based approach, fits in nicely with Knockout's MVVM approach, and both have enabled me to take a clean, separated line of attack when managing my javascript. Coming from a C# and Xaml background, that's a really pleasant and familiar feel to how I like to code.

Separating Knockout Viewmodel from View

As mentioned in a previous post, I'm becoming a big fan of Knockout. I don't favour MVVM over MVC per se; I simply like the idea of being able to manipulate, and react to, the model changing on the client side without the need for a return server trip every time.

Whilst it is perfectly normal for a Knockout view model to be declared inside the cshtml View to which it is bound, I have found that very quickly, the script can become quite bulky and difficult to maintain.

Take for instance the following script inside a View of mine:

@section Scripts
{
    <script type="text/javascript">
        /// <reference path="../jquery-1.9.1.js" />
        /// <reference path="../knockout-2.2.1.js" />
        /// <reference path="knockout.extensions.js" />
        /// <reference path="../knockout.mapping-latest.js" />
        /// <reference path="../jquery-ui-1.10.2.js" />
        (function () {

            var viewModel = ko.mapping.fromJS(@Html.Raw(Json.Encode(Model)));
            viewModel.currentView = ko.observable('company');
            viewModel.editMode = ko.observable(false);
            viewModel.editMode.subscribe(detailsRendering.toggleModelEdit);

            var btnActiveClass = 'btn-success';

            var manageSelection = function () {
                removeSelection();
                viewModel.currentView($(this).data("view-id"));
                addSelection($(this));
            };

            var addSelection = function (element) {
                element.addClass("btn-primary");
            };

            var removeSelection = function () {
                $(".detailsSelector").removeClass("btn-primary");
            };

            var toggleModelEdit = function (edit) {
                if (edit) {
                    $("#lockModelBtn").removeClass(btnActiveClass);
                    $("#unlockModelBtn").addClass(btnActiveClass);
                } else {
                    $("#unlockModelBtn").removeClass(btnActiveClass);
                    $("#lockModelBtn").addClass(btnActiveClass);
                }
            };

            $(".detailsSelector").on('click', manageSelection);
            $("#lockModelBtn").on('click', function () { viewModel.editMode(false); });
            $("#unlockModelBtn").on('click', function () { viewModel.editMode(true); });

            addSelection($("#companyDetailsSelector"));

            toggleModelEdit(false);

            ko.applyBindings(viewModel);
        })();
    </script>
}

It's not very complicated, but already I am finding it a little fiddly to follow. And I'm not yet finished with the code, so it'll only get bigger. I could separate the areas responsible for initialising the view model from those reacting to user interaction with multiple script tags, but a) that would add extra js code and b) there is a lot of overlap between the two areas of responsibility, so scope is an important issue.

What I opted to go with is the Revealing Module Pattern (RMP), which provides a nice separation of concerns in simplistic fashion. I created three RMP modules:

1. Initialise View Model

2. Initialise View

3. Handle User Interaction with the View
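Before showing the result, here's the generic shape an RMP module takes – a minimal, self-contained illustration (not part of the app's code): private state lives inside the closure and only the functions listed in the returned object are exposed.

```javascript
// Generic Revealing Module Pattern skeleton.
var counterModule = (function () {
    var count = 0; // private: not reachable from outside the closure

    var increment = function () {
        count += 1;
        return count;
    };

    var current = function () {
        return count;
    };

    // Only what is "revealed" here becomes the module's public API.
    return {
        increment: increment,
        current: current
    };
})();
```

Calling `counterModule.increment()` works, while `counterModule.count` is undefined – which is exactly the encapsulation the three modules below rely on.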

The resulting script is as follows:

/// <reference path="../jquery-1.9.1.js" />
/// <reference path="../knockout-2.2.1.js" />
/// <reference path="knockout.extensions.js" />
/// <reference path="../knockout.mapping-latest.js" />
/// <reference path="../jquery-ui-1.10.2.js" />
var details = function () {

    var initialiseViewModel = function (data) {
        var viewModel = data;
        viewModel.currentView = ko.observable('company');
        viewModel.editMode = ko.observable(false);
        viewModel.editMode.subscribe(detailsRendering.toggleModelEdit);

        ko.applyBindings(viewModel);

        return viewModel;
    };

    return {
        initialiseViewModel: initialiseViewModel
    };
}();

var detailsView = function () {

    var wireEvents = function (vm) {
        $(".detailsSelector").on('click', function () { detailsRendering.manageSelection(vm, $(this)); });
        $("#lockModelBtn").on('click', function () { vm.editMode(false); });
        $("#unlockModelBtn").on('click', function () { vm.editMode(true); });
    },

    initialiseView = function (vm) {
        wireEvents(vm);
        detailsRendering.manageSelection(vm, $("#companyDetailsSelector"));
        detailsRendering.toggleModelEdit(false);
    };

    return {
        initialiseView: initialiseView
    };
}();

var detailsRendering = function () {

    var btnActiveClass = 'btn-success',
        btnPrimaryClass = 'btn-primary',

        addSelection = function (element) {
            element.addClass(btnPrimaryClass);
        },

        removeSelection = function () {
            $(".detailsSelector").removeClass(btnPrimaryClass);
        },

        toggleModelEdit = function (edit) {
            if (edit) {
                $("#lockModelBtn").removeClass(btnActiveClass);
                $("#unlockModelBtn").addClass(btnActiveClass);
            } else {
                $("#unlockModelBtn").removeClass(btnActiveClass);
                $("#lockModelBtn").addClass(btnActiveClass);
            }
        },

        manageSelection = function (viewModel, element) {
            removeSelection();
            viewModel.currentView(element.data("view-id"));
            addSelection(element);
        };

    return {
        toggleModelEdit: toggleModelEdit,
        manageSelection: manageSelection
    };
}();

The first module, "details" (named after the view), provides a function for setting up the view model and returning it. It does nothing else and knows of nothing else. The only dependency it has is on the initial input of the model, which has come from the server.

The second module, "detailsView", expects the view model and sets up the components in the view, from wiring the click events to setting up the default view, by calling into the third module, "detailsRendering", which again expects the view model as an input and provides the functionality for manipulating the view, based either on the user's input or manually, as in the detailsView initial setup. These three modules are in the same js file but could quite easily be separated out, in which case I'd be wise to use something like Require to handle dependencies and also minimise the load.

To get the ball rolling, the html embedded script now looks like this:

@section Scripts
{
    <script src="~/scripts/app/page.details.js"></script>

    <script type="text/javascript">
        (function () {
            var viewModel = details.initialiseViewModel(ko.mapping.fromJS(@Html.Raw(Json.Encode(Model))));
            detailsView.initialiseView(viewModel);
        })();
    </script>
}

Rather than returning the view model from the initial function, I could have called the initialiseView method from within it, but this way I have the option of making further calls if need be without having to chain. Having spent most of my time in C# and Xaml, I'd love to hear from html and javascript folk on their preferred approach to client-side MVVM.

Script List missing in IE10 F12 Debugger Tool

Recently I had a "where the flip has that gone" moment with the IE debugger tools when I wanted to set a breakpoint on some JavaScript code inside a separate file. Clicking on the Script tab brought up the html code and all the script embedded within it, but when I hit "Start Debugging", I was expecting a Script List drop-down to appear beside it:

IE10 F12 debugger window

I knew I had seen it there before, as a brief search on msdn revealed (for IE8). After wasting time looking 'everywhere' for it, I finally found the blighter in plain view, just a little more subtle:

image

Obvious once spotted but thought this might save anyone else a few precious minutes!

Creating DTOs with less code – a good thing?

As the old adage goes, more code equals more bugs. Many OO principles are either based upon or include some basis in writing less code, and the reasons are obvious. So when it comes to DTOs, therein lies a little conundrum: writing a class that represents another class (or multiple classes) feels like breaking the rules a little. Take for instance the following three domain model classes (and their base class):

image

The Company class mostly comprises complex properties (not all shown for brevity), whereas the Address class is all primitive types plus an enum. Contact is mostly primitive types. Just for background info, I’m using these classes within an EF Code First approach so they are not just my domain model but my data objects as well.

Suppose I have a service, CompanyService that delivers up a company (or companies). This service is consumed by an MVC controller. I have a view that displays the following information:

Company Name, Company Created Date, Company Modified Date, Company Primary Address, Principal Contact Full Name, Principal Contact Email Address

I could grab the company, use the whole object as my model and return the view. MVC calculates the required fields on the server side, EF lazy loading ensures that only the properties required are retrieved and only the necessary information gets posted into the View returned. That’s nice as I haven’t had to create any extra DTO classes to flatten my data. However, there are a couple of snags:

1. What happens when I am using the model for editing and I want to place validation attributes on my properties? Most attributes (Required, MaxLength etc.) live in the System.ComponentModel.DataAnnotations namespace and perhaps it would be fine to use these – especially if you are already using them in a Code First manner to shape the database. But what if you want to use some of the MVC model metadata attributes, such as ShowForDisplay, Order or IsReadOnly? You do not want a dependency on System.Web.Mvc in your Domain Model.

2. You decide you want to use an API Controller at some point – perhaps to serve some data to a client-side model instead. You grab the same company object and bam, you hit an exception:

The type System.Data.Entity.DynamicProxies.SomeClass_SomeGUID was not expected…

That clever lazy loading, change tracking Entity Framework has caught you out with its dynamic proxy classes. Your option is to disable proxy generation and eager load the child properties you want – but how do you know which children you want for each scenario? You don’t. You can’t.

People can use the DRY principle to justify not writing out separate DTO classes, but the simple argument is that only a ViewModel can know what needs to be shown, what can be edited and how something may need to be displayed. Granted, there is an overlap with validation, and certainly there will be times, often in read-only scenarios, where there is a 100% mapping between the Domain Object and the DTO. This would be true in the case of the Address class above – an AddressDTO class would have the exact same properties. But in edit mode, I may need more flexibility with validation and other attributes. I may also decide to make all but Name, NameNumber and Postcode read-only and insist on using a postcode address finder utility. But I wouldn’t want the other fields to be read-only on my domain model.

If you are in a scenario where you have multiple user interfaces, you may want to consider breaking out your DTO objects into a separate project. If your front-end apps span different technologies, it may even be worth using a fluent approach with validators/behaviours specific to each technology. But beware: separating a DTO from its intended target may look like the DRY principle at work, yet caution must be paid as to who is using it and for what, as developers will always look for something that closely matches their needs and often fall into the habit of tacking on their own requirements.

In short, I would recommend keeping your DTOs separate for each app. Use a mapping tool like AutoMapper to minimise coding and don’t be afraid to ‘break’ the DRY principle a little – using one DTO for returning search results and another for viewing details means that your search will be faster. If you can use inheritance to enforce a little DRYness in your DTOs, great, but don’t get hung up on it.
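To make the flattening idea concrete, here it is in client-side terms (JavaScript, since the DTO ends up serialised to JSON anyway – on the server this is what the AutoMapper mapping would produce; all property names here are hypothetical, not from my actual model):

```javascript
// Illustrative only: flattening a nested company object into just the
// fields the view needs. A DTO deliberately carries no more than this.
function toCompanyDto(company) {
    return {
        name: company.name,
        dateCreated: company.dateCreated,
        dateModified: company.dateModified,
        primaryAddressPostcode: company.primaryAddress.postcode,
        principalContactFullName: company.principalContact.firstName + ' ' +
                                  company.principalContact.lastName,
        principalContactEmail: company.principalContact.email
    };
}
```

Note how the nested Address and Contact objects collapse into primitives – exactly the shape the view asked for, and nothing a serialiser can trip over.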

Best Approach? EF5, Design Patterns, MVC and Knockout

One of the things I particularly like about Silverlight is that its rich client side model enables easy manipulation of data in a decoupled, data-bound fashion without the need to perform projection and/or switch between two different languages such as C# and JavaScript. What I don’t enjoy is the fact that in any distributed system that involves Silverlight, you will have to use some sort of asynchronous data-grab architecture, so any lazy-loading or change tracking that may have been available on the server side gets lost. When working with MVC however, I find that even though I may have the full capabilities of EF at my disposal within the controller, the occasions when I will push the same EF domain model classes into my view are few and far between. The main reasons:

1. Views usually only require a subset of the data so I don’t want to bulk up the response with more than I need.

2. JSON serialisation does not handle the proxy versions of classes designed to handle change tracking.

With point 2, you may come across this if you use Web API controllers and hit the error,

The type System.Data.Entity.DynamicProxies.SomeClass_SomeGUID was not expected…

The reason point 2 is so important for me is that, coming back to Asp.Net from Silverlight, the desire to minimise not just the number of return trips to the server but also the amount of data sent is too strong to ignore. With SL, the view is already on the client and all that happens is the data that is required is asked for and given. With vanilla MVC, you can make a request that returns the same view just with different data and sometimes the data can be a subset of what you already had. So I wanted to get to grips with an MVC approach that would satisfy these wishes and give me a chance to get to know MVC as it is. To this end, I opted to take a look at KnockoutJs as a) it offers what I am looking for and b) it seems to be the most prevalent of the MVVM js libraries out there right now. I’m not saying it’s the best or that there aren’t other approaches, but as a contractor I need to think not just about keeping ahead of the curve but, in this instance, about getting myself up to date with it.

I have spent many an hour looking at various approaches covering all aspects of MVC. From the vanilla flavour to a full blown SPA (courtesy of Mr John Papa), I have settled on something in between – I don’t want a clunky ‘return a view every time’ approach, but I don’t feel ready for the single page approach where both the view and data is dynamically grabbed using JavaScript. I love JavaScript for its flexibility but struggle with it because of its flexibility! We C# developers do enjoy the seat-belt constraints of our strongly typed language :)

Before I had even settled on a web approach, I knew that I would want to be able to target more than one front-end, and not just via the web, so with that in mind I wanted to abstract my DAL and not just by using EF. I opted to use a POCO domain model with a Code First approach and encapsulate that within a set of Repositories that themselves were housed within a UnitOfWork class. I then created a set of services to manage specific areas of logic (which could still overlap), i.e. a CompanyService responsible for basic CRUD ops as well as more specific methods. It is these services that are injected into my controllers via Ninject. Whilst this abstraction provides me with more testable code and allows me to control what operations are performed and how they are performed, it does mean that I lose some of the finer points of EF functionality such as Eager Loading. However, I have found in development that if something is available, sooner or later someone will come along and decide to take it, regardless of whether or not it is always needed. Forcing developers to declare specific business functionality in the service for their needs, or simply allowing them to delay-load the required resources as and when required, minimises the risk of data bloat where each developer adds on their own specific data needs until everyone is complaining of performance issues.

So my layers up to the UI look a little like this:

Domain Model – Entities + other common elements (Interfaces, enums etc.)
EF – DbContext, DbSet<Company>, Fluent API Configurations
Repositories – Using generic DataEntityRepository<T>
Unit Of Work – Combines all repositories and provides Commit() method
Services – CompanyService, i.e. GetAllCompanies()

note: ‘Services’ here is analogous to a BLL, not a web or windows service

So, on to the MVC app. I have taken an empty application and applied the Twitter Bootstrap templating to it. I created my own authentication mechanism through the services, based on Forms Authentication. I added KnockoutJs through Nuget and set about creating two initial views – one to list all companies, and one to view details.

The standard controller approach in MVC would be to call the company service, grab all companies, then either inject the result as the model into the View or project the results into a DTO first. But as this means that my model is now static on the client side, if I want to manipulate it, I need to make a return trip to the server.

Take for instance the following scenario:

1. My Index() method in the home controller returns a list of all companies represented as DTOs.

2. My index view has a search text box to allow the user to filter the companies based on a name value.

3. To filter the companies, the user must enter a value, then hit the filter/ search button.

4. The home controller has an Index(string q) method that gets all companies whose name contains the string represented by q. It then returns the Index view again but with a subset of the original data as the model.

It’s a little clunky, isn’t it?

What I really want is for the filter text box to immediately filter the current list as the user types. And for this, I need my model to be available client side.

There are a couple of approaches to do this with Knockout but first off, I went down a web API with an Ajax call approach:

1. HomeController.Index() returns nothing but the view.

2. A new ApiController is created, CompanyController, which contains the method GetAllCompanies and returns an IEnumerable of the DTO representation of a company.

3. Index view references a JavaScript file: vm.index.js, that contains the code to call the web API and set up the view model:

$(function () {
    $.getJSON("/API/company", function (data) {
        var viewModel =
        {
            // data
            companies: ko.observableArray(ko.toProtectedObservableItemArray(data)),
            filterText: ko.observable("")
        };

        viewModel.companyCount = ko.computed(function () {
            if (this.companies() == null) {
                return 0;
            }
            return this.companies().length;
        }, viewModel);

        viewModel.filteredCompanies = ko.computed(function () {
            var filter = this.filterText().toLowerCase();
            if (!filter) {
                return this.companies();
            } else {
                return ko.utils.arrayFilter(this.companies(), function (company) {
                    return company.name().toLowerCase().indexOf(filter) >= 0;
                });
            }
        }, viewModel);

        ko.applyBindings(viewModel);
    });
});

4. Set up the Index View to use Knockout databinding – first for the company list:

<table class="table table-striped">
    <thead>
        <tr>
            <th>Name</th>
            <th>Date Created</th>
            <th>No. of Teams</th>
            <th>Principal Contact</th>
        </tr>
    </thead>
    <tbody data-bind="foreach: filteredCompanies">
        <tr>
            <td data-bind="text: name"></td>
            <td data-bind="date: dateCreated, dateFormat: 'DD/MM/YY'"></td>
            <td data-bind="text: teamCount"></td>
            <td data-bind="text: principalContact"></td>
        </tr>
    </tbody>
</table>

5. And the filter text:

<input type="text" data-bind="value: filterText, valueUpdate: 'afterkeydown'" placeholder="filter by name" />

 

Nothing more is needed – the JavaScript grabs the initial data and populates the companies array which in turn populates the filteredCompanies array. The filter text then instigates the refresh of the filteredCompanies array.
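Stripped of the Knockout plumbing, the logic inside the filteredCompanies computed boils down to a case-insensitive contains check (a plain-JS restatement, with the observables unwrapped, purely to show the core of it):

```javascript
// Core of the filteredCompanies computed: a case-insensitive substring
// match on each company's name; an empty filter returns the full list.
function filterCompanies(companies, filterText) {
    var filter = (filterText || '').toLowerCase();
    if (!filter) {
        return companies;
    }
    return companies.filter(function (company) {
        return company.name.toLowerCase().indexOf(filter) >= 0;
    });
}
```

Knockout’s job is simply to re-run that logic whenever filterText or companies changes and push the result back through the bindings.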

This is a much better user experience, not to mention more efficient. Granted, we must be careful about how much data we load on to the client side, but this gives even more weight to using DTOs to avoid sending data that is not required.

An area I am not so certain of, though, is the best approach for getting the data to the client in the first place. Here we are using an Ajax call to a web API, but firstly that involves a second trip to the server on top of the initial page request, and secondly, what about actions where parameters are passed in? Take the details view for instance – it might look something like /Home/Index/3. The id of 3 would be picked up by the method in the controller, i.e. Index(int id), but how do we get hold of this id were we to make a call through to the corresponding web API method (API/companies/id)?

We could scrape it out of the url I suppose, but that seems a little nasty. So perhaps it would be better to allow controller methods to do the necessary work and return the model in the first call, then convert the model into JSON to be used within the client-side script:

(function () {
    var viewModel = ko.mapping.fromJS(@Html.Raw(Json.Encode(Model)));
    ko.applyBindings(viewModel);
})();

This allows us to have our client-side model to manipulate, minimises the trips to the server for the initial page load and handles the issue of controller action parameters.

The one downside I have found is that whereas previously I liked to house my view model code in a separate script file, this doesn’t work well with the above conversion code, as the server-side directive ‘@Html.Raw(….’ cannot be processed outside of the cshtml file itself.

I would love to hear from anyone else who has a preference on how best to approach this.

Format dates using Knockout JS Custom Bindings with Moment.js

I have spent the past 5 years predominantly within the Xaml arena and whilst I have appreciated the new skills I have learnt in terms of the specific technology, one of the greatest benefits to me has been the coupled approach towards design patterns synonymous with xaml. I’m talking about MVVM and composite applications, and how these, in turn, introduced me to the concepts of the IOC, Repository and Unit Of Work patterns. Whilst I realise these have been around for a lot longer than Xaml, their prevalence in the asp.net web forms arena was non-existent, at least for me at the time.

Fast forward 5 years and with MVC, I am fast rekindling my love of asp.net. Not only is the base framework a significant improvement on web forms, but the explosion of complementary JS frameworks from JQuery to Knockout has really enriched the choice for web developers to pick the right tools for the job at hand.

Recently, I have been looking at Knockout and Web Api with a view to minimising the number of repeated page or partial-page refreshes when updating data. Using JQuery Ajax to grab data is a great way to remove the overhead of returning page details on top of the actual data itself. Coupling this with knockoutjs to provide an intuitive workflow is a natural choice, but my first pass immediately demonstrated the need for some formatting:

image

These came about from the following Knockout syntax:

<tbody data-bind="foreach: companies">
    <tr>
        <td data-bind="text: name"></td>
        <td data-bind="text: dateCreated"></td>
    </tr>
</tbody>

 

In xaml, where databinding is (relatively) automatic, a simple StringFormat would provide the option here to mould the date as required. With knockoutjs, one way I found to manipulate data bindings is the custom bindings feature, coupled with a suitable 3rd party framework if required. Moment.js is an excellent library for manipulating dates. Using a custom binding with knockout and providing the option to pass in a custom format string, I came up with the following:

ko.bindingHandlers.date = {
    update: function (element, valueAccessor, allBindingsAccessor, viewModel) {
        var value = valueAccessor();
        var formatString = allBindingsAccessor().formatString;
        var date = moment(value());
        if (formatString == null) {
            $(element).text(date.format('DD/MM/YY'));
        }
        else {
            $(element).text(date.format(formatString));
        }
    }
};

This will look for the formatString parameter and will default to the UK format of DD/MM/YY if it is not present. In my markup, I now have the following:

<td data-bind="date: dateCreated, formatString: 'MM-DD-YY'"></td>
<td data-bind="date: dateModified"></td>

Which gives me the following output:

image

Much cleaner, although this is just a start – the next step would be to auto-detect the browser’s current UI culture and apply that as the default.
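As a rough sketch of that next step – entirely my assumption, not something from the binding above – the browser’s locale (navigator.language) could drive the default format via the standard Intl API rather than a hard-coded token string:

```javascript
// Sketch: produce a locale-appropriate short date instead of hard-coding
// 'DD/MM/YY'. In a browser, navigator.language would supply the locale;
// 'en-GB' is used as a stand-in in the tests below.
function formatForLocale(date, locale) {
    return new Intl.DateTimeFormat(locale, {
        day: '2-digit',
        month: '2-digit',
        year: 'numeric'
    }).format(date);
}
```

Inside the binding handler, the no-formatString branch could then call formatForLocale(value(), navigator.language) as its default instead of date.format('DD/MM/YY').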

MS Test missing tests

Just a quick post to highlight a bit of a ‘doh!’ moment I had earlier today that may catch out anyone else whilst in half-asleep mode. In adding a new service to my solution that encapsulates CRUD functionality for a new entity, I went about adding new tests to cover all elements – unit tests and integration tests covering everything from validation to unit of work and repositories. When running through these tests and doing the work required to make them pass, I ended up with one test failing well before I expected to be down to just the one. Digging into the other tests, I came across a bunch that were being missed out entirely.

I have JustTest installed from Telerik, which gives me the option of running specific tests via the context menu, so I tried this out and, sure enough, the test sprang to life in Telerik’s Unit Test console and promptly failed as it should have done. So why couldn’t MS Test see it?

The answer was simple – I had not declared my class as public. I know my tests have to be declared in a public class, but it was such an easy oversight that, had I not had another Unit Testing framework to compare against, I may not have spotted it as easily as I did.

Whilst the improvements to the built-in testing framework in VS 2012 Ultimate are welcome, I am surprised that the compiler could not at least have picked up that I had a class decorated with the [TestClass] attribute that was not public and given me a warning. Operations that silently fail to run are dangerous to the integrity of your code – it’s like an empty catch block – the silent code killer!

So, if you are using MS Test (this may apply to other frameworks too, though JustTest from Telerik managed to discover the non-public test class), just keep an eye on your test class declarations.