
6 posts tagged with "unit testing"


Instant Stubs with JSON.Net (just add hot water)

I'd like you to close your eyes and imagine a scenario. You're handed a prototype system. You're told it works. It has no documentation. It has 0 unit tests. The hope is that you can take it on, refactor it, make it better and (crucially) not break it. Oh, and you don't really understand what the code does or why it does it either; information on that front is, alas, sorely lacking.

This has happened to me; alas, it's not that unusual. The common advice handed out in this situation is: "add unit tests before you change it". That's good advice. We need to take the implementation that embodies the correctness of the system and create unit tests that set that implementation in stone. But what if the system you're hoping to add tests to takes a number of large and complex inputs from some external source and produces a similarly large and complex output?

You could start with integration tests. They're good but slow and, crucially, they depend upon the external inputs being available and unchanged (which is perhaps unlikely). What you could do (what I have done) is debug a working system. At each point that an input is obtained I have painstakingly transcribed the data, which allows me to subsequently hand-code stub data. There comes a point when this is plainly untenable; it's just too much data to transcribe. At this point the temptation is to think "it's okay; I can live without the tests. I'll just be super careful with my refactoring... It'll be fine It'll be fine It'll be fine It'll be fine".

Actually, it probably won't be fine. And even if it is (miracles do happen) you're going to be fairly stressed as you wonder if you've been careful enough. What if there were another way? A way that wasn't quite so hard but that allowed you to add tests without requiring three months of hand-coding...

Instant Stubs#

What I've come up with is a super simple utility class for creating stubs / fakes. (I'm aware the naming of such things can be a little contentious.)

```cs
using Newtonsoft.Json;
using System;
using System.IO;

namespace MakeFakeData.UnitTests
{
    public static class Stubs
    {
        private static JsonSerializer _serializer =
            new JsonSerializer { NullValueHandling = NullValueHandling.Ignore };

        public static void Make<T>(string stubPath, T data)
        {
            try
            {
                if (string.IsNullOrEmpty(stubPath))
                    throw new ArgumentNullException(nameof(stubPath));
                if (data == null)
                    throw new ArgumentNullException(nameof(data));

                using (var sw = new StreamWriter(stubPath))
                using (var writer = new JsonTextWriter(sw)
                {
                    Formatting = Formatting.Indented,
                    IndentChar = ' ',
                    Indentation = 2
                })
                {
                    _serializer.Serialize(writer, data);
                }
            }
            catch (Exception exc)
            {
                throw new Exception($"Failed to make {stubPath}", exc);
            }
        }

        public static T Load<T>(string stubPath)
        {
            try
            {
                if (string.IsNullOrEmpty(stubPath))
                    throw new ArgumentNullException(nameof(stubPath));

                using (var file = File.OpenText(stubPath))
                using (var reader = new JsonTextReader(file))
                {
                    return _serializer.Deserialize<T>(reader);
                }
            }
            catch (Exception exc)
            {
                throw new Exception($"Failed to load {stubPath}", exc);
            }
        }
    }
}
```

As you can see this class uses JSON.Net and exposes 2 methods:

Make
Takes a given piece of data and uses JSON.Net to serialise it as JSON to a file. (NB I choose to format the JSON for readability and to exclude null values; both are entirely optional.)
Load
Takes the given path, loads the associated JSON file and deserialises it back into an object.

The idea is this: we take our working implementation and, wherever it extracts data from an external source, we insert a temporary statement like this:

```cs
var data = _dataService.GetComplexData();
// Just inserted so we can generate the stub data...
Stubs.Make($"{System.AppDomain.CurrentDomain.BaseDirectory}\\data.json", data);
```

The next time you run the implementation you'll find the app generates a data.json file containing the complex data serialized to JSON. Strip out your Stubs.Make statements from the implementation and we're ready for the next stage.

Using your JSON#

What you need to do now is take the new and shiny data.json file and move it into your unit test project, making sure it's included within that project. Also, for each JSON file you have, the Build Action in VS needs to be set to Content and the Copy to Output Directory setting to "Copy if newer".

Then within your unit tests you can write code like this:

```cs
var dummyData = Stubs.Load<ComplexDataType>("Stubs/data.json");
```

Which pulls in your data from the JSON file and deserialises it into the original types. With this in hand you can plug together a unit test based on an existing implementation which depends on external data much faster than the hand-cranked method of old.
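By way of illustration, a test built on this might look something like the sketch below; `IDataService`, `ComplexDataType`, `Transformer` and the asserted value are hypothetical stand-ins for your own types, and I've borrowed Moq for the mocking:

```cs
[TestClass]
public class TransformerTests
{
    [TestMethod]
    public void Transform_produces_the_expected_output()
    {
        // Arrange: stand up the captured data in place of the external source
        var dummyData = Stubs.Load<ComplexDataType>("Stubs/data.json");
        var dataServiceMock = new Mock<IDataService>();
        dataServiceMock
            .Setup(x => x.GetComplexData())
            .Returns(dummyData);
        var transformer = new Transformer(dataServiceMock.Object);

        // Act
        var result = transformer.Transform();

        // Assert: pin down the behaviour of the existing implementation
        Assert.AreEqual(42, result.ImportantTotal);
    }
}
```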

Finally, before the wildebeest of TDD descend upon me howling and wailing, let me say again: I anticipate this being useful when you're trying to add tests to something that already exists but is untested. Clearly it would be better not to be in this situation in the first place.

He tasks me; he heaps me.... I will wreak that MOQ upon him.

Enough with the horrific misquotes - this is about Moq and async (that's my slight justification for robbing Herman Melville).

It's pretty straightforward to use Moq to do async testing thanks to its marvellous ReturnsAsync method. That means it's really easy to test a class that consumes an async API. Below is an example of a class that does just that (it so happens that this class is a Web API controller but that's pretty irrelevant to be honest):

```cs
namespace Proverb.Web.Controllers
{
    // ISageService included inline for ease of explanation
    public interface ISageService
    {
        Task<int> DeleteAsync(int id);
    }

    public class SageController : ApiController
    {
        ISageService _sageService;

        public SageController(ISageService userService)
        {
            _sageService = userService;
        }

        public async Task<IHttpActionResult> Delete(int id)
        {
            int deleteCount = await _sageService.DeleteAsync(id);
            if (deleteCount == 0)
                return NotFound();
            else
                return Ok();
        }
    }
}
```

To mock the _sageService.DeleteAsync method it's as easy as this:

```cs
namespace Proverb.Web.Tests.ASPNet.Controllers
{
    [TestClass]
    public class SageControllerTests
    {
        private Mock<ISageService> _sageServiceMock;
        private SageController _controller;
        // NB _sage is a Sage test fixture; its declaration was elided in the original listing

        [TestInitialize]
        public void Initialise()
        {
            _sageServiceMock = new Mock<ISageService>();
            _controller = new SageController(_sageServiceMock.Object);
        }

        [TestMethod]
        public async Task Delete_returns_a_NotFound()
        {
            _sageServiceMock
                .Setup(x => x.DeleteAsync(_sage.Id))
                .ReturnsAsync(0); // This makes me *so* happy...

            IHttpActionResult result = await _controller.Delete(_sage.Id);

            var notFound = result as NotFoundResult;
            Assert.IsNotNull(notFound);

            _sageServiceMock.Verify(x => x.DeleteAsync(_sage.Id));
        }

        [TestMethod]
        public async Task Delete_returns_an_Ok()
        {
            _sageServiceMock
                .Setup(x => x.DeleteAsync(_sage.Id))
                .ReturnsAsync(1); // I'm still excited now!

            IHttpActionResult result = await _controller.Delete(_sage.Id);

            var ok = result as OkResult;
            Assert.IsNotNull(ok);

            _sageServiceMock.Verify(x => x.DeleteAsync(_sage.Id));
        }
    }
}
```

But wait.... What if there's like... Nothing?#

Nope, I'm not getting into metaphysics. Something more simple. What if the async API you're consuming returns just a Task? Not a Task of int but a simple old humble Task.

So to take our example we're going from this:

```cs
public interface ISageService
{
    Task<int> DeleteAsync(int id);
}
```

To this:

```cs
public interface ISageService
{
    Task DeleteAsync(int id);
}
```

Your initial thought might be "well that's okay, I'll just lop off the ReturnsAsync statements and I'm home free". That's what I thought anyway... and I was *WRONG*! A moment's thought and you realise that there's still a return type - it's just Task now. What you want to do is something like this:

```cs
_sageServiceMock
    .Setup(x => x.DeleteAsync(_sage.Id))
    .ReturnsAsync(void); // This'll definitely work... Probably
```

No it won't - void is not a real type and much as you might like it to, this is not going to work.

So right now you're thinking, well Moq probably has my back - it'll have something like ReturnsTask, right? Wrong! It turns out that's intentional - there's a discussion on GitHub about the issue. And in that discussion there's just what we need. We can use Task.Delay or Task.FromResult alongside Moq's good old Returns method and we're home free!

Here's one I made earlier...#

```cs
namespace Proverb.Web.Controllers
{
    // ISageService again included inline for ease of explanation
    public interface ISageService
    {
        Task DeleteAsync(int id);
    }

    public class SageController : ApiController
    {
        ISageService _sageService;

        public SageController(ISageService userService)
        {
            _sageService = userService;
        }

        public async Task<IHttpActionResult> Delete(int id)
        {
            await _sageService.DeleteAsync(id);
            return Ok();
        }
    }
}

namespace Proverb.Web.Tests.ASPNet.Controllers
{
    [TestClass]
    public class SageControllerTests
    {
        private Mock<ISageService> _sageServiceMock;
        private SageController _controller;

        readonly Task TaskOfNowt = Task.Delay(0);
        // Or you could use this equally valid but slightly more verbose approach:
        //readonly Task TaskOfNowt = Task.FromResult<object>(null);

        [TestInitialize]
        public void Initialise()
        {
            _sageServiceMock = new Mock<ISageService>();
            _controller = new SageController(_sageServiceMock.Object);
        }

        [TestMethod]
        public async Task Delete_returns_an_Ok()
        {
            _sageServiceMock
                .Setup(x => x.DeleteAsync(_sage.Id))
                .Returns(TaskOfNowt); // Feels good doesn't it?

            IHttpActionResult result = await _controller.Delete(_sage.Id);

            var ok = result as OkResult;
            Assert.IsNotNull(ok);

            _sageServiceMock.Verify(x => x.DeleteAsync(_sage.Id));
        }
    }
}
```
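As a brief aside: on more recent versions of the framework (.NET 4.6 onwards) `Task.CompletedTask` expresses the same "task of nothing" intent even more directly; a sketch:

```cs
_sageServiceMock
    .Setup(x => x.DeleteAsync(_sage.Id))
    .Returns(Task.CompletedTask); // No field required
```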

The Surprisingly Happy Tale of Visual Studio Online, Continuous Integration and Chutzpah

Going off piste#

The post that follows is a slightly rambly affair which is pretty much my journal of the first steps of getting up and running with JavaScript unit testing. I will not claim that much of this blog is down to me. In fact in large part it is me working my way through Mathew Aniyan's excellent blog post on integrating Chutzpah with TFS. But a few deviations from this post have made me think it worth keeping hold of this record for my benefit (if no-one else's).

That's the disclaimers out of the way now...

...Try, try, try again...#

Getting started with JavaScript unit testing has not been the breeze I’d expected. Frankly I’ve found the docs out there not particularly helpful. But if at first you don't succeed then try, try, try again.

So after a number of failed attempts I'm going to give it another go. Rushaine McBean says Jasmine is easiest, so I'm going to follow her lead and start there.

Let’s new up a new (empty) ASP.NET project. Yes, I know ASP.NET has nothing to do with JavaScript unit testing but my end goal is to be able to run JS unit tests in Visual Studio and as part of Continuous Integration. Further to that, I'm anticipating a future where I have a solution that contains JavaScript unit tests and C# unit tests as well. It is indeed a bold and visionary Brave New World. We'll see how far we get.

First up, download Jasmine from GitHub - I'll use v2.0. Unzip it and fire up SpecRunner.html and whaddya know... It works!

As well it might. I’d be worried if it didn’t. So I’ll move the contents of the release package into my empty project. Now let’s see if we can get those tests running inside Visual Studio. I’d heard of Chutzpah which describes itself thusly:

“Chutzpah is an open source JavaScript test runner which enables you to run unit tests using QUnit, Jasmine, Mocha, CoffeeScript and TypeScript.”

What I’m after is the Chutzpah test adapter for Visual Studio 2012/2013 which can be found here. I download the VSIX and install. Pretty painless. Once I restart Visual Studio I can see my unit tests in the test explorer. Nice! Run them and...

All fail. This makes me sad. All the errors say “Can’t find variable: Player in file”. Hmmm. Why? Dammit I’m actually going to have to read the documentation... It turns out the issue can be happily resolved by adding these 3 references to the top of PlayerSpec.js:

```js
/// <reference path="../src/Player.js" />
/// <reference path="../src/Song.js" />
/// <reference path="SpecHelper.js" />
```

Now the tests pass.

The question is: can we get this working with Visual Studio Online?

Fortunately another has gone before me. Mathew Aniyan has written a superb blog post called "Javascript Unit Tests on Team Foundation Service with Chutzpah". Using this post as a guide (it was written 18 months ago, which is frankly aeons in the world of the web) I'm hoping that I'll be able to, without too many tweaks, get JavaScript unit tests running on Team Foundation Service / Visual Studio Online (/ insert this week's rebranding here).

First of all in Visual Studio Online I'll create a new project called "GettingStartedWithJavaScriptUnitTesting" (using all the default options). Apparently "Your project is created and your team is going to absolutely love this." Hmmmm... I think I'll be the judge of that.

Let's navigate to the project. I'll fire up Visual Studio by clicking on the “Open in Visual Studio” link. Once fired up and all the workspace mapping is sorted I’ll move my project into the GettingStartedWithJavaScriptUnitTesting folder that now exists on my machine and add this to source control.

Back to Mathew's blog. It suggests renaming Chutzpah.VS2012.vsix to Chutzpah.VS2012.zip and checking certain files into TFS. I think Chutzpah has changed a certain amount since this was written. To be on the safe side I'll create a new folder in the root of my project called Chutzpah.VS2012, put the contents of Chutzpah.VS2012.zip in there and add it to TFS (being careful to ensure that no DLLs are excluded).

Then I'll follow steps 3 and 4 from the blog post:

3. In Visual Studio, open Team Explorer and connect to Team Foundation Service. Bring up the Manage Build Controllers dialog [Build -> Manage Build Controllers]. Select Hosted Build Controller and click on the Properties button to bring up the Build Controller Properties dialog.

4. Change Version Control Path to custom Assemblies to refer to the folder where you checked in the binaries in step 2.

In step 5 the blog tells me to edit my build definition. Well I don’t have one in this new project so let’s click on “New Build Definition”, create one and then follow step 5:

5. In Team Explorer, go to the Builds section and Edit your Build Definition which will run the javascript tests. Click on the Process tab. Select the row named Automated Tests. Click on the … button next to the value.

Rather than following step 6 (which essentially nukes the running of any .NET tests you might have) I chose to add another row by clicking "Add". In the dialog presented I changed the Test assembly specification to `**\*.js` and checked the "Fail build on test failure" checkbox.

Step 7 says:

7. Create your Web application in Visual Studio and add your QUnit or Jasmine unit tests to them. Make sure that the js files (that contain the tests) are getting copied to the build output directory.

The picture below step 7 suggests that you should be setting your test / spec files to have a Copy to Output Directory setting of Copy always. This did not work for me!!! Instead, setting a Build Action of Content and a Copy to Output Directory setting of Do not copy did work.

Finally I checked everything into source control and queued a build. I honestly did not expect this to work. It couldn’t be this easy could it? And...

Wow! It did! Here’s me cynically expecting some kind of “permission denied” error and it actually works! Brilliant! Look up in the cloud it says the same thing!

Fantastic!

I realise that I haven’t yet written a single JavaScript unit test of my own and so I’ve still a way to go. What I have done is quietened those voices in my head that said “there’s not too much point having a unit test suite that isn’t plugged into continuous integration”. Although it's not documented here I'm happy to be able to report that I have been able to follow the self-same instructions for Team Foundation Service / Visual Studio Online and get CI working with TFS 2012 on our build server as well.

Having got up and running off the back of other people's hard work, I'd best try and write some of my own tests now...

Unit testing MVC controllers / Mocking UrlHelper

I have put a name to my pain...#

And it is unit testing ASP.Net MVC controllers.

Well perhaps that's unfair. I have no problem unit testing MVC controllers.... until it comes to making use of the "innards" of MVC. Let me be more specific. This week I had a controller action that I needed to test. It looked a little like this:
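In sketch form it was something like this (the Widget names, view model type and route values are assumptions; the shape matches the description that follows):

```cs
public class WidgetController : Controller
{
    public JsonResult Edit(WidgetViewModel model)
    {
        // ... apply the edit ...

        // The anonymous object is easy to assert against... except for
        // the url property, which is driven by the controller's UrlHelper
        return Json(new
        {
            success = true,
            url = Url.Action("Details", "Widget", new { id = model.Id })
        });
    }
}
```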

Looks fine, right? It's an action that takes a simple object as an argument. That's OK. It returns a JsonResult. No worries. The JsonResult consists of an anonymous class. De nada. The anonymous class has one property that is driven by the controller's UrlHelper. Yeah, that shouldn't be an issue... Hold your horses sunshine - you're going nowhere!

Getting disillusioned#

Yup, the minute you start pumping in asserts around that UrlHelper-driven property you're going to be mighty disappointed. What, you didn't expect the result to be null? Damn shame.

Despite articles on MSDN about how the intention is for MVC to be deliberately testable, the sad fact of the matter is that there is a yawning hole in the testing support for controllers in ASP.Net MVC. Whenever you try to test something that makes use of controller "gubbins" you have serious problems. And unfortunately I didn't find anyone out there who could offer the whole solution.

After what I can best describe as a day of pain I found a way to scratch my particular itch. I found a way to write unit tests for controllers that made use of UrlHelper. As a bonus I managed to include the unit testing of Routes and Areas (well kind of) too.

MvcMockControllers updated#

This solution is heavily based on the work of Scott Hanselman who wrote and blogged about MvcMockHelpers back in 2008. Essentially I've taken this and tweaked it so I could achieve my ends. My version of MvcMockHelpers looks a little like this:
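In essence it's Hanselman's fake-HTTP-context helper with one crucial addition: an extension method that attaches a real RouteCollection to the controller's UrlHelper. A sketch of that shape (the individual setups here are assumptions rather than the verbatim listing):

```cs
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;
using Moq;

public static class MvcMockHelpers
{
    public static HttpContextBase FakeHttpContext()
    {
        var context = new Mock<HttpContextBase>();
        var request = new Mock<HttpRequestBase>();
        var response = new Mock<HttpResponseBase>();

        // The bare minimum needed for outbound URL generation to work
        request.Setup(r => r.ApplicationPath).Returns("/");
        request.Setup(r => r.AppRelativeCurrentExecutionFilePath).Returns("~/");
        response.Setup(r => r.ApplyAppPathModifier(It.IsAny<string>()))
                .Returns((string url) => url);

        context.Setup(ctx => ctx.Request).Returns(request.Object);
        context.Setup(ctx => ctx.Response).Returns(response.Object);

        return context.Object;
    }

    public static void SetFakeControllerContext(this Controller controller, RouteCollection routes)
    {
        var httpContext = FakeHttpContext();
        var requestContext = new RequestContext(httpContext, new RouteData());
        controller.ControllerContext = new ControllerContext(requestContext, controller);

        // The crucial tweak: hand the controller a UrlHelper that knows
        // about the routes we pass in, rather than the empty default
        controller.Url = new UrlHelper(requestContext, routes);
    }
}
```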

What I want to test#

I want to be able to unit test the controller Edit method I mentioned earlier. This method calls the Action method on the controller's Url member (which is, in turn, a UrlHelper) to generate a URL for passing back to the client. The URL generated should fit with the routing mechanism I have set up. In this case the route we expect a URL for was mapped by the following area registration:
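Along these lines (the Admin area name and route values are assumptions):

```cs
public class AdminAreaRegistration : AreaRegistration
{
    public override string AreaName
    {
        get { return "Admin"; }
    }

    public override void RegisterArea(AreaRegistrationContext context)
    {
        context.MapRoute(
            "Admin_default",
            "Admin/{controller}/{action}/{id}",
            new { action = "Index", id = UrlParameter.Optional }
        );
    }
}
```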

Enough of the waffle - show me a unit test#

Now to the meat; here's a unit test which demonstrates how this is used:
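(Reconstructed in the same spirit; WidgetController, WidgetViewModel and the Admin area are the assumed names from the sketches above.)

```cs
[TestMethod]
public void Edit_returns_a_URL_generated_by_our_area_routing()
{
    // Arrange: feed the area's routes into a fresh RouteCollection
    var routes = new RouteCollection();
    var area = new AdminAreaRegistration();
    area.RegisterArea(new AreaRegistrationContext(area.AreaName, routes));

    var controller = new WidgetController();
    controller.SetFakeControllerContext(routes);

    // Act
    var result = controller.Edit(new WidgetViewModel { Id = 123 }) as JsonResult;

    // Assert: the anonymous class's url property was built by our routing
    // (reflection is used because anonymous types are internal to their assembly)
    Assert.IsNotNull(result);
    var url = (string)result.Data.GetType().GetProperty("url").GetValue(result.Data, null);
    Assert.AreEqual("/Admin/Widget/Details/123", url);
}
```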

Let's go through this unit test and break down what's happening:

  1. Arrange: register the routes we actually use (including the area registration) into a RouteCollection, then give the controller a fake context so its UrlHelper uses those routes.
  2. Act: invoke the Edit action.
  3. Assert: check that the JsonResult carries the URL our routing should have generated.

The most interesting thing you'll note is the controller's UrlHelper is now generating a URL as we might have hoped. The URL is generated making use of our routing, yay! Finally we're also managing to unit test a route registered by our area.

Unit Testing and Entity Framework: The Filth and the Fury

Just recently I've noticed that there appears to be something of a controversy around Unit Testing and Entity Framework. I first came across it as I was Googling around for useful posts on using MOQ in conjunction with EF. I've started to notice the topic more and more and as I have mixed feelings on the subject (that is to say I don't have a settled opinion) I thought I'd write about this and see if I came to any kind of conclusion...

The Setup#

It started as I was working on a new project. We were using ASP.NET MVC 3 and Entity Framework with DbContext as our persistence layer. Rather than crowbarring the tests in afterwards the intention was to write tests to support the ongoing development. Not quite test driven development but certainly test supported development. (Let's not get into the internecine conflict as to whether this is black belt testable code or not - it isn't but he who pays the piper etc.) Oh and we were planning to use MOQ as our mocking library.

It was the first time I'd used DbContext rather than ObjectContext and so I thought I'd do a little research on how people were using DbContext with regards to testability. I had expected to find that there was some kind of consensus and an advised way forwards. I didn't get that at all. Instead I found a number of conflicting opinions.

Using the Repository / Unit of Work Patterns#

One thread of advice that came out was that people advised using the Repository / Unit of Work patterns as wrappers when it came to making testable code. This is kind of interesting in itself as to the best of my understanding ObjectSet / ObjectContext and DbSet / DbContext are both in themselves implementations of the Repository / Unit of Work patterns. So the advice was to build a Repository / Unit of Work pattern to wrap an existing Repository / Unit of Work pattern.

Not as mad as it sounds. The reason for the extra abstraction is that ObjectContext / DbContext in the raw are not MOQ-able.
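The nub of it is virtual members. Moq works by generating a dynamic subclass at runtime, so it can only intercept members that are virtual or abstract - and the DbSet properties on a database-first generated context are typically neither. A sketch of what goes wrong (MyContext is a stand-in for a generated context):

```cs
public class MyContext : DbContext
{
    // As generated by the database-first template: note, not virtual
    public DbSet<Order> Orders { get; set; }
}

// In a test...
var contextMock = new Mock<MyContext>();

// Throws NotSupportedException ("Invalid setup on a non-virtual member"):
// Moq's generated subclass has no way to override Orders
contextMock.Setup(c => c.Orders);
```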

Or maybe I'm wrong, maybe you can MOQ DbContext?#

No you can't. Well, that's not true. You can and it's documented here, but there's a "but". You need to be using Entity Framework's Code First approach, actually coding up your DbContext yourself. Before I'd got on board, the project had already begun and we were already some way down the road of using the Database First approach. So this didn't seem to be a go-er really.

The best article I found on testability and Entity Framework was this one by K. Scott Allen which essentially detailed how you could implement the Repository / Unit of Work patterns on top of ObjectSet / ObjectContext. In the end I adapted this to do the same thing sat on top of DbSet / DbContext instead.

With this in place I had me my testable code. I was quite happy with this as it seemed quite intelligible. My new approach looked similar to the existing DbSet / DbContext code and so there wasn't a great deal of re-writing to do. Sorted, right?

Here come the nagging doubts...#

I did wonder, given that I found a number of articles about applying the Repository / Unit of Work patterns on top of ObjectSet / ObjectContext, why there didn't seem to be many examples doing the same for DbSet / DbContext. (I did find a few examples of this but none that felt satisfactory to me, for a variety of reasons.) This puzzled me.

I also started to notice that a one-man war was being waged against the approach I was using by Ladislav Mrnka. Here are a couple of examples of his crusade:

Ladislav is quite strongly of the opinion that wrapping DbSet / DbContext (and I presume ObjectSet / ObjectContext too) in a further Repository / Unit of Work is an antipattern. To quote him: "The reason why I don't like it is leaky abstraction in Linq-to-entities queries ... In your test you have Linq-to-Objects which is superset of Linq-to-entities and only subset of queries written in L2O is translatable to L2E". It's worth looking at Jon Skeet's explanation of "leaky abstractions", which he did for TekPub.

As much as I didn't want to admit it, I have come to the conclusion that Ladislav probably has a point, for a number of reasons:

1. Just because it compiles and passes unit tests don't imagine that means it works...#

Unfortunately, a LINQ query that looks right, compiles and has passing unit tests written for it doesn't necessarily work. You can take a query that fails when executed against Entity Framework and come up with test data that will pass that unit test. As Ladislav rightly points out: LINQ-to-Objects != LINQ-to-Entities.

So in this case unit tests of this sort don't provide you with any security. What you need are **integration tests**: tests that run against an instance of the database and demonstrate that LINQ will actually translate queries / operations into valid SQL.
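To make the failure mode concrete, here's a contrived sketch (MyContext as sketched earlier; the types and values are illustrative):

```cs
public class Customer
{
    public int Id { get; set; }
    public decimal TotalSpend { get; set; }
}

static bool IsVip(Customer c)
{
    return c.TotalSpend > 1000m;
}

static void Demo(MyContext context)
{
    var customers = new List<Customer> { new Customer { TotalSpend = 5000m } };

    // LINQ-to-Objects: IsVip is simply invoked as a C# method, so a unit
    // test built over this in-memory list passes happily...
    var vips = customers.Where(c => IsVip(c)).ToList();

    // ...but the identical-looking LINQ-to-Entities query fails at runtime
    // with a NotSupportedException: the IsVip call cannot be translated into SQL
    var vipsFromDb = context.Customers.Where(c => IsVip(c)).ToList();
}
```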

2. Complex queries#

You can write some pretty complex LINQ queries if you want. This is made particularly easy if you're using comprehension syntax. Whilst these queries may be simple to write, it can be uphill work to generate test data to satisfy them. So much so that at times it can feel you've made a rod for your own back using this approach.

3. Lazy Loading#

By default Entity Framework employs lazy loading. This is a useful approach which reduces the amount of data that is transported. The flip side is that it sometimes forces you to specify up front that you require a particular related entity, through the use of Include statements. This again doesn't lend itself to testing particularly well.
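For instance (a sketch, assuming Order has a Person navigation property):

```cs
// Lazy loading: Person is fetched via a proxy, with a second round trip
// to the database, when it is first touched
var order = context.Orders.Single(o => o.Id == orderId);
var name = order.Person.FirstName;

// Eager loading: state up front that Person is needed and it arrives
// as part of the same query
var eagerOrder = context.Orders
    .Include("Person")
    .Single(o => o.Id == orderId);
```

An in-memory fake has no proxies and no round trips, so a missing Include is exactly the kind of bug that unit tests of this style sail straight past.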

Where does this leave us?#

Having considered all of the above for a while and tried out various different approaches I think I'm coming to the conclusion that Ladislav is probably right. Implementing the Repository / Unit of Work patterns on top of ObjectSet / ObjectContext or DbSet / DbContext doesn't seem a worthwhile effort in the end.

So what's a better idea? I think that in the name of simplicity you might as well have a simple class which wraps all of your Entity Framework code. This class could implement an interface and hence be straightforwardly MOQ-able (or alternatively all methods could be virtual and you could forego the interface). Along with this you should have integration tests in place which test the execution of the actual Entity Framework code against a test database.
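A sketch of the shape of that wrapper (names assumed):

```cs
public interface IOrderData
{
    Order GetOrder(int id);
    IList<Order> GetOrdersPlacedBy(int personId);
}

// All the Entity Framework code lives in this one class; consumers take an
// IOrderData dependency and so are trivially unit testable with MOQ, while
// OrderData itself is exercised by integration tests against a test database
public class OrderData : IOrderData
{
    public Order GetOrder(int id)
    {
        using (var context = new OrdersContext())
        {
            return context.Orders.Single(o => o.Id == id);
        }
    }

    public IList<Order> GetOrdersPlacedBy(int personId)
    {
        using (var context = new OrdersContext())
        {
            return context.Orders
                .Where(o => o.OrderedById == personId)
                .ToList();
        }
    }
}
```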

Now I should say this approach is not necessarily my final opinion. It seems sensible and practical. I think it is likely to simplify the tests that are written around a project. It will certainly be more reliable than just having unit tests in place.

In terms of the project I'm working on at the moment we're kind of doing this in a halfway house sense. That is to say, we're still using our Repository / Unit of Work wrappers for DbSet / DbContext but where things move away from simple operations we're adding extra methods to our Unit of Work class or Repository classes which wrap this functionality and then testing it using our integration tests.

I'm open to the possibility that my opinion may be modified further. And I'd be very interested to know what other people think on the subject.

Update#

It turns out that I'm not alone in thinking about this issue and indeed others have expressed this rather better than me - take a look at Jimmy Bogard's post for an example: http://lostechies.com/jimmybogard/2012/09/20/limiting-your-abstractions/.

Update 2#

I've also recently watched the following Pluralsight course by Julie Lerman: http://pluralsight.com/training/Courses/TableOfContents/efarchitecture#efarchitecture-m3-archrepo. In this course Julie talks about different implementations of the Repository and Unit of Work patterns in conjunction with Entity Framework. Julie is in favour of using this approach but in this module she elaborates on different "flavours" of these patterns that you might want to use for different reasons (bounded contexts / reference contexts etc). She makes a compelling case and helpfully she is open enough to say that this is a point of contention in the community. At the end of watching this I think I felt happy that our "halfway house" approach seems to fit and seems to work. More than anything else Julie made clear that there isn't one definitively "true" approach. Rather many different but similar approaches for achieving the same goal. Good stuff Julie!

A Simple Technique for Initialising Properties with Internal Setters for Unit Testing

I was recently working with my colleagues on refactoring a legacy application. We didn't have an immense amount of time available for this but the plan was to try and improve what was there as much as possible. In its initial state the application had no unit tests in place at all and so the plan was to refactor the code base in such a way as to make testing it a realistic proposition. To that end the domain layer was being heavily adjusted and the GUI was being migrated from WebForms to MVC 3. The intention was to build up a pretty solid collection of unit tests. However, as we were working on this we realised we had a problem with properties on our models with internal setters...

Background#

The entities of the project in question used an approach which would store pertinent bits of normalised data for read-only purposes in related entities. I've re-read that sentence and realise it's as clear as mud. Here is an example to clarify:

```cs
public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Address { get; set; }
    public DateTime DateOfBirth { get; set; }
    /* Other fascinating properties... */
}

public class Order
{
    public int Id { get; set; }
    public string ProductOrdered { get; set; }
    public int OrderedById { get; set; } // mirrors Person.Id, hence an int
    public string OrderedByFirstName { get; internal set; }
    public string OrderedByLastName { get; internal set; }
}
```

In the example above you have two types of entity: Person and Order. The Order entity makes use of the Id, FirstName and LastName properties of the Person entity in the properties OrderedById, OrderedByFirstName and OrderedByLastName. For persistence (i.e. saving to the database) purposes the only necessary Person property is the OrderedById identity. OrderedByFirstName and OrderedByLastName are just "nice to haves" - essentially present to make implementing the GUI more straightforward.

To express this behaviour / intention in the object model, the setters for OrderedByFirstName and OrderedByLastName are marked as internal. The implication of this is that properties like this can only be initialised within the current assembly - or any explicitly associated "friend" assemblies. In practice this meant that internally set properties were only populated when an object was read in from the database. It wasn't possible to set these properties in other assemblies, which meant less code was written (a good thing) - after all, why set a property when you don't need to?

Background explanation over. It may still be a little unclear but I hope you get the gist.

What's our problem?#

I was writing unit tests for the controllers in our main web application and was having problems with my arrangements. I was mocking the database calls in my controllers much in the manner that you might expect:

```cs
// Arrange
var orderDb = new Mock<IOrderDb>();
orderDb
    .Setup(x => x.GetOrder(It.IsAny<int>()))
    .Returns(new Order
    {
        Id = 123,
        ProductOrdered = "Packet of coffee",
        OrderedById = 987456,
        OrderedByFirstName = "John",
        OrderedByLastName = "Reilly"
    });
```

All looks fine, doesn't it? It's not. Because OrderedByFirstName and OrderedByLastName have internal setters we are unable to initialise them from within the context of our test project. So what to do?

We toyed with 3 approaches and since each has merits I thought it worth going through each of them:

  1. To the MOQumentation, Batman! (http://code.google.com/p/moq/wiki/QuickStart) Looking at the MOQ documentation it states the following:

    Mocking internal types of another project: add the following assembly attributes (typically to the AssemblyInfo.cs) to the project containing the internal types:

    ```cs
    // This assembly is the default dynamic assembly generated by Castle DynamicProxy,
    // used by Moq. Paste in a single line.
    [assembly:InternalsVisibleTo("DynamicProxyGenAssembly2,PublicKey=0024000004800000940000000602000000240000525341310004000001000100c547cac37abd99c8db225ef2f6c8a3602f3b3606cc9891605d02baa56104f4cfc0734aa39b93bf7852f7d9266654753cc297e7d2edfe0bac1cdcf9f717241550e0a7b191195b7667bb4f64bcb8e2121380fd1d9d46ad2d92d2d15605093924cceaf74c4861eff62abf69b9291ed0a340e113be11e6a7d3113e92484cf7045cc7")]
    [assembly: InternalsVisibleTo("The.NameSpace.Of.Your.Unit.Test")] // I'd hope it was shorter than that...
    ```

    This looked to be exactly what we needed and in most situations it would make sense to go with this. Unfortunately for us there was a gotcha. Certain core shared parts of our application platform were GAC'd. A requirement for GAC-ing an assembly is that it is signed.

    The upshot of this was that if we wanted to use the InternalsVisibleTo approach then we would need to sign our web application test project. We weren't particularly averse to that and initially did so without much thought. It was then we remembered that every assembly referenced by a signed assembly must also be signed as well. We didn't really want to sign our main web application purely for testing purposes. We could and if there weren't viable alternatives we well might have. But it just seemed like the wrong reason to be taking that decision. Like using a sledgehammer to crack a nut.

  2. The next approach we took was using mock objects. Instead of using our objects straight we would mock them as below:

    ```cs
    // Create mock and set internal properties
    var orderMock = new Mock<Order>();
    orderMock.SetupGet(x => x.OrderedByFirstName).Returns("John");
    orderMock.SetupGet(x => x.OrderedByLastName).Returns("Reilly");

    // Set up standard properties
    orderMock.SetupAllProperties();
    var orderStub = orderMock.Object;
    orderStub.Id = 123;
    orderStub.ProductOrdered = "Packet of coffee";
    orderStub.OrderedById = 987456;
    ```

    Now this approach worked fine but had a couple of snags:

    • As you can see it's pretty verbose and much less clear to read than it was previously.

    • It required that we add the virtual keyword to all our internally set properties like so:

      ```cs
      public class Order
      {
          // ....
          public virtual string OrderedByFirstName { get; internal set; }
          public virtual string OrderedByLastName { get; internal set; }
          // ...
      }
      ```
    • Our standard constructor already initialised the value of our internally set properties. So adding virtual to the internally set properties generated ReSharper warnings aplenty about virtual properties being initialised in the constructor. Fair enough.

    Because of the snags it still felt like we were in nutcracking territory...

  3. ... and this took us to the approach that we ended up adopting: a special mocking constructor for each class we wanted to test, for example:

    ```cs
    /// <summary>
    /// Mocking constructor used to initialise internal properties
    /// </summary>
    public Order(string orderedByFirstName = null, string orderedByLastName = null)
        : this()
    {
        OrderedByFirstName = orderedByFirstName;
        OrderedByLastName = orderedByLastName;
    }
    ```

    Thanks to the ever lovely [Named and Optional Arguments](http://msdn.microsoft.com/en-us/library/dd264739.aspx) feature of C# combined with [Object Initializers](http://msdn.microsoft.com/en-us/library/bb397680.aspx) it meant it was possible to write quite expressive, succinct code using this approach; for example:

    ```cs
    var order = new Order(
        orderedByFirstName: "John",
        orderedByLastName: "Reilly"
    )
    {
        Id = 123,
        ProductOrdered = "Packet of coffee",
        OrderedById = 987456
    };
    ```

    Here we're calling the mocking constructor to set the internally set properties and subsequently initialising the other properties using the object initialiser mechanism.

    Implementing these custom constructors wasn't a massive piece of work and so we ended up settling on this technique for initialising internal properties.