# 2 posts tagged with "jest"


## Azure Pipelines meet Jest

This post explains how to integrate the tremendous test runner Jest with the continuous integration platform Azure Pipelines. Perhaps we're setting up a new project and we've created a new React app with Create React App. This ships with Jest support out of the box. How do we get that plugged into Pipelines such that:

  1. Tests run as part of our pipeline
  2. A failing test fails the build
  3. Test results are reported in Azure Pipelines UI?

### Tests run as part of our pipeline

First of all, let's get the tests running. Crack open your azure-pipelines.yml file and, in the appropriate place, add the following `Npm@1` task:

```yml
- task: Npm@1
  displayName: npm run test
  inputs:
    command: 'custom'
    workingDir: 'src/client-app'
    customCommand: 'run test'
```

The above will, when run, trigger an `npm run test` in the src/client-app folder of my project (it's here that my React app lives). You'd imagine this would just work™️, but life is not that simple. This is because Jest, by default, runs in watch mode. Watch mode blocks, waiting for further input, and so is not appropriate for CI.

In our src/client-app/package.json let's create a new script that runs the tests but not in watch mode:

"test:ci": "npm run test -- --watchAll=false",

Then we switch our azure-pipelines.yml to use it:

```yml
- task: Npm@1
  displayName: npm run test
  inputs:
    command: 'custom'
    workingDir: 'src/client-app'
    customCommand: 'run test:ci'
```

Boom! We're now running tests as part of our pipeline. Failing tests will fail the build too, thanks to Jest's default behaviour of exiting with status code 1 when any test fails.

### Test results are reported in Azure Pipelines UI

Pipelines has a really nice UI for reporting test results. If you're using something like .NET then you'll find that test results just magically show up there. We'd like that for our Jest tests as well. And we can have it.

The way we achieve this is by:

  1. Producing test results in a format that can be subsequently processed
  2. Using those test results to publish to Azure Pipelines

You configure Jest's test output through the use of reporters. However, Create React App doesn't support configuring these directly. That's not an issue though, as the marvellous Dan Abramov demonstrates here.

We need to install the jest-junit package to our client-app:

```shell
npm install jest-junit --save-dev
```

And we'll tweak our test:ci script to use the jest-junit reporter as well:

"test:ci": "npm run test -- --watchAll=false --reporters=default --reporters=jest-junit",

We also need to add some configuration to our package.json in the form of a jest-junit element:

"jest-junit": {
"suiteNameTemplate": "{filepath}",
"outputDirectory": ".",
"outputName": "junit.xml"
}

The above configuration will use the test file path as the suite name in the results, which should speed up tracking down a failing test. The other values specify where the test results should be written: in this case, the root of our client-app, with the filename junit.xml.

Now our CI is producing test results, how do we get them into Pipelines? For that we need the Publish Test Results task, added as a new step in our azure-pipelines.yml after our npm run test step:

```yml
- task: Npm@1
  displayName: npm run test
  inputs:
    command: 'custom'
    workingDir: 'src/client-app'
    customCommand: 'run test:ci'

- task: PublishTestResults@2
  displayName: 'supply npm test results to pipelines'
  condition: succeededOrFailed() # because otherwise we won't know what tests failed
  inputs:
    testResultsFiles: 'src/client-app/junit.xml'
```

This will read the test results from our src/client-app/junit.xml file and pump them into Pipelines. Do note that, thanks to the succeededOrFailed() condition, we always run this step; so if the previous step failed (as it would in the case of a failing test) we still publish the details of what that failure was.

And that's it! Azure Pipelines and Jest integrated.

## Snapshot Testing for C#

If you're a user of Jest, you've no doubt heard of and perhaps made use of snapshot testing.

Snapshot testing is an awesome tool that is generally discussed in the context of JavaScript React UI testing. But snapshot testing has a wider application than that. Essentially, it is profoundly useful wherever you have functions that produce complex structured output. It could be a React UI; it could be a list of FX prices. The type of data is immaterial; it's the amount of it that's key.

Typically there's a direct correlation between the size and complexity of a method's output and the length of the tests that will be written for it. Let's say you're outputting a class that contains 20 properties. Congratulations! You get to write 20 assertions in one form or another for each test case. Or a single assertion for which you hand-craft the expected object, specifying each of the 20 properties. Either way, that's not going to be fun. And just imagine the time it would take to update multiple test cases if you wanted to change the behaviour of the method in question. Ouchy.

Time is money, kid. What you need is snapshot testing. Say goodbye to handcrafted assertions and hello to JSON-serialised output checked into source control. Let's unpack that a little. The usefulness of snapshot testing that I want in C# is predominantly about removing the need to write and maintain multiple assertions. Instead, you write tests that compare the output of a call to your method with JSON-serialised output you've generated on a previous occasion.

This approach takes less time to write, less time to maintain, and the solid readability of JSON makes it more likely you'll pick up on bugs. It's so much easier to scan JSON than a list of assertions.
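To make the mechanism concrete, here's a minimal, framework-free sketch of the idea, written in TypeScript since that's Jest's home turf. The function name and behaviour here are illustrative, not an established API:

```typescript
import * as fs from "fs";

// First run: no snapshot exists, so write one. Later runs: serialise the
// data and compare it against the stored JSON, reporting whether they match.
function matchesSnapshot(data: unknown, snapshotPath: string): boolean {
  const current = JSON.stringify(data, null, 2);
  if (!fs.existsSync(snapshotPath)) {
    fs.writeFileSync(snapshotPath, current); // generate the snapshot
    return true;
  }
  return fs.readFileSync(snapshotPath, "utf8") === current;
}
```

A failing comparison then points you at a JSON diff rather than a wall of assertion failures.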

### Putting the Snapshot into C#

Now, if you're writing tests in JavaScript or TypeScript, Jest already has your back with CLI snapshot generation and `toMatchSnapshot`. However, getting to nearly the same place in C# is delightfully easy. What are we going to need?

First up, we need a serializer that can take your big bad data structures and render them as JSON. We'll also use it to rehydrate our snapshots into objects ready for comparison. We're going to use Json.NET.

Next up, we need a way to compare our outputs with our rehydrated snapshots: a C# `toMatchSnapshot`. There are many choices out there, but for my money Fluent Assertions is king of the hill.

Finally we're going to need Snapshot, a little helper utility I put together:

```cs
using System;
using System.IO;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

namespace Test.Utilities {
    public static class Snapshot {
        private static readonly JsonSerializer StubSerializer = new JsonSerializer {
            ContractResolver = new CamelCasePropertyNamesContractResolver(),
            NullValueHandling = NullValueHandling.Ignore
        };

        private static JsonTextWriter MakeJsonTextWriter(TextWriter sw) => new JsonTextWriter(sw) {
            Formatting = Formatting.Indented,
            IndentChar = ' ',
            Indentation = 2
        };

        /// <summary>
        /// Make yourself some JSON! Usage looks like this:
        /// Snapshot.Make($"{System.AppDomain.CurrentDomain.BaseDirectory}..\\..\\..\\data.json", myData);
        /// </summary>
        public static void Make<T>(string stubPath, T data) {
            try {
                if (string.IsNullOrEmpty(stubPath))
                    throw new ArgumentNullException(nameof(stubPath));
                if (data == null)
                    throw new ArgumentNullException(nameof(data));

                using (var sw = new StreamWriter(stubPath))
                using (var writer = MakeJsonTextWriter(sw)) {
                    StubSerializer.Serialize(writer, data);
                }
            } catch (Exception exc) {
                throw new Exception($"Failed to make {stubPath}", exc);
            }
        }

        public static string Serialize<T>(T data) {
            using (var sw = new StringWriter())
            using (var writer = MakeJsonTextWriter(sw)) {
                StubSerializer.Serialize(writer, data);
                return sw.ToString();
            }
        }

        public static string Load(string filename) {
            using (var reader = new StreamReader(File.OpenRead(filename))) {
                return reader.ReadToEnd();
            }
        }
    }
}
```

Let's look at the methods: Make and Load. Make is what we're going to use to create our snapshots. Load is what we're going to use to, uh, load our snapshots.

What does usage look like? Great question. Let's go through the process of writing a C# snapshot test.

### Taking Snapshot for a Spin

First of all, we're going to need a method to test that outputs a data structure which is more than just a scalar value. Let's use this:

```cs
public class Leopard {
    public string Name { get; set; }
    public int Spots { get; set; }
}

public class LeopardService {
    public Leopard[] GetTheLeopards() {
        return new Leopard[] {
            new Leopard { Spots = 42, Name = "Nimoy" },
            new Leopard { Spots = 900, Name = "Dotty" }
        };
    }
}
```

Yes, our trusty LeopardService. As you can see, the GetTheLeopards method returns an array of Leopards. For now, let's write a test using Snapshot (ours is an xUnit test, but Snapshot is agnostic of this):

```cs
[Fact]
public void GetTheLeopards_should_return_expected_Leopards() {
    // Arrange
    var leopardService = new LeopardService();

    // Act
    var leopards = leopardService.GetTheLeopards();

    // UNCOMMENT THE LINE BELOW *ONLY* WHEN YOU WANT TO GENERATE THE SNAPSHOT
    Snapshot.Make($"{System.AppDomain.CurrentDomain.BaseDirectory}..\\..\\..\\Snapshots\\leopardsSnapshot.json", leopards);

    // Assert
    var snapshotLeopards = JsonConvert.DeserializeObject<Leopard[]>(Snapshot.Load("Snapshots/leopardsSnapshot.json"));
    snapshotLeopards.Should().BeEquivalentTo(leopards);
}
```

Before we run this for the first time we need to set up our testing project to be ready for snapshots. First of all we add a Snapshots folder to the test project. Then we also add the following to the .csproj:

```xml
<ItemGroup>
  <Content Include="Snapshots\**">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </Content>
</ItemGroup>
```

This includes the snapshots in the build output, so they're available when the tests run.
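As an aside, if copying every snapshot on every build ever feels wasteful, `PreserveNewest` is a lighter-touch value for the same element; it only copies a snapshot when it has changed:

```xml
<ItemGroup>
  <Content Include="Snapshots\**">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
</ItemGroup>
```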

Now let's run the test. It will generate a leopardsSnapshot.json file:

```json
[
  {
    "name": "Nimoy",
    "spots": 42
  },
  {
    "name": "Dotty",
    "spots": 900
  }
]
```

With our snapshot in place, we comment out the Snapshot.Make... line and we have a passing test. Let's commit our code, push and go about our business.

### Time Passes...

Someone decides that the implementation of GetTheLeopards needs to change. Defying expectations, it seems that Dotty the leopard should now have 90 spots. I know... Business requirements, right?

If we make that change we'd ideally expect our trusty test to fail. Let's see what happens:

```
----- Test Execution Summary -----
Leopard.Tests.Services.LeopardServiceTests.GetTheLeopards_should_return_expected_Leopards:
    Outcome: Failed
    Error Message:
    Expected item[1].Spots to be 90, but found 900.
```

Boom! We are protected!

Since this is a change we're completely happy with we want to update our leopardsSnapshot.json file. We could make our test pass by manually updating the JSON. That'd be fine. But why work when you don't have to? Let's uncomment our Snapshot.Make... line and run the test the once.

```json
[
  {
    "name": "Nimoy",
    "spots": 42
  },
  {
    "name": "Dotty",
    "spots": 90
  }
]
```

That's right, we have an updated snapshot! Minimal effort.

### Next Steps

This is a basic approach to getting the goodness of snapshot testing in C#. It could be refined further. To my mind, the commenting and uncommenting of code is not the most elegant way to drive snapshot generation, so there's some work that could be done in that area.
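One possible refinement is to drive snapshot generation with an environment variable rather than commenting code in and out. The idea is sketched below in TypeScript for brevity; the `UPDATE_SNAPSHOTS` flag name is my own invention, and translating the approach to the C# Snapshot helper (via `Environment.GetEnvironmentVariable`) is mechanical:

```typescript
import * as fs from "fs";

// Regenerate the snapshot when UPDATE_SNAPSHOTS=1 (or when no snapshot
// exists yet); otherwise compare serialised output against the stored JSON.
function checkSnapshot(data: unknown, snapshotPath: string): boolean {
  const current = JSON.stringify(data, null, 2);
  const updating = process.env.UPDATE_SNAPSHOTS === "1";
  if (updating || !fs.existsSync(snapshotPath)) {
    fs.writeFileSync(snapshotPath, current); // (re)generate the snapshot
    return true;
  }
  return fs.readFileSync(snapshotPath, "utf8") === current;
}
```

Jest itself takes much the same approach with its `--updateSnapshot` / `-u` CLI flag.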

Happy snapshotting!