Monday, 19 December 2016

Using ts-loader with webpack 2

Hands up: despite being one of the maintainers of ts-loader (a TypeScript loader for webpack) I have not been tracking webpack v2. My reasons? Well, I'm keen on cutting edge, but bleeding edge is often not a ton of fun; dealing with regular breaking changes is frustrating. I'm generally happy to wait for things to settle down a bit before leaping aboard. However, webpack 2 hit RC last week and so it's time to take a look!

Porting our example

Let's take ts-loader's webpack 1 example and try and port it to webpack 2. Will it work? Probably; I'm aware of other people using ts-loader with webpack 2. It'll be a voyage of discovery. Like Darwin on the Beagle, I shall document our voyage for a couple of reasons:

  • I'm probably going to get some stuff wrong. That's fine; one of the best ways to learn is to make mistakes. So do let me know where I go wrong.
  • I'm doing this based on what I've read in the new docs; they're very much a work in progress and the mistakes I make here may lead to those docs improving even more. That matters; documentation matters. I'll be leaning heavily on the Migrating from v1 to v2 guide.

So here we go. Our example is one which uses TypeScript for static typing and Babel to transpile from ES-super-modern (yes - it's a thing) to ES-older-than-that. Our example also uses React, but that's somewhat incidental. It only uses webpack for TypeScript / JavaScript compilation and for Karma; it uses gulp to perform various other tasks. So if you're reliant on webpack for less / sass compilation etc. then I have no idea whether that works.

First of all, let's install the latest RC of webpack:

npm install webpack@2.2.0-rc.1 --save-dev

webpack.config.js

Let's look at our existing webpack.config.js:

'use strict';

var path = require('path');

module.exports = {
  cache: true,
  entry: {
    main: './src/main.tsx',
    vendor: [
      'babel-polyfill',
      'fbemitter',
      'flux',
      'react',
      'react-dom'
    ]
  },
  output: {
    path: path.resolve(__dirname, './dist/scripts'),
    filename: '[name].js',
    chunkFilename: '[chunkhash].js'
  },
  module: {
    loaders: [{
      test: /\.ts(x?)$/,
      exclude: /node_modules/,
      loader: 'babel-loader?presets[]=es2016&presets[]=es2015&presets[]=react!ts-loader'
    }, {
      test: /\.js$/,
      exclude: /node_modules/,
      loader: 'babel',
      query: {
        presets: ['es2016', 'es2015', 'react']
      }
    }]
  },
  plugins: [
  ],
  resolve: {
    extensions: ['', '.webpack.js', '.web.js', '.ts', '.tsx', '.js']
  },
};

There are a number of things we need to do here. First of all, we can get rid of the empty string in the extensions array under resolve; I understand that's unnecessary now. I'm also going to get rid of '.webpack.js' and '.web.js'; I never used them anyway. Finally, just having 'babel' as a loader name won't fly any more; we need the full 'babel-loader' name now.

Now I could start renaming loaders to rules as the terminology is changing. But I'd like to deal with that later, since I know the old-school names are still supported at present. More interestingly, I seem to remember hearing that one of the super exciting things about webpack 2 is that it supports ES modules directly now. (I think that's supposed to be good for tree-shaking but I'm not totally certain.)
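As a brief aside, here's a toy sketch of why static ES module syntax helps with tree-shaking. The regex-based "analysis" below is entirely made up for illustration (webpack's real implementation uses a proper parser, not a regex), but it shows the core idea: `import { used } from './utils'` names its bindings statically, so a bundler can work out which exports are dead without running any code.

```javascript
// Toy tree-shaking sketch: because `import { ... }` is static syntax,
// the set of imported names can be read straight out of the source text.
// This regex stands in for real static analysis - purely illustrative.
const source = "import { used } from './utils';\nconsole.log(used());";
const utilsExports = ['used', 'unused'];

const importedNames = (source.match(/import\s*{([^}]*)}/) || [, ''])[1]
  .split(',')
  .map(name => name.trim())
  .filter(Boolean);

// Only exports that are actually imported survive the "shake":
const kept = utilsExports.filter(name => importedNames.includes(name));
console.log(kept); // [ 'used' ]
```

With CommonJS `require`, by contrast, the bundler can't safely make that call, because `require` is just a function call that could happen anywhere at runtime.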

Initially I thought I was supposed to switch to a custom babel preset called babel-preset-es2015-webpack. However it has a big "DEPRECATED" mark at the top and it says I should just use babel-preset-es2015 (which I already am) with the following option specified:

{
    "presets": [
        [
            "es2015",
            {
                "modules": false
            }
        ]
    ]
}

Looking at our existing config you'll note that for js files we're using query (options in the new world, I understand) to configure Babel usage, and for ts files we're using inline query parameters on the loader string. I have zero idea how to configure preset options using query parameters, and fiddling with query / options didn't seem to work. So I've decided to abandon query entirely and drop in a .babelrc file containing our presets combined with the modules setting:

{
   "presets": [
      "react", 
      [
         "es2015",
         {
            "modules": false
         }
      ],
      "es2016"
   ]
}

As an aside; apparently these are applied in reverse order. So es2016 is applied first, es2015 second and react third. I'm not totally certain this is correct; the .babelrc docs are a little unclear.
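The reverse ordering is easier to picture if you think of presets as nested function calls. Here's a sketch using stand-in transforms (these are just labelled functions I've made up, not real Babel presets):

```javascript
// Babel applies presets right-to-left, like nested function calls:
// presets: ["react", "es2015", "es2016"] means react(es2015(es2016(code))).
// The transforms below are stand-ins that just record their own name.
const react = code => code + ' -> react';
const es2015 = code => code + ' -> es2015';
const es2016 = code => code + ' -> es2016';

const applyPresets = (code, presets) =>
  presets.reduceRight((transformed, preset) => preset(transformed), code);

const result = applyPresets('source', [react, es2015, es2016]);
console.log(result); // 'source -> es2016 -> es2015 -> react'
```

So es2016 gets first crack at the code, and react goes last, matching the ordering described above.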

With our query options extracted we're down to a simpler webpack.config.js:

'use strict';

var path = require('path');

module.exports = {
  cache: true,
  entry: {
    main: './src/main.tsx',
    vendor: [
      'babel-polyfill',
      'fbemitter',
      'flux',
      'react',
      'react-dom'
    ]
  },
  output: {
    path: path.resolve(__dirname, './dist/scripts'),
    filename: '[name].js',
    chunkFilename: '[chunkhash].js'
  },
  module: {
    loaders: [{
      test: /\.ts(x?)$/,
      exclude: /node_modules/,
      loader: 'babel-loader!ts-loader'
    }, {
      test: /\.js$/,
      exclude: /node_modules/,
      loader: 'babel-loader'
    }]
  },
  plugins: [
  ],
  resolve: {
    extensions: ['.ts', '.tsx', '.js']
  },
};

plugins

In our example the plugins section of our webpack.config.js is extended in a separate process. Whilst we're developing we also set the debug flag to be true. It seems we need to introduce a LoaderOptionsPlugin to do this for us.

As we introduce our LoaderOptionsPlugin we also need to make sure that we provide it with options. How do I know this? Well someone raised an issue against ts-loader. I don't think this is actually an issue with ts-loader; I think it's just a webpack 2 thing. I could be wrong; answers on a postcard please.

Either way, to get up and running we just need the LoaderOptionsPlugin in play. Consequently, most of what follows in our webpack.js file is unchanged:

// .....

var webpackConfig = require('../webpack.config.js');
var packageJson = require('../package.json');

// .....

function buildProduction(done) {

   // .....

   myProdConfig.plugins = myProdConfig.plugins.concat(
      // .....

      // new webpack.optimize.DedupePlugin(), Not a thing anymore apparently
      new webpack.optimize.UglifyJsPlugin(),

      // I understand this here matters...
      // but it doesn't seem to make any difference; perhaps I'm missing something?
      new webpack.LoaderOptionsPlugin({
        minimize: true,
        debug: false
      }),

      failPlugin
   );

   // .....
}

function createDevCompiler() {
   var myDevConfig = webpackConfig;
   myDevConfig.devtool = 'inline-source-map';
   // myDevConfig.debug = true; - not allowed in webpack 2

   myDevConfig.plugins = myDevConfig.plugins.concat(
      new webpack.optimize.CommonsChunkPlugin({ name: 'vendor', filename: 'vendor.js' }),
      new WebpackNotifierPlugin({ title: 'Webpack build', excludeWarnings: true }),

      // this is the Webpack 2 hotness!
      new webpack.LoaderOptionsPlugin({
         debug: true,
         options: myDevConfig
      })
      // it ends here - there wasn't much really....

   );

   // create a single instance of the compiler to allow caching
   return webpack(myDevConfig);
}

// .....

LoaderOptionsPlugin we hardly knew ya

After a little more experimentation it seems that the LoaderOptionsPlugin is not necessary at all for our own use case. In fact it's probably not best practice to get used to using it as it's only intended to live a short while whilst people move from webpack 1 to webpack 2. In that vein let's tweak our webpack.js file once more:

function buildProduction(done) {

   // .....

   myProdConfig.plugins = myProdConfig.plugins.concat(
      // .....

      new webpack.optimize.UglifyJsPlugin({
         compress: {
            warnings: true
         }
      }),

      failPlugin
   );

   // .....
}

function createDevCompiler() {
   var myDevConfig = webpackConfig;
   myDevConfig.devtool = 'inline-source-map';

   myDevConfig.plugins = myDevConfig.plugins.concat(
      new webpack.optimize.CommonsChunkPlugin({ name: 'vendor', filename: 'vendor.js' }),
      new WebpackNotifierPlugin({ title: 'Webpack build', excludeWarnings: true })
   );

   // create a single instance of the compiler to allow caching
   return webpack(myDevConfig);
}

// .....

karma.conf.js

Finally Karma. Our karma.conf.js with webpack 1 looked like this:

/* eslint-disable no-var, strict */
'use strict';

var webpackConfig = require('./webpack.config.js');

module.exports = function(config) {
  // Documentation: https://karma-runner.github.io/0.13/config/configuration-file.html
  config.set({
    browsers: [ 'PhantomJS' ],

    files: [
      // This ensures we have the es6 shims in place and then loads all the tests
      'test/main.js'
    ],

    port: 9876,

    frameworks: [ 'jasmine' ],

    logLevel: config.LOG_INFO, //config.LOG_DEBUG

    preprocessors: {
      'test/main.js': [ 'webpack', 'sourcemap' ]
    },

    webpack: {
      devtool: 'inline-source-map',
      debug: true,
      module: webpackConfig.module,
      resolve: webpackConfig.resolve
    },

    webpackMiddleware: {
      quiet: true,
      stats: {
        colors: true
      }
    },

    // reporter options
    mochaReporter: {
      colors: {
        success: 'bgGreen',
        info: 'cyan',
        warning: 'bgBlue',
        error: 'bgRed'
      }
    }
  });
};

We just need to chop out the debug statement from the webpack section like so:

module.exports = function(config) {

  // .....

    webpack: {
      devtool: 'inline-source-map',
      module: webpackConfig.module,
      resolve: webpackConfig.resolve
    },

  // .....

  });
};

Compare and contrast

We now have a repo that works with webpack 2 rc 1. Yay! If you'd like to see it then take a look here.

I thought I'd compare performance / output size of compiling with webpack 1 to webpack 2. First of all in debug / development mode:

// webpack 1

Version: webpack 1.14.0
Time: 5063ms
    Asset     Size  Chunks             Chunk Names
  main.js  37.2 kB       0  [emitted]  main
vendor.js  2.65 MB       1  [emitted]  vendor

// webpack 2

Version: webpack 2.2.0-rc.1
Time: 5820ms
    Asset     Size  Chunks                    Chunk Names
  main.js  38.7 kB       0  [emitted]         main
vendor.js  2.63 MB       1  [emitted]  [big]  vendor

Size and compilation time are not massively different between webpack 1 and webpack 2; it's all about the same. I'm not sure if that's to be expected or not.... Though I've a feeling that in production mode I'm supposed to feel the benefits of tree shaking, so let's have a go:

// webpack 1

Version: webpack 1.14.0
Time: 5788ms
                         Asset     Size  Chunks             Chunk Names
  main.269c66e1bc13b7426cee.js  10.5 kB       0  [emitted]  main
vendor.269c66e1bc13b7426cee.js   231 kB       1  [emitted]  vendor

// webpack 2

Version: webpack 2.2.0-rc.1
Time: 5659ms
                         Asset     Size  Chunks             Chunk Names
  main.33e0d70eeec29206e9b6.js  9.22 kB       0  [emitted]  main
vendor.33e0d70eeec29206e9b6.js   233 kB       1  [emitted]  vendor

To my surprise this looks pretty much unchanged before and after as well. This may be a sign I have missed something crucial out. Or maybe that's to be expected. Do give me a heads up if I've missed something...

Sunday, 11 December 2016

webpack: syncing the enhanced-resolve

Like Captain Ahab I resolve to sync the white whale that is webpack's enhanced-resolve... English you say? Let me start again:

So, you're working on a webpack loader. (In my case the typescript loader; ts-loader) You have need of webpack's resolve capabilities. You dig around and you discover that that superpower is lodged in the very heart of the enhanced-resolve package. Fantastic. But wait, there's more: your needs are custom. You need a sync, not an async resolver. (Try saying that quickly.) You regard the description of enhanced-resolve with some concern:

"Offers an async require.resolve function. It's highly configurable."

Well that doesn't sound too promising. Let's have a look at the docs. Ah. Hmmm. You know how it goes with webpack. Why document anything clearly when people could just guess wildly until they near insanity and gibber? Right? It's well established that webpack's attitude to docs has traditionally been akin to Gordon Gekko's view on lunch.

In all fairness, things are beginning to change on that front. In fact the new docs look very promising. But regrettably, the docs on the enhanced-resolve repo are old school. Which is to say: opaque. However, I'm here to tell you that if a sync resolver is your baby then, contrary to appearances, enhanced-resolve has your back.

Sync, for lack of a better word, is good

Nestled inside enhanced-resolve is the ResolverFactory.js which can be used to make a resolver. However, you can supply it with a million options and that's just like giving someone a gun with a predilection for feet.

What you want is an example of how you could make a sync resolver. Well, surprise surprise it's right in front of your nose. Tucked away in node.js (I do *not* get the name) is exactly what you're after. It contains a number of factory functions which will construct a ready-made resolver for you; sync or async. Perfect! So here's how I'm rolling:

const node = require("enhanced-resolve/lib/node");

function makeSyncResolver(options) {
    return node.create.sync(options.resolve);
}

const resolveSync = makeSyncResolver(loader.options);

The loader options used above you'll be familiar with as the resolve section of your webpack.config.js. You can read more about them here and here.

What you're left with at this point is a function; a resolveSync function if you will that takes 3 arguments:

context
I don't know what this is. So when using the function I just supply undefined; and that seems to be OK. Weird, right?
path
This is the path to your code (I think). So, a valid value to supply - handily lifted from the ts-loader test pack - would be: C:\source\ts-loader\.test\babel-issue92
request
The actual module you're interested in; so using the same test the relevant value would be ./submodule/submodule

Put it all together and what have you got?

const resolvedFileName = resolveSync(
    undefined,
    'C:\\source\\ts-loader\\.test\\babel-issue92',
    './submodule/submodule'
);

// resolvedFileName: C:\source\ts-loader\.test\babel-issue92\submodule\submodule.tsx

Boom.

Saturday, 12 November 2016

My Subconscious is a Better Developer Than I Am

Occasionally I flatter myself that I'm alright at this development lark. Such egotistical talk is foolish. What makes me pause even more when I consider the proposition is this: my subconscious is a better developer than I am.

What's this fellow talking about?

There's 2 of me. Not identical twins; masquerading as a single man (spoiler: I am not a Christopher Nolan movie). No. There's me, the chap who's tapping away at his keyboard and solving a problem. And there's the other chap too.

I have days when I'm working away at something and I'll hit a brick wall. I produce solutions that work but are not elegant. I'm not proud of them. Or worse, I fail to come up with something that solves the problem I'm facing. So I go home. I see my family, I have some food, I do something else. I context switch. I go to sleep.

When I awake, sometimes (not always) I'll have waiting in my head a better solution. I can see the solution in my head. I can turn it over and compare it to what, if anything, I currently have and see the reasons the new approach is better. Great, right? Up to a point.

What concerns me is this: I didn't work this out from first principles. The idea arrived sight unseen in my head. It totally works but whose work actually is it? I feel like I'm taking credit for someone else's graft. This is probably why I'm so keen on the MIT License. Don't want to be caught out.

I think I'd like it better if I was a better developer than my subconscious. I'd come up with the gold and mock the half baked ideas he shows me in the morning. Alas it is not to be.

I draw some comfort from the knowledge that I'm not alone in my experience. I've chatted to other devs in the same boat. There's probably 2 of you as well. Amirite? There's probably 3 of Jon Skeet; each more brilliant than the last...

PS I posted this to Hacker News and the comments left by people are pretty fascinating.

Tuesday, 1 November 2016

But you can't die... I love you!

That's how I was feeling on the morning of October 6th 2016. I'd been feeling that way for some time. The target of my concern? ts-loader. ts-loader is a loader for webpack, the module bundler. ts-loader allows you to use TypeScript with webpack. I'd been a merry user of it for at least a year or so. But, at that point, all was not well in the land of ts-loader. Come with me and I'll tell you a story...

Going Red

At some point, I became a member of the TypeStrong organisation on GitHub. I'm honestly not entirely sure how. I think it may have been down to the very excellent Basarat (he of ALM / atom-typescript / the list goes on fame) but I couldn't clearly say.

Either way, James Brantly's ts-loader was also one of TypeStrong's projects. Since I used it, I occasionally contributed. Not much to be honest; mostly it was documentation tweaks. I mean I never really looked at the main code at all. It worked (thanks to other people). I just plugged it into my projects and ploughed on my merry way. I liked it. It was well established; with friendly maintainers. It had a continuous integration test pack that ran against multiple versions of TypeScript on both Windows and Linux. I trusted it. Then one day the continuous integration tests went red. And stayed red.

This is where we came in. On the morning of October 6th I was mulling what to do about this. I knew there was another alternative out there (awesome-typescript-loader) but I was a little wary of it. My understanding of ATL was that it targeted webpack 2.0 which has long been in beta. Where I ply my trade (mostly developing software for the financial sector in the City of London) beta is not a word that people trust. They don't do beta. What's more I was quite happy with ts-loader; I didn't want to switch if I didn't have to. I also rather suspected (rightly) that there wasn't much wrong; ts-loader just needed a little bit of love. So I thought: I bet I can help here.

The Statement of Intent

So that evening I raised an issue against ts-loader. Not a "sort it out chap" issue. No. That wouldn't be terribly helpful. I raised a "here's how I can help" issue. I present an abridged version below:

Okay here's the deal; I've been using ts-loader for a long time but my contributions up until now have mostly been documentation. Fixing of tests etc. As the commit history shows this is @jbrantly's baby and kudos to him.

He's not been able to contribute much of late and since he's the main person who's worked on ts-loader not much has happened for a while; the code is a bit stale. As I'm a member of TypeStrong I'm going to have a go at improving the state of the project. I'm going to do this as carefully as I can. This issue is intended as a meta issue to make it visible what I'm planning to do / doing.

My immediate goal is to get a newer version of ts-loader built and shipped. Essentially all the bug fixes / tweaks since the last release should ship.

...

I don't have npm publish rights for ts-loader. Fortunately both @jbrantly and @blakeembrey do - and hopefully one of them will either be able to help out with a publish or let me have the requisite rights to do it.

I can't promise this is all going to work; I've got a limited amount of spare time I'm afraid. Whatever happens it's going to take me a little while. But I'm going to see where I can take this. Best foot forward! Please bear with me...

I did wonder what would happen next. This happened next:

Caretaker, not BDFL

So that's how it came to pass that I became the present main caretaker of ts-loader. James very kindly gave me the rights to publish to npm and soon enough I did. I fixed up the existing integration test pack; made it less brittle. I wrote a new integration test pack (that performs a different sort of testing; execution rather than comparison). I merged pull requests, I closed issues. I introduced a regression (whoops!), a community member helped me fix it (thanks Mike Mazmanyan!). In the last month ts-loader has shipped 6 times.

The things that matter most in the last paragraph are the phrases "I merged pull requests" and "a community member helped me fix it". I'm wary of one-man bands; you should be too. I want projects to be communally built and maintained. If I go under a bus, I want someone else to be able to carry on without me. So be part of this; I want you to help!

I've got plans to do a lot more. I'm in the process of refactoring ts-loader to make it more modular and hence easier for others to contribute. (Also it must be said, refactoring something is an excellent way to try and learn a codebase.) Version 1.0 of ts-loader should ship this week.

I'm working with Herrington Darkholme (awesome name BTW!) to add a hook-in point that will allow ts-loader to support vuejs. Stuff is happening and will continue to. But don't be shy; be part of this! ts-loader awaits your PRs and is happy to have as many caretakers as possible!

Wednesday, 5 October 2016

React Component Curry

Everyone loves curry don't they? I don't know about you but I'm going for one on Friday.

When React 0.14 shipped, it came with a new way to write React components. Rather than as an ES2015 class or using React.createClass there was now another way: stateless functional components.

These are components which have no state (the name gives it away) and a simple syntax; they are a function which takes your component props as a single parameter and they return JSX. Think of them as the render method of a standard component just with props as a parameter.

The advantage of these components is that they can reduce the amount of code you have to write for a component which requires no state. This is even more true if you're using ES2015 syntax, as you have arrow functions and destructuring to help. Embrace the terseness!
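To make that concrete: a stateless functional component really is just a function of props. Here's a minimal sketch without JSX; the createElement stub stands in for React.createElement so the snippet is self-contained, and Greeting is a made-up example component, not anything from the post:

```javascript
// A stateless functional component is just a function: props in, element out.
// This createElement stub mimics the shape of React.createElement's output
// so the sketch runs without React installed - it is not the real thing.
const createElement = (type, props, ...children) => ({ type, props, children });

// Hypothetical component using destructuring for terseness:
const Greeting = ({ name }) => createElement('p', null, `Hello ${name}!`);

// "Rendering" here is just calling the function:
const element = Greeting({ name: 'React' });
console.log(element.type);        // 'p'
console.log(element.children[0]); // 'Hello React!'
```

In real code you'd write `const Greeting = ({ name }) => <p>Hello {name}!</p>;` and let JSX compile down to the createElement call for you.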

Mine's a Balti

There is another advantage of this syntax. If you have a number of components which share similar implementation you can easily make component factories by currying:

function iconMaker(fontAwesomeClassName: string) {
   return props => <i className={ `fa ${ fontAwesomeClassName }` }/>;
}

const ThumbsUpIcon = iconMaker("fa-thumbs-up");
const TrophyIcon = iconMaker("fa-trophy");

// Somewhere else inside a render function:

<p>This is totally <ThumbsUpIcon />.... You should win a <TrophyIcon /></p>

So our iconMaker is a function which, when called with a Font Awesome class name, produces a function which, when invoked, will return the markup required to render that icon. This is a super simple example, a bhaji if you will, but you can imagine how useful this technique can be when you've more of a banquet in mind.

Thursday, 22 September 2016

TypeScript 2.0, ES2016 and Babel

TypeScript 2.0 has shipped! Naturally I'm excited. For some time I've been using TypeScript to emit ES2015 code which I pass onto Babel to transpile to ES "old school". You can see how here.

Merely upgrading my package.json to use "typescript": "^2.0.3" from "typescript": "^1.8.10" was painless. TypeScript now supports ES2016 (the previous major release, 1.8, supported ES2015). I wanted to move on from writing ES2015 to writing ES2016 using my chosen build process. Fortunately, it's supported. Phew. However, due to some advances in ECMAScript feature modularisation within the TypeScript compiler, the upgrade path is slightly different. I figured that I'd just be able to update the target in my tsconfig.json to "es2016" from "es2015", add in the ES2016 preset for Babel and job's a good 'un. Not so. There were a few more steps to follow. Here's the recipe:

tsconfig.json changes

Well, there's no "es2016" target for TypeScript. You carry on with a target of "es2015". What I need is a new entry: "lib": ["dom", "es2015", "es2016"]. This tells the compiler that we're expecting to be emitting to an environment which supports a browser ("dom"), and both ES2016 and ES2015. Our "environment" is Babel and it's going to pick up the baton from this point. My complete tsconfig.json looks like this:

{
  "compileOnSave": false,
  "compilerOptions": {
    "allowSyntheticDefaultImports": true,
    "lib": ["dom", "es2015", "es2016"],
    "jsx": "preserve",
    "module": "es2015",
    "moduleResolution": "node",
    "noEmitOnError": false,
    "noImplicitAny": true,
    "preserveConstEnums": true,
    "removeComments": false,
    "suppressImplicitAnyIndexErrors": true,
    "target": "es2015"
  }
}

Babel changes

I needed the Babel preset for ES2016; with a quick npm install --save-dev babel-preset-es2016 that was sorted. Now just to kick Webpack into gear...

Webpack changes

My webpack config plugs together TypeScript and Babel with the help of ts-loader and babel-loader. It allows the transpilation of my (few) JavaScript files so I can write ES2015. However, mainly it allows the transpilation of my (many) TypeScript files so I can write ES2015-flavoured TypeScript. I'll now tweak the loaders so they cater for ES2016 as well.

var webpack = require('webpack');

module.exports = {
  // ....

  module: {
    loaders: [{
      // Now transpiling ES2016 TS
      test: /\.ts(x?)$/,
      exclude: /node_modules/,
      loader: 'babel-loader?presets[]=es2016&presets[]=es2015&presets[]=react!ts-loader'
    }, {
      // Now transpiling ES2016 JS
      test: /\.js$/,
      exclude: /node_modules/,
      loader: 'babel',
      query: {
        presets: ['es2016', 'es2015', 'react']
      }
    }]
  },

  // ....
};

Wake Up and Smell the Jasmine

And we're there; it works. How do I know? Well; here's the proof:

  it("Array.prototype.includes works", () => {
    const result = [1, 2, 3].includes(2);
    expect(result).toBe(true);
  });

  it("Exponentiation operator works", () => {
    expect(1 ** 2 === Math.pow(1, 2)).toBe(true);
  });

Much love to the TypeScript team for an awesome job; I can't wait to get stuck into some of the exciting new features of TypeScript 2.0. strictNullChecks FTW!

Monday, 12 September 2016

Integration Tests with SQL Server Database Snapshots

Once More With Feeling

This is a topic that I have written about before.... But not well. I recently had cause to dust down my notes on how to use snapshotting in your integration tests. To my dismay, referring back to my original blog post was less helpful than I'd hoped. Now I've cracked the enigma code that my original scribings turned out to be, it's time to turn my relearnings back into something genuinely useful.

What's the Scenario?

You have a test database. You want to write integration tests. So what's the problem? Well, these tests will add records, delete records, update records within the tables of the database. They will mutate the data. And that's exactly what they ought to do; they're testing that our code uses the database in the way we would hope and expect.

So how do we handle this? Well, we could handle this by writing code at the end of each test that is responsible for reverting the database back to the state that it was in at the start of the test. So if we had a test that added a record and tested it, we'd need the test to be responsible for removing that record before any subsequent tests run. Now that's a totally legitimate approach but it adds tax. Each test becomes more complicated and requires more code.

So what's another approach? Perhaps we could take a backup of our database before our first test runs. Then, at the end of each test, we could restore our backup to roll the database back to its initial state. Perfect, right? Less code to write, less scope for errors. So what's the downside? Backups are slowwwww. Restores likewise. We could be waiting minutes between each test that runs. That's not acceptable.

There is another way though: database snapshots - a feature that's been nestling inside SQL Server for a goodly number of years. For our use case, to all intents and purposes, database snapshots offers the same functionality as backups and restores. You can backup a database (take a snapshot of a database at a point in time), you can restore a database (roll back the database to the point of the snapshot). More importantly, you can do either operation in *under a second*. As it happens, Microsoft advocate using this approach themselves:

In a testing environment, it can be useful when repeatedly running a test protocol for the database to contain identical data at the start of each round of testing. Before running the first round, an application developer or tester can create a database snapshot on the test database. After each test run, the database can be quickly returned to its prior state by reverting the database snapshot.

Sold!

Talk is cheap, show me the code

In the end it comes down to 3 classes: DatabaseSnapshot.cs, which does the actual snapshotting work, and 2 classes that make use of it.

DatabaseSnapshot.cs

This is our DatabaseSnapshot class. Isn't it pretty?

using System.Data;
using System.Data.SqlClient;

namespace Testing.Shared
{
    public class DatabaseSnapshot
    {
        private readonly string _dbName;
        private readonly string _dbSnapshotPath;
        private readonly string _dbSnapshotName;
        private readonly string _dbConnectionString;

        public DatabaseSnapshot(string dbName, string dbSnapshotPath, string dbSnapshotName, string dbConnectionString)
        {
            _dbName = dbName;
            _dbSnapshotPath = dbSnapshotPath;
            _dbSnapshotName = dbSnapshotName;
            _dbConnectionString = dbConnectionString;
        }

        public void CreateSnapshot()
        {
            if (!System.IO.Directory.Exists(_dbSnapshotPath))
                System.IO.Directory.CreateDirectory(_dbSnapshotPath);

            var sql = $"CREATE DATABASE { _dbSnapshotName } ON (NAME=[{ _dbName }], FILENAME='{ _dbSnapshotPath }{ _dbSnapshotName }') AS SNAPSHOT OF [{_dbName }]";

            ExecuteSqlAgainstMaster(sql);
        }

        public void DeleteSnapshot()
        {
            var sql = $"DROP DATABASE { _dbSnapshotName }";

            ExecuteSqlAgainstMaster(sql);
        }

        public void RestoreSnapshot()
        {
            var sql = "USE master;\r\n" +

                $"ALTER DATABASE {_dbName} SET SINGLE_USER WITH ROLLBACK IMMEDIATE;\r\n" +

                $"RESTORE DATABASE {_dbName}\r\n" +
                $"FROM DATABASE_SNAPSHOT = '{ _dbSnapshotName }';\r\n" +

                $"ALTER DATABASE {_dbName} SET MULTI_USER;\r\n";

            ExecuteSqlAgainstMaster(sql);
        }

        private void ExecuteSqlAgainstMaster(string sql, params SqlParameter[] parameters)
        {
            using (var conn = new SqlConnection(_dbConnectionString))
            {
                conn.Open();
                var cmd = new SqlCommand(sql, conn) { CommandType = CommandType.Text };
                cmd.Parameters.AddRange(parameters);
                cmd.ExecuteNonQuery();
                conn.Close();
            }
        }
    }
}

It exposes 3 methods:

CreateSnapshot
This method creates the snapshot of the database. We will run this right at the start, before any of our tests run.
DeleteSnapshot
Deletes the snapshot we created. We will run this at the end, after all our tests have finished running.
RestoreSnapshot
Restores the database back to the snapshot we took earlier. We run this after each test has completed. This method relies on a connection to the database (perhaps unsurprisingly). It switches the database in use away from the database that is being restored prior to actually running the restore. It happens to shift to the master database (I believe that's entirely incidental; although I haven't tested).

SetupAndTeardown.cs

This class is responsible for setting up the snapshot we're going to use in our tests right before any of the tests have run (in the FixtureSetup method). It's also responsible for deleting the snapshot once all the tests have finished running (in the FixtureTearDown method). It should be noted that in this example I'm using NUnit and this class is written to depend on the hooks NUnit exposes for running code at the very beginning and end of the test cycle. All test frameworks have these hooks; if you're using something other than NUnit then it's just a case of swapping in the relevant attribute (everything tends to be attribute driven in the test framework world).

using NUnit.Framework;

namespace Testing.Shared
{
   [SetUpFixture]
   public class SetupAndTeardown
   {
      public static DatabaseSnapshot DatabaseSnapshot;

      [SetUp]
      public void FixtureSetup()
      {
         DatabaseSnapshot = new DatabaseSnapshot("MyDbName", "C:\\", "MySnapshot", "Data Source=.;initial catalog=MyDbName;integrated security=True;");

         try
         {
            // Try to delete the snapshot in case it was left over from aborted test runs
            DatabaseSnapshot.DeleteSnapshot();
         }
         catch { /* this should fail with snapshot does not exist */ }

         DatabaseSnapshot.CreateSnapshot();
      }

      [TearDown]
      public void FixtureTearDown()
      {
         DatabaseSnapshot.DeleteSnapshot();
      }
   }
}
TestBase.cs

All of our test classes are made to inherit from this class:

using NUnit.Framework;

namespace Testing.Shared
{
   public class TestBase
   {
      [TearDown]
      public void TearDown()
      {
         SetupAndTeardown.DatabaseSnapshot.RestoreSnapshot();
      }
   }
}

Which restores the database back to the snapshot position at the end of each test. And that... Is that!

Friday, 19 August 2016

The Ternary Operator <3 Destructuring

I'm addicted to the ternary operator. For reasons I can't explain, I cannot get enough of:

const thisOrThat = (someCondition) ? "this" : "or that"

The occasion regularly arises where I need to turn my lovely terse code into an if statement in order to set 2 variables instead of 1. I've been heartbroken; I hate doing:

let legWear: string, coat: boolean;
if (weather === "good") {
  legWear = "shorts";
  coat = false;
}
else {
  legWear = "jeans";
  coat = true;
}

Just going from setting one variable to setting two has been really traumatic:

  • I've had to stop using const and move to let. This has made my code less "truthful" in the sense that I never intend to reassign these variables again; they are intended to be immutable.
  • I've gone from 1 line of code to 9 lines of code. That's 9x the code for increasing the number of variables in play by 1. That's... heavy.
  • This third point only applies if you're using TypeScript (and I am): I have to specify the types of my variables up front if I want type safety.

ES2015 gives us another option. We can move back to the ternary operator if we change each branch to produce an object with the same shape. Then, using destructuring, we can pull out those object properties into consts:

const { legWear, coat } = (weather === "good") 
  ? { legWear: "shorts", coat: false }
  : { legWear: "jeans", coat: true }

With this approach we're keeping usage of const instead of let and we're only marginally increasing the amount of code we're writing. If you're using TypeScript you're back to being able to rely on the compiler correctly inferring your types; you don't need to specify. Awesome.
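To illustrate how the pattern scales, here's a minimal sketch extending the weather example with a third const (the hat variable is my own hypothetical addition, not from the original example):

```typescript
// Extending the ternary + object destructuring pattern to three consts;
// "hat" is a hypothetical addition for illustration.
const weather: string = "good";

const { legWear, coat, hat } = (weather === "good")
  ? { legWear: "shorts", coat: false, hat: "sun hat" }
  : { legWear: "jeans", coat: true, hat: "beanie" };
```

All three stay const, and the compiler still infers their types without annotations.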

Crowdfund You A Tuple

I thought I was done and then I saw this:

Daniel helpfully points out that there's an even terser syntax available to us:

const [ legWear, coat ] = (weather === "good") 
  ? [ "shorts", false ]
  : [ "jeans",  true  ]

The above is ES2015 array destructuring. We get exactly the same effect but it's a little terser as we don't have to repeat the prop names as we do when using object destructuring. From a TypeScript perspective the assignment side of the above is a Tuple which allows our type inference to flow through in the manner we'd hope.

Lovely. Thanks!

Saturday, 23 July 2016

Understanding Webpack's DefinePlugin (and using with TypeScript)

I've been searching for a way to describe what the DefinePlugin actually does. The docs say*:

Define free variables. Useful for having development builds with debug logging or adding global constants.

* Actually that should read "used to say". I've made some changes to the official docs.... (Surprisingly easy to do that by the way; it's just a wiki you can edit at will.)

I think I would describe it thusly: the DefinePlugin allows you to create global constants which can be configured at compile time. I find this very useful for allowing different behaviour between development builds and release builds. This post will demonstrate usage of this approach, talk about what's actually happening and how to get this working nicely with TypeScript.

If you just want to see this in action then take a look at this repo and keep your eyes open for usage of __VERSION__ and __IN_DEBUG__.

What Globals?

For our example we want to define 2 global constants: a string called __VERSION__ and a boolean called __IN_DEBUG__. The names are deliberately wacky to draw attention to the fact that these are not your everyday, common-or-garden variables. Them's "special". These constants will be initialised with different values depending on whether we are in a debug build or a production build. Usage of these constants in our code might look like this:

    if (__IN_DEBUG__) {
        console.log(`This app is version ${ __VERSION__ }`);
    }

So, if __IN_DEBUG__ is set to true this code would log out to the console the version of the app.

Configuring our Globals

To introduce these constants to webpack we're going to add this to our webpack configuration:

var webpack = require('webpack');

// ...

  plugins: [ 
      new webpack.DefinePlugin({
          __IN_DEBUG__: JSON.stringify(false),
          __VERSION__: JSON.stringify('1.0.0.' + Date.now())
      }),
      // ...
  ]
// ...

What's going on here? Well, each key of the object literal above represents one of our global constants. When you look at the value, just imagine each outer JSON.stringify( ... ) is not there. It's just noise. Imagine instead that you're seeing this:

          __IN_DEBUG__: false,
          __VERSION__: '1.0.0.' + Date.now()

A little clearer, right? __IN_DEBUG__ is given the boolean value false and __VERSION__ is given the string value of 1.0.0. plus the milliseconds-since-epoch value from Date.now(). What's happening here is well explained in Pete Hunt's excellent webpack howto: "definePlugin takes raw strings and inserts them". JSON.stringify facilitates this; it produces a string representation of a value that can be inlined into code. When the inlining takes place the actual output would be something like this:

    if (false) { // Because at compile time, __IN_DEBUG__ === false
        console.log(`This app is version ${ "1.0.0.1469268116580" }`); // And __VERSION__ === "1.0.0.1469268116580"
    }

And if you've got some UglifyJS or similar in the mix then, in the example above, this would actually strip out the statement above entirely since it's clearly a NOOP. Yay the dead code removal! If __IN_DEBUG__ was true then (perhaps obviously) this statement would be left in place as it wouldn't be dead code.
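To make the inlining mechanics concrete, here's a small sketch (the version value is made up) showing that JSON.stringify yields the source text of a value, which is exactly what DefinePlugin splices into your code:

```typescript
// JSON.stringify produces the *source text* of a value - quotes and all -
// which is what DefinePlugin inlines. (The version string here is made up.)
const version = JSON.stringify("1.0.0.1469268116580");
// version now holds the text: "1.0.0.1469268116580" (including the quote
// characters), so splicing it into code yields a valid string literal:
const inlinedStatement = "console.log(`This app is version ${ " + version + " }`);";

const inDebug = JSON.stringify(false);
// inDebug holds the raw text: false - a valid boolean literal
```

Without the JSON.stringify, the plugin would inline the characters 1.0.0.1469268116580 unquoted, which is not valid JavaScript.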

TypeScript and Define

The final piece of the puzzle is making TypeScript happy. It doesn't know anything about our global constants. So we need to tell it:

declare var __IN_DEBUG__: boolean;
declare var __VERSION__: string;

And that's it. Compile time constants are a go!

Thursday, 2 June 2016

Creating an ES2015 Map from an Array in TypeScript

I'm a great lover of ES2015's Map. However, just recently I tumbled over something I find a touch inconvenient about how you initialise a new Map from the contents of an Array in TypeScript.

This Doesn't Work

We're going to try something like this: (pilfered from the MDN docs)

var kvArray = [["key1", "value1"], ["key2", "value2"]];

// Use the regular Map constructor to transform a 2D key-value Array into a map
var myMap = new Map(kvArray);

Simple enough right? Well I'd rather assumed that I should be able to do something like this in TypeScript:

const iAmAnArray = [
  { value: "value1", text: "hello" },
  { value: "value2", text: "map" }
];

const iAmAMap = new Map<string, string>(
  iAmAnArray.map(x => [x.value, x.text])
);

However, to my surprise this errored out with:

[ts] Argument of type 'string[][]' is not assignable to parameter of type 'Iterable<[string, string]>'.
  Types of property '[Symbol.iterator]' are incompatible.
    Type '() => IterableIterator<string[]>' is not assignable to type '() => Iterator<[string, string]>'.
      Type 'IterableIterator<string[]>' is not assignable to type 'Iterator<[string, string]>'.
        Types of property 'next' are incompatible.
          Type '(value?: any) => IteratorResult<string[]>' is not assignable to type '(value?: any) => IteratorResult<[string, string]>'.
            Type 'IteratorResult<string[]>' is not assignable to type 'IteratorResult<[string, string]>'.
              Type 'string[]' is not assignable to type '[string, string]'.
                Property '0' is missing in type 'string[]'.

Disappointing right? It's expecting Iterable<[string, string]> and an Array with 2 elements that are strings is not inferred to be that.
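To see the widening in action, here's a small sketch (variable names are mine); annotating the variable as an array of tuples is one way to hand the compiler the shape it needs:

```typescript
// Without help, the compiler widens each pair to string[] (length unknown),
// so the array is string[][] - not the Iterable<[string, string]> Map wants.
const widened = [["key1", "value1"], ["key2", "value2"]]; // inferred: string[][]

// Annotating the variable as an array of tuples preserves the pair shape:
const tupled: [string, string][] = [["key1", "value1"], ["key2", "value2"]];
const myMap = new Map<string, string>(tupled);
```

The annotation does the same job as the type assertion: it tells the compiler each element is a two-string tuple rather than an arbitrary-length string array.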

This Does

It emerges that there is a way to do this though; you just need to give the compiler a clue. You need to include a type assertion of as [string, string] which tells the compiler that what you've just declared is a Tuple of string and string. (Please note that [string, string] corresponds to the types of the Key and Value of your Map and should be set accordingly.)

So a working version of the code looks like this:

const iAmAnArray = [
  { value: "value1", text: "hello" },
  { value: "value2", text: "map" }
];

const iAmAMap = new Map<string, string>(
  iAmAnArray.map(x => [x.value, x.text] as [string, string])
);

Or, to be terser, this:

const iAmAnArray = [
  { value: "value1", text: "hello" },
  { value: "value2", text: "map" }
];

const iAmAMap = new Map( // Look Ma!  No type annotations
  iAmAnArray.map(x => [x.value, x.text] as [string, string])
);

I've raised this as an issue with the TypeScript team; you can find details here.

Tuesday, 24 May 2016

The Mysterious Case of Webpack, Angular and jQuery

You may know that Angular ships with a cut-down version of jQuery called jQLite. It's still possible to use the full-fat jQuery; to quote the docs:

To use jQuery, simply ensure it is loaded before the angular.js file.

Now the wording rather implies that you're not using any module loader / bundler; rather that all files are being loaded via script tags, relying on the global variables that result from that. True enough, if you take a look at the Angular source you can see how this works:

  // bind to jQuery if present;
  var jqName = jq();
  jQuery = isUndefined(jqName) ? window.jQuery :   // use jQuery (if present)
           !jqName             ? undefined     :   // use jqLite
                                 window[jqName];   // use jQuery specified by `ngJq`

Amongst other things it looks for a jQuery variable which has been placed onto the window object. If it is found then jQuery is used; if it is not then it's jqLite all the way.

But wait! I'm using webpack

Me too! And one of the reasons is that we get to move away from reliance upon the global scope and towards proper modularisation. So how do we get Angular to use jQuery given the code we've seen above? Well, your first thought might be to npm install yourself some jQuery and then make sure you've got something like this in your entry file:

import "jquery"; // This'll fix it... Right?
import * as angular from "angular";

Wrong.

You need the ProvidePlugin

In your webpack.config.js you need to add the following entry to your plugins:

      new webpack.ProvidePlugin({
          "window.jQuery": "jquery"
      }),

This uses the webpack ProvidePlugin and, at the point of webpackification (© 2016 John Reilly) all references in the code to window.jQuery will be replaced with a reference to the webpack module that contains jQuery. So when you look at the bundled file you'll see that the code that checks the window object for jQuery has become this:

  jQuery = isUndefined(jqName) ? __webpack_provided_window_dot_jQuery :   // use jQuery (if present)
           !jqName             ? undefined     :   // use jqLite
                                 window[jqName];   // use jQuery specified by `ngJq`

That's right; webpack is providing Angular with jQuery whilst still not placing a jQuery variable onto the window. Neat huh?

Friday, 13 May 2016

Inlining Angular Templates with WebPack and TypeScript

This technique actually applies to pretty much any web stack where you have to supply templates; it just so happens that I'm using Angular 1.x in this case. Also I have an extra technique which is useful to handle the ng-include scenario.

Preamble

For some time I've been using webpack to bundle my front end. I write ES6 TypeScript; import statements and all. This is all sewn together using the glorious ts-loader to compile and emit ES6 code which is handed off to the wonderful babel-loader which transpiles it to ESold code. All with full source map support. It's wonderful.

However, up until now I've been leaving Angular to perform the relevant http requests at runtime when it needs to pull in templates. That works absolutely fine but my preference is to preload those templates. In fact I've written before about using the gulp angular template cache to achieve just that aim.

So I was wondering; in this modular world what would be the equivalent approach? Sure I could still use the gulp angular template cache approach but I would like something a little more deliberate and a little less magic. Also, I've discovered (to my cost) that when using the existing approach, it's possible to break the existing implementation without realising it; only finding out there's a problem in Production when unexpected http requests start happening. Finding these problems out at compile time rather than runtime is always to be strived for. So how?

raw-loader!

raw-loader allows you to load file content using require statements. This works well with the use case of inlining html. So I drop it into my webpack.config.js like so:


var path = require('path');

module.exports = {
    cache: true,
    entry: {
        main: './src/main.ts',

        vendor: [
          'babel-polyfill',
          'angular',
          'angular-animate',
          'angular-sanitize',
          'angular-ui-bootstrap',
          'angular-ui-router'
        ]
    },
    output: {
        path: path.resolve(__dirname, './dist/scripts'),
        filename: '[name].js',
        chunkFilename: '[chunkhash].js'
    },
    module: {
        loaders: [{
            test: /\.ts(x?)$/,
            exclude: /node_modules/,
            loader: 'babel-loader?presets[]=es2015!ts-loader'
        }, {
            test: /\.js$/,
            exclude: /node_modules/,
            loader: 'babel',
            query: {
                presets: ['es2015']
            }
        }, { // THIS IS THE MAGIC!
            test: /\.html$/,
            exclude: /node_modules/,
            loader: 'raw'
        }]  // THAT WAS THE MAGIC!
    },
    plugins: [
      // ....
    ],
    resolve: {
        extensions: ['', '.ts', '.tsx', '.js']
    }
};

With this in place, if someone requires a file with the html suffix then raw-loader comes in. So now we can swap this:

  $stateProvider
    .state('state1', {
      url: "/state1",
      templateUrl: "partials/state1.html"
    })

For this:

  $stateProvider
    .state('state1', {
      url: "/state1",
      template: require("./partials/state1.html")
    })

Now initially TypeScript is going to complain about your require statement. That's fair; outside of node-land it doesn't know what require is. No bother, you just need to drop in a one line simple definition file to sort this out; let me present webpack-require.d.ts:

declare var require: (filename: string) => any;

You've now inlined your template. And for bonus points, if you were to make a mistake in your path then webpack would shout at you at compile time; which is a good, good thing.

ng-include

The one use case that this doesn't cover is where your templates import other templates through use of the ng-include directive. They will still trigger http requests as the templates are served. The simple way to prevent that is by priming the angular $templateCache like so:

  app.run(["$templateCache",
    ($templateCache: ng.ITemplateCacheService) => {
        $templateCache.put("justSome.html", require("./justSome.html"));
        // Other templates go here...
    }]);

Now when the app spins up it already has everything it needs pre-cached.
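If you have a lot of partials, the priming code can be tidied with a small helper. This is entirely my own sketch; primeTemplates and the TemplateCache interface are names I've made up, covering just the $templateCache.put method the snippet above relies on:

```typescript
// Hypothetical helper (primeTemplates is my own name, not an Angular API).
// Takes a map of template name -> template content (the content having come
// from require(...) calls) and puts each entry into the template cache.
interface TemplateCache {
  put(key: string, value: string): void;
}

function primeTemplates(
  cache: TemplateCache,
  templates: { [name: string]: string }
): void {
  Object.keys(templates).forEach(name => cache.put(name, templates[name]));
}
```

It would be called from app.run with something like primeTemplates($templateCache, { "justSome.html": require("./justSome.html") }).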

Monday, 25 April 2016

Instant Stubs with JSON.Net (just add hot water)

I'd like you to close your eyes and imagine a scenario. You're handed a prototype system. You're told it works. It has no documentation. It has 0 unit tests. The hope is that you can take it on, refactor it, make it better and (crucially) not break it. Oh, and you don't really understand what the code does or why it does it either; information on that front is, alas, sorely lacking.

This has happened to me; it's alas not that unusual. The common advice handed out in this situation is: "add unit tests before you change it". That's good advice. We need to take the implementation that embodies the correctness of the system and create unit tests that set that implementation in stone. However, what say the system that you're hoping to add tests to takes a number of large and complex inputs from some external source and produces a similarly large and complex output?

You could start with integration tests. They're good but slow and crucially they depend upon the external inputs being available and unchanged (which is perhaps unlikely). What you could do (what I have done) is debug a working system. At each point that an input is obtained I have painstakingly transcribed the data, which allows me to subsequently hand code stub data. There comes a point when this is plainly untenable; it's just too much data to transcribe. At this point the temptation is to think "it's okay; I can live without the tests. I'll just be super careful with my refactoring... It'll be fine. It'll be fine. It'll be fine."

Actually, it probably won't be fine. And even if it is (miracles do happen) you're going to be fairly stressed as you wonder if you've been careful enough. What if there was another way? A way that wasn't quite so hard but that allowed you to add tests without requiring 3 months of hand coding...

Instant Stubs

What I've come up with is a super simple utility class for creating stubs / fakes. (I'm aware the naming of such things can be a little contentious.)

using Newtonsoft.Json;
using System;
using System.IO;

namespace MakeFakeData.UnitTests
{
  public static class Stubs
  {
    private static JsonSerializer _serializer = new JsonSerializer { NullValueHandling = NullValueHandling.Ignore };

    public static void Make<T>(string stubPath, T data)
    {
      try
      {
        if (string.IsNullOrEmpty(stubPath))
          throw new ArgumentNullException(nameof(stubPath));
        if (data == null)
          throw new ArgumentNullException(nameof(data));

        using (var sw = new StreamWriter(stubPath))
        using (var writer = new JsonTextWriter(sw) { 
            Formatting = Formatting.Indented, 
            IndentChar = ' ', 
            Indentation = 2})
        {
          _serializer.Serialize(writer, data);
        }
      }
      catch (Exception exc)
      {
        throw new Exception($"Failed to make {stubPath}", exc);
      }
    }

    public static T Load<T>(string stubPath)
    {
      try
      {
        if (string.IsNullOrEmpty(stubPath))
          throw new ArgumentNullException(nameof(stubPath));

        using (var file = File.OpenText(stubPath))
        using (var reader = new JsonTextReader(file))
        {
          return _serializer.Deserialize<T>(reader);
        }
      }
      catch (Exception exc)
      {
        throw new Exception($"Failed to load {stubPath}", exc);
      }
    }
  }
}

As you can see this class uses JSON.Net and exposes 2 methods:

Make
Takes a given piece of data and uses JSON.Net to serialise it as JSON to a file. (nb I choose to format the JSON for readability and exclude null values; both totally optional)
Load
Takes the given path and loads the associated JSON file and deserialises it back into an object.

The idea is this: we take our working implementation and, wherever it extracts data from an external source, we insert a temporary statement like this:

    var data = _dataService.GetComplexData();

    // Just inserted so we can generate the stub data...
    Stubs.Make($"{System.AppDomain.CurrentDomain.BaseDirectory}\\data.json", data);

The next time you run the implementation you'll find the app generates a data.json file containing the complex data serialized to JSON. Strip out your Stubs.Make statements from the implementation and we're ready for the next stage.

Using your JSON

What you need to do now is take the new and shiny data.json file and include it within your unit test project. Also, for each JSON file you have, the Build Action in VS needs to be set to Content and the Copy to Output Directory to Copy if newer.

Then within your unit tests you can write code like this:

    var dummyData = Stubs.Load<ComplexDataType>("Stubs/data.json");

Which pulls in your data from the JSON file and deserialises it into the original types. With this in hand you can plug together a unit test based on an existing implementation which depends on external data much faster than the hand-cranked method of old.

Finally, before the wildebeest of TDD descend upon me howling and wailing, let me say again; I anticipate this being useful when you're trying to add tests to something that already exists but is untested. Clearly it would be better not to be in this situation in the first place.

Tuesday, 22 March 2016

Elvis and King Concat

I hate LINQ's Enumerable.Concat when bringing together IEnumerables. Not the behaviour (I love that!) but rather how code ends up looking when you use it. Consider this:

var concatenated = (myCollection?.Select(x => new ConcatObj(x)) ?? new ConcatObj[0]).Concat(
    myOtherCollection?.Select(x => new ConcatObj(x)) ?? new ConcatObj[0]
 );

In this example I'm bringing together 2 collections, either of which may be null (more on that later). I think we can all agree this doesn't represent a world of readability. I've also had to create a custom class ConcatObj because you can't create an empty array for an anonymous type in C#.

Attempt #1: ConcatMany

After toying around with a bunch of different ideas I created this extension method:

public static class FunctionalExtensions
{
    public static IEnumerable<T> ConcatMany<T>(
        this IEnumerable<T> original,
        params IEnumerable<T>[] enumerablesToConcat) => original.Concat(
            enumerablesToConcat.Where(e => e != null).SelectMany(c => c)
        );
}

Thanks to the joy of params this extension allows me to bring together multiple IEnumerables into a single one but has the advantage of considerably cleaner calling code:

var concatenated = Enumerable.Empty<ConcatObj>().ConcatMany(
    myCollection?.Select(x => new ConcatObj(x)),
    myOtherCollection?.Select(x => new ConcatObj(x))
    );

For my money this is more readable and intent is clearer. Particularly as the number of contributing IEnumerables goes up. The downside is that I can’t use anonymous objects because you need to tell the compiler what the type is when using Enumerable.Empty.

Wouldn't it be nice to have both:

  1. Readable code and
  2. Anonymous objects?

Attempt #2: EnumerableExtensions.Create

After batting round a few ideas (thanks Matt) I settled on this implementation:

public static class EnumerableExtensions
{
    public static IEnumerable<TSource> Create<TSource>(params IEnumerable<TSource>[] enumerables)
    {
        return Concat(enumerables.Where(e => e != null));
    }

    private static IEnumerable<TSource> Concat<TSource>(IEnumerable<IEnumerable<TSource>> enumerables)
    {
        foreach (var enumerable in enumerables)
        {
            foreach (var item in enumerable)
            {
                yield return item;
            }
        }
    }
}

Which allows for calling code like this:

var concatenated = EnumerableExtensions.Create(
    myCollection?.Select(x => new { Anonymous = x.Types }),
    myOtherCollection?.Select(x => new { Anonymous = x.Types })
    );

That's right; anonymous types are back! Strictly speaking the Concat method above could be converted into a single SelectMany (and boy does ReSharper like telling me) but I'm quite happy with it as is. And to be honest, I so rarely get to use yield in my own code; I thought it might be nice to give it a whirl 😊

What Gives Elvis?

If you look closely at the implementation you'll notice that I purge all nulls when I'm bringing together the Enumerables. For why? Some may legitimately argue this is a bad idea. However, there is method in my "bad practice".

I've chosen to treat null as "not important" for this use case. I'm doing this because it emerges that ASP.NET MVC deserialises empty collections as nulls. (See here and play spot the return null;) Which is a pain. But thanks to the null purging behaviour of EnumerableExtensions.Create I can trust in the null-conditional (Elvis) operator to not do me wrong.

Thursday, 17 March 2016

Atom - Recovering from Corrupted Packages

Every now and then when I try and update my packages in Atom I find this glaring back at me:

Ug. The problem is that my atom packages have become corrupt. Quite how I couldn't say. But that's the problem. Atom, as I know from bitter experience, will not recover from this. It just sits there feeling sorry for itself. However, getting back to where you belong is simpler than you imagine:

  1. Shutdown Atom
  2. In the file system go to [Your name]/.atom (and bear in mind this is Windows; Macs / Linux may be different)
  3. You'll see an .apm folder that contains all your packages. Delete this.

When you next fire up Atom these packages will automagically come back but this time they shouldn't be corrupt. Instead you should see the happiness of normality restored:

Friday, 4 March 2016

TFS 2012 meet PowerShell, Karma and BuildNumber

To my lasting regret, TFS 2012 has no direct support for PowerShell. Such a shame as PowerShell scripts can do a lot of heavy lifting in a build process. Well, here we're going to brute force TFS 2012 into running PowerShell scripts. And along the way we'll also get Karma test results publishing into TFS 2012 as an example usage. Nice huh? Let's go!

PowerShell via csproj

It's time to hack the csproj (or whatever project file you have) again. We're going to add an AfterBuild target to the end of the file. This target will be triggered after the build completes (as the name suggests):

  <Target Name="AfterBuild">
    <Message Importance="High" Text="AfterBuild: PublishUrl = $(PublishUrl), BuildUri = $(BuildUri), Configuration = $(Configuration), ProjectDir = $(ProjectDir), TargetDir = $(TargetDir), TargetFileName = $(TargetFileName), BuildNumber = $(BuildNumber), BuildDefinitionName = $(BuildDefinitionName)" />
    <Exec Command="powershell.exe -NonInteractive -ExecutionPolicy RemoteSigned &quot;&amp; '$(ProjectDir)AfterBuild.ps1' '$(Configuration)' '$(ProjectDir)' '$(TargetDir)' '$(PublishUrl)' '$(BuildNumber)' '$(BuildDefinitionName)'&quot;" />
  </Target>

There are 2 things happening in this target:

  1. A message is printed out during compilation which contains details of the various compile time variables. This is nothing more than a console.log statement really; it's useful for debugging and so I keep it around. You'll notice one of them is called $(BuildNumber); more on that later.
  2. A command is executed; PowerShell! This invokes PowerShell with the -NonInteractive and -ExecutionPolicy RemoteSigned flags. It passes a script to be executed called AfterBuild.ps1 that lives in the root of the project directory. To that script a number of parameters are supplied; compile time variables that we may use in the script.

Where's my BuildNumber and BuildDefinitionName?

So you've checked in your changes and kicked off a build on the server. You're picking over the log messages and you're thinking: "Where's my BuildNumber?". Well, TFS 2012 does not have that set as a variable at compile time by default. This stumped me for a while but thankfully I wasn't the only person wondering... As ever, Stack Overflow had the answer. So, deep breath, it's time to hack the TFS build template file.

Check out the DefaultTemplate.11.1.xaml file from TFS and open it in your text editor of choice. It's find and replace time! (There are probably 2 instances that need replacement.) Perform a find for the below:

[String.Format(&quot;/p:SkipInvalidConfigurations=true {0}&quot;, MSBuildArguments)]

And replace it with this:

[String.Format(&quot;/p:SkipInvalidConfigurations=true /p:BuildNumber={1} /p:BuildDefinitionName={2} {0}&quot;, MSBuildArguments, BuildDetail.BuildNumber, BuildDetail.BuildDefinition.Name)]

Pretty long line eh? Let's try breaking that up to make it more readable: (but remember in the XAML it needs to be a one liner)

[String.Format("/p:SkipInvalidConfigurations=true 
    /p:BuildNumber={1} 
    /p:BuildDefinitionName={2} {0}", MSBuildArguments, BuildDetail.BuildNumber, BuildDetail.BuildDefinition.Name)]

We're just adding 2 extra parameters of BuildNumber and BuildDefinitionName to the invocation of MSBuild. Once we've checked this back in, BuildNumber and BuildDefinitionName will be available on future builds. Yay! Important! You must have a build name that does not feature spaces; probably there's a way to pass spaces here but I'm not sure what it is.

AfterBuild.ps1

You can use your AfterBuild.ps1 script to do any number of things. In my case I'm going to use MSTest to publish some test results which have been generated by Karma into TFS:

param ([string]$configuration, [string]$projectDir, [string]$targetDir, [string]$publishUrl, [string]$buildNumber, [string] $buildDefinitionName)

$ErrorActionPreference = 'Stop'
Clear

function PublishTestResults([string]$resultsFile) {
 Write-Host 'PublishTests'
 $mstest = 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\MSTest.exe'

 Write-Host "Using $mstest at $pwd"
 Write-Host "Publishing: $resultsFile"

 & $mstest /publishresultsfile:$resultsFile /publish:http://my-tfs-server:8080/tfs /teamproject:MyProject /publishbuild:$buildNumber /platform:'Any CPU' /flavor:Release
}

function FailBuildIfThereAreTestFailures([string]$resultsFile) {
 $results = [xml](GC $resultsFile)
 $outcome = $results.TestRun.ResultSummary.outcome
 $fgColor = if($outcome -eq "Failed") { "Red" } else { "Green" }
 $total = $results.TestRun.ResultSummary.Counters.total
 $passed = $results.TestRun.ResultSummary.Counters.passed
 $failed = $results.TestRun.ResultSummary.Counters.failed

 $failedTests = $results.TestRun.Results.UnitTestResult | Where-Object { $_.outcome -eq "Failed" }

 Write-Host "Test Results: $outcome" -ForegroundColor $fgColor -BackgroundColor "Black"
 Write-Host "Total tests: $total"
 Write-Host "Passed: $passed"
 Write-Host "Failed: $failed"
 Write-Host

 $failedTests | ForEach-Object {
  Write-Host "Failed test: $($_.testName)"
  Write-Host $_.Output.ErrorInfo.Message
  Write-Host $_.Output.ErrorInfo.StackTrace
 }

 Write-Host

 if($outcome -eq "Failed") { 
  Write-Host "Failing build as there are broken tests"
  $host.SetShouldExit(1)
 }
}

function Run() {
  Write-Host "Running AfterBuild.ps1 using Configuration: $configuration, projectDir: $projectDir, targetDir: $targetDir, publishUrl: $publishUrl, buildNumber: $buildNumber, buildDefinitionName: $buildDefinitionName"

 if($buildNumber) {
  $resultsFile = "$projectDir\test-results.trx"
  PublishTestResults $resultsFile
  FailBuildIfThereAreTestFailures $resultsFile
 }
}

# Off we go...
Run

Assuming we have a build number, this script kicks off the PublishTestResults function above. So we won't attempt to publish test results when compiling in Visual Studio on our dev machines. The script looks for MSTest.exe in a certain location on disk (the default VS 2015 installation location, in fact; it may be installed elsewhere on your build machine). MSTest is then invoked and passed a file called test-results.trx which is expected to live in the root of the project. All being well, the test results will be registered with the running build and will be visible when you look at test results in TFS.

Finally, in FailBuildIfThereAreTestFailures we parse the test-results.trx file (and for this I'm totally indebted to David Robert's helpful Gist). We write out the results to the host so they'll show up in the MSBuild logs. Also, and this is crucial, if there are any failures we fail the build by exiting PowerShell with a code of 1. We deliberately swallow any error that Karma raises earlier when it detects failed tests; we do this because we want to publish the failed test results to TFS before we kill the build.

Bonus Karma: test-results.trx

If you've read a previous post of mine you'll be aware that it's possible to get MSBuild to kick off npm build tasks. Specifically I have MSBuild kicking off an npm run build. My package.json looks like this:

  "scripts": {
    "test": "karma start --reporters mocha,trx --single-run --browsers PhantomJS",
    "clean": "gulp delete-dist-contents",
    "watch": "gulp watch",
    "build": "gulp build",
    "postbuild": "npm run test"
  },

You can see that the postbuild hook kicks off the test script in turn. And that test script kicks off a single run of karma. I'm not going to go over setting up Karma at all here; there are other posts out there that cover that admirably. But I wanted to share news of the karma trx reporter. This reporter is the thing that produces our test-results.trx file; trx being the format that TFS likes to deal with.

So now we've got a PowerShell hook into our build process (something very useful in itself) which we are using to publish Karma test results into TFS 2012. They said it couldn't be done. They were wrong. Huzzah!!!!!!!

Monday, 29 February 2016

Creating Angular UI Routes in the Controller

So you're creating a link with the Angular UI Router. You're passing more than a few parameters and it's getting kinda big. Something like this:

    <a class="contains-icon" 
       ui-sref="Entity.Edit({ entityId: (vm.selectedEntityId ? vm.selectedEntityId: null), initialData: vm.initialData })">
         <i class="fa fa-pencil"></i>Edit
    </a>

See? It's too long to fit on the screen without wrapping. It's clearly mad and bad.

Generally I try to keep the logic in a view to a minimum. Logic in a view makes it harder to read and makes the behaviour of the app harder to reason about. Also, it's not testable and (if you're using some kind of static typing like TypeScript) it is entirely out of the realms of what a compiler can catch. So what to do? Move the URL generation to the controller. That's what I decided to do after I had a typo in my view which I didn't catch until post-commit.

ui-sref in the Controller

Actually, that's not exactly what you want to do. If you look at the Angular UI Router docs you will see that ui-sref is:

...a directive that binds a link (<a> tag) to a state. If the state has an associated URL, the directive will automatically generate & update the href attribute via the $state.href() method.

So what we actually want to do is use the $state.href() method in our controller. To take our example above, we'll create another method on our controller called getEditUrl:

export class EntityController {

    $state: angular.ui.IStateService;

    static $inject = ["$state"];
    constructor($state: angular.ui.IStateService) {
        this.$state = $state;
    }

    //... Other stuff

    getEditUrl() {
        return this.$state.href("Entity.Edit", { 
            entityId: this.selectedEntityId ? this.selectedEntityId : null, 
            initialData: this.initialData 
        });
    }
}

You can see I'm using TypeScript here; but feel free to strip out the type annotations and go with raw ES6 classes; that'll still give you testability if not static typing.
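Since the URL generation now lives on the controller, it can be unit tested with a hand-rolled $state stub; no Angular test harness required. Below is a minimal sketch in plain ES6 (as noted, the type annotations are optional). The controller is trimmed to just what getEditUrl needs, it uses the entityId parameter name from the original ui-sref, and the canned URL returned by the stub is an assumption for illustration.

```javascript
// Trimmed-down EntityController: just enough to exercise getEditUrl.
class EntityController {
    constructor($state) {
        this.$state = $state;
        this.selectedEntityId = null;
        this.initialData = null;
    }

    getEditUrl() {
        return this.$state.href("Entity.Edit", {
            entityId: this.selectedEntityId ? this.selectedEntityId : null,
            initialData: this.initialData
        });
    }
}

// Hand-rolled stub for $state: records the call and returns a canned URL.
const calls = [];
const $stateStub = {
    href: (state, params) => {
        calls.push({ state, params });
        return "#/entity/42/edit"; // canned; the real $state.href builds the URL
    }
};

const vm = new EntityController($stateStub);
vm.selectedEntityId = 42;
const url = vm.getEditUrl();
```

With a stub like this you can assert both the URL returned and that the controller asked for the right state with the right parameters.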

Now we've added the getEditUrl method, we just need to reference it in our view:

    <a class="contains-icon" ng-href="{{vm.getEditUrl()}}"><i class="fa fa-pencil"></i>Edit</a>

Note we've ditched the ui-sref directive and gone with Angular's native ng-href. Within that directive we evaluate our getEditUrl method as an expression, which gives us our route. As a bonus, our view is much less cluttered and more comprehensible as a result. How lovely.

Friday, 19 February 2016

Visual Studio, tsconfig.json and external TypeScript compilation

TypeScript first gained support for tsconfig.json back with the 1.5 release. However, to my lasting regret and surprise, Visual Studio will not gain meaningful support for it until TypeScript 1.8 ships. That said, if you want it now, it's already available to use in beta.

I've already leapt aboard. Whilst there are a number of bugs in the beta, it's still totally usable. So use it.

External TypeScript Compilation and the VS build

Whilst tsconfig.json is useful and super cool, it has limitations. It allows you to deactivate compilation upon file saving using the compileOnSave setting. What it doesn't allow is deactivation of the TypeScript compilation that happens as part of a Visual Studio build. That may not matter for the vanilla workflow of just dropping TypeScript files into a Visual Studio web project and having VS invoke the TypeScript compilation. However, it comes to matter when your workflow deviates from the norm, as mine does: using external compilation of TypeScript within Visual Studio is a little tricky. My own use case is somewhat atypical but perhaps not uncommon.

I'm working on a project which has been built using TypeScript since TS 0.9. Not surprisingly, this started off using the default Visual Studio / TypeScript workflow. Active development on the project ceased around 9 months ago. Now it's starting up again. It's a reasonable sized web app and the existing functionality is, in the main, fine. But the users want to add some new screens.

Like any developer, I want to build with the latest and greatest. In my case, this means I want to write modular ES6 using TypeScript. With this approach my code can be leaner and I have less script ordering drama in my life. (Yay import statements!) This can be done by bringing together webpack, TypeScript (ts-loader) and Babel (babel-loader). There's an example of this approach here. Given the size of the existing codebase I'd rather leave the legacy TypeScript as is and isolate my new approach to the screens I'm going to build. Obviously I'd like to have a common build process for all the codebase at some point but I've got a deadline to meet and so a half-old / half-new approach is called for (at least for the time being).

Goodbye TypeScript Compilation in VS

Writing modular ES6 TypeScript which is fully transpiled to old-school JS is not possible using the Visual Studio tooling at present. For what it's worth I think that SystemJS compilation may make this more possible in the future but I don't really know enough about it to be sure. That's why I'm bringing webpack / Babel into the mix right now. I don't want Visual Studio to do anything for the ES6 code; I don't want it to compile. I want to deactivate my TypeScript compilation for the ES6 code. I can't do this from the tsconfig.json so I'm in a bit of a hole. What to do?

Well, as of (I think) TypeScript 1.7 it's possible to deactivate TypeScript compilation in Visual Studio. To quote:

there is an easier way to disable TypeScriptCompile:

Just add <TypeScriptCompileBlocked>true</TypeScriptCompileBlocked> to the .csproj, e.g. in the first <PropertyGroup>.

Awesomeness!

But wait, this means that the legacy TypeScript isn't being compiled any longer. Bear in mind, I'm totally happy with the existing / legacy TypeScript compilation. Nooooooooooooooo!!!!!!!!!!!!!!!

Hello TypeScript Compilation outside VS

Have no fear, I gotcha. What we're going to do is ensure that Visual Studio triggers two external TypeScript builds as part of its own build process:

  • The modular ES6 TypeScript (new)
  • The legacy TypeScript (old)

How do we do this? Through the magic of build targets. We need to add this to our .csproj: (I add it near the end; I'm not sure if location matters though)

  <PropertyGroup>
    <CompileDependsOn>
      $(CompileDependsOn);
      WebClientBuild;
    </CompileDependsOn>
    <CleanDependsOn>
      $(CleanDependsOn);
      WebClientClean
    </CleanDependsOn>
    <CopyAllFilesToSingleFolderForPackageDependsOn>
      CollectGulpOutput;
      CollectLegacyTypeScriptOutput;
      $(CopyAllFilesToSingleFolderForPackageDependsOn);
    </CopyAllFilesToSingleFolderForPackageDependsOn>
    <CopyAllFilesToSingleFolderForMsdeployDependsOn>
      CollectGulpOutput;
      CollectLegacyTypeScriptOutput;
      $(CopyAllFilesToSingleFolderForMsdeployDependsOn);
    </CopyAllFilesToSingleFolderForMsdeployDependsOn>
  </PropertyGroup>
  <Target Name="WebClientBuild">
    <Exec Command="npm install" />
    <Exec Command="npm run build-legacy-typescript" />
    <Exec Command="npm run build -- --mode $(ConfigurationName)" />
  </Target>
  <Target Name="WebClientClean">
    <Exec Command="npm run clean" />
  </Target>
  <Target Name="CollectGulpOutput">
    <ItemGroup>
      <_CustomFiles Include="dist\**\*" />
      <FilesForPackagingFromProject Include="%(_CustomFiles.Identity)">
        <DestinationRelativePath>dist\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
      </FilesForPackagingFromProject>
    </ItemGroup>
    <Message Text="CollectGulpOutput list: %(_CustomFiles.Identity)" />
  </Target>
  <Target Name="CollectLegacyTypeScriptOutput">
    <ItemGroup>
      <_CustomFiles Include="Scripts\**\*.js" />
      <FilesForPackagingFromProject Include="%(_CustomFiles.Identity)">
        <DestinationRelativePath>Scripts\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
      </FilesForPackagingFromProject>
    </ItemGroup>
    <Message Text="CollectLegacyTypeScriptOutput list: %(_CustomFiles.Identity)" />
  </Target>

There's a few things going on here; let's take them one by one.

The WebClientBuild Target

This target triggers our external builds. One by one, it runs the following commands:

  • npm install
    Installs the npm packages.
  • npm run build-legacy-typescript
    Runs the "build-legacy-typescript" script in our package.json.
  • npm run build -- --mode $(ConfigurationName)
    Runs the "build" script in our package.json, passing through a mode parameter of either "Debug" or "Release" from MSBuild, indicating whether we're creating a debug or a release build.
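Inside the gulpfile, that mode parameter has to be plucked back out of the command line. Here's a minimal, dependency-free sketch of doing that (the real build might well use a library like yargs instead, and the "Debug" default is an assumption):

```javascript
// Sketch: reading the --mode flag passed through by MSBuild via
// `npm run build -- --mode Release`. Falls back to "Debug" when absent.
function getMode(argv) {
    const i = argv.indexOf("--mode");
    return i !== -1 && argv[i + 1] ? argv[i + 1] : "Debug";
}

const mode = getMode(process.argv.slice(2));
console.log("Building in " + mode + " mode");
```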

As you've no doubt gathered, I'm following the convention of using the scripts element of my package.json as a repository for the various build tasks I might have for a web project. It looks like this:

{
  // ...
  "scripts": {
    "test": "karma start --reporters mocha,junit --single-run --browsers PhantomJS",
    "build-legacy-typescript": "tsc -v&&tsc --project Scripts",
    "clean": "gulp delete-dist-contents",
    "watch": "gulp watch",
    "build": "gulp build"
  },
  // ...
}

As you can see, "build-legacy-typescript" first invokes tsc (which is registered as a devDependency) to print out the version of the compiler. Then it invokes tsc again with the --project flag pointed directly at the Scripts directory. This is where the legacy TypeScript and its associated tsconfig.json reside. Et voilà, the old / existing TypeScript is compiled just as it was previously by VS itself.
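For illustration, the Scripts/tsconfig.json that tsc --project picks up would look something along these lines. The options shown are typical for legacy, non-modular TypeScript rather than this project's exact settings:

```json
{
  "compilerOptions": {
    "target": "es5",
    "sourceMap": true,
    "noEmitOnError": true
  },
  "compileOnSave": false
}
```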

Next, the "build" script invokes a gulp task called, descriptively, "build". This task caters for our brand new codebase of modular ES6 TypeScript. When run, this task will invoke webpack, copy static files, build less etc. Quick digression: you can see there's a "watch" script that does the same thing on a file-watching basis; I use that during development.

The WebClientClean Target

This target cleans up the artefacts created by WebClientBuild.

The CollectLegacyTypeScriptOutput and CollectGulpOutput Targets

Since we're compiling our TypeScript outside of VS we need to tell MSBuild / MSDeploy about the externally compiled assets in order that they are included in the publish pipeline. Here I'm standing on the shoulders of Steve Cadwallader's excellent post. Thanks Steve!

CollectLegacyTypeScriptOutput and CollectGulpOutput respectively include all the built files contained in the "Scripts" and "dist" folders when a publish takes place. You don't need this when you're building on your own machine, but if you're looking to publish (either from your machine or from TFS) then you will need exactly this. Believe me, that last sentence was typed with a memory of great pain and frustration.

So in the end, as far as TypeScript is concerned, I'm using Visual Studio solely as an editor. It's the hooks in the .csproj that ensure that compilation happens. It seems a little quirky that we still need to have the original TypeScript targets in the .csproj file as well, but it works; that's all that matters.

Monday, 1 February 2016

TFS 2012, .NET 4.5 and C# 6

So, you want to use C# 6 language features and you’re working on an older project that’s still rocking .NET 4.5. Well, with some caveats, you can.

The new compiler will compile targeting older framework versions. Well that’s all lovely; let’s all go home.

Now. What say you've got an old, old build server? It's TFS 2012 Update 2, creaking away, still glad to be alive and kind of wondering why it hasn't been upgraded or retired. This is where you want to compile your C# 6 code targeting .NET 4.5. Well, it can be done. Here's how:

  1. Install Visual Studio 2015 on the build server (I'm told this can be achieved using Microsoft Build Tools 2015 instead, but I haven't tried it myself so caveat emptor)
  2. Set the MSBuild Arguments in the build definition to /p:VisualStudioVersion=14.0 (i.e. Visual Studio 2015 mode)
  3. In each project that uses C# 6 syntax, install the Microsoft.Net.Compilers NuGet package with a quick install-package Microsoft.Net.Compilers

That’s it; huzzah! String interpolation here I come…

Thursday, 14 January 2016

Coded UI and the Curse of the Docking Station

I’ve a love / hate relationship with Coded UI. Well hate / hate might be more accurate. Hate perhaps married with a very grudging respect still underpinned by a wary bitterness. Yes, that’s about the size of it. Why? Well, when Coded UI works, it’s fab. But it’s flaky as anything. Anybody who’s used the technology is presently nodding sagely and holding back the tears. It’s all a bit... tough.

I’ve recently discovered another quirk to add to the list: docking stations. I was back working on a project which had a Coded UI test suite. I’d heard tell that there were problems with the tests and was just taking a look at them. The first hurdle I fell at was getting the tests to run locally. The tests had first been developed on a standard desktop build and, as much as this can ever be said of Coded UI tests, they worked. However, the future had happened. The company in question was no longer using old school desktop towers. Nope, they’d reached for the sky and equipped the whole office with Surface Pro 3s, hot desks, docking stations and big, big monitors. It looked terribly flash.

Coded UI was not happy.

The Mouse.Click behaviour wasn’t working. Most tests need to be able to click on buttons, dropdowns etc; that’s part of a normal UI. And so it was with these tests, and this is where they fell over. The reason didn’t become clear for a while; it wasn’t until we tried tweaking our implementation of the tests that we realised what was happening. The tests normally found buttons / dropdowns etc on the screen and then attempted to perform a Mouse.Click upon them. We changed the implementation to be subtly different: instead of just clicking on the element, we amended the test to move the mouse to the element and then perform the click.

Aha!

Rather than steadily moving towards an element and clicking, the pointer was swerving like a drunk man crossing the road at 3am. It completely missed the element it was aiming for and clicked upon a seemingly random area of the screen. This is Coded UI doing “pin the tail on the donkey”.

After more time than I'd like to admit, I happened upon the solution. I tended to dock my Surface and then tune my monitor resolution to the one most optimal for coding (i.e. really high res). This is what messes with Coded UI's head: the resolution change. If I wanted to be able to run tests successfully, all I had to do was switch back to the resolution I initially booted with. Alternatively, I could restart my computer so it launched with the resolution I was presently using.

Once you follow this guidance, Coded UI has a moment of clarity, gets sober and starts Mouse.Click-ing like a pro.