
Brand New Fonting Awesomeness

Love me some Font Awesome. Absolutely wonderful. However, I came a cropper when following the instructions on using the all new Font Awesome 5 with React. The instructions for standard icons work fine. But if you want to use brand icons then this does not help you out much. There are two problems:

  1. Font Awesome's brand icons are not part of the @fortawesome/free-solid-svg-icons package
  2. The method of icon usage illustrated (i.e. with the FontAwesomeIcon component) doesn't work. It doesn't render owt.

Brand Me Up Buttercup#

You want brands? Well you need @fortawesome/free-brands-svg-icons. Obvs, right?

yarn add @fortawesome/fontawesome-svg-core
yarn add @fortawesome/free-brands-svg-icons
yarn add @fortawesome/react-fontawesome

Now usage:

import * as React from 'react';
import { FontAwesomeIcon } from '@fortawesome/react-fontawesome';
import { faReact } from '@fortawesome/free-brands-svg-icons';

export const Framework = () => (
  <p>
    Favorite Framework: <FontAwesomeIcon icon={faReact} />
  </p>
);

Here we've ditched the "library / magic-string" approach from the documentation for one which explicitly imports and uses the required icons. I suspect this will be good for tree-shaking as well but, hand-on-heart, I haven't rigorously tested that. I'm not sure why the approach I'm using isn't documented actually. Mysterious! I've seen no ill-effects from using it but perhaps YMMV. Proceed with caution...
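
For contrast, here's roughly what the library / magic-string approach looks like; this is a sketch from memory of the docs (the component name is just for illustration), so check Font Awesome's documentation for the current API:

import { library } from '@fortawesome/fontawesome-svg-core';
import { FontAwesomeIcon } from '@fortawesome/react-fontawesome';
import { faReact } from '@fortawesome/free-brands-svg-icons';

// Register the icon once, somewhere central like your app's entry point...
library.add(faReact);

// ...then reference it anywhere via magic strings. Mistype 'fab' or 'react'
// and nothing renders - there's no compiler to catch it.
export const OtherFramework = () => (
  <p>Favorite Framework: <FontAwesomeIcon icon={['fab', 'react']} /></p>
);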

Update: It is documented!#

Yup - information on this approach is out there; but it's less obvious than you might hope. Read all about it here. For what it's worth, the explicit import approach seems to be playing second fiddle to the library / magic-string one. I'm not too sure why. For my money, explicit imports are clearer, less prone to errors and better set up for optimisation. Go figure...

Feel free to set me straight in the comments!

Auth0, TypeScript and ASP.NET Core

Most applications I write have some need for authentication and perhaps authorisation too. In fact, most apps most people write fall into that bracket. Here's the thing: Auth done well is a *big* chunk of work. And the minute you start thinking about that you almost invariably lose focus on the thing you actually want to build and ship.

So this Christmas I decided it was time to take a look into offloading that particular problem onto someone else. I knew there were third parties who provided Auth-As-A-Service - time to give them a whirl. On the recommendation of a friend, I made Auth0 my first port of call. Lest you be expecting a full breakdown of the various players in this space, let me stop you now; I liked Auth0 so much I strayed no further. Auth0 kicks AAAS. (I'm so sorry)

What I wanted to build#

My criteria for "auth success" was this:

  • I want to build a SPA, specifically a React SPA. Ideally, I shouldn't need a back end of my own at all
  • I want to use TypeScript on my client.

But, for when I do implement a back end:

  • I want that to be able to use the client side's Auth tokens to allow access to Auth routes on my server.
  • I want to be able to identify the user, given the token, to provide targeted data
  • Oh, and I want to use .NET Core 2 for my server.

And in achieving all of the above I want to add minimal code to my app. Not War and Peace. My code should remain focused on doing what it does.

Boil a Plate#

I ended up with unqualified ticks for all my criteria, but it took some work to find out. I will say that Auth0 do travel the extra mile in terms of getting you up and running. When you create a new Client in Auth0 you're given the option to download a quick start using the technology of your choice.

This was a massive plus for me. I took the quickstart provided and ran with it to get me to the point of meeting my own criteria. You can use this boilerplate for your own ends. Herewith, a walkthrough:

The Walkthrough#

Fork and clone the repo at this location:

What have we got? Two folders: ClientApp contains the React app; Web contains the ASP.NET Core app. Now we need to get set up with Auth0 and customise our config.

Setup Auth0#

Here's how to get the app set up with Auth0; you're going to need to sign up for a (free) Auth0 account. Then log in to Auth0 and go to the management portal.


  • Create a Client with the name of your choice and use the Single Page Web Applications template.
  • From the new Client Settings page take the Domain and Client ID and update the similarly named properties in the appsettings.Development.json and appsettings.Production.json files with these settings.
  • To the Allowed Callback URLs setting add the URLs: http://localhost:3000/callback,http://localhost:5000/callback - the first of these facilitates running in Debug mode, the second in Production mode. If you were to deploy this you'd need to add other callback URLs in here too.


  • Create an API with the name of your choice (I recommend the same as the Client to avoid confusion) and an identifier, which can be anything you like; I like to use the URL of my app but it's your call.
  • From the new API Settings page take the Identifier and update the Audience property in the appsettings.Development.json and appsettings.Production.json files with that value.
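
Once that's done, both appsettings files should end up with something of this shape. The nesting under an "Auth0" section and the placeholder values are assumptions on my part, inferred from the Configuration["Auth0:..."] lookups in the server code below:

{
  "Auth0": {
    "Domain": "your-tenant.auth0.com",
    "ClientId": "YOUR_CLIENT_ID",
    "Audience": "https://your-api-identifier"
  }
}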

Running the App#

Production build#

Build the client app with yarn build in the ClientApp folder. (Don't forget to yarn install first.) Then, in the Web folder, dotnet restore, dotnet run and open your browser to http://localhost:5000


Debug build#

Run the client app using webpack-dev-server via yarn start in the ClientApp folder. Fire up VS Code in the root of the repo and hit F5 to debug the server. Then open your browser to http://localhost:3000

The Tour#

When you fire up the app you're presented with a "you are not logged in!" message and the option to login. Do it; it'll take you to the Auth0 "lock" screen where you can sign up / login. Once you do that you'll be asked to confirm access:

All this is powered by Auth0's auth0-js npm package. (Excellent type definition files are available from Definitely Typed; I'm using the @types/auth0-js package DT publishes.) Usage is super simple; it exposes an authorize method that, when called, triggers the Auth0 lock screen. Once you've "okayed" you'll be taken back to the app, which will use the parseHash method to extract the access token that Auth0 has provided. Take a look at how our authStore makes use of auth0-js (don't be scared; it uses mobx, but you could use anything):


import { Auth0UserProfile, WebAuth } from 'auth0-js';
import { action, computed, observable, runInAction } from 'mobx';
import { IAuth0Config } from '../../config';
import { StorageFacade } from '../storageFacade';

interface IStorageToken {
  accessToken: string;
  idToken: string;
  expiresAt: number;
}

const STORAGE_TOKEN = 'storage_token';

export class AuthStore {
  @observable.ref auth0: WebAuth;
  @observable.ref userProfile: Auth0UserProfile;
  @observable.ref token: IStorageToken;

  constructor(config: IAuth0Config, private storage: StorageFacade) {
    this.auth0 = new WebAuth({
      domain: config.domain,
      clientID: config.clientId,
      redirectUri: config.redirectUri,
      audience: config.audience,
      responseType: 'token id_token',
      scope: 'openid email profile do:admin:thing' // the do:admin:thing scope is custom and defined in the scopes section of our API in the Auth0 dashboard
    });
  }

  initialise() {
    // pick up an existing token from local storage (if there is one)
    const token = this.parseToken(;
    if (token && token.accessToken) {
      this.setSession(token);
    }
    // keep multiple tabs in sync by listening for storage changes
    window.addEventListener('storage', this.onStorageChanged);
  }

  parseToken(tokenString: string) {
    const token = JSON.parse(tokenString || '{}');
    return token;
  }

  onStorageChanged = (event: StorageEvent) => {
    if (event.key === STORAGE_TOKEN) {
      runInAction(() => this.token = this.parseToken(event.newValue));
    }
  }

  @computed get isAuthenticated() {
    // Check whether the current time is past the
    // access token's expiry time
    return this.token && new Date().getTime() < this.token.expiresAt;
  }

  login = () => {
    // triggers the Auth0 lock screen
    this.auth0.authorize();
  }

  handleAuthentication = () => {
    this.auth0.parseHash((err, authResult) => {
      if (authResult && authResult.accessToken && authResult.idToken) {
        const token = {
          accessToken: authResult.accessToken,
          idToken: authResult.idToken,
          // Set the time that the access token will expire at
          expiresAt: authResult.expiresIn * 1000 + new Date().getTime()
        };
        this.setSession(token);
      } else if (err) {
        // tslint:disable-next-line:no-console
        console.log(err);
        alert(`Error: ${err.error}. Check the console for further details.`);
      }
    });
  }

  @action
  setSession(token: IStorageToken) {
    this.token = token;, JSON.stringify(token));
  }

  getAccessToken = () => {
    const accessToken = this.token.accessToken;
    if (!accessToken) {
      throw new Error('No access token found');
    }
    return accessToken;
  }

  loadProfile = async () => {
    const accessToken = this.token.accessToken;
    if (!accessToken) {
      return undefined;
    }
    this.auth0.client.userInfo(accessToken, (err, profile) => {
      if (err) { throw err; }
      if (profile) {
        runInAction(() => this.userProfile = profile);
        return profile;
      }
      return undefined;
    });
  }

  @action
  logout = () => {
    // Clear access token and ID token from local storage;
    this.token = null;
    this.userProfile = null;
  }
}

Once you're logged in the app offers you more in the way of navigation options. A "Profile" screen shows you the details your React app has retrieved from Auth0 about you. This is backed by the client.userInfo method on auth0-js. There's also a "Ping" screen which is where your React app talks to your ASP.NET Core server. The screenshot below illustrates the result of hitting the "Get Private Data" button:

The "Get Server to Retrieve Profile Data" button is interesting as it illustrates that the server can get access to your profile data as well. There's nothing insecure here; it gets the details using the access token retrieved from Auth0 by the ClientApp and passed to the server. It's the API we set up in Auth0 that is in play here. The app uses the Domain and the access token to talk to Auth0 like so:


// Retrieve the access_token claim which we saved in the OnTokenValidated event
var accessToken = User.Claims.FirstOrDefault(c => c.Type == "access_token").Value;

// If we have an access_token, then retrieve the user's information
if (!string.IsNullOrEmpty(accessToken))
{
    var domain = _config["Auth0:Domain"];
    var apiClient = new AuthenticationApiClient(domain);
    var userInfo = await apiClient.GetUserInfoAsync(accessToken);
    return Ok(userInfo);
}
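
For completeness, here's roughly what passing that token up from the ClientApp looks like; this is a sketch rather than the repo's actual code, and the /api/ping/private route is hypothetical:

// pass the Auth0 access token as a bearer token on each request to our server
async function getPrivateData(authStore: { getAccessToken(): string }) {
  const response = await fetch('/api/ping/private', { // hypothetical route
    headers: { Authorization: `Bearer ${authStore.getAccessToken()}` }
  });
  if (!response.ok) { throw new Error(`Request failed: ${response.status}`); }
  return response.json();
}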

We can also access the sub claim, which uniquely identifies the user:


// We're not doing anything with this, but hey! It's useful to know where the user id lives
var userId = User.Claims.FirstOrDefault(c => c.Type == System.Security.Claims.ClaimTypes.NameIdentifier).Value; // our userId is the sub value

The reason our ASP.NET Core app works with Auth0 and that we have access to the access token here in the first place is because of our startup code:


public void ConfigureServices(IServiceCollection services)
{
    var domain = $"https://{Configuration["Auth0:Domain"]}/";
    services.AddAuthentication(options =>
    {
        options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    }).AddJwtBearer(options =>
    {
        options.Authority = domain;
        options.Audience = Configuration["Auth0:Audience"];
        options.Events = new JwtBearerEvents
        {
            OnTokenValidated = context =>
            {
                if (context.SecurityToken is JwtSecurityToken token)
                {
                    if (context.Principal.Identity is ClaimsIdentity identity)
                    {
                        identity.AddClaim(new Claim("access_token", token.RawData));
                    }
                }
                return Task.FromResult(0);
            }
        };
    });
    // ....
}


We're pretty much done now; just one magic button to investigate: "Get Admin Data". If you presently try and access the admin data you'll get a 403 Forbidden. It's forbidden because that endpoint relies on the "do:admin:thing" scope in our claims:


[Authorize(Scopes.DoAdminThing)]
public IActionResult GetUserDoAdminThing()
{
    return Ok("Admin endpoint");
}

public static class Scopes
{
    // the do:admin:thing scope is custom and defined in the scopes section of our API in the Auth0 dashboard
    public const string DoAdminThing = "do:admin:thing";
}

This is wired up in our ASP.NET Core app like so:


services.AddAuthorization(options =>
{
    options.AddPolicy(Scopes.DoAdminThing,
        policy => policy.Requirements.Add(new HasScopeRequirement(Scopes.DoAdminThing, domain)));
});

// register the scope authorization handler
services.AddSingleton<IAuthorizationHandler, HasScopeHandler>();


public class HasScopeHandler : AuthorizationHandler<HasScopeRequirement>
{
    protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, HasScopeRequirement requirement)
    {
        // If user does not have the scope claim, get out of here
        if (!context.User.HasClaim(c => c.Type == "scope" && c.Issuer == requirement.Issuer))
            return Task.CompletedTask;

        // Split the scopes string into an array
        var scopes = context.User.FindFirst(c => c.Type == "scope" && c.Issuer == requirement.Issuer).Value.Split(' ');

        // Succeed if the scope array contains the required scope
        if (scopes.Any(s => s == requirement.Scope))
            context.Succeed(requirement);

        return Task.CompletedTask;
    }
}

The reason we're 403ing at present is that when our HasScopeHandler executes, requirement.Scope has the value of "do:admin:thing" and our scopes do not contain that value. To fix that, go to your API in the Auth0 management console and add the scope:

Note that you can control how this scope is acquired using "Rules" in the Auth0 management portal.

You won't be able to access the admin endpoint yet because you're still rocking with the old access token; pre-newly-added scope. But when you next login to Auth0 you'll see a prompt like this:

Which demonstrates that you're being granted an extra scope. With your new shiny access token you can now access the oh-so-secret Admin endpoint.
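
If you're curious, you can decode the new access token (jwt.io does this nicely) and see the extra scope in its payload; it'll have roughly this shape, with illustrative values:

{
  "iss": "https://your-tenant.auth0.com/",
  "sub": "auth0|abc123",
  "aud": "https://your-api-identifier",
  "scope": "openid email profile do:admin:thing"
}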

I had some more questions about Auth0 as I'm still new to it myself. To see my question (and the very helpful answer!) go here:

React Component Curry

Everyone loves curry don't they? I don't know about you but I'm going for one on Friday.

When React 0.14 shipped, it came with a new way to write React components. Rather than as an ES2015 class or using React.createClass there was now another way: stateless functional components.

These are components which have no state (the name gives it away) and a simple syntax; they are a function which takes your component props as a single parameter and they return JSX. Think of them as the render method of a standard component just with props as a parameter.

The advantage of these components is that they can reduce the amount of code you have to write for a component which requires no state. This is even more true if you're using ES2015 syntax, as you have arrow functions and destructuring to help. Embrace the terseness!
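
To illustrate (Greeting is an invented example, not from any particular codebase):

import * as React from 'react';

interface IGreetingProps { name: string; }

// A stateless functional component: just a function from props to JSX
const Greeting = ({ name }: IGreetingProps) => <p>Hello {name}!</p>;

// Usage: <Greeting name="Buttercup" />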

Mine's a Balti#

There is another advantage of this syntax. If you have a number of components which share similar implementation you can easily make component factories by currying:

function iconMaker(fontAwesomeClassName: string) {
  return props => <i className={ `fa ${ fontAwesomeClassName }` } />;
}

const ThumbsUpIcon = iconMaker("fa-thumbs-up");
const TrophyIcon = iconMaker("fa-trophy");

// Somewhere else inside a render function:
<p>This is totally <ThumbsUpIcon />.... You should win a <TrophyIcon /></p>

So our iconMaker is a function which, when called with a Font Awesome class name, produces a function which, when invoked, will return the HTML required to render that icon. This is a super simple example, a bhaji if you will, but you can imagine how useful this technique can be when you've more of a banquet in mind.

ES6 + TypeScript + Babel + React + Flux + Karma: The Secret Recipe

I wrote a while ago about how I was using some different tools in a current project:

  • React with JSX
  • Flux
  • ES6 with Babel
  • Karma for unit testing

I have fully come to love and appreciate all of the above. I really like working with them. However. There was still an ache in my soul and a thorn in my side. Whilst I love the syntax of ES6 and even though I've come to appreciate the clarity of JSX, I have been missing something. Perhaps you can guess? It's static typing.

It's actually been really good to have the chance to work without it because it's made me realise what a productivity boost having static typing actually is. The number of silly, time-burning mistakes that a compiler could have caught for me.... Sigh.

But the pain is over. The dark days are gone. It's possible to have strong typing, courtesy of TypeScript, plugged into this workflow. It's yours for the taking. Take it. Take it now!

What a Guy Wants#

I decided a couple of months ago what I wanted to have in my setup:

  1. I want to be able to write React / JSX in TypeScript. Naturally I couldn't achieve that by myself but handily the TypeScript team decided to add support for JSX with TypeScript 1.6. Ooh yeah.
  2. I wanted to be able to write ES6. When I realised the approach for writing ES6 and having the transpilation handled by TypeScript wasn't clear, I had another idea. I thought "what if I write ES6 and hand off the transpilation to Babel?" i.e. use TypeScript for type checking, not for transpilation. (There's a tsconfig.json sketch after this list.) I realised that James Brantly had my back here already. Enter Webpack and ts-loader.
  3. Debugging. Being able to debug my code is non-negotiable for me. If I can't debug it I'm less productive. (I'm also bitter and twisted inside.) I should say that I wanted to be able to debug my original source code. Thanks to the magic of sourcemaps, that mad thing is possible.
  4. Karma for unit testing. I've become accustomed to writing my tests in ES6 and running them on a continual basis with Karma. This allows for a rather good debugging story as well. I didn't want to lose this when I moved to TypeScript. I didn't.

So I've talked about what I want and I've alluded to some of the solutions that there are. The question now is how to bring them all together. This post is, for the most part, going to be about correctly orchestrating a number of gulp tasks to achieve the goals listed above. If you're after the Blue Peter "here's one I made earlier" moment then take a look at the es6-babel-react-flux-karma repo in the Microsoft/TypeScriptSamples repo on GitHub.


/* eslint-disable no-var, strict, prefer-arrow-callback */
'use strict';

var gulp = require('gulp');
var gutil = require('gulp-util');
var connect = require('gulp-connect');
var eslint = require('gulp-eslint');

var webpack = require('./gulp/webpack');
var staticFiles = require('./gulp/staticFiles');
var tests = require('./gulp/tests');
var clean = require('./gulp/clean');
var inject = require('./gulp/inject');

var lintSrcs = ['./gulp/**/*.js'];

gulp.task('delete-dist', function (done) { { done(); });
});

gulp.task('build-process.env.NODE_ENV', function () {
  process.env.NODE_ENV = 'production';
});

gulp.task('build-js', ['delete-dist', 'build-process.env.NODE_ENV'], function(done) { { done(); });
});

gulp.task('build-other', ['delete-dist', 'build-process.env.NODE_ENV'], function() {;
});

gulp.task('build', ['build-js', 'build-other', 'lint'], function () {;
});

gulp.task('lint', function () {
  return gulp.src(lintSrcs)
    .pipe(eslint())
    .pipe(eslint.format());
});

gulp.task('watch', ['delete-dist'], function() {
  process.env.NODE_ENV = 'development';
  Promise.all([
  ]).then(function() {
    gutil.log('Now that initial assets (js and css) are generated inject will start...'); { if (postInjectCb) { postInjectCb(); } });
  }).catch(function(error) {
    gutil.log('Problem generating initial assets (js and css)', error);
  });, ['lint']);;;
});

gulp.task('watch-and-serve', ['watch'], function() {
  postInjectCb = stopAndStartServer;
});

var postInjectCb = null;
var serverStarted = false;

function stopAndStartServer() {
  if (serverStarted) {
    gutil.log('Stopping server');
    connect.serverClose();
    serverStarted = false;
  }
  startServer();
}

function startServer() {
  gutil.log('Starting server');
  connect.server({
    root: './dist',
    port: 8080
  });
  serverStarted = true;
}

Let's start picking this apart; what do we actually have here? Well, we have 2 gulp tasks that I want you to notice:


build#

This is likely the task you would use when deploying. It takes all of your source code, builds it, provides cache-busting filenames (e.g. main.dd2fa20cd9eac9d1fb2f.js), injects your shell SPA page with references to the files and deploys everything to the ./dist/ directory. So that's TypeScript, static assets like images and CSS all made ready for Production.

The build task also implements this advice:

When deploying your app, set the NODE_ENV environment variable to production to use the production build of React which does not include the development warnings and runs significantly faster.

watch-and-serve#

This task represents "development mode" or "debug mode". It's what you'll likely be running as you develop your app. It does the same as the build task but with some important distinctions.

  • As well as building your source it also runs your tests using Karma
  • This task is not triggered on a once-only basis, rather your files are watched and each tweak of a file will result in a new build and a fresh run of your tests. Nice eh?
  • It spins up a simple web server and serves up the contents of ./dist (i.e. your built code) in order that you can easily test out your app.
  • In addition, whilst it builds your source it does not minify your code and it emits sourcemaps. For why? For debugging! You can go to http://localhost:8080/ in your browser of choice, fire up the dev tools and you're off to the races; debugging like gangbusters. It also doesn't bother to provide cache-busting filenames as Chrome dev tools are smart enough to not cache localhost.
  • Oh and Karma.... If you've got problems with a failing test then head to http://localhost:9876/ and you can debug the tests in your dev tools.
  • Finally, it runs ESLint in the console. Not all of my files are TypeScript; essentially the build process (aka "gulp-y") files are all vanilla JS. So they're easily breakable. ESLint is there to provide a little reassurance on that front.

Now let's dig into each of these in a little more detail.


Webpack#

Let's take a look at what's happening under the covers of the build and watch tasks.

webpack with ts-loader and babel-loader is what we're using to compile our ES6 TypeScript. ts-loader uses the TypeScript compiler to, um, compile TypeScript and emit ES6 code. This is then passed on to the babel-loader which transpiles it from ES6 down to ES-old-school. It all gets brought together in 2 files; main.js which contains the compiled result of the code written by us and vendor.js which contains the compiled result of 3rd party / vendor files. The reason for this separation is that vendor files are likely to change fairly rarely whilst our own code will constantly be changing. This separation allows for quicker compile times upon file changes as, for the most part, the vendor files will not need to be included in this process.

Our gulpfile.js above uses the following module (./gulp/webpack.js):

'use strict';

var gulp = require('gulp');
var gutil = require('gulp-util');
var webpack = require('webpack');
var WebpackNotifierPlugin = require('webpack-notifier');
var webpackConfig = require('../webpack.config.js');

function buildProduction(done) {
  // modify some webpack config options
  var myProdConfig = Object.create(webpackConfig);
  myProdConfig.output.filename = '[name].[hash].js';

  myProdConfig.plugins = myProdConfig.plugins.concat(
    // make the vendor.js file with a cachebusting filename
    new webpack.optimize.CommonsChunkPlugin({ name: 'vendor', filename: 'vendor.[hash].js' }),
    new webpack.optimize.DedupePlugin(),
    new webpack.optimize.UglifyJsPlugin()
  );

  // run webpack
  webpack(myProdConfig, function(err, stats) {
    if (err) { throw new gutil.PluginError('webpack:build', err); }
    gutil.log('[webpack:build]', stats.toString({
      colors: true
    }));
    if (done) { done(); }
  });
}

function createDevCompiler() {
  // show me some sourcemap love people
  var myDevConfig = Object.create(webpackConfig);
  myDevConfig.devtool = 'inline-source-map';
  myDevConfig.debug = true;

  myDevConfig.plugins = myDevConfig.plugins.concat(
    // make the vendor.js file
    new webpack.optimize.CommonsChunkPlugin({ name: 'vendor', filename: 'vendor.js' }),
    new WebpackNotifierPlugin({ title: 'Webpack build', excludeWarnings: true })
  );

  // create a single instance of the compiler to allow caching
  return webpack(myDevConfig);
}

function buildDevelopment(done, devCompiler) {
  // run webpack, stats) {
    if (err) { throw new gutil.PluginError('webpack:build-dev', err); }
    gutil.log('[webpack:build-dev]', stats.toString({
      chunks: false, // dial down the output from webpack (it can be noisy)
      colors: true
    }));
    if (done) { done(); }
  });
}

function bundle(options) {
  var devCompiler;

  function build(done) {
    if (options.shouldWatch) {
      buildDevelopment(done, devCompiler);
    } else {
      buildProduction(done);
    }
  }

  if (options.shouldWatch) {
    devCompiler = createDevCompiler();'src/**/*', function() { build(); });
  }

  return new Promise(function(resolve, reject) {
    build(function (err) {
      if (err) {
        reject(err);
      } else {
        resolve('webpack built');
      }
    });
  });
}

module.exports = {
  build: function() { return bundle({ shouldWatch: false }); },
  watch: function() { return bundle({ shouldWatch: true }); }
};

Hopefully this is fairly self-explanatory; essentially buildDevelopment performs the development build (providing sourcemap support) and buildProduction builds for Production (providing minification support). Both are driven by this webpack.config.js:

/* eslint-disable no-var, strict, prefer-arrow-callback */
'use strict';

var path = require('path');

module.exports = {
  cache: true,

  entry: {
    // The entry point of our application; the script that imports all other scripts in our SPA
    main: './src/main.tsx',

    // The packages that are to be included in vendor.js
    vendor: [
      // (the third-party modules your app uses live here, e.g. 'react')
    ]
  },

  // Where the output of our compilation ends up
  output: {
    path: path.resolve(__dirname, './dist/scripts'),
    filename: '[name].js',
    chunkFilename: '[chunkhash].js'
  },

  module: {
    loaders: [{
      // The loader that handles ts and tsx files. These are compiled
      // with the ts-loader and the output is then passed through to the
      // babel-loader. The babel-loader uses the es2015 and react presets
      // in order that jsx and es6 are processed.
      test: /\.ts(x?)$/,
      exclude: /node_modules/,
      loader: 'babel-loader?presets[]=es2015&presets[]=react!ts-loader'
    }, {
      // The loader that handles any js files presented alone.
      // It passes these to the babel-loader which (again) uses the es2015
      // and react presets.
      test: /\.js$/,
      exclude: /node_modules/,
      loader: 'babel',
      query: {
        presets: ['es2015', 'react']
      }
    }]
  },

  plugins: [],

  resolve: {
    // Files with the following extensions are fair game for webpack to process
    extensions: ['', '.webpack.js', '.web.js', '.ts', '.tsx', '.js']
  }
};


Injection#

Your compiled output needs to be referenced from some kind of HTML page. So we've got this:

<!doctype html>
<html lang="en">
<head>
  <meta charSet="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>ES6 + Babel + React + Flux + Karma: The Secret Recipe</title>
  <!-- inject:css -->
  <!-- endinject -->
  <link rel="stylesheet" href="">
</head>
<body>
  <div id="content"></div>
  <!-- inject:js -->
  <!-- endinject -->
</body>
</html>

Which is no more than a boilerplate HTML page with a couple of key features:

  • a single <div /> element in the <body /> which is where our React app is going to be rendered.
  • <!-- inject:css --> and <!-- inject:js --> placeholders where css and js is going to be injected by gulp-inject (there's an example of the injected output after this list).
  • a single <link /> to the Bootstrap CDN. This sample app doesn't actually serve up any css generated as part of the project. It could but it doesn't. When it comes to injection time no css will actually be injected. This has been left in place as, more typically, a project would have some styling served up.
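
Post-injection, those placeholders become script tags. In a production build the result looks something like this (the main.js hash is the one mentioned earlier; the vendor hash is illustrative):

<!-- in place of the inject:js placeholders, something like: -->
<script src="scripts/vendor.a1b2c3d4e5f6a7b8c9d0.js"></script>
<script src="scripts/main.dd2fa20cd9eac9d1fb2f.js"></script>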

This is fed into our inject task (which, as you can see in the gulpfile above, runs as part of both the build and the watch). It takes css and javascript and, using our shell template, creates a new page which has the css and javascript dropped into their respective placeholders:

'use strict';

var gulp = require('gulp');
var inject = require('gulp-inject');
var glob = require('glob');

function injectIndex(options) {
  var postInjectCb = options.postInjectCb;
  var postInjectCbTriggerId = null;

  function run() {
    var target = gulp.src('./src/index.html');
    var sources = gulp.src([ jsCssGlob ], { read: false });

    return target
      .pipe(inject(sources, { ignorePath: '/dist/', addRootSlash: false, removeTags: true }))
      .pipe(gulp.dest('./dist'))
      .on('end', function() { // invoke postInjectCb after 1s
        if (postInjectCbTriggerId || !postInjectCb) { return; }
        postInjectCbTriggerId = setTimeout(function() {
          postInjectCb();
          postInjectCbTriggerId = null;
        }, 1000);
      });
  }

  var jsCssGlob = 'dist/**/*.{js,css}';

  function checkForInitialFilesThenRun() {
    glob(jsCssGlob, function (er, files) {
      var filesWeNeed = ['dist/scripts/main', 'dist/scripts/vendor'/*, 'dist/styles/main'*/];

      function fileIsPresent(fileWeNeed) {
        return files.some(function(file) {
          return file.indexOf(fileWeNeed) !== -1;
        });
      }

      if (filesWeNeed.every(fileIsPresent)) {
        run();
      } else {
        // not all the initial files exist yet; check again shortly
        setTimeout(checkForInitialFilesThenRun, 100);
      }
    });
  }

  checkForInitialFilesThenRun();

  if (options.shouldWatch) {, function(evt) {
      if (evt.path && evt.type === 'changed') {
        run();
      }
    });
  }
}

module.exports = {
  build: function() { return injectIndex({ shouldWatch: false }); },
  watch: function(postInjectCb) { return injectIndex({ shouldWatch: true, postInjectCb: postInjectCb }); }
};

This also triggers the server to serve up the new content.

Static Files#

Your app will likely rely on a number of static assets; images, fonts and whatnot. This script picks up the static assets you've defined and places them in the dist folder ready for use:

'use strict';
var gulp = require('gulp');
var cache = require('gulp-cached');

var targets = [
  // In my own example I don't use any of the targets below, they
  // are included to give you more of a feel of how you might use this
  { description: 'FONTS', src: './fonts/*', dest: './dist/fonts' },
  { description: 'STYLES', src: './styles/*', dest: './dist/styles' },
  { description: 'FAVICON', src: './favicon.ico', dest: './dist' },
  { description: 'IMAGES', src: './images/*', dest: './dist/images' }
];

function copy(options) {
  // Copy files from their source to their destination
  function run(target) {
    gulp.src(target.src)
      .pipe(cache(target.description))
      .pipe(gulp.dest(target.dest));
  }
  function watch(target) {, function() { run(target); });
  }
  targets.forEach(function(target) {
    run(target);
    if (options.shouldWatch) {
      watch(target);
    }
  });
}

module.exports = {
  build: function() { return copy({ shouldWatch: false }); },
  watch: function() { return copy({ shouldWatch: true }); }
};


Karma#

Finally, we're ready to get our tests set up to run continually with Karma. The watch task above triggers the following module (./gulp/tests.js):

'use strict';

var Server = require('karma').Server;
var path = require('path');
var gutil = require('gulp-util');

module.exports = {
  watch: function() {
    // Documentation:
    var karmaConfig = {
      configFile: path.join(__dirname, '../karma.conf.js'),
      singleRun: false,
      plugins: ['karma-webpack', 'karma-jasmine', 'karma-mocha-reporter', 'karma-sourcemap-loader', 'karma-phantomjs-launcher', 'karma-phantomjs-shim'], // karma-phantomjs-shim only in place until PhantomJS hits 2.0 and has function.bind
      reporters: ['mocha']
    };

    new Server(karmaConfig, karmaCompleted).start();

    function karmaCompleted(exitCode) {
      gutil.log('Karma has exited with:', exitCode);
    }
  }
};

When running in watch mode it's possible to debug the tests by going to: http://localhost:9876/. It's also possible to run the tests standalone with a simple npm run test. Running them like this also outputs the results to an XML file in JUnit format; this can be useful for integrating into CI solutions that don't natively pick up test results.
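
The npm run test wiring lives in package.json and would be something along these lines; --single-run is the real Karma switch, but the exact reporters used here are an assumption on my part:

{
  "scripts": {
    "test": "karma start --single-run --reporters mocha,junit"
  }
}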

Whichever approach we use for running tests, we use the following karma.conf.js file to configure Karma:

/* eslint-disable no-var, strict */
'use strict';

var webpackConfig = require('./webpack.config.js');

module.exports = function(config) {
  // Documentation:
  config.set({
    browsers: [ 'PhantomJS' ],

    files: [
      'test/import-babel-polyfill.js', // This ensures we have the es6 shims in place from babel
      'src/**/*.{ts,tsx}',
      'test/**/*.tests.{ts,tsx}'
    ],

    port: 9876,

    frameworks: [ 'jasmine', 'phantomjs-shim' ],

    logLevel: config.LOG_INFO, //config.LOG_DEBUG

    preprocessors: {
      'test/import-babel-polyfill.js': [ 'webpack', 'sourcemap' ],
      'src/**/*.{ts,tsx}': [ 'webpack', 'sourcemap' ],
      'test/**/*.tests.{ts,tsx}': [ 'webpack', 'sourcemap' ]
    },

    webpack: {
      devtool: 'eval-source-map', //'inline-source-map', - inline-source-map doesn't work at present
      debug: true,
      module: webpackConfig.module,
      resolve: webpackConfig.resolve
    },

    webpackMiddleware: {
      quiet: true,
      stats: {
        colors: true
      }
    },

    // reporter options
    mochaReporter: {
      colors: {
        success: 'bgGreen',
        info: 'cyan',
        warning: 'bgBlue',
        error: 'bgRed'
      }
    },

    junitReporter: {
      outputDir: 'test-results', // results will be saved as $outputDir/$browserName.xml
      outputFile: undefined, // if included, results will be saved as $outputDir/$browserName/$outputFile
      suite: ''
    }
  });
};

As you can see, we're still using our webpack configuration from earlier to configure much of how the transpilation takes place.

And that's it; we have a workflow for developing in TypeScript using React with tests running in an automated fashion. I appreciate this has been a rather long blog post but I hope I've clarified somewhat how this all plugs together and works. Do leave a comment if you think I've missed something.

Babel 5 -> Babel 6#

This post has actually been sat waiting to be published for some time. I'd got this solution up and running with Babel 5. Then they shipped Babel 6 and (as is the way with "breaking changes") broke sourcemap support and thus torpedoed this workflow. Happily that's now been resolved. But if you should experience any wonkiness - it's worth checking that you're using the latest and greatest of Babel 6.

Things Done Changed

Some people fear change. Most people actually. I'm not immune to that myself, but not in the key area of technology. Any developer that fears change when it comes to the tools and languages that he / she is using is in the wrong business. Because what you're using to cut code today will not last. The language will evolve, the tools and frameworks that you love will die out and be replaced by new ones that are different and strange. In time, the language you feel you write as a native will fall out of favour, replaced by a new upstart.

My first gig was writing telecoms software using Delphi. I haven't touched Delphi (or telecoms for that matter) for over 10 years now. Believe me, I grok that things change.

That is the developer's lot. If you're able to accept that then you'll be just fine. For my part I've always rather relished the new and so I embrace it. However, I've met a surprising number of devs that are outraged when they realise that the language and tools they have used since their first job are not going to last. They do not go gentle into that good dawn. They rage, rage against the death of WebForms. My apologies to Dylan Thomas.

I recently started a new contract. This always brings a certain amount of change. This is part of the fun of contracting. However, the change was more significant in this case. As a result, the tools that I've been using for the last couple of months have been rather different to those that I'm used to. I've been outside my comfort zone. I've loved it. And now I want to reflect upon it. Because, in the words of Socrates, "the unexamined life is not worth living".

The Shock of the New (Toys)#

I'd been brought in to work on a full stack ASP.Net project. However, I've initially been working on a separate project which is entirely different. A web client app which has nothing to do with ASP.Net at all. It's a greenfield app which is built using the following:

  1. React / Flux
  2. WebSockets / Protocol Buffers
  3. Browserify
  4. ES6 with Babel
  5. Karma
  6. Gulp
  7. Atom

Where to begin? Perhaps at the end - Atom.

How Does it Feel to be on Your Own?#

When all around me, as far as the eye could see, were monitors displaying Visual Studio in all its grey glory, I was hammering away on Atom. It felt pretty good actually.

The app I was working on was a React / Flux app. You know what that means? JSX! At the time the project began Visual Studio did not have good editor support for JSX (something that the shipping of VS 2015 may have remedied but I haven't checked). So, rather than endure a life dominated with red squigglies I jumped ship and moved over to using GitHub's Atom to edit code.

I rather like it. Whilst VS is a full blown IDE, Atom is a text editor. A very pretty one. And crucially, one that can be extended by plugins of which there is a rich ecosystem. You want JSX support? There's a plugin for that. You want something that formats JSON nicely for you? There's a plugin for that too.

My only criticism of Atom really is that it doesn't handle large files well and it crashes a lot. I'm quite surprised by both of these characteristics given that in contrast to VS it is so small. You'd think the basics would be better covered. Go figure. It still rocks though. It looks so sexy - how could I not like it?

Someone to watch over me#

I've been using Gulp for a while now. It's a great JavaScript task runner and incredibly powerful. Previously though, I've used it as part of a manual build step (even plumbing it into my csproj). With the current project I've moved over to using the watch functionality of gulp. So I'm scrapping triggering gulp tasks manually. Instead we have gulp configured to gaze lovingly at the source files and, when they change, re-run the build steps.

This is nice for a couple of reasons:

  • When I want to test out the app the build is already done - I don't have to wait for it to happen.
  • When I do bad things I find out faster. So I've got JSHint being triggered by my watch. If I write code that makes JSHint sad (and I haven't noticed the warnings from the atom plugin) then they'll appear in the console. Likewise, my unit tests are continuously running in response to file changes (in an ncrunch-y sorta style) and so I know straight away if I'm breaking tests. Rather invaluable in the dynamic world of JavaScript.

Karma, Karma, Karma, Chameleon#

In the past, when using Visual Studio, it made sense to use the mighty Chutzpah which allows you to run JS unit tests from within VS itself. I needed a new way to run my Jasmine unit tests. The obvious choice was Karma (built by the Angular team I think?). It's really flexible.

You're using Browserify? No bother. You're writing ES6 and transpiling with Babel? Not a problem. You want code coverage? That we can do. You want an integration for TeamCity? That too is sorted....

Karma is fantastic. Fun fact: originally it was called Testacular. I kind of get why they changed the name but the world is all the duller for it. A side effect of the name change is that due to invalid search results I know a lot more about Buddhism than I used to.

I cry Babel, Babel, look at me now#

Can you not wait for the future? Me neither. Even though it's 2015 and Back to the Future II takes place in only a month's time. So I'm not waiting for a million and one browsers to implement ES6 and IE 8 to finally die dammit. Nope, I have a plan and it's Babel. Transpile the tears away, write ES6 and have Babel spit out EStoday.

I've found this pretty addictive. Once you've started using ES6 features it's hard to go back. Take destructuring - I can't get enough of it.
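
If you haven't tried it, a taste (nothing project-specific here, just the feature itself):

// object destructuring: pluck what you need straight out of an object
const { width, height } = { width: 1920, height: 1080, depth: 32 };

// array destructuring, with a rest element for whatever's left over
const [first, ...others] = ['balti', 'bhaji', 'naan'];

// and in function parameters, which is where it really shines
const describe = ({ name, version }) => `${name} v${version}`;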

Whilst I love Babel, it has caused me some sadness as well. My beloved TypeScript is currently not in the mix; Babel is instead sat squarely where I want TS to be. I'm without static types and rather bereft. You can certainly live without them but having done so for a while I'm pretty clear now that static typing is a massive productivity win. You don't have to hold the data structures that you're working on in your head so much, code completion gives you what you need there, you can focus more on the problem that you're trying to solve. You also burn less time on silly mistakes. If you accidentally change the return type of a function you're likely to know straight away. Refactoring is so much harder without static types. I could go on.

All this goes to say: I want my static typing back. It wasn't really an option to use TypeScript in the beginning because we were using JSX and TS didn't support it. However! TypeScript is due to add support for JSX in TS 1.6 (currently in beta). I've plans to see if I can get TypeScript to emit ES6 and then keep using Babel to do the transpilation. Whether this will work, I don't know yet. But it seems likely. So I'm hoping for a bright future.

Browserify (there are no song lyrics that can be crowbarred in)#

Emitting scripts in the required order has been a perpetual pain for everyone in the web world for the longest time. I've taken about 5 different approaches to solving this problem over the years. None felt particularly good. So Browserify.

Browserify solves the problem of script ordering for you by allowing you to define an entry point to your application and getting you to write require (npm style) or import (ES6 modules) to bring in the modules that you need. This allows Browserify (which we're using with Babel thanks to the babelify transform) to create a ginormous js file that contains the scripts served as needed. Thanks to the magic of source maps it also allows us to debug our original code (yup, the original ES6!) Browserify has the added bonus of allowing us free reign to pull in npm packages to use in our app without a great deal of ceremony.
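
By way of illustration, an entry point ends up looking something like this (the module names are invented for the example):

// app.js - the Browserify entry point; babelify lets us write ES6 here
import React from 'react';
import { AppStore } from './stores/appStore'; // hypothetical app module
import { bootstrap } from './bootstrap';      // hypothetical app module

// Browserify walks the imports from this file and rolls everything,
// npm packages included, into a single bundle.js
bootstrap(new AppStore());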

Browserify is pretty fab - my only real reservation is that if you step outside the normal use cases you can quickly find yourself in deep water. Take for instance web workers. We were briefly looking into using them as an optimisation (breaking IO onto a separate process from the UI). A prime reason for backing away from this is that Web Workers don't play particularly well with Browserify. And when you've got Babel (or Babelify) in the mix the problems just multiply. That apart, I really dig Browserify. I think I'd like to give WebPack a go as well as I understand it fulfills a similar purpose.

WebSockets / Protocol Buffers#

The application I'm working on is plugging into an existing system which uses WebSockets for communication. Since WebSockets are native to the web we've been able to plumb these straight into our app. We're also using Protocol Buffers as another optimisation; a way to save a few extra bytes from going down the wire. I don't have much to say about either, just some observations really:

  1. WebSockets is a slightly different way of working - permanently open connections as opposed to the request / response paradigm of classic HTTP (there's a little sketch of this after the list)
  2. WebSockets are wicked fast (due in part to those permanent connections). So performance is amazing. Fast like native, type amazing. In our case performance is pretty important and so this has been really great.
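
For a flavour of the first point, the browser's native WebSocket API looks like this; the URL is made up, and the decoding step depends on your protobuf tooling:

// one long-lived, permanently open connection; no per-request handshake
const socket = new WebSocket('wss://example.com/feed');
socket.binaryType = 'arraybuffer'; // protocol buffer messages arrive as binary frames

socket.onmessage = (event: MessageEvent) => {
  // decode the protobuf payload here with your generated message types (not shown)
  console.log('received', event.data);
};

socket.onopen = () => socket.send('subscribe'); // push data up the same connection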

React / Flux#

Finally, React and Flux. I was completely new to these when I came onto the project and I quickly came to love them. There was a prejudice for me to overcome and that was JSX. When I first saw it I felt a little sick. "Have we learned NOTHING???" I wailed. "Don't we know that embedding strings in our controllers is a BAD thing?" I was wrong. I had an epiphany. I discovered that JSX is not, as I first imagined, embedded HTML strings. Actually it's syntactic sugar for object creation. A simple example:

var App;
// Some JSX:
var app = <App version="1.0.0" />;
// What that JSX transpiles into:
var app = React.createElement(App, {version:"1.0.0"});

Now that I'm well used to JSX and React I've really got to like it. I keep my views / components as dumb as possible and do all the complex logic in the stores. The stores are just standard JavaScript and so, pretty testable (simple Jasmine gives you all you need - I haven't felt the need for Jest). The components / views are also completely testable. I'd advise anyone coming to React afresh to make use of the ReactShallowRenderer for these purposes. This means you can test without a DOM - much better all round.
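
A shallow-render test looks something like this; a sketch using the React 0.14-era react-addons-test-utils API, with a made-up Statistic component:

import * as React from 'react';
import * as TestUtils from 'react-addons-test-utils';
import { Statistic } from '../src/components/statistic'; // hypothetical component

describe('Statistic', () => {
  it('renders its value', () => {
    const renderer = TestUtils.createRenderer();
    renderer.render(<Statistic value={42} />);
    const output = renderer.getRenderOutput();

    expect(output.type).toBe('span'); // no DOM needed; we assert on the returned element
  });
});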

I don't have a great deal to say about Flux; I think my views on it aren't fully formed yet. I do really like the predictability that unidirectional data flow gives you. However, I'm mindful that the app that I've been writing is very much about displaying data and doesn't require much user input. I know that I'm living without 2-way data binding and I do wonder if I would come to miss it. Time will tell.

I really want to get back to static typing. That either means TypeScript (which I know and love) or Facebook's Flow. (A Windows version of Flow is in the works.) I'll be very happy if I get either into the mix... Watch this space.