4 posts tagged with "karma"

Karma: From PhantomJS to Headless Chrome

Like pretty much everyone else, I've been using PhantomJS to run my JavaScript (or compiled-to-JS) unit tests. It's been great. So when I heard the news that PhantomJS was dead I was genuinely sad. However, the King is dead... Long live the King! For there is a new hope; it's called Chrome Headless. It's not a separate version of Chrome; rather, the ability to run Chrome without a UI is baked into Google's favourite browser as of v59. (For the history buffs, I might as well be clear: the main reason PhantomJS died is that Chrome Headless was in the works.)

Making the Switch#

As long as you're running Chrome v59 or greater, you can switch. I've just made ts-loader's execution test pack run with Chrome Headless instead of PhantomJS and I've rarely been happier. Honest. Some context: the execution test pack runs Jasmine unit tests via the Karma test runner. The move was surprisingly easy and you can see just how minimal it was in the PR here. If you want to migrate a test pack that runs via Karma, this will take you through what you need to do.

package.json#

You no longer need phantomjs-prebuilt as a dev dependency of your project. That's the PhantomJS browser disappearing in the rear view mirror. Next we need to replace karma-phantomjs-launcher with karma-chrome-launcher. These packages are responsible for firing up the browser that the tests are run in and we no longer want to invoke PhantomJS; we're Chrome all the way baby.
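
In npm terms, the swap amounts to something like this (a sketch; pin versions as appropriate for your setup):

npm uninstall --save-dev phantomjs-prebuilt karma-phantomjs-launcher
npm install --save-dev karma-chrome-launcher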

karma.conf.js#

You need to tell Karma to use Chrome Headless instead of PhantomJS. You do that by replacing

browsers: [ 'PhantomJS' ],

with

browsers: [ 'ChromeHeadless' ],

That's it; job done!
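
To put that in context, a complete (if minimal) karma.conf.js might look something like the following sketch; the framework and file globs here are illustrative rather than lifted from ts-loader:

module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],          // or whatever test framework you use
    files: ['test/**/*.tests.js'],    // illustrative glob
    browsers: ['ChromeHeadless'],     // the only line the migration changes
    singleRun: true
  });
};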

Continuous Integration#

There's always one more thing, isn't there? Yup, ts-loader has CI builds that run on Windows with AppVeyor and on Linux with Travis. The AppVeyor build went green on the first run; that's because Chrome is installed by default in the AppVeyor build environment. (yay!)

Travis went red. (boooo!) Travis doesn't have Chrome installed by default. But it's no biggie; you just need to tweak your .travis.yml like so:

dist: trusty
addons:
  chrome: stable

This includes Chrome in the Travis build environment. Green. Boom!

Dynamic import: I've been awaiting you...

One of the most exciting features to ship with TypeScript 2.4 was support for the dynamic import expression. To quote the release blog post:

Dynamic import expressions are a new feature in ECMAScript that allows you to asynchronously request a module at any arbitrary point in your program. These modules come back as Promises of the module itself, and can be await-ed in an async function, or can be given a callback with .then.

...

Many bundlers have support for automatically splitting output bundles (a.k.a. “code splitting”) based on these import() expressions, so consider using this new feature with the esnext module target. Note that this feature won’t work with the es2015 module target, since the feature is anticipated for ES2018 or later.

As the post makes clear, this adds support for a very bleeding edge ECMAScript feature. This is not fully standardised yet; it's currently at stage 3 on the TC39 proposals list. That means it's at the Candidate stage and is unlikely to change further. If you'd like to read more about it then take a look at the official proposal here.

Whilst this is super-new, we are still able to use this feature. We just have to jump through a few hoops first.

TypeScript Setup#

First of all, you need to install TypeScript 2.4. With that in place, you need to make some adjustments to your tsconfig.json so that the relevant compiler switches are flipped. What do you need? First, you need to be targeting ECMAScript 2015 as a minimum. That's important because ES2015 introduced Promises, and a Promise is what a dynamic import produces. Second, you need to target the module type of esnext. You're likely targeting es2015 now; esnext is that plus dynamic import.

Here's a tsconfig.json I made earlier which has the relevant settings set:

{
  "compilerOptions": {
    "allowSyntheticDefaultImports": true,
    "lib": [
      "dom",
      "es2015"
    ],
    "target": "es2015",
    "module": "esnext",
    "moduleResolution": "node",
    "noImplicitAny": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "removeComments": false,
    "preserveConstEnums": true,
    "sourceMap": true,
    "skipLibCheck": true
  }
}

Babel Setup#

At the time of writing, browser support for dynamic import is non-existent. This will likely be the case for some time but it needn't hold us back. Babel can step in here and compile our super-new JS into JS that will run in our browsers today.

You'll need to decide for yourself how much you want Babel to do for you. In my case I'm targeting old-school browsers which don't yet support ES2015; you may not need to. However, the one thing that you'll certainly need is the Syntax Dynamic Import plugin. It's this that allows Babel to parse dynamic import expressions.

These are the options I'm passing to Babel:

var babelOptions = {
  "plugins": ["syntax-dynamic-import"],
  "presets": [
    [
      "es2015",
      {
        "modules": false
      }
    ]
  ]
};

You're also going to need something that actually executes the imports. In my case I'm using webpack...

webpack#

webpack 2 supports import(). So if you have webpack set up with ts-loader (or awesome-typescript-loader etc.) chaining into babel-loader, you should find you have a setup that supports dynamic import. That means a webpack.config.js that looks something like this:

var path = require('path');
var webpack = require('webpack');

var babelOptions = {
  "plugins": ["syntax-dynamic-import"],
  "presets": [
    [
      "es2015",
      {
        "modules": false
      }
    ]
  ]
};

module.exports = {
  entry: './app.ts',
  output: {
    filename: 'bundle.js'
  },
  module: {
    rules: [{
      test: /\.ts(x?)$/,
      exclude: /node_modules/,
      use: [
        {
          loader: 'babel-loader',
          options: babelOptions
        },
        {
          loader: 'ts-loader'
        }
      ]
    }, {
      test: /\.js$/,
      exclude: /node_modules/,
      use: [
        {
          loader: 'babel-loader',
          options: babelOptions
        }
      ]
    }]
  },
  resolve: {
    extensions: ['.ts', '.tsx', '.js']
  }
};

ts-loader example#

I'm one of the maintainers of ts-loader which is a TypeScript loader for webpack. When support for dynamic imports landed I wanted to add a test to cover usage of the new syntax with ts-loader.

We have 2 test packs for ts-loader, one of which is our "execution" test pack. It is so named because it works by spinning up webpack with ts-loader and then using Karma to execute a set of tests. Each "test" in our execution test pack is actually a mini-project with its own test suite (generally Jasmine, but that's entirely configurable), complete with its own webpack.config.js, karma.conf.js and either a typings.json or package.json for bringing in dependencies. So it's a full test of whether code slung with ts-loader and webpack actually executes when the output is plugged into a browser.
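
To make that concrete, each mini-project is laid out along these lines (a sketch; the names are illustrative rather than copied from the repo):

codeSplitting/
├── karma.conf.js
├── webpack.config.js
├── package.json
├── src/
│   ├── a.ts
│   ├── b.ts
│   ├── c.ts
│   └── d.ts
└── test/
    └── app.tests.ts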

This is the test pack for dynamic imports:

import a from "../src/a";
import b from "../src/b";

describe("app", () => {
  it("a to be 'a' and b to be 'b' (classic)", () => {
    expect(a).toBe("a");
    expect(b).toBe("b");
  });

  it("import results in a module with a default export", done => {
    import("../src/c").then(c => {
      // .default is the default export
      expect(c.default).toBe("c");
      done();
    });
  });

  it("import results in a module with an export", done => {
    import("../src/d").then(d => {
      // .d is a named export
      expect(d.d).toBe("d");
      done();
    });
  });

  it("await import results in a module with a default export", async done => {
    const c = await import("../src/c");
    // .default is the default export
    expect(c.default).toBe("c");
    done();
  });

  it("await import results in a module with an export", async done => {
    const d = await import("../src/d");
    expect(d.d).toBe("d");
    done();
  });
});

As you can see, it's possible to use the dynamic import as a Promise directly. Alternatively, it's possible to consume the imported module using TypeScript's support for async / await. For my money, the latter option makes for much clearer code.
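
For reference, the modules being imported could be as simple as this (a hypothetical reconstruction of src/c and src/d; the real files live in the test pack):

// src/c.ts - a module with a default export
export default "c";

// src/d.ts - a module with a named export
export const d = "d";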

If you're looking for a complete example of how to use the new syntax then you could do worse than taking the existing test pack and tweaking it to your own ends. The only change you'd need to make is to strip out the resolveLoader statements in webpack.config.js and karma.conf.js. (They exist to lock the test in question to the freshly built ts-loader stored locally. You'll not need this.)

You can find the test in question here. Happy code splitting!

TFS 2012 meet PowerShell, Karma and BuildNumber

To my lasting regret, TFS 2012 has no direct support for PowerShell. Such a shame as PowerShell scripts can do a lot of heavy lifting in a build process. Well, here we're going to brute force TFS 2012 into running PowerShell scripts. And along the way we'll also get Karma test results publishing into TFS 2012 as an example usage. Nice huh? Let's go!

PowerShell via csproj#

It's time to hack the csproj (or whatever project file you have) again. We're going to add an AfterBuild target to the end of the file. This target will be triggered after the build completes (as the name suggests):

<Target Name="AfterBuild">
  <Message Importance="High" Text="AfterBuild: PublishUrl = $(PublishUrl), BuildUri = $(BuildUri), Configuration = $(Configuration), ProjectDir = $(ProjectDir), TargetDir = $(TargetDir), TargetFileName = $(TargetFileName), BuildNumber = $(BuildNumber), BuildDefinitionName = $(BuildDefinitionName)" />
  <Exec Command="powershell.exe -NonInteractive -ExecutionPolicy RemoteSigned &quot;&amp; '$(ProjectDir)AfterBuild.ps1' '$(Configuration)' '$(ProjectDir)' '$(TargetDir)' '$(PublishUrl)' '$(BuildNumber)' '$(BuildDefinitionName)'&quot;" />
</Target>

There are 2 things happening in this target:

  1. A message is printed out during compilation which contains details of the various compile-time variables. This is nothing more than a console.log statement really; it's useful for debugging and so I keep it around. You'll notice one of them is called $(BuildNumber); more on that later.
  2. A command is executed: PowerShell! This invokes PowerShell with the -NonInteractive and -ExecutionPolicy RemoteSigned flags. It passes a script to be executed called AfterBuild.ps1 that lives in the root of the project directory. To that script a number of parameters are supplied; compile-time variables that we may use in the script. (The expanded command is illustrated below.)
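
For illustration, once MSBuild has substituted the variables, the executed command ends up looking something like this (all values here are made up):

powershell.exe -NonInteractive -ExecutionPolicy RemoteSigned "& 'C:\projects\MyApp\AfterBuild.ps1' 'Release' 'C:\projects\MyApp\' 'C:\projects\MyApp\bin\Release\' '' '20160321.3' 'MyApp-Main'"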

Where's my BuildNumber and BuildDefinitionName?#

So you've checked in your changes and kicked off a build on the server. You're picking over the log messages and you're thinking: "Where's my BuildNumber?". Well, TFS 2012 does not have that set as a variable at compile time by default. This stumped me for a while but thankfully I wasn't the only person wondering... As ever, Stack Overflow had the answer. So, deep breath, it's time to hack the TFS build template file.

Check out the DefaultTemplate.11.1.xaml file from TFS and open it in your text editor of choice. It's find and replace time! (There are probably 2 instances that need replacement.) Perform a find for the below:

[String.Format("/p:SkipInvalidConfigurations=true {0}", MSBuildArguments)]

And replace it with this:

[String.Format("/p:SkipInvalidConfigurations=true /p:BuildNumber={1} /p:BuildDefinitionName={2} {0}", MSBuildArguments, BuildDetail.BuildNumber, BuildDetail.BuildDefinition.Name)]

Pretty long line, eh? Let's try breaking it up to make it more readable (but remember, in the XAML it needs to be a one-liner):

[String.Format("/p:SkipInvalidConfigurations=true
/p:BuildNumber={1}
/p:BuildDefinitionName={2} {0}", MSBuildArguments, BuildDetail.BuildNumber, BuildDetail.BuildDefinition.Name)]

We're just adding 2 extra parameters, BuildNumber and BuildDefinitionName, to the invocation of MSBuild. Once we've checked this back in, BuildNumber and BuildDefinitionName will be available on future builds. Yay! Important: you must have a build definition name that does not feature spaces; there's probably a way to pass spaces here but I'm not sure what it is.

AfterBuild.ps1#

You can use your AfterBuild.ps1 script to do any number of things. In my case I'm going to use MSTest to publish some test results which have been generated by Karma into TFS:

param ([string]$configuration, [string]$projectDir, [string]$targetDir, [string]$publishUrl, [string]$buildNumber, [string]$buildDefinitionName)

$ErrorActionPreference = 'Stop'
Clear

function PublishTestResults([string]$resultsFile) {
    Write-Host 'PublishTests'
    $mstest = 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\MSTest.exe'

    Write-Host "Using $mstest at $pwd"
    Write-Host "Publishing: $resultsFile"

    & $mstest /publishresultsfile:$resultsFile /publish:http://my-tfs-server:8080/tfs /teamproject:MyProject /publishbuild:$buildNumber /platform:'Any CPU' /flavor:Release
}

function FailBuildIfThereAreTestFailures([string]$resultsFile) {
    $results = [xml](GC $resultsFile)
    $outcome = $results.TestRun.ResultSummary.outcome
    $fgColor = if($outcome -eq "Failed") { "Red" } else { "Green" }
    $total = $results.TestRun.ResultSummary.Counters.total
    $passed = $results.TestRun.ResultSummary.Counters.passed
    $failed = $results.TestRun.ResultSummary.Counters.failed

    $failedTests = $results.TestRun.Results.UnitTestResult | Where-Object { $_.outcome -eq "Failed" }

    Write-Host Test Results: $outcome -ForegroundColor $fgColor -BackgroundColor "Black"
    Write-Host Total tests: $total
    Write-Host Passed: $passed
    Write-Host Failed: $failed
    Write-Host

    $failedTests | % {
        Write-Host Failed test: $_.testName
        Write-Host $_.Output.ErrorInfo.Message
        Write-Host $_.Output.ErrorInfo.StackTrace
    }
    Write-Host

    if($outcome -eq "Failed") {
        Write-Host "Failing build as there are broken tests"
        $host.SetShouldExit(1)
    }
}

function Run() {
    Write-Host "Running AfterBuild.ps1 using Configuration: $configuration, projectDir: $projectDir, targetDir: $targetDir, publishUrl: $publishUrl, buildNumber: $buildNumber, buildDefinitionName: $buildDefinitionName"

    if($buildNumber) {
        $resultsFile = "$projectDir\test-results.trx"
        PublishTestResults $resultsFile
        FailBuildIfThereAreTestFailures $resultsFile
    }
}

# Off we go...
Run

Assuming we have a build number, this script kicks off the PublishTestResults function above. So we won't attempt to publish test results when compiling in Visual Studio on our dev machine. The script looks for MSTest.exe in a certain location on disk (the default VS 2015 installation location, in fact; it may be installed elsewhere on your build machine). MSTest is then invoked and passed a file called test-results.trx which is expected to live in the root of the project. All being well, the test results will be registered with the running build and will be visible when you look at test results in TFS.

Finally, in FailBuildIfThereAreTestFailures, we parse the test-results.trx file (and for this I'm totally in the debt of David Robert's helpful Gist). We write out the results to the host so they'll show up in the MSBuild logs. Also, and this is crucial, if there are any failures we fail the build by exiting PowerShell with a code of 1. We are deliberately swallowing any error that Karma raises earlier when it detects failed tests; we do this because we want to publish the failed test results to TFS before we kill the build.
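
As an aside, the swallowing mechanism itself isn't shown here; one way to do it (an assumption on my part, not necessarily what this build uses) is to force the npm test script to always exit successfully and leave failing the build to AfterBuild.ps1:

"test": "karma start --reporters mocha,trx --single-run --browsers PhantomJS || exit 0"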

Bonus Karma: test-results.trx#

If you've read a previous post of mine you'll be aware that it's possible to get MSBuild to kick off npm build tasks. Specifically I have MSBuild kicking off an npm run build. My package.json looks like this:

"scripts": {
"test": "karma start --reporters mocha,trx --single-run --browsers PhantomJS",
"clean": "gulp delete-dist-contents",
"watch": "gulp watch",
"build": "gulp build",
"postbuild": "npm run test"
},

You can see that the postbuild hook kicks off the test script in turn. And that test script kicks off a single run of Karma. I'm not going to go over setting up Karma here; there are other posts out there that cover that admirably. But I wanted to share news of the karma-trx-reporter. This reporter is the thing that produces our test-results.trx file; trx being the format that TFS likes to deal with.
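
Hooked into karma.conf.js, it looks something like this sketch (the trxReporter option name is my recollection of the reporter's API; double-check against the README of the version you install):

// karma.conf.js (fragment)
reporters: ['mocha', 'trx'],
trxReporter: { outputFile: 'test-results.trx' },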

So now we've got a PowerShell hook into our build process (something very useful in itself) which we are using to publish Karma test results into TFS 2012. They said it couldn't be done. They were wrong. Huzzah!!!!!!!

ES6 + TypeScript + Babel + React + Flux + Karma: The Secret Recipe

I wrote a while ago about how I was using some different tools in a current project:

  • React with JSX
  • Flux
  • ES6 with Babel
  • Karma for unit testing

I have fully come to love and appreciate all of the above. I really like working with them. However. There was still an ache in my soul and a thorn in my side. Whilst I love the syntax of ES6 and even though I've come to appreciate the clarity of JSX, I have been missing something. Perhaps you can guess? It's static typing.

It's actually been really good to have the chance to work without it, because it's made me realise what a productivity boost having static typing actually is. The number of silly, time-burning mistakes that a compiler could have caught... Sigh.

But the pain is over. The dark days are gone. It's possible to have strong typing, courtesy of TypeScript, plugged into this workflow. It's yours for the taking. Take it. Take it now!

What a Guy Wants#

I decided a couple of months ago what I wanted to have in my setup:

  1. I want to be able to write React / JSX in TypeScript. Naturally I couldn't achieve that by myself but handily the TypeScript team decided to add support for JSX with TypeScript 1.6. Ooh yeah.
  2. I wanted to be able to write ES6. When I realised that the approach of writing ES6 and having the transpilation handled by TypeScript wasn't clear, I had another idea: what if I write ES6 and hand off the transpilation to Babel? i.e. use TypeScript for type checking, not for transpilation. I realised that James Brantly had my back here already. Enter webpack and ts-loader.
  3. Debugging. Being able to debug my code is non-negotiable for me. If I can't debug it I'm less productive. (I'm also bitter and twisted inside.) I should say that I wanted to be able to debug my original source code. Thanks to the magic of sourcemaps, that mad thing is possible.
  4. Karma for unit testing. I've become accustomed to writing my tests in ES6 and running them on a continual basis with Karma. This allows for a rather good debugging story as well. I didn't want to lose this when I moved to TypeScript. I didn't.

So I've talked about what I want and I've alluded to some of the solutions that there are. The question now is how to bring them all together. This post is, for the most part, going to be about correctly orchestrating a number of gulp tasks to achieve the goals listed above. If you're after the Blue Peter "here's one I made earlier" moment then take a look at the es6-babel-react-flux-karma example in the Microsoft/TypeScriptSamples repo on GitHub.

gulpfile.js#

/* eslint-disable no-var, strict, prefer-arrow-callback */
'use strict';

var gulp = require('gulp');
var gutil = require('gulp-util');
var connect = require('gulp-connect');
var eslint = require('gulp-eslint');

var webpack = require('./gulp/webpack');
var staticFiles = require('./gulp/staticFiles');
var tests = require('./gulp/tests');
var clean = require('./gulp/clean');
var inject = require('./gulp/inject');

var lintSrcs = ['./gulp/**/*.js'];

gulp.task('delete-dist', function (done) {
  clean.run(done);
});

gulp.task('build-process.env.NODE_ENV', function () {
  process.env.NODE_ENV = 'production';
});

gulp.task('build-js', ['delete-dist', 'build-process.env.NODE_ENV'], function(done) {
  webpack.build().then(function() { done(); });
});

gulp.task('build-other', ['delete-dist', 'build-process.env.NODE_ENV'], function() {
  staticFiles.build();
});

gulp.task('build', ['build-js', 'build-other', 'lint'], function () {
  inject.build();
});

gulp.task('lint', function () {
  return gulp.src(lintSrcs)
    .pipe(eslint())
    .pipe(eslint.format());
});

gulp.task('watch', ['delete-dist'], function() {
  process.env.NODE_ENV = 'development';
  Promise.all([
    webpack.watch()//,
    //less.watch()
  ]).then(function() {
    gutil.log('Now that initial assets (js and css) are generated inject will start...');
    inject.watch(postInjectCb);
  }).catch(function(error) {
    gutil.log('Problem generating initial assets (js and css)', error);
  });

  gulp.watch(lintSrcs, ['lint']);
  staticFiles.watch();
  tests.watch();
});

gulp.task('watch-and-serve', ['watch'], function() {
  postInjectCb = stopAndStartServer;
});

var postInjectCb = null;
var serverStarted = false;

function stopAndStartServer() {
  if (serverStarted) {
    gutil.log('Stopping server');
    connect.serverClose();
    serverStarted = false;
  }
  startServer();
}

function startServer() {
  gutil.log('Starting server');
  connect.server({
    root: './dist',
    port: 8080
  });
  serverStarted = true;
}

Let's start picking this apart; what do we actually have here? Well, we have 2 gulp tasks that I want you to notice:

build

This is likely the task you would use when deploying. It takes all of your source code, builds it, provides cache-busting filenames (e.g. main.dd2fa20cd9eac9d1fb2f.js), injects your shell SPA page with references to the files and deploys everything to the ./dist/ directory. So that's TypeScript and static assets like images and CSS, all made ready for Production.

The build task also implements this advice:

When deploying your app, set the NODE_ENV environment variable to production to use the production build of React which does not include the development warnings and runs significantly faster.
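
An aside: with webpack in the mix, setting NODE_ENV in the gulp process isn't necessarily enough on its own; for React's development checks to be stripped from the bundle, the value generally needs baking in with webpack's DefinePlugin. A sketch of what that plugin entry could look like (an assumption on my part; it isn't part of the config shown in this post):

new webpack.DefinePlugin({
  // Inlines the build-time NODE_ENV into the bundle so React's
  // process.env.NODE_ENV checks can be statically evaluated
  'process.env.NODE_ENV': JSON.stringify(process.env.NODE_ENV)
})
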
watch-and-serve

This task represents "development mode" or "debug mode". It's what you'll likely be running as you develop your app. It does the same as the build task but with some important distinctions.

  • As well as building your source it also runs your tests using Karma
  • This task is not triggered on a once-only basis, rather your files are watched and each tweak of a file will result in a new build and a fresh run of your tests. Nice eh?
  • It spins up a simple web server and serves up the contents of ./dist (i.e. your built code) in order that you can easily test out your app.
  • In addition, whilst it builds your source it does not minify your code and it emits sourcemaps. For why? For debugging! You can go to http://localhost:8080/ in your browser of choice, fire up the dev tools and you're off to the races; debugging like gangbusters. It also doesn't bother to provide cache-busting filenames as Chrome dev tools are smart enough to not cache localhost.
  • Oh and Karma.... If you've got problems with a failing test then head to http://localhost:9876/ and you can debug the tests in your dev tools.
  • Finally, it runs ESLint in the console. Not all of my files are TypeScript; essentially the build process (aka "gulp-y") files are all vanilla JS. So they're easily breakable. ESLint is there to provide a little reassurance on that front.

Now let's dig into each of these in a little more detail.

webpack#

Let's take a look at what's happening under the covers of webpack.build() and webpack.watch().

webpack with ts-loader and babel-loader is what we're using to compile our ES6 TypeScript. ts-loader uses the TypeScript compiler to, um, compile TypeScript and emit ES6 code. This is then passed on to babel-loader, which transpiles it from ES6 down to ES-old-school. It all gets brought together in 2 files: main.js, which contains the compiled result of the code written by us, and vendor.js, which contains the compiled result of 3rd party / vendor files. The reason for this separation is that vendor files are likely to change fairly rarely whilst our own code will constantly be changing. This separation allows for quicker compile times upon file changes as, for the most part, the vendor files will not need to be included in this process.

Our gulpfile.js above uses the following task:

'use strict';

var gulp = require('gulp');
var gutil = require('gulp-util');
var webpack = require('webpack');
var WebpackNotifierPlugin = require('webpack-notifier');
var webpackConfig = require('../webpack.config.js');

function buildProduction(done) {
  // modify some webpack config options
  var myProdConfig = Object.create(webpackConfig);
  myProdConfig.output.filename = '[name].[hash].js';

  myProdConfig.plugins = myProdConfig.plugins.concat(
    // make the vendor.js file with cachebusting filename
    new webpack.optimize.CommonsChunkPlugin({ name: 'vendor', filename: 'vendor.[hash].js' }),
    new webpack.optimize.DedupePlugin(),
    new webpack.optimize.UglifyJsPlugin()
  );

  // run webpack
  webpack(myProdConfig, function(err, stats) {
    if (err) { throw new gutil.PluginError('webpack:build', err); }
    gutil.log('[webpack:build]', stats.toString({
      colors: true
    }));

    if (done) { done(); }
  });
}

function createDevCompiler() {
  // show me some sourcemap love people
  var myDevConfig = Object.create(webpackConfig);
  myDevConfig.devtool = 'inline-source-map';
  myDevConfig.debug = true;

  myDevConfig.plugins = myDevConfig.plugins.concat(
    // make the vendor.js file
    new webpack.optimize.CommonsChunkPlugin({ name: 'vendor', filename: 'vendor.js' }),
    new WebpackNotifierPlugin({ title: 'Webpack build', excludeWarnings: true })
  );

  // create a single instance of the compiler to allow caching
  return webpack(myDevConfig);
}

function buildDevelopment(done, devCompiler) {
  // run webpack
  devCompiler.run(function(err, stats) {
    if (err) { throw new gutil.PluginError('webpack:build-dev', err); }
    gutil.log('[webpack:build-dev]', stats.toString({
      chunks: false, // dial down the output from webpack (it can be noisy)
      colors: true
    }));

    if (done) { done(); }
  });
}

function bundle(options) {
  var devCompiler;

  function build(done) {
    if (options.shouldWatch) {
      buildDevelopment(done, devCompiler);
    } else {
      buildProduction(done);
    }
  }

  if (options.shouldWatch) {
    devCompiler = createDevCompiler();
    gulp.watch('src/**/*', function() { build(); });
  }

  return new Promise(function(resolve, reject) {
    build(function (err) {
      if (err) {
        reject(err);
      } else {
        resolve('webpack built');
      }
    });
  });
}

module.exports = {
  build: function() { return bundle({ shouldWatch: false }); },
  watch: function() { return bundle({ shouldWatch: true }); }
};

Hopefully this is fairly self-explanatory; essentially buildDevelopment performs the development build (providing sourcemap support) and buildProduction builds for Production (providing minification support). Both are driven by this webpack.config.js:

/* eslint-disable no-var, strict, prefer-arrow-callback */
'use strict';

var path = require('path');

module.exports = {
  cache: true,
  entry: {
    // The entry point of our application; the script that imports all other scripts in our SPA
    main: './src/main.tsx',

    // The packages that are to be included in vendor.js
    vendor: [
      'babel-polyfill',
      'events',
      'flux',
      'react'
    ]
  },

  // Where the output of our compilation ends up
  output: {
    path: path.resolve(__dirname, './dist/scripts'),
    filename: '[name].js',
    chunkFilename: '[chunkhash].js'
  },

  module: {
    loaders: [{
      // The loader that handles ts and tsx files. These are compiled
      // with the ts-loader and the output is then passed through to the
      // babel-loader. The babel-loader uses the es2015 and react presets
      // in order that jsx and es6 are processed.
      test: /\.ts(x?)$/,
      exclude: /node_modules/,
      loader: 'babel-loader?presets[]=es2015&presets[]=react!ts-loader'
    }, {
      // The loader that handles any js files presented alone.
      // It passes these to the babel-loader which (again) uses the es2015
      // and react presets.
      test: /\.js$/,
      exclude: /node_modules/,
      loader: 'babel',
      query: {
        presets: ['es2015', 'react']
      }
    }]
  },
  plugins: [],
  resolve: {
    // Files with the following extensions are fair game for webpack to process
    extensions: ['', '.webpack.js', '.web.js', '.ts', '.tsx', '.js']
  }
};

Inject#

Your compiled output needs to be referenced from some kind of HTML page. So we've got this:

<!doctype html>
<html lang="en">
<head>
  <meta charSet="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>ES6 + Babel + React + Flux + Karma: The Secret Recipe</title>
  <!-- inject:css -->
  <!-- endinject -->
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">
</head>
<body>
  <div id="content"></div>
  <!-- inject:js -->
  <!-- endinject -->
</body>
</html>

Which is no more than a boilerplate HTML page with a couple of key features:

  • a single <div /> element in the <body /> which is where our React app is going to be rendered.
  • <!-- inject:css --> and <!-- inject:js --> placeholders where css and js are going to be injected by gulp-inject.
  • a single <link /> to the Bootstrap CDN. This sample app doesn't actually serve up any css generated as part of the project. It could, but it doesn't. When it comes to injection time, no css will actually be injected. This has been left in place as, more typically, a project would have some styling served up.

This is fed into our inject task in inject.build() and inject.watch(). They take css and javascript and, using our shell template, create a new page which has the css and javascript dropped into their respective placeholders:

'use strict';

var gulp = require('gulp');
var inject = require('gulp-inject');
var glob = require('glob');

function injectIndex(options) {
  var postInjectCb = options.postInjectCb;
  var postInjectCbTriggerId = null;

  function run() {
    var target = gulp.src('./src/index.html');
    var sources = gulp.src([
      //'./dist/styles/main*.css',
      './dist/scripts/vendor*.js',
      './dist/scripts/main*.js'
    ], { read: false });

    return target
      .on('end', function() { // invoke postInjectCb after 1s
        if (postInjectCbTriggerId || !postInjectCb) { return; }

        postInjectCbTriggerId = setTimeout(function() {
          postInjectCb();
          postInjectCbTriggerId = null;
        }, 1000);
      })
      .pipe(inject(sources, { ignorePath: '/dist/', addRootSlash: false, removeTags: true }))
      .pipe(gulp.dest('./dist'));
  }

  var jsCssGlob = 'dist/**/*.{js,css}';

  function checkForInitialFilesThenRun() {
    glob(jsCssGlob, function (er, files) {
      var filesWeNeed = ['dist/scripts/main', 'dist/scripts/vendor'/*, 'dist/styles/main'*/];

      function fileIsPresent(fileWeNeed) {
        return files.some(function(file) {
          return file.indexOf(fileWeNeed) !== -1;
        });
      }

      if (filesWeNeed.every(fileIsPresent)) {
        run('initial build');
      } else {
        checkForInitialFilesThenRun();
      }
    });
  }

  checkForInitialFilesThenRun();

  if (options.shouldWatch) {
    gulp.watch(jsCssGlob, function(evt) {
      if (evt.path && evt.type === 'changed') {
        run(evt.path);
      }
    });
  }
}

module.exports = {
  build: function() { return injectIndex({ shouldWatch: false }); },
  watch: function(postInjectCb) { return injectIndex({ shouldWatch: true, postInjectCb: postInjectCb }); }
};

This also triggers the server to serve up the new content.

Static Files#

Your app will likely rely on a number of static assets; images, fonts and whatnot. This script picks up the static assets you've defined and places them in the dist folder ready for use:

'use strict';

var gulp = require('gulp');
var cache = require('gulp-cached');

var targets = [
  // In my own example I don't use any of the targets below; they
  // are included to give you more of a feel of how you might use this
  { description: 'FONTS',   src: './fonts/*',     dest: './dist/fonts' },
  { description: 'STYLES',  src: './styles/*',    dest: './dist/styles' },
  { description: 'FAVICON', src: './favicon.ico', dest: './dist' },
  { description: 'IMAGES',  src: './images/*',    dest: './dist/images' }
];

function copy(options) {
  // Copy files from their source to their destination
  function run(target) {
    gulp.src(target.src)
      .pipe(cache(target.description))
      .pipe(gulp.dest(target.dest));
  }

  function watch(target) {
    gulp.watch(target.src, function() { run(target); });
  }

  targets.forEach(run);

  if (options.shouldWatch) {
    targets.forEach(watch);
  }
}

module.exports = {
  build: function() { return copy({ shouldWatch: false }); },
  watch: function() { return copy({ shouldWatch: true }); }
};

Karma#

Finally, we're ready to get our tests set up to run continually with Karma. tests.watch() triggers the following task:

'use strict';

var Server = require('karma').Server;
var path = require('path');
var gutil = require('gulp-util');

module.exports = {
  watch: function() {
    // Documentation: https://karma-runner.github.io/0.13/dev/public-api.html
    var karmaConfig = {
      configFile: path.join(__dirname, '../karma.conf.js'),
      singleRun: false,
      plugins: ['karma-webpack', 'karma-jasmine', 'karma-mocha-reporter', 'karma-sourcemap-loader', 'karma-phantomjs-launcher', 'karma-phantomjs-shim'], // karma-phantomjs-shim only in place until PhantomJS hits 2.0 and has function.bind
      reporters: ['mocha']
    };

    new Server(karmaConfig, karmaCompleted).start();

    function karmaCompleted(exitCode) {
      gutil.log('Karma has exited with:', exitCode);
      process.exit(exitCode);
    }
  }
};

When running in watch mode it's possible to debug the tests by going to http://localhost:9876/. It's also possible to run the tests standalone with a simple npm run test. Running them like this also outputs the results to an XML file in JUnit format; this can be useful for integrating into CI solutions that don't natively pick up test results.

Whichever approach we use for running tests, we use the following karma.conf.js file to configure Karma:

/* eslint-disable no-var, strict */
'use strict';

var webpackConfig = require('./webpack.config.js');

module.exports = function(config) {
  // Documentation: https://karma-runner.github.io/0.13/config/configuration-file.html
  config.set({
    browsers: [ 'PhantomJS' ],

    files: [
      'test/import-babel-polyfill.js', // This ensures we have the es6 shims in place from babel
      'test/**/*.tests.ts',
      'test/**/*.tests.tsx'
    ],

    port: 9876,

    frameworks: [ 'jasmine', 'phantomjs-shim' ],

    logLevel: config.LOG_INFO, //config.LOG_DEBUG

    preprocessors: {
      'test/import-babel-polyfill.js': [ 'webpack', 'sourcemap' ],
      'src/**/*.{ts,tsx}': [ 'webpack', 'sourcemap' ],
      'test/**/*.tests.{ts,tsx}': [ 'webpack', 'sourcemap' ]
    },

    webpack: {
      devtool: 'eval-source-map', //'inline-source-map', - inline-source-map doesn't work at present
      debug: true,
      module: webpackConfig.module,
      resolve: webpackConfig.resolve
    },

    webpackMiddleware: {
      quiet: true,
      stats: {
        colors: true
      }
    },

    // reporter options
    mochaReporter: {
      colors: {
        success: 'bgGreen',
        info: 'cyan',
        warning: 'bgBlue',
        error: 'bgRed'
      }
    },

    junitReporter: {
      outputDir: 'test-results', // results will be saved as $outputDir/$browserName.xml
      outputFile: undefined, // if included, results will be saved as $outputDir/$browserName/$outputFile
      suite: ''
    }
  });
};

As you can see, we're still using our webpack configuration from earlier to configure much of how the transpilation takes place.

And that's it; we have a workflow for developing in TypeScript using React, with tests running in an automated fashion. I appreciate this has been a rather long blog post, but I hope I've clarified somewhat how this all plugs together and works. Do leave a comment if you think I've missed something.

Babel 5 -> Babel 6#

This post has actually been sat waiting to be published for some time. I'd got this solution up and running with Babel 5. Then they shipped Babel 6 and (as is the way with "breaking changes") broke sourcemap support, thus torpedoing this workflow. Happily, that's now been resolved. But if you should experience any wonkiness, it's worth checking that you're using the latest and greatest of Babel 6.