
3 posts tagged with "es6"


Creating an ES2015 Map from an Array in TypeScript

I'm a great lover of ES2015's <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map">Map</a>. However, just recently I tumbled over something I find a touch inconvenient about how you initialise a new Map from the contents of an Array in TypeScript.

This Doesn't Work#

We're going to try something like this (pilfered from the MDN docs):

var kvArray = [["key1", "value1"], ["key2", "value2"]];
// Use the regular Map constructor to transform a 2D key-value Array into a map
var myMap = new Map(kvArray);

Simple enough right? Well I'd rather assumed that I should be able to do something like this in TypeScript:

const iAmAnArray = [
  { value: "value1", text: "hello" },
  { value: "value2", text: "map" }
];
const iAmAMap = new Map<string, string>(
  iAmAnArray.map(x => [x.value, x.text])
);

However, to my surprise this errored out with:

[ts] Argument of type 'string[][]' is not assignable to parameter of type 'Iterable<[string, string]>'.
Types of property '[Symbol.iterator]' are incompatible.
Type '() => IterableIterator<string[]>' is not assignable to type '() => Iterator<[string, string]>'.
Type 'IterableIterator<string[]>' is not assignable to type 'Iterator<[string, string]>'.
Types of property 'next' are incompatible.
Type '(value?: any) => IteratorResult<string[]>' is not assignable to type '(value?: any) => IteratorResult<[string, string]>'.
Type 'IteratorResult<string[]>' is not assignable to type 'IteratorResult<[string, string]>'.
Type 'string[]' is not assignable to type '[string, string]'.
Property '0' is missing in type 'string[]'.

Disappointing, right? It's expecting Iterable<[string, string]>, and an Array with 2 elements that are strings is not inferred to be that.
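
To see why the inference comes out that way, here's a minimal illustration (an example of my own; nothing here beyond the compiler's stock behaviour):

```typescript
// An array literal of two-element string arrays is inferred as string[][],
// not [string, string][]; the compiler has no way of knowing that each
// inner array is guaranteed to hold exactly two elements.
const pairs = [["key1", "value1"], ["key2", "value2"]];

// pairs: string[][] - an element could legally hold 0, 1 or 10 strings,
// which is why it can't satisfy Iterable<[string, string]>.
const firstPair = pairs[0];
console.log(firstPair.length); // 2
```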

This Does#

It emerges that there is a way to do this though; you just need to give the compiler a clue. You need to include a type assertion of as [string, string] which tells the compiler that what you've just declared is a Tuple of string and string. (Please note that [string, string] corresponds to the types of the Key and Value of your Map and should be set accordingly.)

So a working version of the code looks like this:

const iAmAnArray = [
  { value: "value1", text: "hello" },
  { value: "value2", text: "map" }
];
const iAmAMap = new Map<string, string>(
  iAmAnArray.map(x => [x.value, x.text] as [string, string])
);

Or, to be terser, this:

const iAmAnArray = [
  { value: "value1", text: "hello" },
  { value: "value2", text: "map" }
];
const iAmAMap = new Map( // Look Ma! No type annotations
  iAmAnArray.map(x => [x.value, x.text] as [string, string])
);
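
For what it's worth, another way to give the compiler the same clue (a sketch of my own, not from the original post) is to annotate the callback's return type rather than asserting afterwards:

```typescript
const iAmAnArray = [
  { value: "value1", text: "hello" },
  { value: "value2", text: "map" }
];
// Annotating the lambda's return type as the tuple [string, string] means
// no `as` assertion is needed; the Map's types are then inferred as before.
const iAmAMap = new Map(
  iAmAnArray.map((x): [string, string] => [x.value, x.text])
);
console.log(iAmAMap.get("value1")); // "hello"
```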

I've raised this as an issue with the TypeScript team; you can find details here.

ES6 + TypeScript + Babel + React + Flux + Karma: The Secret Recipe

I wrote a while ago about how I was using some different tools in a current project:

  • React with JSX
  • Flux
  • ES6 with Babel
  • Karma for unit testing

I have fully come to love and appreciate all of the above. I really like working with them. However. There was still an ache in my soul and a thorn in my side. Whilst I love the syntax of ES6 and even though I've come to appreciate the clarity of JSX, I have been missing something. Perhaps you can guess? It's static typing.

It's actually been really good to have the chance to work without it, because it's made me realise what a productivity boost static typing actually is. The amount of time burned on silly mistakes that a compiler could have caught for me.... Sigh.

But the pain is over. The dark days are gone. It's possible to have strong typing, courtesy of TypeScript, plugged into this workflow. It's yours for the taking. Take it. Take it now!

What a Guy Wants#

I decided a couple of months ago what I wanted to have in my setup:

  1. I want to be able to write React / JSX in TypeScript. Naturally I couldn't achieve that by myself but handily the TypeScript team decided to add support for JSX with TypeScript 1.6. Ooh yeah.
  2. I wanted to be able to write ES6. When I realised the approach for writing ES6 and having the transpilation handled by TypeScript wasn't clear I had another idea. I thought "what if I write ES6 and hand off the transpilation to Babel?" i.e. Use TypeScript for type checking, not for transpilation. I realised that James Brantly had my back here already. Enter Webpack and ts-loader.
  3. Debugging. Being able to debug my code is non-negotiable for me. If I can't debug it I'm less productive. (I'm also bitter and twisted inside.) I should say that I wanted to be able to debug my original source code. Thanks to the magic of sourcemaps, that mad thing is possible.
  4. Karma for unit testing. I've become accustomed to writing my tests in ES6 and running them on a continual basis with Karma. This allows for a rather good debugging story as well. I didn't want to lose this when I moved to TypeScript. I didn't.

So I've talked about what I want and I've alluded to some of the solutions that there are. The question now is how to bring them all together. This post is, for the most part, going to be about correctly orchestrating a number of gulp tasks to achieve the goals listed above. If you're after the Blue Peter "here's one I made earlier" moment then take a look at the es6-babel-react-flux-karma repo in the Microsoft/TypeScriptSamples repo on GitHub.

gulpfile.js#

/* eslint-disable no-var, strict, prefer-arrow-callback */
'use strict';

var gulp = require('gulp');
var gutil = require('gulp-util');
var connect = require('gulp-connect');
var eslint = require('gulp-eslint');

var webpack = require('./gulp/webpack');
var staticFiles = require('./gulp/staticFiles');
var tests = require('./gulp/tests');
var clean = require('./gulp/clean');
var inject = require('./gulp/inject');

var lintSrcs = ['./gulp/**/*.js'];

gulp.task('delete-dist', function (done) {
  clean.run(done);
});

gulp.task('build-process.env.NODE_ENV', function () {
  process.env.NODE_ENV = 'production';
});

gulp.task('build-js', ['delete-dist', 'build-process.env.NODE_ENV'], function(done) {
  webpack.build().then(function() { done(); });
});

gulp.task('build-other', ['delete-dist', 'build-process.env.NODE_ENV'], function() {
  staticFiles.build();
});

gulp.task('build', ['build-js', 'build-other', 'lint'], function () {
  inject.build();
});

gulp.task('lint', function () {
  return gulp.src(lintSrcs)
    .pipe(eslint())
    .pipe(eslint.format());
});

gulp.task('watch', ['delete-dist'], function() {
  process.env.NODE_ENV = 'development';
  Promise.all([
    webpack.watch()//,
    //less.watch()
  ]).then(function() {
    gutil.log('Now that initial assets (js and css) are generated inject will start...');
    inject.watch(postInjectCb);
  }).catch(function(error) {
    gutil.log('Problem generating initial assets (js and css)', error);
  });

  gulp.watch(lintSrcs, ['lint']);
  staticFiles.watch();
  tests.watch();
});

gulp.task('watch-and-serve', ['watch'], function() {
  postInjectCb = stopAndStartServer;
});

var postInjectCb = null;
var serverStarted = false;

function stopAndStartServer() {
  if (serverStarted) {
    gutil.log('Stopping server');
    connect.serverClose();
    serverStarted = false;
  }
  startServer();
}

function startServer() {
  gutil.log('Starting server');
  connect.server({
    root: './dist',
    port: 8080
  });
  serverStarted = true;
}

Let's start picking this apart; what do we actually have here? Well, we have 2 gulp tasks that I want you to notice:

build

This is likely the task you would use when deploying. It takes all of your source code, builds it, provides cache-busting filenames (eg main.dd2fa20cd9eac9d1fb2f.js), injects your shell SPA page with references to the files and deploys everything to the ./dist/ directory. So that's TypeScript, static assets like images and CSS all made ready for Production.

The build task also implements this advice:

When deploying your app, set the NODE_ENV environment variable to production to use the production build of React which does not include the development warnings and runs significantly faster.
watch-and-serve

This task represents "development mode" or "debug mode". It's what you'll likely be running as you develop your app. It does the same as the build task but with some important distinctions.

  • As well as building your source it also runs your tests using Karma
  • This task is not triggered on a once-only basis, rather your files are watched and each tweak of a file will result in a new build and a fresh run of your tests. Nice eh?
  • It spins up a simple web server and serves up the contents of ./dist (i.e. your built code) in order that you can easily test out your app.
  • In addition, whilst it builds your source it does not minify your code and it emits sourcemaps. For why? For debugging! You can go to http://localhost:8080/ in your browser of choice, fire up the dev tools and you're off to the races; debugging like gangbusters. It also doesn't bother to provide cache-busting filenames as Chrome dev tools are smart enough to not cache localhost.
  • Oh and Karma.... If you've got problems with a failing test then head to http://localhost:9876/ and you can debug the tests in your dev tools.
  • Finally, it runs ESLint in the console. Not all of my files are TypeScript; essentially the build process (aka "gulp-y") files are all vanilla JS. So they're easily breakable. ESLint is there to provide a little reassurance on that front.

Now let's dig into each of these in a little more detail.

WebPack#

Let's take a look at what's happening under the covers of webpack.build() and webpack.watch().

WebPack with ts-loader and babel-loader is what we're using to compile our ES6 TypeScript. ts-loader uses the TypeScript compiler to, um, compile TypeScript and emit ES6 code. This is then passed on to the babel-loader which transpiles it from ES6 down to ES-old-school. It all gets brought together in 2 files; main.js which contains the compiled result of the code written by us and vendor.js which contains the compiled result of 3rd party / vendor files. The reason for this separation is that vendor files are likely to change fairly rarely whilst our own code will constantly be changing. This separation allows for quicker compile times upon file changes as, for the most part, the vendor files will not need to be included in this process.

Our gulpfile.js above uses the following task:

'use strict';

var gulp = require('gulp');
var gutil = require('gulp-util');
var webpack = require('webpack');
var WebpackNotifierPlugin = require('webpack-notifier');
var webpackConfig = require('../webpack.config.js');

function buildProduction(done) {
  // modify some webpack config options
  var myProdConfig = Object.create(webpackConfig);
  myProdConfig.output.filename = '[name].[hash].js';
  myProdConfig.plugins = myProdConfig.plugins.concat(
    // make the vendor.js file with cachebusting filename
    new webpack.optimize.CommonsChunkPlugin({ name: 'vendor', filename: 'vendor.[hash].js' }),
    new webpack.optimize.DedupePlugin(),
    new webpack.optimize.UglifyJsPlugin()
  );

  // run webpack
  webpack(myProdConfig, function(err, stats) {
    if (err) { throw new gutil.PluginError('webpack:build', err); }
    gutil.log('[webpack:build]', stats.toString({
      colors: true
    }));

    if (done) { done(); }
  });
}

function createDevCompiler() {
  // show me some sourcemap love people
  var myDevConfig = Object.create(webpackConfig);
  myDevConfig.devtool = 'inline-source-map';
  myDevConfig.debug = true;
  myDevConfig.plugins = myDevConfig.plugins.concat(
    // Make the vendor.js file
    new webpack.optimize.CommonsChunkPlugin({ name: 'vendor', filename: 'vendor.js' }),
    new WebpackNotifierPlugin({ title: 'Webpack build', excludeWarnings: true })
  );

  // create a single instance of the compiler to allow caching
  return webpack(myDevConfig);
}

function buildDevelopment(done, devCompiler) {
  // run webpack
  devCompiler.run(function(err, stats) {
    if (err) { throw new gutil.PluginError('webpack:build-dev', err); }
    gutil.log('[webpack:build-dev]', stats.toString({
      chunks: false, // dial down the output from webpack (it can be noisy)
      colors: true
    }));

    if (done) { done(); }
  });
}

function bundle(options) {
  var devCompiler;

  function build(done) {
    if (options.shouldWatch) {
      buildDevelopment(done, devCompiler);
    } else {
      buildProduction(done);
    }
  }

  if (options.shouldWatch) {
    devCompiler = createDevCompiler();
    gulp.watch('src/**/*', function() { build(); });
  }

  return new Promise(function(resolve, reject) {
    build(function (err) {
      if (err) {
        reject(err);
      } else {
        resolve('webpack built');
      }
    });
  });
}

module.exports = {
  build: function() { return bundle({ shouldWatch: false }); },
  watch: function() { return bundle({ shouldWatch: true }); }
};

Hopefully this is fairly self-explanatory; essentially buildDevelopment performs the development build (providing sourcemap support) and buildProduction builds for Production (providing minification support). Both are driven by this webpack.config.js:

/* eslint-disable no-var, strict, prefer-arrow-callback */
'use strict';

var path = require('path');

module.exports = {
  cache: true,
  entry: {
    // The entry point of our application; the script that imports all other scripts in our SPA
    main: './src/main.tsx',

    // The packages that are to be included in vendor.js
    vendor: [
      'babel-polyfill',
      'events',
      'flux',
      'react'
    ]
  },

  // Where the output of our compilation ends up
  output: {
    path: path.resolve(__dirname, './dist/scripts'),
    filename: '[name].js',
    chunkFilename: '[chunkhash].js'
  },

  module: {
    loaders: [{
      // The loader that handles ts and tsx files. These are compiled
      // with the ts-loader and the output is then passed through to the
      // babel-loader. The babel-loader uses the es2015 and react presets
      // in order that jsx and es6 are processed.
      test: /\.ts(x?)$/,
      exclude: /node_modules/,
      loader: 'babel-loader?presets[]=es2015&presets[]=react!ts-loader'
    }, {
      // The loader that handles any js files presented alone.
      // It passes these to the babel-loader which (again) uses the es2015
      // and react presets.
      test: /\.js$/,
      exclude: /node_modules/,
      loader: 'babel',
      query: {
        presets: ['es2015', 'react']
      }
    }]
  },
  plugins: [
  ],
  resolve: {
    // Files with the following extensions are fair game for webpack to process
    extensions: ['', '.webpack.js', '.web.js', '.ts', '.tsx', '.js']
  },
};

Inject#

Your compiled output needs to be referenced from some kind of HTML page. So we've got this:

<!doctype html>
<html lang="en">
<head>
  <meta charSet="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>ES6 + Babel + React + Flux + Karma: The Secret Recipe</title>
  <!-- inject:css -->
  <!-- endinject -->
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">
</head>
<body>
  <div id="content"></div>
  <!-- inject:js -->
  <!-- endinject -->
</body>
</html>

Which is no more than a boilerplate HTML page with a couple of key features:

  • a single <div /> element in the <body /> which is where our React app is going to be rendered.
  • <!-- inject:css --> and <!-- inject:js --> placeholders where css and js are going to be injected by gulp-inject.
  • a single <link /> to the Bootstrap CDN. This sample app doesn't actually serve up any css generated as part of the project. It could but it doesn't. When it comes to injection time no css will actually be injected. This has been left in place as, more typically, a project would have some styling served up.

This is fed into our inject task in inject.build() and inject.watch(). They take css and javascript and, using our shell template, create a new page which has the css and javascript dropped into their respective placeholders:

'use strict';

var gulp = require('gulp');
var inject = require('gulp-inject');
var glob = require('glob');

function injectIndex(options) {
  var postInjectCb = options.postInjectCb;
  var postInjectCbTriggerId = null;

  function run() {
    var target = gulp.src('./src/index.html');
    var sources = gulp.src([
      //'./dist/styles/main*.css',
      './dist/scripts/vendor*.js',
      './dist/scripts/main*.js'
    ], { read: false });

    return target
      .on('end', function() { // invoke postInjectCb after 1s
        if (postInjectCbTriggerId || !postInjectCb) { return; }

        postInjectCbTriggerId = setTimeout(function() {
          postInjectCb();
          postInjectCbTriggerId = null;
        }, 1000);
      })
      .pipe(inject(sources, { ignorePath: '/dist/', addRootSlash: false, removeTags: true }))
      .pipe(gulp.dest('./dist'));
  }

  var jsCssGlob = 'dist/**/*.{js,css}';

  function checkForInitialFilesThenRun() {
    glob(jsCssGlob, function (er, files) {
      var filesWeNeed = ['dist/scripts/main', 'dist/scripts/vendor'/*, 'dist/styles/main'*/];

      function fileIsPresent(fileWeNeed) {
        return files.some(function(file) {
          return file.indexOf(fileWeNeed) !== -1;
        });
      }

      if (filesWeNeed.every(fileIsPresent)) {
        run('initial build');
      } else {
        checkForInitialFilesThenRun();
      }
    });
  }

  checkForInitialFilesThenRun();

  if (options.shouldWatch) {
    gulp.watch(jsCssGlob, function(evt) {
      if (evt.path && evt.type === 'changed') {
        run(evt.path);
      }
    });
  }
}

module.exports = {
  build: function() { return injectIndex({ shouldWatch: false }); },
  watch: function(postInjectCb) { return injectIndex({ shouldWatch: true, postInjectCb: postInjectCb }); }
};

This also triggers the server to serve up the new content.

Static Files#

Your app will likely rely on a number of static assets; images, fonts and whatnot. This script picks up the static assets you've defined and places them in the dist folder ready for use:

'use strict';

var gulp = require('gulp');
var cache = require('gulp-cached');

var targets = [
  // In my own example I don't use any of the targets below, they
  // are included to give you more of a feel of how you might use this
  { description: 'FONTS', src: './fonts/*', dest: './dist/fonts' },
  { description: 'STYLES', src: './styles/*', dest: './dist/styles' },
  { description: 'FAVICON', src: './favicon.ico', dest: './dist' },
  { description: 'IMAGES', src: './images/*', dest: './dist/images' }
];

function copy(options) {
  // Copy files from their source to their destination
  function run(target) {
    gulp.src(target.src)
      .pipe(cache(target.description))
      .pipe(gulp.dest(target.dest));
  }

  function watch(target) {
    gulp.watch(target.src, function() { run(target); });
  }

  targets.forEach(run);

  if (options.shouldWatch) {
    targets.forEach(watch);
  }
}

module.exports = {
  build: function() { return copy({ shouldWatch: false }); },
  watch: function() { return copy({ shouldWatch: true }); }
};

Karma#

Finally, we're ready to get our tests set up to run continually with Karma. tests.watch() triggers the following task:

'use strict';

var Server = require('karma').Server;
var path = require('path');
var gutil = require('gulp-util');

module.exports = {
  watch: function() {
    // Documentation: https://karma-runner.github.io/0.13/dev/public-api.html
    var karmaConfig = {
      configFile: path.join(__dirname, '../karma.conf.js'),
      singleRun: false,
      plugins: ['karma-webpack', 'karma-jasmine', 'karma-mocha-reporter', 'karma-sourcemap-loader', 'karma-phantomjs-launcher', 'karma-phantomjs-shim'], // karma-phantomjs-shim only in place until PhantomJS hits 2.0 and has function.bind
      reporters: ['mocha']
    };

    new Server(karmaConfig, karmaCompleted).start();

    function karmaCompleted(exitCode) {
      gutil.log('Karma has exited with:', exitCode);
      process.exit(exitCode);
    }
  }
};

When running in watch mode it's possible to debug the tests by going to: <a href="http://localhost:9876/">http://localhost:9876/</a>. It's also possible to run the tests standalone with a simple npm run test. Running them like this also outputs the results to an XML file in JUnit format; this can be useful for integrating into CI solutions that don't natively pick up test results.

Whichever approach we use for running tests, we use the following karma.conf.js file to configure Karma:

/* eslint-disable no-var, strict */
'use strict';

var webpackConfig = require('./webpack.config.js');

module.exports = function(config) {
  // Documentation: https://karma-runner.github.io/0.13/config/configuration-file.html
  config.set({
    browsers: [ 'PhantomJS' ],
    files: [
      'test/import-babel-polyfill.js', // This ensures we have the es6 shims in place from babel
      'test/**/*.tests.ts',
      'test/**/*.tests.tsx'
    ],
    port: 9876,
    frameworks: [ 'jasmine', 'phantomjs-shim' ],
    logLevel: config.LOG_INFO, //config.LOG_DEBUG
    preprocessors: {
      'test/import-babel-polyfill.js': [ 'webpack', 'sourcemap' ],
      'src/**/*.{ts,tsx}': [ 'webpack', 'sourcemap' ],
      'test/**/*.tests.{ts,tsx}': [ 'webpack', 'sourcemap' ]
    },

    webpack: {
      devtool: 'eval-source-map', //'inline-source-map', - inline-source-map doesn't work at present
      debug: true,
      module: webpackConfig.module,
      resolve: webpackConfig.resolve
    },

    webpackMiddleware: {
      quiet: true,
      stats: {
        colors: true
      }
    },

    // reporter options
    mochaReporter: {
      colors: {
        success: 'bgGreen',
        info: 'cyan',
        warning: 'bgBlue',
        error: 'bgRed'
      }
    },

    junitReporter: {
      outputDir: 'test-results', // results will be saved as $outputDir/$browserName.xml
      outputFile: undefined, // if included, results will be saved as $outputDir/$browserName/$outputFile
      suite: ''
    }
  });
};

As you can see, we're still using our webpack configuration from earlier to configure much of how the transpilation takes place.

And that's it; we have a workflow for developing in TypeScript using React, with tests running in an automated fashion. I appreciate this has been a rather long blog post, but I hope I've clarified somewhat how this all plugs together and works. Do leave a comment if you think I've missed something.

Babel 5 -> Babel 6#

This post has actually been sat waiting to be published for some time. I'd got this solution up and running with Babel 5. Then they shipped Babel 6 and (as is the way with "breaking changes") broke sourcemap support and thus torpedoed this workflow. Happily that's now been resolved. But if you should experience any wonkiness - it's worth checking that you're using the latest and greatest of Babel 6.

Things Done Changed

Some people fear change. Most people actually. I'm not immune to that myself, but not in the key area of technology. Any developer that fears change when it comes to the tools and languages that he / she is using is in the wrong business. Because what you're using to cut code today will not last. The language will evolve, the tools and frameworks that you love will die out and be replaced by new ones that are different and strange. In time, the language you feel you write as a native will fall out of favour, replaced by a new upstart.

My first gig was writing telecoms software using Delphi. I haven't touched Delphi (or telecoms for that matter) for over 10 years now. Believe me, I grok that things change.

That is the developer's lot. If you're able to accept that then you'll be just fine. For my part I've always rather relished the new and so I embrace it. However, I've met a surprising number of devs that are outraged when they realise that the language and tools they have used since their first job are not going to last. They do not go gentle into that good dawn. They rage, rage against the death of WebForms. My apologies to Dylan Thomas.

I recently started a new contract. This always brings a certain amount of change. This is part of the fun of contracting. However, the change was more significant in this case. As a result, the tools that I've been using for the last couple of months have been rather different to those that I'm used to. I've been outside my comfort zone. I've loved it. And now I want to reflect upon it. Because, in the words of Socrates, "the unexamined life is not worth living".

The Shock of the New (Toys)#

I'd been brought in to work on a full stack ASP.Net project. However, I've initially been working on a separate project which is entirely different. A web client app which has nothing to do with ASP.Net at all. It's a greenfield app which is built using the following:

  1. React / Flux
  2. WebSockets / Protocol Buffers
  3. Browserify
  4. ES6 with Babel
  5. Karma
  6. Gulp
  7. Atom

Where to begin? Perhaps at the end - Atom.

How Does it Feel to be on Your Own?#

When all around you, as far as the eye could see, were monitors displaying Visual Studio in all its grey glory, whilst I was hammering away in Atom? It felt pretty good actually.

The app I was working on was a React / Flux app. You know what that means? JSX! At the time the project began Visual Studio did not have good editor support for JSX (something that the shipping of VS 2015 may have remedied but I haven't checked). So, rather than endure a life dominated with red squigglies I jumped ship and moved over to using GitHub's Atom to edit code.

I rather like it. Whilst VS is a full blown IDE, Atom is a text editor. A very pretty one. And crucially, one that can be extended by plugins of which there is a rich ecosystem. You want JSX support? There's a plugin for that. You want something that formats JSON nicely for you? There's a plugin for that too.

My only criticism of Atom really is that it doesn't handle large files well and it crashes a lot. I'm quite surprised by both of these characteristics given that in contrast to VS it is so small. You'd think the basics would be better covered. Go figure. It still rocks though. It looks so sexy - how could I not like it?

Someone to watch over me#

I've been using Gulp for a while now. It's a great JavaScript task runner and incredibly powerful. Previously though, I've used it as part of a manual build step (even plumbing it into my csproj). With the current project I've moved over to using the watch functionality of gulp. So I'm scrapping triggering gulp tasks manually. Instead we have gulp configured to gaze lovingly at the source files and, when they change, re-run the build steps.

This is nice for a couple of reasons:

  • When I want to test out the app the build is already done - I don't have to wait for it to happen.
  • When I do bad things I find out faster. So I've got JSHint being triggered by my watch. If I write code that makes JSHint sad (and I haven't noticed the warnings from the atom plugin) then they'll appear in the console. Likewise, my unit tests are continuously running in response to file changes (in an ncrunch-y sorta style) and so I know straight away if I'm breaking tests. Rather invaluable in the dynamic world of JavaScript.

Karma, Karma, Karma, Chameleon#

In the past, when using Visual Studio, it made sense to use the mighty Chutzpah which allows you to run JS unit tests from within VS itself. I needed a new way to run my Jasmine unit tests. The obvious choice was Karma (built by the Angular team I think?). It's really flexible.

You're using Browserify? No bother. You're writing ES6 and transpiling with Babel? Not a problem. You want code coverage? That we can do. You want an integration for TeamCity? That too is sorted....

Karma is fantastic. Fun fact: originally it was called Testacular. I kind of get why they changed the name but the world is all the duller for it. A side effect of the name change is that due to invalid search results I know a lot more about Buddhism than I used to.

I cry Babel, Babel, look at me now#

Can you not wait for the future? Me neither. Even though it's 2015 and Back to the Future II takes place in only a month's time. So I'm not waiting for a million and one browsers to implement ES6 and IE 8 to finally die dammit. Nope, I have a plan and it's Babel. Transpile the tears away, write ES6 and have Babel spit out EStoday.

I've found this pretty addictive. Once you've started using ES6 features it's hard to go back. Take destructuring - I can't get enough of it.
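
Since I've name-checked destructuring, here's a tiny taste of it (a throwaway example of my own, not code from the project):

```typescript
// Object and array destructuring, plus a rest element - the ES6 features
// that are hardest to give up once you've used them.
const action = { type: "TRACK_SELECTED", payload: { id: 42, title: "Juicy" } };
const { type, payload: { title } } = action;   // pluck values out by shape
const [head, ...tail] = [1, 2, 3, 4];          // first element, then the rest
console.log(type, title, head, tail.length);   // TRACK_SELECTED Juicy 1 3
```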

Whilst I love Babel, it has caused me some sadness as well. My beloved TypeScript is currently not in the mix, Babel is instead sat squarely where I want TS to be. I'm without static types and rather bereft. You can certainly live without them but having done so for a while I'm pretty clear now that static typing is a massive productivity win. You don't have to hold the data structures that you're working on in your head so much, code completion gives you what you need there, you can focus more on the problem that you're trying to solve. You also burn less time on silly mistakes. If you accidentally change the return type of a function you're likely to know straight away. Refactoring is so much harder without static types. I could go on.
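
To make that return-type point concrete, here's a contrived sketch (the function and its names are invented purely for illustration):

```typescript
// With a declared return type, accidentally returning the wrong thing is a
// compile error at the function itself, not a runtime surprise at call sites.
function totalQuantity(lines: { quantity: number }[]): number {
  return lines.reduce((sum, line) => sum + line.quantity, 0);
}

// If totalQuantity were changed to return a string, tsc would flag the
// function body, and arithmetic like this at every call site:
const doubled = totalQuantity([{ quantity: 2 }, { quantity: 3 }]) * 2;
console.log(doubled); // 10
```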

All this goes to say: I want my static typing back. It wasn't really an option to use TypeScript in the beginning because we were using JSX and TS didn't support it. However! TypeScript is due to add support for JSX in TS 1.6 (currently in beta). I've plans to see if I can get TypeScript to emit ES6 and then keep using Babel to do the transpilation. Whether this will work, I don't know yet. But it seems likely. So I'm hoping for a bright future.

Browserify (there are no song lyrics that can be crowbarred in)#

Emitting scripts in the required order has been a perpetual pain for everyone in the web world for the longest time. I've taken about 5 different approaches to solving this problem over the years. None felt particularly good. So Browserify.

Browserify solves the problem of script ordering for you by allowing you to define an entry point to your application and getting you to write require (npm style) or import (ES6 modules) to bring in the modules that you need. This allows Browserify (which we're using with Babel thanks to the babelify transform) to create a ginormous js file that contains the scripts served as needed. Thanks to the magic of source maps it also allows us to debug our original code (yup, the original ES6!). Browserify has the added bonus of allowing us free rein to pull in npm packages to use in our app without a great deal of ceremony.

Browserify is pretty fab - my only real reservation is that if you step outside the normal use cases you can quickly find yourself in deep water. Take for instance web workers. We were briefly looking into using them as an optimisation (breaking IO onto a separate process from the UI). A prime reason for backing away from this is that Web Workers don't play particularly well with Browserify. And when you've got Babel (or Babelify) in the mix the problems just multiply. That apart, I really dig Browserify. I think I'd like to give WebPack a go as well as I understand it fulfills a similar purpose.

WebSockets / Protocol Buffers#

The application I'm working on is plugging into an existing system which uses WebSockets for communication. Since WebSockets are native to the web we've been able to plumb these straight into our app. We're also using Protocol Buffers as another optimisation; a way to save a few extra bytes from going down the wire. I don't have much to say about either, just some observations really:

  1. WebSockets is a slightly different way of working - permanently open connections as opposed to the request / response paradigm of classic HTTP
  2. WebSockets are wicked fast (due in part to those permanent connections), so performance is amazing; fast-like-native amazing. In our case performance is pretty important and so this has been really great.

React / Flux#

Finally, React and Flux. I was completely new to these when I came onto the project and I quickly came to love them. There was a prejudice for me to overcome and that was JSX. When I first saw it I felt a little sick. "Have we learned NOTHING???" I wailed. "Don't we know that embedding strings in our controllers is a BAD thing?" I was wrong. I had an epiphany. I discovered that JSX is not, as I first imagined, embedded HTML strings. Actually it's syntactic sugar for object creation. A simple example:

var App;
// Some JSX:
var app = <App version="1.0.0" />;
// What that JSX transpiles into:
var app = React.createElement(App, {version:"1.0.0"});
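
If it helps to see the "syntactic sugar for object creation" claim in miniature, here's a toy createElement (a simplification of mine; React's real implementation does rather more):

```typescript
// A toy stand-in for React.createElement: a plain function call returning a
// plain object - which is all the transpiled JSX amounts to.
function createElement(
  type: unknown,
  props: { [key: string]: unknown } | null,
  ...children: unknown[]
) {
  return { type: type, props: props || {}, children: children };
}

const app = createElement("App", { version: "1.0.0" });
console.log(app.props["version"]); // "1.0.0"
```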

Now that I'm well used to JSX and React I've really got to like it. I keep my views / components as dumb as possible and do all the complex logic in the stores. The stores are just standard JavaScript and so, pretty testable (simple Jasmine gives you all you need - I haven't felt the need for Jest). The components / views are also completely testable. I'd advise anyone coming to React afresh to make use of the <a href="https://facebook.github.io/react/docs/test-utils.html#shallow-rendering">ReactShallowRenderer</a> for these purposes. This means you can test without a DOM - much better all round.

I don't have a great deal to say about Flux; I think my views on it aren't fully formed yet. I do really like the predictability that unidirectional data flow gives you. However, I'm mindful that the app that I've been writing is very much about displaying data and doesn't require much user input. I know that I'm living without 2-way data binding and I do wonder if I would come to miss it. Time will tell.

I really want to get back to static typing. That either means TypeScript (which I know and love) or Facebook's Flow. (A Windows version of Flow is in the works.) I'll be very happy if I get either into the mix... Watch this space.