
14 posts tagged with "javascript"


(Top One, Nice One) Get Sorted

I was recently reading a post by Jaime González García which featured the following mind-bending proposition:

What if I told you that JavaScript has LINQ??

It got me thinking about one of my favourite features of LINQ: ordering using OrderBy, ThenBy... The ability to simply expose a collection of objects in a given order with a relatively terse and descriptive syntax. It is fantastically convenient, expressive and something I've been missing in JavaScript. But if Jaime is right... Well, let's see what we can do.

Sort#

JavaScript arrays have a sort method. To quote MDN:

arr.sort([compareFunction])

compareFunction

Optional. Specifies a function that defines the sort order. If omitted, the array is sorted according to each character's Unicode code point value, according to the string conversion of each element.

We want to use the sort function to introduce some LINQ-ish ordering goodness. Sort of. See what I did there?

Before we get going it's worth saying that LINQ's OrderBy and JavaScript's sort are not the same thing. sort actually changes the order of the array. However, OrderBy returns an IOrderedEnumerable which when iterated returns the items of the collection in a particular order. An important difference. If preserving the original order of my array was important to me (spoiler: mostly it isn't) then I could make a call to [slice](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice) prior to calling sort.

sort also returns the array to the caller which is nice for chaining and means we can use it in a similar fashion to the way we use OrderBy. With that in mind, we're going to create comparer functions which will take a lambda / arrow function (ES6 alert!) and return a function which will compare based on the supplied lambda.

String Comparer#

Let's start with ordering by string properties:

function stringComparer(propLambda) {
  return (obj1, obj2) => {
    const obj1Val = propLambda(obj1) || '';
    const obj2Val = propLambda(obj2) || '';
    return obj1Val.localeCompare(obj2Val);
  };
}

We need some example data to sort: (I can only apologise for my lack of inspiration here)

const foodInTheHouse = [
  { what: 'cake', daysSincePurchase: 2 },
  { what: 'apple', daysSincePurchase: 8 },
  { what: 'orange', daysSincePurchase: 6 },
  { what: 'apple', daysSincePurchase: 2 },
];

If we were doing a sort by strings in LINQ we wouldn't need to implement our own comparer. And the code we'd write would look something like this:

var foodInTheHouseSorted = foodInTheHouse.OrderBy(x => x.what);

With that in mind, here's how it would look to use our shiny and new stringComparer:

const foodInTheHouseSorted = foodInTheHouse.sort(stringComparer(x => x.what));
// foodInTheHouseSorted: [
// { what: 'apple', daysSincePurchase: 8 },
// { what: 'apple', daysSincePurchase: 2 },
// { what: 'cake', daysSincePurchase: 2 },
// { what: 'orange', daysSincePurchase: 6 }
// ]
// PS Don't forget, for our JavaScript: foodInTheHouse === foodInTheHouseSorted
// But for the LINQ: foodInTheHouse != foodInTheHouseSorted
//
// However, if I'd done this:
const foodInTheHouseSlicedAndSorted = foodInTheHouse.slice().sort(stringComparer(x => x.what));
// then: foodInTheHouse !== foodInTheHouseSlicedAndSorted
//
// I shan't mention this again.

Number Comparer#

Well that's strings sorted (quite literally). Now, what about numbers?

function numberComparer(propLambda) {
  return (obj1, obj2) => {
    const obj1Val = propLambda(obj1);
    const obj2Val = propLambda(obj2);
    if (obj1Val > obj2Val) {
      return 1;
    } else if (obj1Val < obj2Val) {
      return -1;
    }
    return 0;
  };
}

If we use the numberComparer on our original array it looks like this:

const foodInTheHouseSorted = foodInTheHouse.sort(numberComparer(x => x.daysSincePurchase));
// foodInTheHouseSorted: [
// { what: 'cake', daysSincePurchase: 2 },
// { what: 'apple', daysSincePurchase: 2 },
// { what: 'orange', daysSincePurchase: 6 },
// { what: 'apple', daysSincePurchase: 8 }
// ]

Descending Into the Pit of Success#

Well this is all kinds of fabulous. But something's probably nagging at you... What about OrderByDescending? What about when I want to sort in the reverse order? May I present the reverse function:

function reverse(comparer) {
  return (obj1, obj2) => comparer(obj1, obj2) * -1;
}

As the name suggests, this function takes a comparer and returns a function that inverts the result of executing that comparer. Clear as mud? A comparer can return three kinds of value:

  • 0 - implies equality for obj1 and obj2
  • positive - implies obj1 is greater than obj2 by the ordering criterion
  • negative - implies obj1 is less than obj2 by the ordering criterion

Our reverse function takes the comparer it is given and returns a new comparer that will return a positive value where the old one would have returned a negative and vice versa. (Equality is unaffected.) An alternative implementation would have been this:

function reverse(comparer) {
  return (obj1, obj2) => comparer(obj2, obj1);
}

This is simpler and arguably more efficient, as it just swaps the arguments supplied to the comparer. Whatever tickles your fancy. Either way, when used it looks like this:

const foodInTheHouseSorted = foodInTheHouse.sort(reverse(stringComparer(x => x.what)));
// foodInTheHouseSorted: [
// { what: 'orange', daysSincePurchase: 6 },
// { what: 'cake', daysSincePurchase: 2 },
// { what: 'apple', daysSincePurchase: 8 },
// { what: 'apple', daysSincePurchase: 2 }
// ]

If you'd rather not have a function wrapping a function inline then you could create stringComparerDescending, numberComparerDescending etc. implementations. Arguably that might make for a nicer API. I'm not unhappy with the present approach myself and so I'll leave it as is. But it's an option.
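If you did go that way, a minimal sketch might simply build on the reverse function above (the stringComparerDescending and numberComparerDescending names are my own, not from the original post):

function stringComparerDescending(propLambda) {
  // Just the ascending string comparer, inverted
  return reverse(stringComparer(propLambda));
}

function numberComparerDescending(propLambda) {
  // Just the ascending number comparer, inverted
  return reverse(numberComparer(propLambda));
}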

ThenBy#

So far we can sort arrays by strings, we can sort arrays by numbers and we can do either in descending order. It's time to take it to the next level people. That's right: ThenBy. I want to be able to sort by one criterion and then by a subcriterion. So perhaps I want to eat the food in the house in alphabetical order, but if I have multiple apples I want to eat the ones I bought most recently first (because the other ones look old, brown and yukky). This may also be a sign I haven't thought my life through, but it's a choice that people make. People that I know. People I may have married.

It's time to compose our comparers together. May I present... drum roll.... the composeComparers function:

function composeComparers(...comparers) {
  return (obj1, obj2) => {
    const comparer = comparers.find(c => c(obj1, obj2) !== 0);
    return (comparer) ? comparer(obj1, obj2) : 0;
  };
}

This fine function takes any number of comparers. It returns a comparer function which, when called, runs each of the original comparers in turn until it finds one that returns a non-zero value (i.e. one that says the two items are not equal). It then returns that non-zero value, or 0 if all the comparers reported equality.

const foodInTheHouseSorted = foodInTheHouse.sort(composeComparers(
  stringComparer(x => x.what),
  numberComparer(x => x.daysSincePurchase),
));
// foodInTheHouseSorted: [
// { what: 'apple', daysSincePurchase: 2 },
// { what: 'apple', daysSincePurchase: 8 },
// { what: 'cake', daysSincePurchase: 2 },
// { what: 'orange', daysSincePurchase: 6 }
// ]

composeComparers: The Sequel#

I'm not gonna lie - I was feeling quite pleased with this approach. I shared it with my friend (and repeated colleague) Peter Foldi. The next day I found this in my inbox:

function composeComparers(...comparers) {
  return (obj1, obj2) => comparers.reduce((prev, curr) => prev || curr(obj1, obj2), 0);
}

Dammit, he's improved it. It's down to one line of code, it doesn't execute a non-zero returning comparer twice and it doesn't rely on [find](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/find) which only arrives with ES6. So if you wanted to backport to ES5 then this is a better choice.

The only criticism I can make of it is that it iterates through each of the comparers even when it doesn't need to execute them. But that's just carping really.
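If that did bother you, a version that bails out as soon as it finds a non-zero result might look something like this (my own sketch, not from Peter):

function composeComparers(...comparers) {
  return (obj1, obj2) => {
    for (const comparer of comparers) {
      const result = comparer(obj1, obj2);
      // Stop at the first comparer that can tell the two items apart
      if (result !== 0) {
        return result;
      }
    }
    return 0;
  };
}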

composeComparers: The Ultimate#

So naturally I thought I was done. Showing Peter's improvements to the estimable Matthew Horsley I learned that this was not so. Because he reached for the keyboard and entered this:

function composeComparers(...comparers) {
  // README: https://wiki.haskell.org/Function_composition
  return comparers.reduce((prev, curr) => (a, b) => prev(a, b) || curr(a, b));
}

That's right, he's created a function which takes a number of comparers and reduces them up front into a single comparer function. This means that when the sort takes place there is no longer a need to iterate through the comparers, just execute them.

I know.

I'll get my coat...
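Before moving on, here's a quick sketch of the final version in use, combining it with reverse - alphabetical by name but, this time, oldest purchases first within each name (the variable name is my own):

const eatTheOldStuffFirst = foodInTheHouse.slice().sort(composeComparers(
  stringComparer(x => x.what),
  reverse(numberComparer(x => x.daysSincePurchase)),
));
// eatTheOldStuffFirst: [
//   { what: 'apple', daysSincePurchase: 8 },
//   { what: 'apple', daysSincePurchase: 2 },
//   { what: 'cake', daysSincePurchase: 2 },
//   { what: 'orange', daysSincePurchase: 6 }
// ]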

Update 08/10/2018: Now TypeScript#

You want to do this with TypeScript? Use this:

type Comparer<TObject> = (obj1: TObject, obj2: TObject) => number;

export function stringComparer<TObject>(propLambda: (obj: TObject) => string): Comparer<TObject> {
  return (obj1: TObject, obj2: TObject) => {
    const obj1Val = propLambda(obj1) || '';
    const obj2Val = propLambda(obj2) || '';
    return obj1Val.localeCompare(obj2Val);
  };
}

export function numberComparer<TObject>(propLambda: (obj: TObject) => number): Comparer<TObject> {
  return (obj1: TObject, obj2: TObject) => {
    const obj1Val = propLambda(obj1);
    const obj2Val = propLambda(obj2);
    if (obj1Val > obj2Val) {
      return 1;
    } else if (obj1Val < obj2Val) {
      return -1;
    }
    return 0;
  };
}

export function reverse<TObject>(comparer: Comparer<TObject>) {
  return (obj1: TObject, obj2: TObject) => comparer(obj2, obj1);
}

export function composeComparers<TObject>(...comparers: Comparer<TObject>[]) {
  return comparers.reduce((prev, curr) => (a, b) => prev(a, b) || curr(a, b));
}
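And a quick sketch of the typed helpers in use (the FoodItem interface and the variable names are my own illustration, not part of the original code):

interface FoodItem {
  what: string;
  daysSincePurchase: number;
}

// Assuming the comparer helpers above have been imported into scope
const food: FoodItem[] = [
  { what: 'cake', daysSincePurchase: 2 },
  { what: 'apple', daysSincePurchase: 8 },
  { what: 'apple', daysSincePurchase: 2 },
];

// The explicit <FoodItem> type arguments could mostly be inferred; they are spelled out for clarity
const sorted = food.sort(composeComparers<FoodItem>(
  stringComparer<FoodItem>(x => x.what),
  reverse(numberComparer<FoodItem>(x => x.daysSincePurchase)),
));
// sorted: apple (8 days), apple (2 days), cake (2 days)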

Using Gulp in Visual Studio instead of Web Optimization

Update 17/02/2015: I've taken the approach discussed in this post a little further - you can see here#

I've used a number of tools to package up JavaScript and CSS in my web apps. Andrew Davey's tremendous Cassette has been really useful. Also good (although less powerful/magical) has been Microsoft's very own Microsoft.AspNet.Web.Optimization that ships with MVC.

I was watching the ASP.NET Community Standup from October 7th, 2014 and learned that the ASP.Net team is not planning to migrate Microsoft.AspNet.Web.Optimization to the next version of ASP.Net. Instead they're looking to make use of JavaScript task runners like Grunt and maybe Gulp. Perhaps you're even dimly aware that they've been taking steps to make these runners more of a first class citizen in Visual Studio, hence the recent release of the new and groovy Task Runner Explorer.

Gulp has been on my radar for a while now as has Grunt. By "on my radar" what I really mean is "Hmmmm, I really need to learn this..... perhaps I could wait until the Betamax vs VHS battles are done? Oh never mind, here we go...".

My understanding is that Grunt and Gulp essentially do the same thing (run tasks in JavaScript) but have different approaches. Grunt is more about configuration, Gulp is more about code. At present Gulp also has a performance advantage as it does less IO than Grunt - though I understand that's due to change in the future. But generally my preference is code over configuration. On that basis I decided that I was going to give Gulp first crack.

Bub bye Web Optimization#

I already had a project that used Web Optimization to bundle JavaScript and CSS files. When debugging on my own machine Web Optimization served up the full JavaScript and CSS files. Thanks to the magic of source maps I was able to debug the TypeScript that created the JavaScript files too. Which was nice. When I deployed to production, Web Optimization minified and concatenated the JavaScript and CSS files. This meant I had a single HTTP request for JavaScript and a single HTTP request for CSS. This was also... nooice!

I took a copy of my existing project and created a new repo for it on GitHub. It was very simple in terms of bundling. It had a BundleConfig that created 2 bundles; 1 for JavaScript and 1 for CSS:

using System.Web;
using System.Web.Optimization;
namespace Proverb.Web
{
public class BundleConfig
{
// For more information on bundling, visit http://go.microsoft.com/fwlink/?LinkId=301862
public static void RegisterBundles(BundleCollection bundles)
{
var angularApp = new ScriptBundle("~/angularApp").Include(
// Vendor Scripts
"~/scripts/jquery-{version}.js",
"~/scripts/angular.js",
"~/scripts/angular-animate.js",
"~/scripts/angular-route.js",
"~/scripts/angular-sanitize.js",
"~/scripts/angular-ui/ui-bootstrap-tpls.js",
"~/scripts/toastr.js",
"~/scripts/moment.js",
"~/scripts/spin.js",
"~/scripts/underscore.js",
// Bootstrapping
"~/app/app.js",
"~/app/config.route.js",
// common Modules
"~/app/common/common.js",
"~/app/common/logger.js",
"~/app/common/spinner.js",
// common.bootstrap Modules
"~/app/common/bootstrap/bootstrap.dialog.js"
);
// directives
angularApp.IncludeDirectory("~/app/directives", "*.js", true);
// services
angularApp.IncludeDirectory("~/app/services", "*.js", true);
// controllers
angularApp.IncludeDirectory("~/app/admin", "*.js", true);
angularApp.IncludeDirectory("~/app/about", "*.js", true);
angularApp.IncludeDirectory("~/app/dashboard", "*.js", true);
angularApp.IncludeDirectory("~/app/layout", "*.js", true);
angularApp.IncludeDirectory("~/app/sayings", "*.js", true);
angularApp.IncludeDirectory("~/app/sages", "*.js", true);
bundles.Add(angularApp);
bundles.Add(new StyleBundle("~/Content/css").Include(
"~/content/ie10mobile.css",
"~/content/bootstrap.css",
"~/content/font-awesome.css",
"~/content/toastr.css",
"~/content/styles.css"
));
}
}
}

I set myself a task. I wanted to be able to work in *exactly* the way I was working now. But using Gulp instead of Web Optimization. I wanted to lose the BundleConfig above and remove Web Optimization from my application, secure in the knowledge that I had lost nothing. Could it be done? Read on!

Installing Gulp (and Associates)#

I fired up Visual Studio and looked for an excuse to use the Task Runner Explorer. The first thing I needed was Gulp. My machine already had Node and NPM installed so I went to the command line to install Gulp globally:

npm install gulp -g

Now to start to plug Gulp into my web project. It was time to make the introductions: Visual Studio meet NPM. At the root of the web project I created a package.json file by executing the following command and accepting all the defaults:

npm init

I wanted to add Gulp as a development dependency of my project: ("Development" because I only need to run tasks at development time. My app has no dependency on Gulp at runtime - at that point it's just about serving up static files.)

npm install gulp --save-dev

This installs gulp local to the project as a development dependency. As a result we now have a "node_modules" folder sat in our root which contains our node packages. Currently, as our package.json reveals, this is only gulp:

"devDependencies": {
"gulp": "^3.8.8"
}

It's time to go to town. Let's install all the packages we're going to need to bundle and minify JavaScript and CSS:

npm install gulp-concat gulp-uglify gulp-rev del path gulp-ignore gulp-asset-manifest gulp-minify-css --save-dev

This installs the packages as dev dependencies (as you've probably guessed) and leaves us with a list of dev dependencies like this:

"devDependencies": {
"del": "^0.1.3",
"gulp": "^3.8.8",
"gulp-asset-manifest": "0.0.5",
"gulp-concat": "^2.4.1",
"gulp-ignore": "^1.2.1",
"gulp-minify-css": "^0.3.10",
"gulp-rev": "^1.1.0",
"gulp-uglify": "^1.0.1",
"path": "^0.4.9"
}

Making gulpfile.js#

So now I was ready. I had everything I needed to replace my BundleConfig.cs. I created a new file called gulpfile.js in the root of my web project that looked like this:

/// <vs AfterBuild='default' />
var gulp = require("gulp");
// Include Our Plugins
var concat = require("gulp-concat");
var ignore = require("gulp-ignore");
var manifest = require("gulp-asset-manifest");
var minifyCss = require("gulp-minify-css");
var uglify = require("gulp-uglify");
var rev = require("gulp-rev");
var del = require("del");
var path = require("path");
var tsjsmapjsSuffix = ".{ts,js.map,js}";
var excludetsjsmap = "**/*.{ts,js.map}";
var bundleNames = { scripts: "scripts", styles: "styles" };
var filesAndFolders = {
base: ".",
buildBaseFolder: "./build/",
debug: "debug",
release: "release",
css: "css",
// The fonts we want Gulp to process
fonts: ["./fonts/*.*"],
// The scripts we want Gulp to process - adapted from BundleConfig
scripts: [
// Vendor Scripts
"./scripts/angular.js",
"./scripts/angular-animate.js",
"./scripts/angular-route.js",
"./scripts/angular-sanitize.js",
"./scripts/angular-ui/ui-bootstrap-tpls.js",
"./scripts/toastr.js",
"./scripts/moment.js",
"./scripts/spin.js",
"./scripts/underscore.js",
// Bootstrapping
"./app/app" + tsjsmapjsSuffix,
"./app/config.route" + tsjsmapjsSuffix,
// common Modules
"./app/common/common" + tsjsmapjsSuffix,
"./app/common/logger" + tsjsmapjsSuffix,
"./app/common/spinner" + tsjsmapjsSuffix,
// common.bootstrap Modules
"./app/common/bootstrap/bootstrap.dialog" + tsjsmapjsSuffix,
// directives
"./app/directives/**/*" + tsjsmapjsSuffix,
// services
"./app/services/**/*" + tsjsmapjsSuffix,
// controllers
"./app/about/**/*" + tsjsmapjsSuffix,
"./app/admin/**/*" + tsjsmapjsSuffix,
"./app/dashboard/**/*" + tsjsmapjsSuffix,
"./app/layout/**/*" + tsjsmapjsSuffix,
"./app/sages/**/*" + tsjsmapjsSuffix,
"./app/sayings/**/*" + tsjsmapjsSuffix
],
// The styles we want Gulp to process - adapted from BundleConfig
styles: [
"./content/ie10mobile.css",
"./content/bootstrap.css",
"./content/font-awesome.css",
"./content/toastr.css",
"./content/styles.css"
]
};
filesAndFolders.debugFolder = filesAndFolders.buildBaseFolder + "/" + filesAndFolders.debug + "/";
filesAndFolders.releaseFolder = filesAndFolders.buildBaseFolder + "/" + filesAndFolders.release + "/";
/**
* Create a manifest depending upon the supplied arguments
*
* @param {string} manifestName
* @param {string} bundleName
* @param {boolean} includeRelativePath
* @param {string} pathPrepend
*/
function getManifest(manifestName, bundleName, includeRelativePath, pathPrepend) {
// Determine filename ("./build/manifest-debug.json" or "./build/manifest-release.json"
var manifestFile = filesAndFolders.buildBaseFolder + "manifest-" + manifestName + ".json";
return manifest({
bundleName: bundleName,
includeRelativePath: includeRelativePath,
manifestFile: manifestFile,
log: true,
pathPrepend: pathPrepend,
pathSeparator: "/"
});
}
// Delete the build folder
gulp.task("clean", function (cb) {
del([filesAndFolders.buildBaseFolder], cb);
});
// Copy across all files in filesAndFolders.scripts to build/debug
gulp.task("scripts-debug", ["clean"], function () {
return gulp
.src(filesAndFolders.scripts, { base: filesAndFolders.base })
.pipe(gulp.dest(filesAndFolders.debugFolder));
});
// Create a manifest.json for the debug build - this should have lots of script files in
gulp.task("manifest-scripts-debug", ["scripts-debug"], function () {
return gulp
.src(filesAndFolders.scripts, { base: filesAndFolders.base })
.pipe(ignore.exclude("**/*.{ts,js.map}")) // Exclude ts and js.map files from the manifest (as they won't become script tags)
.pipe(getManifest(filesAndFolders.debug, bundleNames.scripts, true));
});
// Copy across all files in filesAndFolders.styles to build/debug
gulp.task("styles-debug", ["clean"], function () {
return gulp
.src(filesAndFolders.styles, { base: filesAndFolders.base })
.pipe(gulp.dest(filesAndFolders.debugFolder));
});
// Create a manifest.json for the debug build - this should have lots of style files in
gulp.task("manifest-styles-debug", ["styles-debug", "manifest-scripts-debug"], function () {
return gulp
.src(filesAndFolders.styles, { base: filesAndFolders.base })
//.pipe(ignore.exclude("**/*.{ts,js.map}")) // Exclude ts and js.map files from the manifest (as they won't become script tags)
.pipe(getManifest(filesAndFolders.debug, bundleNames.styles, true));
});
// Concatenate & Minify JS for release into a single file
gulp.task("scripts-release", ["clean"], function () {
return gulp
.src(filesAndFolders.scripts)
.pipe(ignore.exclude("**/*.{ts,js.map}")) // Exclude ts and js.map files - not needed in release mode
.pipe(concat("app.js")) // Make a single file - if you want to see the contents then include the line below
//.pipe(gulp.dest(releaseFolder))
.pipe(uglify()) // Make the file titchy tiny small
.pipe(rev()) // Suffix a version number to it
.pipe(gulp.dest(filesAndFolders.releaseFolder)); // Write single versioned file to build/release folder
});
// Create a manifest.json for the release build - this should just have a single file for scripts
gulp.task("manifest-scripts-release", ["scripts-release"], function () {
return gulp
.src(filesAndFolders.buildBaseFolder + filesAndFolders.release + "/*.js")
.pipe(getManifest(filesAndFolders.release, bundleNames.scripts, false));
});
// Copy across all files in filesAndFolders.styles to build/debug
gulp.task("styles-release", ["clean"], function () {
return gulp
.src(filesAndFolders.styles)
.pipe(concat("app.css")) // Make a single file - if you want to see the contents then include the line below
//.pipe(gulp.dest(releaseFolder))
.pipe(minifyCss()) // Make the file titchy tiny small
.pipe(rev()) // Suffix a version number to it
.pipe(gulp.dest(filesAndFolders.releaseFolder + "/" + filesAndFolders.css)); // Write single versioned file to build/release folder
});
// Create a manifest.json for the debug build - this should have a single style files in
gulp.task("manifest-styles-release", ["styles-release", "manifest-scripts-release"], function () {
return gulp
.src(filesAndFolders.releaseFolder + "**/*.css")
.pipe(getManifest(filesAndFolders.release, bundleNames.styles, false, filesAndFolders.css + "/"));
});
// Copy across all fonts in filesAndFolders.fonts to both release and debug locations
gulp.task("fonts", ["clean"], function () {
return gulp
.src(filesAndFolders.fonts, { base: filesAndFolders.base })
.pipe(gulp.dest(filesAndFolders.debugFolder))
.pipe(gulp.dest(filesAndFolders.releaseFolder));
});
// Default Task
gulp.task("default", [
"scripts-debug", "manifest-scripts-debug", "styles-debug", "manifest-styles-debug",
"scripts-release", "manifest-scripts-release", "styles-release", "manifest-styles-release",
"fonts"
]);

What gulpfile.js does#

This file does a number of things each time it is run. First of all it deletes any build folder in the root of the web project so we're ready to build anew. Then it packages up files both for debug and for release mode. For debug it does the following:

  1. It copies the ts, js.map and js files declared in filesAndFolders.scripts to the build/debug folder preserving their original folder structure. (So, for example, app/app.ts, app/app.js.map and app/app.js will all end up at build/debug/app/app.ts, build/debug/app/app.js.map and build/debug/app/app.js respectively.) This is done to allow the continued debugging of the original TypeScript files when running in debug mode.
  2. It copies the css files declared in filesAndFolders.styles to the build/debug folder preserving their original folder structure. (So content/bootstrap.css will end up at build/debug/content/bootstrap.css.)
  3. It creates a build/manifest-debug.json file which contains details of all the script and style files that have been packaged up:

     {
       "scripts": [
         "scripts/angular.js", "scripts/angular-animate.js", "scripts/angular-route.js", "scripts/angular-sanitize.js",
         "scripts/angular-ui/ui-bootstrap-tpls.js", "scripts/toastr.js", "scripts/moment.js", "scripts/spin.js", "scripts/underscore.js",
         "app/app.js", "app/config.route.js",
         "app/common/common.js", "app/common/logger.js", "app/common/spinner.js", "app/common/bootstrap/bootstrap.dialog.js",
         "app/directives/imgPerson.js", "app/directives/serverError.js", "app/directives/sidebar.js", "app/directives/spinner.js",
         "app/directives/waiter.js", "app/directives/widgetClose.js", "app/directives/widgetHeader.js", "app/directives/widgetMinimize.js",
         "app/services/datacontext.js", "app/services/repositories.js", "app/services/repository.sage.js", "app/services/repository.saying.js",
         "app/about/about.js", "app/admin/admin.js", "app/dashboard/dashboard.js",
         "app/layout/shell.js", "app/layout/sidebar.js", "app/layout/topnav.js",
         "app/sages/sageDetail.js", "app/sages/sageEdit.js", "app/sages/sages.js",
         "app/sayings/sayingEdit.js", "app/sayings/sayings.js"
       ],
       "styles": [
         "content/ie10mobile.css", "content/bootstrap.css", "content/font-awesome.css", "content/toastr.css", "content/styles.css"
       ]
     }

For release our gulpfile works with the same resources but has a different aim: namely to minimise the number of HTTP requests, obfuscate the code and version the files produced to prevent caching issues. To achieve those lofty aims it does the following:

  1. It concatenates together all the js files declared in filesAndFolders.scripts, minifies them and writes them to a single build/release/app-{xxxxx}.js file (where -{xxxxx} represents a version created by gulp-rev).
  2. It concatenates together all the css files declared in filesAndFolders.styles, minifies them and writes them to a single build/release/css/app-{xxxxx}.css file. The file is placed in a css subfolder because of relative paths specified in the CSS file.
  3. It creates a build/manifest-release.json file which contains details of all the script and style files that have been packaged up:

     {
       "scripts": ["app-95d1e06d.js"],
       "styles": ["css/app-1a6256ea.css"]
     }

As you can see, the number of files included is reduced down to 2; 1 for JavaScript and 1 for CSS.

Finally, for both the debug and release packages the contents of the `fonts` folder is copied across wholesale, preserving the original folder structure. This is because the CSS files contain relative references that point to the font files. If I had image files which were referenced by my CSS I'd similarly need to include these in the build process.

Task Runner Explorer gets in on the action#

The eagle eyed amongst you will also have noticed a peculiar first line to our `gulpfile.js`:

/// <vs AfterBuild='default' />

This mysterious comment is actually how the Task Runner Explorer hooks our gulpfile.js into the Visual Studio build process. Our "magic comment" ensures that on the AfterBuild event, Task Runner Explorer runs the default task in our gulpfile.js. The reason we're using the AfterBuild event rather than the BeforeBuild event is because our project contains TypeScript and we need the transpiled JavaScript to be created before we can usefully run our package tasks. If we were using JavaScript alone then that wouldn't be an issue and either build event would do.

How do I use this in my HTML?#

Well this is magnificent - we have a gulpfile that builds our debug and release packages. The question now is, how do we use it?

Web Optimization made our lives really easy. Up in my head I had a @Styles.Render("~/Content/css") which pushed out my CSS and down at the foot of the body tag I had a @Scripts.Render("~/angularApp") which pushed out my script tags. Styles and Scripts are server-side utilities. It would be very easy to write equivalent utility classes that, depending on whether we were in debug or not, read the appropriate build/manifest-xxxxxx.json file and served up either debug or release style / script tags.

That would be pretty simple - and for what it's worth **simple is good**. But today I felt like a challenge. What if server side rendering had been outlawed? A draconian ruling had been passed and all you had to play with was HTML / JavaScript and a server API that served up JSON? What would you do then? (All fantasy I know... But go with me on this - it's a journey.) Or more sensibly, what if you just want to remove some of the work your app is doing server-side to bundle and minify, and just serve up static assets instead? Spend less money in Azure, why not?

Before I make all the changes let's review where we were. I had a single MVC view which, in terms of bundles, CSS and JavaScript pretty much looked like this:

<!DOCTYPE html>
<html>
<head>
<!-- ... -->
@Styles.Render("~/Content/css")
</head>
<body>
<!-- ... -->
@Scripts.Render("~/angularApp")
<script>
(function () {
$.getJSON('@Url.Content("~/Home/StartApp")')
.done(function (startUpData) {
var appConfig = $.extend({}, startUpData, {
appRoot: '@Url.Content("~/")',
remoteServiceRoot: '@Url.Content("~/api/")'
});
angularApp.start({
thirdPartyLibs: {
moment: window.moment,
toastr: window.toastr,
underscore: window._
},
appConfig: appConfig
});
});
})();
</script>
</body>
</html>

This is already a more complicated example than most people's use cases. Essentially what's happening here is that both bundles are written out as part of the HTML and then, once the scripts have loaded, the Angular app is bootstrapped with some configuration loaded from the server by a good old jQuery AJAX call.

After reading an article about script loading by the magnificently funny Jake Archibald I felt ready. I cast my MVC view to the four winds and created myself a straight HTML file:

<!DOCTYPE html>
<html>
<head>
<!-- ... -->
</head>
<body>
<!-- ... -->
<script src="Scripts/jquery-2.1.1.min.js"></script>
<script>
(function () {
var appConfig = {};
var scriptsToLoad;
/**
* Handler which fires as each script loads
*/
function onScriptLoad(event) {
scriptsToLoad -= 1;
// Now all the scripts are present start the app
if (scriptsToLoad === 0) {
angularApp.start({
thirdPartyLibs: {
moment: window.moment,
toastr: window.toastr,
underscore: window._
},
appConfig: appConfig
});
}
}
// Load startup data from the server
$.getJSON("api/Startup")
.done(function (startUpData) {
appConfig = startUpData;
// Determine the assets folder depending upon whether in debug mode or not
var buildFolder = appConfig.appRoot + "build/";
var debugOrRelease = (appConfig.inDebug ? "debug" : "release");
var manifestFile = buildFolder + "manifest-" + debugOrRelease + ".json";
var outputFolder = buildFolder + debugOrRelease + "/";
// Load JavaScript and CSS listed in manifest file
$.getJSON(manifestFile)
.done(function (manifest){
manifest.styles.forEach(function (href) {
var link = document.createElement("link");
link.rel = "stylesheet";
link.media = "all";
link.href = outputFolder + href;
document.head.appendChild(link);
});
scriptsToLoad = manifest.scripts.length;
manifest.scripts.forEach(function (src) {
var script = document.createElement("script");
script.onload = onScriptLoad;
script.src = outputFolder + src;
script.async = false;
document.head.appendChild(script);
});
})
});
})();
</script>
</body>
</html>

If you carefully compare the HTML above with the MVC view that it replaces you can see the commonalities. They are doing pretty much the same thing - the only real difference is the bootstrapping API. Previously it was an MVC endpoint at /Home/StartApp. Now it's a Web API endpoint at api/Startup. Here's how it works:

  1. A jQuery AJAX call kicks off a call to load the bootstrapping / app config data. Importantly this data includes whether the app is running in debug or not.
  2. Depending on the inDebug flag the app either loads the build/manifest-debug.json or the build/manifest-release.json manifest.
  3. For each CSS file in the styles bundle a link element is created and added to the page.
  4. For each JavaScript file in the scripts bundle a script element is created and added to the page.

It's worth pointing out that this also has a performance edge over Web Optimization as the assets are loaded asynchronously! (Yes I know it says script.async = false but that's not what you think it is... Go read Jake's article!)

To finish off I had to make a few tweaks to my web.config:

<!-- Allow ASP.Net to serve up JSON files -->
<system.webServer>
<staticContent>
<mimeMap fileExtension=".json" mimeType="application/json"/>
</staticContent>
</system.webServer>
<!-- The build folder (and its child folder "debug") will not be cached.
When people are debugging they don't want to cache -->
<location path="build">
<system.webServer>
<staticContent>
<clientCache cacheControlMode="DisableCache"/>
</staticContent>
</system.webServer>
</location>
<!-- The release folder will be cached for a loooooong time
When you're in Production caching is your friend -->
<location path="build/release">
<system.webServer>
<staticContent>
<clientCache cacheControlMode="UseMaxAge"/>
</staticContent>
</system.webServer>
</location>

I want to publish, how do I include my assets?#

It's time for some csproj trickery. I must say I think I'll be glad to see the back of project files when ASP.Net vNext ships. This is what you need:

<Target Name="AfterBuild">
<ItemGroup>
<!-- what ever is in the build folder should be included in the project -->
<Content Include="build\**\*.*" />
</ItemGroup>
</Target>

What's happening here is that *after* a build Visual Studio considers the complete contents of the build folder to be part of the project. It's after the build because the folder will be deleted and reconstructed as part of the build.

Journalling the Migration of Jasmine Tests to TypeScript

I previously attempted to migrate my Jasmine tests from JavaScript to TypeScript. The last time I tried it didn't go so well and I bailed. Thank the Lord for source control. But feeling I shouldn't be deterred I decided to have another crack at it.

I did manage it this time... Sort of. Unfortunately there was a problem which I discovered right at the end. An issue with the TypeScript / Visual Studio tooling. So, just to be clear, this is not a blog post of "do this and it will work perfectly". On this occasion there will be some rough edges. This post exists, as much as anything else, as a record of the problems I experienced - I hope it will prove useful. Here we go:

What to Migrate?#

I'm going to use one of the test files in my side project Proverb. It's the tests for an AngularJS controller called sageDetail - I've written about it before. Here it is in all its JavaScript-y glory:

describe("Proverb.Web -> app-> controllers ->", function () {
beforeEach(function () {
module("app");
});
describe("sageDetail ->", function () {
var $rootScope,
getById_deferred, // deferred used for promises
$location, $routeParams_stub, common, datacontext, // controller dependencies
sageDetailController; // the controller
beforeEach(inject(function (_$controller_, _$rootScope_, _$q_, _$location_, _common_, _datacontext_) {
$rootScope = _$rootScope_;
$q = _$q_;
$location = _$location_;
common = _common_;
datacontext = _datacontext_;
$routeParams_stub = { id: "10" };
getById_deferred = $q.defer();
spyOn(datacontext.sage, "getById").and.returnValue(getById_deferred.promise);
spyOn(common, "activateController").and.callThrough();
spyOn(common.logger, "getLogFn").and.returnValue(jasmine.createSpy("log"));
spyOn($location, "path").and.returnValue(jasmine.createSpy("path"));
sageDetailController = _$controller_("sageDetail", {
$location: $location,
$routeParams: $routeParams_stub,
common: common,
datacontext: datacontext
});
}));
describe("on creation ->", function () {
it("controller should have a title of 'Sage Details'", function () {
expect(sageDetailController.title).toBe("Sage Details");
});
it("controller should have no sage", function () {
expect(sageDetailController.sage).toBeUndefined();
});
it("datacontext.sage.getById should be called", function () {
expect(datacontext.sage.getById).toHaveBeenCalledWith(10, true);
});
});
describe("activateController ->", function () {
var sage_stub;
beforeEach(function () {
sage_stub = { name: "John" };
});
it("should set sages to be the resolved promise values", function () {
getById_deferred.resolve(sage_stub);
$rootScope.$digest(); // So Angular processes the resolved promise
expect(sageDetailController.sage).toBe(sage_stub);
});
it("should log 'Activated Sage Details View' and set title with name", function () {
getById_deferred.resolve(sage_stub);
$rootScope.$digest(); // So Angular processes the resolved promise
expect(sageDetailController.log).toHaveBeenCalledWith("Activated Sage Details View");
expect(sageDetailController.title).toBe("Sage Details: " + sage_stub.name);
});
});
describe("gotoEdit ->", function () {
var sage_stub;
beforeEach(function () {
sage_stub = { id: 20 };
});
it("should set $location.path to edit URL", function () {
getById_deferred.resolve(sage_stub);
$rootScope.$digest(); // So Angular processes the resolved promise
sageDetailController.gotoEdit();
expect($location.path).toHaveBeenCalledWith("/sages/edit/" + sage_stub.id);
});
});
});
});

Off we go#

Righteo. Let's flip the switch. sageDetail.js you shall go to the ball! One wave of my magic wand and sageDetail.js becomes sageDetail.ts... Alakazam!! Of course we've got to do the fiddling with the csproj file to include the dependent JavaScript files. (I'll be very pleased when ASP.Net vNext ships and I don't have to do this anymore....) So find this:

<TypeScriptCompile Include="app\sages\sageDetail.ts" />

And add this:

<Content Include="app\sages\sageDetail.js">
<DependentUpon>sageDetail.ts</DependentUpon>
</Content>
<Content Include="app\sages\sageDetail.js.map">
<DependentUpon>sageDetail.ts</DependentUpon>
</Content>

What next? I've a million red squigglies in my code. It's "could not find symbol" city. Why? Typings! We need typings! So let's begin - I'm needing the Jasmine typings for starters. So let's hit NuGet and it looks like we need this:

Install-Package jasmine.TypeScript.DefinitelyTyped

That did no good at all. Still red squigglies. I'm going to hazard a guess that this is something to do with the fact my JavaScript Unit Test project doesn't contain the various TypeScript artefacts that Visual Studio kindly puts into the web csproj for you. This is because I'm keeping my JavaScript tests in a separate project from the code being tested. Also, the Visual Studio TypeScript tooling seems to work on the assumption that TypeScript will only be used within a web project; not a test project. Well I won't let that hold me back... Time to port the TypeScript artefacts in the web csproj over by hand. I'll take this:

<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.Default.props" Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.Default.props')" />

And I'll also take this

<PropertyGroup Condition="'$(Configuration)' == 'Debug'">
<TypeScriptNoImplicitAny>True</TypeScriptNoImplicitAny>
</PropertyGroup>
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets" Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets')" />

Bingo bango - a difference. I no longer have red squigglies under the Jasmine statements (describe, it etc). But alas, I do everywhere else. One in particular draws my eye...

Could not find symbol '$q'#

Once again TypeScript picks up the hidden bugs in my JavaScript:

$q = _$q_;

That's right it's an implicit global. Quickly fixed:

var $q = _$q_;

Typings? Where we're going, we need typings...#

We need more types. We're going to need the types created by our application; our controllers / services / directives etc. As well as that, we need the types used in the creation of the app - the Angular typings and so on. Since we're going to need to use reference statements to pull in the types created by our application I might as well use them to pull in the required definition files as well (eg angular.d.ts):

/// <reference path="../../../proverb.web/scripts/typings/angularjs/angular.d.ts" />
/// <reference path="../../../proverb.web/scripts/typings/angularjs/angular-mocks.d.ts" />
/// <reference path="../../../proverb.web/app/sages/sagedetail.ts" />
/// <reference path="../../../proverb.web/app/common/common.ts" />
/// <reference path="../../../proverb.web/app/services/datacontext.ts" />
/// <reference path="../../../proverb.web/app/services/repository.sage.ts" />

Now we need to work our way through the "variable 'x' implicitly has an 'any' type" messages. One thing we need to do is to amend our original sageDetail.ts file so that the sageDetailRouteParams interface and SageDetail class are exported from the controllers module. We can't use the types otherwise. Now we can add typings to our file - once finished it looks like this:

/// <reference path="../../../proverb.web/scripts/typings/angularjs/angular.d.ts" />
/// <reference path="../../../proverb.web/scripts/typings/angularjs/angular-mocks.d.ts" />
/// <reference path="../../../proverb.web/app/sages/sagedetail.ts" />
/// <reference path="../../../proverb.web/app/common/common.ts" />
/// <reference path="../../../proverb.web/app/services/datacontext.ts" />
/// <reference path="../../../proverb.web/app/services/repository.sage.ts" />
describe("Proverb.Web -> app-> controllers ->", function () {
beforeEach(function () {
module("app");
});
describe("sageDetail ->", function () {
var $rootScope: ng.IRootScopeService,
// deferred used for promises
getById_deferred: ng.IDeferred<sage>,
// controller dependencies
$location: ng.ILocationService,
$routeParams_stub: controllers.sageDetailRouteParams,
common: common,
datacontext: datacontext,
sageDetailController: controllers.SageDetail; // the controller
beforeEach(inject(function (
_$controller_: any,
_$rootScope_: ng.IRootScopeService,
_$q_: ng.IQService,
_$location_: ng.ILocationService,
_common_: common,
_datacontext_: datacontext) {
$rootScope = _$rootScope_;
var $q = _$q_;
$location = _$location_;
common = _common_;
datacontext = _datacontext_;
$routeParams_stub = { id: "10" };
getById_deferred = $q.defer();
spyOn(datacontext.sage, "getById").and.returnValue(getById_deferred.promise);
spyOn(common, "activateController").and.callThrough();
spyOn(common.logger, "getLogFn").and.returnValue(jasmine.createSpy("log"));
spyOn($location, "path").and.returnValue(jasmine.createSpy("path"));
sageDetailController = _$controller_("sageDetail", {
$location: $location,
$routeParams: $routeParams_stub,
common: common,
datacontext: datacontext
});
}));
describe("on creation ->", function () {
it("controller should have a title of 'Sage Details'", function () {
expect(sageDetailController.title).toBe("Sage Details");
});
it("controller should have no sage", function () {
expect(sageDetailController.sage).toBeUndefined();
});
it("datacontext.sage.getById should be called", function () {
expect(datacontext.sage.getById).toHaveBeenCalledWith(10, true);
});
});
describe("activateController ->", function () {
var sage_stub: sage;
beforeEach(function () {
sage_stub = { name: "John", id: 10, username: "John", email: "[email protected]", dateOfBirth: new Date() };
});
it("should set sages to be the resolved promise values", function () {
getById_deferred.resolve(sage_stub);
$rootScope.$digest(); // So Angular processes the resolved promise
expect(sageDetailController.sage).toBe(sage_stub);
});
it("should log 'Activated Sage Details View' and set title with name", function () {
getById_deferred.resolve(sage_stub);
$rootScope.$digest(); // So Angular processes the resolved promise
expect(sageDetailController.log).toHaveBeenCalledWith("Activated Sage Details View");
expect(sageDetailController.title).toBe("Sage Details: " + sage_stub.name);
});
});
describe("gotoEdit ->", function () {
var sage_stub: sage;
beforeEach(function () {
sage_stub = { name: "John", id: 20, username: "John", email: "[email protected]", dateOfBirth: new Date() };
});
it("should set $location.path to edit URL", function () {
getById_deferred.resolve(sage_stub);
$rootScope.$digest(); // So Angular processes the resolved promise
sageDetailController.gotoEdit();
expect($location.path).toHaveBeenCalledWith("/sages/edit/" + sage_stub.id);
});
});
});
});

So That's All Good...#

Except it's not. When I run the tests using Chutzpah my sageDetail controller tests aren't found. My spider sense is tingling. This is something to do with the reference statements. They're throwing Chutzpah off. No bother, I can fix that with a quick tweak of the project file:

<PropertyGroup Condition="'$(Configuration)' == 'Debug'">
<TypeScriptNoImplicitAny>True</TypeScriptNoImplicitAny>
<TypeScriptRemoveComments>True</TypeScriptRemoveComments>
</PropertyGroup>

The TypeScript compiler will now strip comments, which includes the reference statements. Now my tests are detected *and* they run. Yay!

Who Killed the TypeScript Language Service?#

Yup it's dead. Whilst the compilation itself has no issues, take a look at the errors being presented for just one of the files back in the original web project:

It looks like having one TypeScript project in a solution which uses reference comments somehow breaks the implicit referencing behaviour built into Visual Studio for other TypeScript projects in the solution. I can say this with some confidence because if I pull out the reference comments from the top of the test file that we've converted then it's business as usual - the TypeScript Language Service lives once more. I'm sure you can see the problem here though: the TypeScript test file doesn't compile. All rather unsatisfactory.

I suspect that if I added reference comments throughout the web project the TypeScript Language Service would be just fine. But I rather like the implicit referencing functionality so I'm not inclined to do that. After reaching something of a brick wall and thinking I had encountered a bug in the TypeScript Language service I raised an issue on GitHub.

Solutions....#

Thanks to the help of Mohamed Hegazy it emerged that the problem was down to missing reference comments in my sageDetail controller tests. One thing I had not considered was the 2 different ways each of my TypeScript projects were working:

  • Proverb.Web uses the Visual Studio implicit referencing functionality. This means that I do not need to use reference comments in the TypeScript files in Proverb.Web.
  • Proverb.Web.JavaScript does *not* use the implicit referencing functionality. It needs reference comments to resolve references.

The important thing to take away from this (and the thing I had overlooked) was that Proverb.Web.JavaScript uses reference comments to pull in Proverb.Web TypeScript files. Those files have dependencies which are *not* stated using reference comments. So the compiler trips up when it tries to walk the dependency tree - there are no reference comments to be followed! So for example, common.ts has a dependency upon logger.ts. Fixing the TypeScript Language Service involves ensuring that the full dependency list is included in the sageDetail controller tests file, like so:

/// <reference path="../../../proverb.web/scripts/typings/angularjs/angular.d.ts" />
/// <reference path="../../../proverb.web/scripts/typings/angularjs/angular-mocks.d.ts" />
/// <reference path="../../../proverb.web/scripts/typings/angularjs/angular-route.d.ts" />
/// <reference path="../../../proverb.web/scripts/typings/toastr/toastr.d.ts" />
/// <reference path="../../../proverb.web/scripts/typings/underscore/underscore.d.ts" />
/// <reference path="../../../proverb.web/app/sages/sagedetail.ts" />
/// <reference path="../../../proverb.web/app/common/logger.ts" />
/// <reference path="../../../proverb.web/app/common/common.ts" />
/// <reference path="../../../proverb.web/app/services/datacontext.ts" />
/// <reference path="../../../proverb.web/app/services/repositories.ts" />
/// <reference path="../../../proverb.web/app/services/repository.sage.ts" />
/// <reference path="../../../proverb.web/app/services/repository.saying.ts" />
/// <reference path="../../../proverb.web/app/app.ts" />
/// <reference path="../../../proverb.web/app/config.route.ts" />

With this in place you have a working solution, albeit one that is a little flaky. An alternative solution was suggested by Noel Abrahams which I quote here:

Why not do the following?

  • Compile Proverb.Web with --declarations and the option for combining output into a single file. This should create a Proverb.Web.d.ts in your output directory.
  • In Proverb.Web.Tests.JavaScript add a reference to this file.
  • Right-click Proverb.Web.Tests.JavaScript select "Build Dependencies" > "Project Dependencies" and add a reference to Proverb.Web.

I don't think directly referencing TypeScript source files is a good idea, because it causes the file to be rebuilt every time the dependent project is compiled.

Mohamed rather liked this solution. It looks like some more work is due to be done on the TypeScript tooling to make this less headache-y in future.

Running JavaScript Unit Tests in AppVeyor

With a little help from Chutzpah...#

AppVeyor (if you're not aware of it) is a Continuous Integration provider. If you like, it's plug-and-play CI for .NET developers. It's lovely. And what's more it's "free for open-source projects with public repositories hosted on GitHub and BitBucket". Boom! I recently hooked up 2 of my GitHub projects with AppVeyor. It took me all of... 10 minutes. If that? It really is *that* good.

But.... There had to be a "but" otherwise I wouldn't have been writing the post you're reading. For a little side project of mine called Proverb there were C# unit tests and there were JavaScript unit tests. And the JavaScript unit tests weren't being run... No fair!!!

Chutzpah is a JavaScript test runner which at this point runs QUnit, Jasmine and Mocha JavaScript tests. I use the Visual Studio extension to run Jasmine tests on my machine during development. I've also been able to use Chutzpah for CI purposes with Visual Studio Online / Team Foundation Server. So what say we try and do the triple and make it work with AppVeyor too?

NuGet me?#

In order that I could run Chutzpah I needed Chutzpah to be installed on the build machine. So I had 2 choices:

  1. Add Chutzpah direct to the repo
  2. Add the Chutzpah NuGet package to the solution

Unsurprisingly I chose #2 - much cleaner.

Now to use Chutzpah#

Time to dust down the PowerShell. I created myself a "before tests script" and added it to my build. It looked a little something like this:

# Locate Chutzpah
$ChutzpahDir = get-childitem chutzpah.console.exe -recurse | select-object -first 1 | select -expand Directory
# Run tests using Chutzpah and export results as JUnit format to chutzpah-results.xml
$ChutzpahCmd = "$($ChutzpahDir)\chutzpah.console.exe $($env:APPVEYOR_BUILD_FOLDER)\AngularTypeScript\Proverb.Web.Tests.JavaScript /junit .\chutzpah-results.xml"
Write-Host $ChutzpahCmd
Invoke-Expression $ChutzpahCmd
# Upload results to AppVeyor one by one
$testsuites = [xml](get-content .\chutzpah-results.xml)
$anyFailures = $FALSE
foreach ($testsuite in $testsuites.testsuites.testsuite) {
write-host " $($testsuite.name)"
foreach ($testcase in $testsuite.testcase){
$failed = $testcase.failure
$time = $testsuite.time
if ($testcase.time) { $time = $testcase.time }
if ($failed) {
write-host "Failed $($testcase.name) $($testcase.failure.message)"
Add-AppveyorTest $testcase.name -Outcome Failed -FileName $testsuite.name -ErrorMessage $testcase.failure.message -Duration $time
$anyFailures = $TRUE
}
else {
write-host "Passed $($testcase.name)"
Add-AppveyorTest $testcase.name -Outcome Passed -FileName $testsuite.name -Duration $time
}
}
}
if ($anyFailures -eq $TRUE){
write-host "Failing build as there are broken tests"
$host.SetShouldExit(1)
}

What this does is:

  1. Run Chutzpah from the installed NuGet package location, passing in the location of my Jasmine unit tests. In the case of my project there is a chutzpah.json file in the project which dictates how Chutzpah should run the tests. The JUnit flag is also passed in order that Chutzpah creates a chutzpah-results.xml file of test results in the JUnit format.
  2. We iterate through the test results and tell AppVeyor about the test passes and failures using the Build Worker API.
  3. If there have been any failed tests then we fail the build. If you look here you can see a deliberately failed build which demos that this works as it should.

That's a wrap - We now have CI which includes our JavaScript tests! That's right we get to see beautiful screens like these:

Thanks to...#

Thanks to Dan Jones, whose comments on this discussion provided a number of useful pointers which moved me in the right direction. And thanks to Feodor Fitsner, who has generously said AppVeyor will support JUnit in the future, which may simplify use of Chutzpah with AppVeyor even further.

My Unrequited Love for Isolate Scope

I wrote a little while ago about creating a directive to present server errors on the screen in an Angular application. In my own (not so humble opinion), it was really quite nice. I was particularly proud of my usage of isolate scope. However, pride comes before a fall.

It turns out that using isolate scope in a directive is not always wise. Or rather, not always possible. And this is why:

Error: [$compile:multidir] Multiple directives [datepickerPopup, serverError] asking for new/isolated scope on: <input name="sage.dateOfBirth" class="col-xs-12 col-sm-9" type="text" value="" ng-click="vm.dateOfBirthDatePickerOpen()" server-error="vm.errors" ng-model="vm.sage.dateOfBirth" is-open="vm.dateOfBirthDatePickerIsOpen" datepicker-popup="dd MMM yyyy">

Ug. What happened here? Well, I had a date field that I was using my serverError directive on. Nothing too controversial there. The problem came when I tried to plug in UI Bootstrap's datepicker as well. That's right, the directives are fighting. Sad face.

To be more precise, it turns out that only one directive on an element is allowed to create an isolated scope. So if I want to use UI Bootstrap's datepicker (and I do), well, my serverError directive is toast.
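For context, here's a minimal sketch of roughly what an isolate scope version of serverError might look like - an illustrative reconstruction rather than the original directive, but it shows the scope declaration that triggers the clash with datepickerPopup:

(function () {
  "use strict";
  var app = angular.module("app");
  app.directive("serverError", [function () {
    return {
      restrict: "A",
      require: "ngModel",
      // This is the problem: only one directive on a given element may request an isolate scope
      scope: {
        serverError: "="
      },
      link: function (scope, element, attrs, ngModelController) {
        // validation display logic goes here
      }
    };
  }]);
})();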

A New Hope#

So ladies and gentlemen, let me present serverError 2.0, this time without isolated scope:

serverError.ts#

(function () {
"use strict";
var app = angular.module("app");
// Plant a validation message to the right of the element when it is declared invalid by the server
app.directive("serverError", [function () {
// Usage:
// <input class="col-xs-12 col-sm-9"
// name="sage.name" ng-model="vm.sage.name" server-error="vm.errors" />
var directive = {
link: link,
restrict: "A",
require: "ngModel" // supply the ngModel controller as the 4th parameter in the link function
};
return directive;
function link(scope: ng.IScope, element: ng.IAugmentedJQuery, attrs: ng.IAttributes, ngModelController: ng.INgModelController) {
// Extract values from attributes (deliberately not using isolated scope)
var errorKey: string = attrs["name"]; // eg "sage.name"
var errorDictionaryExpression: string = attrs["serverError"]; // eg "vm.errors"
// Bootstrap alert template for error
var template = '<div class="alert alert-danger col-xs-9 col-xs-offset-2" role="alert"><i class="glyphicon glyphicon-warning-sign larger"></i> %error%</div>';
// Create an element to hold the validation message
var decorator = angular.element('<div></div>');
element.after(decorator);
// Watch ngModelController.$error.server & show/hide validation accordingly
scope.$watch(safeWatch(() => ngModelController.$error.server), showHideValidation);
function showHideValidation(serverError: boolean) {
// Display an error if serverError is true otherwise clear the element
var errorHtml = "";
if (serverError) {
var errorDictionary: { [field: string]: string } = scope.$eval(errorDictionaryExpression);
errorHtml = template.replace(/%error%/, errorDictionary[errorKey] || "Unknown error occurred...");
}
decorator.html(errorHtml);
}
// wipe the server error message upon keyup or change events so can revalidate with server
element.on("keyup change", (event) => {
scope.$apply(() => { ngModelController.$setValidity("server", true); });
});
}
}]);
// Thanks @Basarat! http://stackoverflow.com/a/24863256/761388
function safeWatch<T extends Function>(expression: T) {
return () => {
try {
return expression();
}
catch (e) {
return null;
}
};
}
})();

serverError.js#

(function () {
    "use strict";

    var app = angular.module("app");

    // Plant a validation message to the right of the element when it is declared invalid by the server
    app.directive("serverError", [function () {
        // Usage:
        // <input class="col-xs-12 col-sm-9"
        //        name="sage.name" ng-model="vm.sage.name" server-error="vm.errors" />
        var directive = {
            link: link,
            restrict: "A",
            require: "ngModel"
        };
        return directive;

        function link(scope, element, attrs, ngModelController) {
            // Extract values from attributes (deliberately not using isolated scope)
            var errorKey = attrs["name"];
            var errorDictionaryExpression = attrs["serverError"];

            // Bootstrap alert template for error
            var template = '<div class="alert alert-danger col-xs-9 col-xs-offset-2" role="alert"><i class="glyphicon glyphicon-warning-sign larger"></i> %error%</div>';

            // Create an element to hold the validation message
            var decorator = angular.element('<div></div>');
            element.after(decorator);

            // Watch ngModelController.$error.server & show/hide validation accordingly
            scope.$watch(safeWatch(function () {
                return ngModelController.$error.server;
            }), showHideValidation);

            function showHideValidation(serverError) {
                // Display an error if serverError is true otherwise clear the element
                var errorHtml = "";
                if (serverError) {
                    var errorDictionary = scope.$eval(errorDictionaryExpression);
                    errorHtml = template.replace(/%error%/, errorDictionary[errorKey] || "Unknown error occurred...");
                }
                decorator.html(errorHtml);
            }

            // wipe the server error message upon keyup or change events so can revalidate with server
            element.on("keyup change", function (event) {
                scope.$apply(function () {
                    ngModelController.$setValidity("server", true);
                });
            });
        }
    }]);

    // Thanks @Basarat! http://stackoverflow.com/a/24863256/761388
    function safeWatch(expression) {
        return function () {
            try {
                return expression();
            } catch (e) {
                return null;
            }
        };
    }
})();

This version of the serverError directive is from a user's perspective identical to the previous version. But it doesn't use isolated scope - this means it can be used in concert with other directives which do.

It works by pulling the name and serverError values off the attrs parameter. name is just a string - the value of which never changes so it can be used as is. serverError is an expression that represents the error dictionary that is used to store the server error messages. This is accessed through use of scope.$eval as and when it needs to be.
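
To make that concrete, the dictionary that server-error="vm.errors" points at is just a map of field names to server messages. The property names below are hypothetical, simply echoing the usage example in the code above:

```js
// Hypothetical shape of the error dictionary on the view model
vm.errors = {
    "sage.name": "Name is already in use",
    "sage.dateOfBirth": "Date of birth must be in the past"
};

// Inside the directive's link function:
//   scope.$eval("vm.errors")     returns the object above
//   errorDictionary["sage.name"] yields the message the decorator displays
```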

My Plea#

What I've outlined here works. I'll admit that usage of $eval makes me feel a little bit dirty (I've got "eval is evil" running through my head). Whilst it works, I'm not sure what I've done is necessarily best practice. After all the Angular docs themselves say:

*Best Practice: Use the scope option to create isolate scopes when making components that you want to reuse throughout your app.*

But as we've seen this isn't always an option. I've written this post to document my own particular struggle and ask the question "is there a better way?" If you know then please tell me!

The Surprisingly Happy Tale of Visual Studio Online, Continuous Integration and Chutzpah

Going off piste#

The post that follows is a slightly rambly affair which is pretty much my journal of the first steps of getting up and running with JavaScript unit testing. I will not claim that much of this blog is down to me. In fact in large part it is me working my way through Mathew Aniyan's excellent blog post on integrating Chutzpah with TFS. But a few deviations from that post have made me think it worth keeping hold of this record for my benefit (if no-one else's).

That's the disclaimers out of the way now...

...Try, try, try again...#

Getting started with JavaScript unit testing has not been the breeze I'd expected. Frankly I've found the docs out there not particularly helpful. But if at first you don't succeed then try, try, try again.

So after a number of failed attempts I'm going to give it another go. Rushaine McBean says Jasmine is easiest so I'm going to follow her lead and start there.

Let's new up an (empty) ASP.NET project. Yes, I know ASP.NET has nothing to do with JavaScript unit testing but my end goal is to be able to run JS unit tests in Visual Studio and as part of Continuous Integration. Further to that, I'm anticipating a future where I have a solution that contains JavaScript unit tests and C# unit tests as well. It is indeed a bold and visionary Brave New World. We'll see how far we get.

First up, download Jasmine from GitHub - I'll use v2.0. Unzip it and fire up SpecRunner.html and whaddya know... It works!

As well it might. I'd be worried if it didn't. So I'll move the contents of the release package into my empty project. Now let's see if we can get those tests running inside Visual Studio. I'd heard of Chutzpah which describes itself thusly:

"Chutzpah is an open source JavaScript test runner which enables you to run unit tests using QUnit, Jasmine, Mocha, CoffeeScript and TypeScript."

What I'm after is the Chutzpah test adapter for Visual Studio 2012/2013 which can be found here. I download the VSIX and install. Pretty painless. Once I restart Visual Studio I can see my unit tests in the test explorer. Nice! Run them and...

All fail. This makes me sad. All the errors say "Can't find variable: Player in file". Hmmm. Why? Dammit, I'm actually going to have to read the documentation... It turns out the issue can be happily resolved by adding these 3 references to the top of PlayerSpec.js:

/// <reference path="../src/Player.js" />
/// <reference path="../src/Song.js" />
/// <reference path="SpecHelper.js" />

Now the tests pass:

The question is: can we get this working with Visual Studio Online?

Fortunately another has gone before me. Mathew Aniyan has written a superb blog post called "Javascript Unit Tests on Team Foundation Service with Chutzpah". Using this post as a guide (it was written 18 months ago which is frankly aeons in the world of the web) I'm hoping that I'll be able to, without too many tweaks, get JavaScript unit tests running on Team Foundation Service / Visual Studio Online ( / insert this week's rebranding here).

First of all in Visual Studio Online I'll create a new project called "GettingStartedWithJavaScriptUnitTesting" (using all the default options). Apparently "Your project is created and your team is going to absolutely love this." Hmmmm... I think I'll be the judge of that.

Let's navigate to the project. I'll fire up Visual Studio by clicking on the "Open in Visual Studio" link. Once fired up and all the workspace mapping is sorted I'll move my project into the GettingStartedWithJavaScriptUnitTesting folder that now exists on my machine and add this to source control.

Back to Mathew's blog. It suggests renaming Chutzpah.VS2012.vsix to Chutzpah.VS2012.zip and checking certain files into TFS. I think Chutzpah has changed a certain amount since this was written. To be on the safe side I'll create a new folder in the root of my project called Chutzpah.VS2012 and put the contents of Chutzpah.VS2012.zip in there and add it to TFS (being careful to ensure that no dlls are excluded).

Then I'll follow steps 3 and 4 from the blog post:

3. In Visual Studio, open Team Explorer & connect to Team Foundation Service. Bring up the Manage Build Controllers dialog. [Build -> Manage Build Controllers] Select Hosted Build Controller. Click on the Properties button to bring up the Build Controller Properties dialog.

4. Change Version Control Path to custom Assemblies to refer to the folder where you checked in the binaries in step 2.

In step 5 the blog tells me to edit my build definition. Well I don't have one in this new project so let's click on "New Build Definition", create one and then follow step 5:

5. In Team Explorer, go to the Builds section and Edit your Build Definition which will run the javascript tests. Click on the Process tab. Select the row named Automated Tests. Click on the ... button next to the value.

Rather than following step 6 (which essentially nukes the running of any .NET tests you might have) I chose to add another row by clicking "Add". In the dialog presented I changed the Test assembly specification to **\*.js and checked the "Fail build on test failure" checkbox.

Step 7 says:

7. Create your Web application in Visual Studio and add your QUnit or Jasmine unit tests to them. Make sure that the js files (that contain the tests) are getting copied to the build output directory.

The picture below step 7 suggests that you should be setting your test / spec files to have a Copy to Output Directory setting of Copy always. This did not work for me!!! Instead, setting a Build Action of Content and a Copy to Output Directory setting of Do not copy did work.

Finally I checked everything into source control and queued a build. I honestly did not expect this to work. It couldn't be this easy, could it? And...

Wow! It did! Here's me cynically expecting some kind of "permission denied" error and it actually works! Brilliant! Look, up in the cloud it says the same thing!

Fantastic!

I realise that I haven't yet written a single JavaScript unit test of my own and so I've still a way to go. What I have done is quietened those voices in my head that said "there's not too much point having a unit test suite that isn't plugged into continuous integration". Although it's not documented here, I'm happy to be able to report that I have been able to follow the self-same instructions against TFS 2012 on our build server and get CI working there as well.

Having got up and running off the back of other people's hard work I'd best try and write some of my own tests now....
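
When I do, the specs themselves should be pleasingly terse. Purely for flavour, a first Jasmine test might look something along these lines (an illustrative sketch, nothing to do with the Player / Song samples above):

```js
describe("calculator", function () {
    var calculator;

    beforeEach(function () {
        // A trivial object under test, defined inline to keep the example self-contained
        calculator = { add: function (a, b) { return a + b; } };
    });

    it("adds two numbers together", function () {
        expect(calculator.add(2, 3)).toBe(5);
    });
});
```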

How I'm Using Cassette part 3: Cassette and TypeScript Integration

The modern web is JavaScript. There's no two ways about it. HTML 5 has new CSS, new HTML but the most important aspect of it from an application development point of view is JavaScript. It's the engine. Without it HTML 5 wouldn't be the exciting application platform that it is. Half the posts on Hacker News would vanish.

It's easy to break a JavaScript application. One false keypress and you can mysteriously turn a fully functioning app into toast. And not know why. There are tools you can use to help yourself - JSHint / JSLint - but whilst these make error detection a little easier it remains very easy to shoot yourself in the foot with JavaScript. Because of this I've come to really rather love TypeScript. If you didn't already know, TypeScript can be summed up as JavaScript with optional static typing. It's a superset of JavaScript - JavaScript with go-faster stripes. When run through the compiler TypeScript is transpiled into JavaScript. And importantly, if you have bugs in your code, the compiler should catch them at this point and let you know.

Now very few of us are working on greenfield applications. Most of us have existing applications to maintain and support. Happily, TypeScript fits very well with this purely because TypeScript is a superset of JavaScript. That is to say: all JavaScript is valid TypeScript in the same way that all CSS is valid LESS. This means that you can take an existing .js file, rename it to have a .ts suffix, run the TypeScript compiler over it and out will pop your JavaScript file just as it was before. You're then free to enrich your TypeScript file with the relevant type annotations at your own pace. Increasing the robustness of your codebase is a choice left to you.

The project I am working on has recently started to incorporate TypeScript. It's an ASP.Net MVC 4 application which makes use of Knockout. The reason we started to incorporate TypeScript is because certain parts of the app, particularly the Knockout parts, were becoming more complex. This complexity wasn't really an issue when we were writing the relevant JavaScript. However, when it came to refactoring and handing files from one team member to another we realised it was very easy to introduce bugs into the codebase, particularly around the JavaScript. Hence TypeScript.

Cassette and TypeScript#

Enough of the pre-amble. The project was making use of Cassette for serving up its CSS and JavaScript. Because Cassette rocks. One of the reasons we use it is that we're making extensive use of Cassette's ability to serve scripts in dependency order. So if we were to move to using TypeScript it was important that TypeScript and Cassette would play well together.

I'm happy to report that Cassette and TypeScript do work well together, but there are a few things you need to do to get up and running. Or, to be a little clearer, if you want to make use of Cassette's in-file Asset Referencing then you'll need to follow these steps. If you don't need Asset Referencing then you'll be fine using Cassette with TypeScript-generated JavaScript as is, *provided* you ensure the TypeScript compiler is not preserving comments in the generated JavaScript.

The Fly in the Ointment: Asset References#

TypeScript is designed to allow you to break up your application into modules. However, the referencing mechanism which allows you to reference one TypeScript file / module from another is exactly the same as the existing Visual Studio XML reference comments mechanism that was originally introduced to drive JavaScript Intellisense in Visual Studio. To quote the TypeScript spec:

  • A comment of the form /// <reference path="..."/> adds a dependency on the source file specified in the path argument. The path is resolved relative to the directory of the containing source file.
  • An external import declaration that specifies a relative external module name (section 11.2.1) resolves the name relative to the directory of the containing source file. If a source file with the resulting path and file extension '.ts' exists, that file is added as a dependency. Otherwise, if a source file with the resulting path and file extension '.d.ts' exists, that file is added as a dependency.

The problem is that Cassette *also* supports Visual Studio XML reference comments to drive Asset References. The upshot of this is that Cassette will parse the /// <reference path="*.ts"/> comments and will attempt to serve up the TypeScript files in the browser... Calamity!

Pulling the Fly from the Ointment#

Again I'm going to take the demo from last time (the References branch of my CassetteDemo project) and build on top of it. First of all, we need to update the Cassette package. This is because to get Cassette working with TypeScript you need to be running at least Cassette 2.1. So let's let NuGet do its thing:

Update-Package Cassette.Aspnet

And whilst we're at it let's grab the jQuery TypeScript typings - we'll need them later:

Install-Package jquery.TypeScript.DefinitelyTyped

Now we need to add a couple of classes to the project. First of all this:
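
The class itself was embedded in the original post and isn't reproduced here; the idea is simply "parse JavaScript references as normal, but skip anything that points at a TypeScript file". A rough sketch follows - be warned that the exact Cassette base-class member to override is an assumption on my part, so check the Cassette 2.x source (or the demo repository) for the real thing:

```cs
using System;
using Cassette.BundleProcessing; // assumed namespace
using Cassette.Scripts;          // assumed namespace

// Sketch only: behave exactly like ParseJavaScriptReferences but ignore any
// /// <reference path="*.ts" /> comment so Cassette never tries to serve .ts files.
public class ParseJavaScriptNotTypeScriptReferences : ParseJavaScriptReferences
{
    // ASSUMPTION: the base class exposes a per-reference hook along these lines;
    // the real member name and signature may well differ.
    protected override bool ShouldAddReference(string referencePath)
    {
        return !referencePath.EndsWith(".ts", StringComparison.OrdinalIgnoreCase);
    }
}
```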

Which subclasses ParseJavaScriptReferences and ensures TypeScript files are excluded when JavaScript references are being parsed. And to make sure that Cassette makes use of ParseJavaScriptNotTypeScriptReferences in place of ParseJavaScriptReferences we need this:
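
Again, the second class was embedded in the original post. Its job is to swap the stock reference parser for the TypeScript-aware one in the script bundle pipeline. A sketch, with the strong caveat that the modifier interface and helper names below are from memory and need checking against the Cassette 2.x documentation:

```cs
using Cassette.BundleProcessing; // assumed namespace
using Cassette.Scripts;          // assumed namespace

// Sketch only: Cassette picks up bundle pipeline modifiers and applies them when
// building the script pipeline; here we swap ParseJavaScriptReferences for our version.
public class SwapInParseJavaScriptNotTypeScriptReferences : IBundlePipelineModifier<ScriptBundle>
{
    public IBundlePipeline<ScriptBundle> Modify(IBundlePipeline<ScriptBundle> pipeline)
    {
        // ASSUMPTION: IndexOf<T> / indexer semantics - the real API may differ
        var index = pipeline.IndexOf<ParseJavaScriptReferences>();
        pipeline[index] = new ParseJavaScriptNotTypeScriptReferences();
        return pipeline;
    }
}
```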

Now we're in a position to use TypeScript with Cassette. To demonstrate this let's take the Index.js and rename it to Index.ts. And now it's TypeScript. However before it can compile it needs to know what jQuery is - so we drag in the jQuery typings from Definitely Typed. And now it can compile from this:
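
The actual files live in the GitHub repository linked at the end of this post, so the snippet below is only indicative. The point is that an Index.ts ends up carrying two flavours of reference comment: one for the TypeScript compiler (the jQuery typings) and one that is a Cassette asset reference to another script (both paths here are invented for illustration):

```ts
/// <reference path="typings/jquery/jquery.d.ts" />
/// <reference path="Utilities.js" />

$(function () {
    var greeting: string = "Hello from TypeScript";
    $("#greeting").text(greeting);
});
```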

To this: (Please note that I get the TypeScript compiler to preserve my comments in order that I can continue to use Cassette's Asset Referencing.)
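
The compiled output is then along these lines - the type annotation is stripped but, because comments are preserved, both reference comments survive into the JavaScript:

```js
/// <reference path="typings/jquery/jquery.d.ts" />
/// <reference path="Utilities.js" />
$(function () {
    var greeting = "Hello from TypeScript";
    $("#greeting").text(greeting);
});
```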

As you can see the output JavaScript has both the TypeScript and the Cassette references in place. However, thanks to ParseJavaScriptNotTypeScriptReferences those TypeScript references will be ignored by Cassette.

So that's it - we're home free. Before I finish off I'd like to say thanks to Cassette's Andrew Davey who set me on the right path when trying to work out how to do this. A thousand thank yous Andrew!

And finally, again as last time you can see what I've done in this post by just looking at the repository on GitHub. The changes I made are on the TypeScript branch of that particular repository.

Optimally Serving Up JavaScript

I have occasionally done some server-side JavaScript with Rhino and Node.js but this is the exception rather than the rule. Like most folk at the moment, almost all the JavaScript I write is in a web context.

Over time I've come to adopt a roughly standard approach to how I structure my JavaScript; both the JavaScript itself and how it is placed / rendered in an HTML document. I wanted to write about the approach I'm using. Partly just to document the approach but also because I often find writing about something crystallises my feelings on the subject in one way or another. I think that most of what I'm doing is sensible and rational but maybe as I write about this I'll come to some firmer conclusions about my direction of travel.

What are you up to?#

Before I get started it's probably worth mentioning the sort of web development I'm generally called to do (as this has obviously influenced my decisions).

Most of my work tends to be on web applications used internally within a company. That is to say, web applications accessible on a Company intranet. Consequently, the user base for my applications tends to be smaller than the Amazons and Googles of this world. It almost invariably sits on the ASP.NET stack in some way. Either classic WebForms or MVC.

"Render first. JS second."#

I took 2 things away from Steve Souders' article:

  1. Async script loading is better than synchronous script loading
  2. Get your screen rendered and *then* execute your JavaScript

I'm not doing any async script loading as yet; although I am thinking of giving it a try at some point. In terms of choosing a loader I'll probably give RequireJS first crack of the whip (purely as it looks like most people are tending in its direction and that can't be without reason).

However - it seems that the concept of async script loading is kind of in conflict with one of the other tenets of web wisdom: script bundling. Script bundling, if you're not already aware, is the idea that you should combine all your scripts into a single file and then just serve that. This prevents multiple HTTP requests as each script loads in. Async script loading is obviously okay with multiple HTTP requests, presumably because of the asynchronous non-blocking pattern of loading. So. 2 different ideas. And there's further movement on this front right now as Microsoft are baking script bundling into .NET 4.5.

Rather than divide myself between these 2 horses I have at the moment tried to follow the "JS second" part of this advice in my own (perhaps slightly old fashioned) way...

I want to serve you...#

I have been making sure that scripts are the last thing served to the screen by using a customised version of Michael J. Ryan's HtmlHelper. This lovely helper allows you to add script references as required from a number of different sources (layout page, view, partial view etc - even the controller if you so desired). It's simple to control the ordering of scripts by allowing you to set a priority for each script which determines the render order.

Then as a final step before rendering the </body> tag the scripts can be rendered in one block. By this point the web page is rendered visually and a marginal amount of blocking is, in my view, acceptable.

If anyone is curious - the class below is my own version of Michael's helper. My contribution is the go-faster stripes relating to the caching suffix and the ability to specify dependencies using script references rather than the numeric priority mechanism:
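
The helper itself originally appeared inline here; rather than reproduce it, here's a heavily stripped-down sketch of the registration / render-at-the-end idea so you can see the shape of it. Everything below (names included) is illustrative rather than the actual class, and the caching suffix and dependency go-faster stripes are left out:

```cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

public static class ScriptRegistrar
{
    private const string Key = "__RegisteredScripts";

    // Call from the layout, views or partials; duplicate registrations are ignored
    public static void RegisterScript(this HtmlHelper html, string virtualPath, int priority = 100)
    {
        var scripts = GetScripts(html.ViewContext.HttpContext);
        if (!scripts.ContainsKey(virtualPath))
        {
            scripts[virtualPath] = priority;
        }
    }

    // Call once, just before the closing </body> tag
    public static IHtmlString RenderScripts(this HtmlHelper html)
    {
        var context = html.ViewContext.HttpContext;
        var tags = GetScripts(context)
            .OrderBy(s => s.Value)
            .Select(s => string.Format(
                "<script src=\"{0}\" type=\"text/javascript\"></script>",
                UrlHelper.GenerateContentUrl(s.Key, context)));
        return new HtmlString(string.Join(Environment.NewLine, tags));
    }

    // Registrations are stashed per-request in HttpContext.Items
    private static IDictionary<string, int> GetScripts(HttpContextBase context)
    {
        var scripts = context.Items[Key] as IDictionary<string, int>;
        if (scripts == null)
        {
            context.Items[Key] = scripts = new Dictionary<string, int>();
        }
        return scripts;
    }
}
```

Usage would then be along the lines of Html.RegisterScript("~/Scripts/thing.js", 10) in a view and Html.RenderScripts() just before the closing body tag of the layout.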

Minification - I want to serve you less...#

Another tweak I made to the script helper means that either the debug or the production (minified) versions of common JS files will be included, where available, depending on the environment. This means in a production environment the users get minified JS files and so faster loading. And in a development environment we get the full JS files which make debugging more straightforward.

What I haven't started doing is minifying my own JS files as yet. I know I'm being somewhat inconsistent here by sometimes serving minified files and sometimes not. I'm not proud. Part of my rationale for this is that since most of my users use my apps on a daily basis they will for the most part be using cached JS files. Obviously there'll be slightly slower load times the first time they go to a page but nothing that significant I hope.

I have thought of starting to do my own minification as a build step but have held off for now. Again this is something being baked into .NET 4.5; another reason why I have held off doing this a different way for now.

Update#

It now looks like these Microsoft optimisations have become this NuGet package. It's early days (well it was released on 15th August 2012 and I'm writing this on the 16th) but I think this looks not to be tied to MVC 4 or .NET 4.5 in which case I could use it in my current MVC 3 projects. I hope so...

By the way there's a nice rundown of how to use this by K. Scott Allen of Pluralsight. It's fantastic. Recommended.

Update 2#

Having done a little asking around I now understand that this *can* be used with MVC 3 / .NET 4.0. Excellent!

One rather nice alternative script serving mechanism I've seen (but not yet used) is Andrew Davey's Cassette which I mean to take for a test drive soon. This looks fantastic (and is available as a NuGet package - 10 points!).

CDNs (they want to serve you)#

I've never professionally made use of CDNs at all. There are clearly good reasons why you should but most of those good reasons relate mostly to public-facing web apps.

As I've said, the applications I tend to work on sit behind firewalls and it's not always guaranteed what my users can see from the grand old world of web beyond. (Indeed what they see can change on an hour by hour basis sometimes...) Combined with that, because my apps are only accessible by a select few I don't face the pressure to reduce load on the server that public web apps can face.

So while CDNs are clearly a good thing, I don't use them at present. And that's unlikely to change in the short term.

TL;DR#

  1. I don't use CDNs - they're clearly useful but they don't suit my particular needs
  2. I serve each JavaScript file individually just before the body tag. I don't bundle.
  3. I don't minify my own scripts (though clearly it wouldn't be hard) but I do serve the minified versions of 3rd party libraries (eg jQuery) in a Production environment.
  4. I don't use async script loaders at present. I may in future; we shall see.

I expect some of the above may change (well, possibly not point #1) but this general approach is working well for me at present.

I haven't touched at all on how I'm structuring my JavaScript code itself. Perhaps next time.

Globalize.js - number and date localisation made easy

I wanted to write about a JavaScript library which seems to have had very little attention so far. And that surprises me as it's

  1. Brilliant!
  2. Solves a common problem that faces many app developers who work in the wonderful world of web; myself included

The library is called Globalize.js and can be found on GitHub here. Globalize.js is a simple JavaScript library that allows you to format and parse numbers and dates in culture specific fashion.

Why does this matter?#

Because different countries and cultures do dates and numbers in different ways. Christmas Day this year in England will be 25/12/2012 (dd/MM/yyyy). But for American eyes this should be 12/25/2012 (M/d/yyyy). And for German 25.12.2012 (dd.MM.yyyy). Likewise, if I was to express numerically the value of "one thousand exactly - to 2 decimal places", as a UK citizen I would do it like so: 1,000.00. But if I was French I'd express it like this: 1.000,00. You see my point?

Why does this matter to me?#

For a number of years I've been working on applications that are used globally, from London to Frankfurt to Shanghai to New York to Singapore and many other locations besides. The requirement has always been to serve up localised dates and numbers so users' experience of the system is more natural. Since our applications are all ASP.NET we've never really had a problem server-side. Microsoft have blessed us with all the goodness of System.Globalization which covers hundreds of different cultures and localisations. It makes it frankly easy:

using System.Globalization;
//Produces: "06.05.2012"
new DateTime(2012,5,6).ToString("d", new CultureInfo("de-DE"));
//Produces: "45,56"
45.56M.ToString("n", new CultureInfo("fr-FR"));

The problem has always been client-side. If you need to localise dates and numbers on the client what do you do?

JavaScript Date / Number Localisation - the Status Quo#

Well to be frank - it's a bit rubbish really. What's on offer natively at present basically amounts to this:
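
What's natively on offer is essentially the toLocale* family of methods on Date and Number. A quick sketch (the output varies by browser and by the user's locale settings, which is rather the point):

```js
var christmas = new Date(2012, 11, 25);

christmas.toLocaleDateString(); // eg "25/12/2012" on a UK machine, "12/25/2012" on a US one
christmas.toLocaleTimeString(); // the time, in whatever format the browser fancies

(1000.5).toLocaleString();      // eg "1,000.5" or "1.000,5" - you get no say in the matter
```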

This is better than nothing - but not by much. There's no real control or flexibility here. If you don't like the native localisation format or you want something slightly different then tough. This is all you've got to play with.

For the longest time this didn't matter too much. Up until relatively recently the world of web was far more about the thin client and the fat server. It would be quite standard to have all HTML generated on the server. And, as we've seen, .NET (and many other back end environments as well) gives you all the flexibility you might desire given this approach.

But the times they are a-changing. And given the ongoing explosion of HTML 5 the rich client is very definitely with us. So we need tools.

Microsoft doing *good things*#

Hands up who remembers when Microsoft first shipped its ASP.NET AJAX library back in 2007?

Well a small part of this was the extensions ASP.NET AJAX added to JavaScript's native Date and Number objects.... These extensions allowed the localisation of Dates and Numbers to the current UI culture and the subsequent string parsing of these back into Dates / Numbers. These extensions pretty much gave JavaScript the functionality that the server already had in System.Globalization (not quite like-for-like but near enough the mark).
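
From memory (so treat the details as approximate rather than gospel), with MicrosoftAjax.js on the page and script globalization switched on via the ScriptManager, using the extensions looked something like this:

```js
// Assumes the page's UI culture is de-DE and the ScriptManager has
// EnableScriptGlobalization="true"
var d = new Date(2012, 4, 6);
d.localeFormat("d");                    // "06.05.2012"

var n = Number.parseLocale("4.576,3");  // 4576.3
n.localeFormat("n1");                   // "4.576,3"
```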

I'm not aware of a great fuss ever being made about this - a fact I find surprising since one would imagine this is a common need. There's good documentation about this on MSDN - here's some useful links:

When our team became aware of this we started to make use of it in our web applications. I imagine we weren't alone...

Microsoft doing *even better things* (Scott Gu to the rescue!)#

I started to think about this again when MVC reared its lovely head.

Like many, I found I preferred the separation of concerns / testability etc that MVC allowed. As such, our team was planning to, over time, migrate our ASP.NET WebForms applications over to MVC. However, before we could even begin to do this we had a problem. Our JavaScript localisation was dependent on the ScriptManager. The ScriptManager is very much a WebForms construct.

What to do? To the users it wouldn't be acceptable to remove the localisation functionality from the web apps. The architecture of an application is, to a certain extent, meaningless from the user's perspective - they're only interested in what directly impacts them. That makes sense, even if it was a problem for us.

Fortunately the Great Gu had it in hand. Lo and behold this post appeared on the jQuery forum and the following post appeared on Guthrie's blog:

http://weblogs.asp.net/scottgu/archive/2010/06/10/jquery-globalization-plugin-from-microsoft.aspx

Yes that's right. Microsoft were giving back to the jQuery community by contributing a jQuery globalisation plug-in. They'd basically taken the work done with ASP.NET AJAX Date / Number extensions, jQuery-plug-in-ified it and put it out there. Fantastic!

Using this we could localise / globalise dates and numbers whether we were working in WebForms or in MVC. Or anything else for that matter. If we were suddenly seized with a desire to re-write our apps in PHP we'd *still* be able to use Globalize.js on the client to handle our regionalisation of dates and numbers.

History takes a funny course...#

Now for my part I would have expected this announcement to be followed in short order by dancing in the streets and widespread adoption. Surprisingly, not so. All went quiet on the globalisation front for some time and then out of the blue the following comment appeared on the jQuery forum by Richard D. Worth (he of jQuery UI fame):

http://blog.jquery.com/2011/04/16/official-plugins-a-change-in-the-roadmap/#comment-527484

The long and short of which was:

  • The jQuery UI team were now taking care of (the re-named) Globalize.js library as the grid control they were developing had a need for some of Globalize.js's goodness. Consequently a home for Globalize.js appeared on the jQuery UI website: http://wiki.jqueryui.com/Globalize
  • The source of Globalize.js moved to this location on GitHub: https://github.com/jquery/globalize/
  • Perhaps most significantly, the jQuery globalisation plug-in as developed by Microsoft had now been made a standalone JavaScript library. This was clearly brilliant news for Node.js developers as they would now be able to take advantage of this and perform localisation / globalisation server-side - they wouldn't need to have jQuery along for the ride. Also, this would presumably be good news for users of other client side JavaScript libraries like Dojo / YUI etc.

Globalize.js clearly has a rosy future in front of it. Using the new Globalize.js library was still simplicity itself. Here's some examples of localising dates / numbers using the German culture:

<script src="/Scripts/Globalize/globalize.js" type="text/javascript"></script>
<script src="/Scripts/Globalize/cultures/globalize.culture.de-DE.js" type="text/javascript"></script>

Globalize.culture("de-DE");
//"2012-05-06" - ISO 8601 format
Globalize.format(new Date(2012,4,6), "yyyy-MM-dd");
//"06.05.2012" - standard German short date format of dd.MM.yyyy
Globalize.format(new Date(2012,4,6), Globalize.culture().calendar.patterns.d);
//"4.576,3" - a number rendered to 1 decimal place
Globalize.format(4576.34, "n1");
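
Parsing goes the other way - from culture-specific strings back to numbers and dates. Again this is the Globalize 0.x API as I remember it, so worth double checking the exact signatures against the documentation:

```js
Globalize.culture("de-DE");

Globalize.parseFloat("4.576,3");                 // 4576.3
Globalize.parseDate("06.05.2012", "dd.MM.yyyy"); // Sun May 06 2012
```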

Stick a fork in it - it's done#

The entry for Globalize.js on the jQuery UI site reads as follows:

"version: 0.1.0a1 (not a jQuery UI version number, as this is a standalone utility) status: in development (part of Grid project)"

I held back from making use of the library for some time, deterred by the "in development" status. However, I had a bit of dialog with one of the jQuery UI team (I forget exactly who) who advised that the API was unlikely to change further and that the codebase was actually pretty stable. Our team did some testing of Globalize.js and found this very much to be the case. Everything worked just as we expected and hoped. We're now using Globalize.js in a production environment with no problems reported; it's been doing a grand job.

In my opinion, Number / Date localisation on the client is ready for primetime right now - it works! Unfortunately, because Globalize.js has been officially linked in with the jQuery UI grid project it seems unlikely that this will officially ship until the grid does. Looking at the jQuery UI roadmap the grid is currently slated to release with jQuery UI 2.1. There isn't yet a release date for jQuery UI 1.9 and so it could be a long time before the grid actually sees the light of day.

I'm hoping that the jQuery UI team will be persuaded to "officially" release Globalize.js long before the grid actually ships. Obviously people can use Globalize.js as is right now (as we are) but it seems a shame that many others will be missing out on using this excellent functionality, deterred by the "in development" status. Either way, the campaign to release Globalize.js officially starts here!

The Future?#

There are plans to bake globalisation right into JavaScript natively with EcmaScript 5.1. There's a good post on the topic here. And here's a couple of historical links worth reading too:

http://norbertlindenberg.com/2012/02/ecmascript-internationalization-api/
http://wiki.ecmascript.org/doku.php?id=globalization:specification_drafts

Beg, Steal or Borrow a Decent JavaScript DateTime Converter

I've so named this blog post because it shamelessly borrows from the fine work of others: Sebastian Markbåge and Nathan Vonnahme. Sebastian wrote a blog post documenting a good solution to the ASP.NET JavaScriptSerializer DateTime problem at the tail end of last year. However, his solution didn't get me 100% of the way there when I tried to use it because of a need to support IE 8 which led me to use Nathan Vonnahme's ISO 8601 JavaScript Date parser. I thought it was worth documenting this, hence this post, but just so I'm clear; the hard work here was done by Sebastian Markbåge and Nathan Vonnahme and not me. Consider me just a curator in this case. The original blog posts that I am drawing upon can be found here: 1. http://blog.calyptus.eu/seb/2011/12/custom-datetime-json-serialization/ and here: 2. http://n8v.enteuxis.org/2010/12/parsing-iso-8601-dates-in-javascript/

DateTime, JSON, JavaScript Dates....#

Like many, I've long been frustrated with the quirky DateTime serialisation employed by the System.Web.Script.Serialization.JavaScriptSerializer class. When serialising DateTimes so they can be JSON.parsed on the client, this serialiser uses the following approach (from MSDN): "Date object, represented in JSON as "\/Date(number of ticks)\/". The number of ticks is a positive or negative long value that indicates the number of ticks (milliseconds) that have elapsed since midnight 01 January, 1970 UTC." Now this is not particularly helpful in my opinion because it's not human readable (at least not this human; perhaps Jon Skeet...). When consuming your data from web services / PageMethods using jQuery.ajax you are landed with the extra task of having to convert what were DateTimes on the server from Microsoft's string Date format (eg "\/Date(1293840000000)\/") into actual JavaScript Dates. It's also unhelpful because it's divergent from the approach to DateTime / Date serialisation used by native JSON serialisers:

Just as an aside it's worth emphasising that one of the limitations of JSON is that the JSON.parsing of a JSON.stringified date will *not* return you to a JavaScript Date but rather an ISO 8601 date string which will need to be subsequently converted into a Date. Not JSON's fault - essentially down to the absence of a Date literal within JavaScript.

Making JavaScriptSerializer behave more JSON'y#

Anyway, I didn't think there was anything I could really do about this in an ASP.NET classic / WebForms world because, to my knowledge, it is not possible to swap out the serialiser that is used. JavaScriptSerializer is the only game in town. (Though I am optimistic about the future; given the announcement that I first picked up on Rick Strahl's blog that Json.NET was going to be adopted as the default JSON serializer for ASP.NET Web API; what with Json.NET having out-of-the-box ISO 8601 support. I digress...)

Because it can make debugging a much more straightforward process I place a lot of value on being able to read the network traffic that web apps generate. It's much easier to drop into Fiddler / FireBug / Chrome dev tools etc and watch what's happening there and then instead of having to manually process the data separately first so that you can understand it. I think this is nicely aligned with the KISS principle. For that reason I've been generally converting DateTimes to ISO 8601 strings on the server before returning them to the client. A bit of extra overhead but generally worth it for the gains in clarity in my opinion. So I was surprised and delighted when I happened upon Sebastian Markbåge's blog post which provided a DateTime JavaScriptConverter that could be plugged into the JavaScriptSerializer. You can see the code below (or on Sebastian's original post with a good explanation of how it works):

Using this converter meant that a DateTime that previously would have been serialised as "\/Date(1293840000000)\/" would now be serialised as "2011-01-01T00:00:00.0000000Z" instead. This is entirely agreeable because 1. it's entirely clear what a "2011-01-01T00:00:00.0000000Z" style date represents and 2. this is more in line with native browser JSON implementations and <statingTheObvious>consistency is a good thing.</statingTheObvious>

Getting your web services to use the ISO 8601 DateTime Converter#

Sebastian alluded in his post to a web.config setting that could be used to get web services / pagemethods etc. implementing his custom DateTime serialiser. This is it:

With this in place your web services / page methods will happily be able to serialise / deserialise ISO style date strings to your heart's content.

What no ISO 8601 date string Date constructor?#

As I mentioned earlier, Sebastian's solution didn't get me 100% of the way there. There was still a fly in the ointment in the form of IE 8. Unfortunately IE 8 doesn't have a JavaScript Date constructor that takes ISO 8601 date strings. This led me to using Nathan Vonnahme's ISO 8601 JavaScript Date parser, the code of which is below (or see his original post here):
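
Nathan's parser isn't reproduced here, so by way of illustration only: the essence of the approach is a regular expression that pulls the date string apart plus Date.UTC to put it back together. The simplified sketch below only handles Z-suffixed UTC strings; Nathan's real parser is considerably more thorough:

```js
function parseISO8601DateSketch(s) {
    var m = /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(?:\.(\d+))?Z$/.exec(s);
    if (!m) { return null; }
    // Convert the optional fractional seconds (of any precision) into milliseconds
    var ms = m[7] ? Math.round(parseFloat("0." + m[7]) * 1000) : 0;
    return new Date(Date.UTC(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6], ms));
}

parseISO8601DateSketch("2011-01-01T00:00:00.0000000Z"); // Sat Jan 01 2011 00:00:00 UTC
```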

With this in place I could parse ISO 8601 Dates just like anyone else. Great stuff. parseISO8601Date("2011-01-01T00:00:00.0000000Z") would give me a JavaScript Date of Sat Jan 1 00:00:00 UTC 2011. Obviously in the fullness of time the parseISO8601Date solution should no longer be necessary because EcmaScript 5 specifies an ISO 8601 date string constructor. However, in the interim Nathan's solution is a lifesaver. Thanks again both to Sebastian Markbåge and Nathan Vonnahme who have both generously allowed me to use their work as the basis for this post.

PS And it would have worked if it wasn't for that pesky IE 9...#

Subsequent to writing this post I thought I'd check that IE 9 had implemented a JavaScript Date constructor that would process an ISO 8601 date string like this: new Date("2011-01-01T00:00:00.0000000Z"). It hasn't. Take a look:

This is slightly galling as the above code works dandy in Firefox and Chrome. As you can see from the screenshot you can get the JavaScript IE 9 Date constructor to play nice by trimming off the final 4 "0"s from the string. Frustrating. Obviously we can still use Nathan's solution but it's a shame that we can't use the native support. Based on what I've read here I think it would be possible to amend Sebastian's serializer to fall in line with IE 9's pedantry by changing this:

```cs
return new CustomString(((DateTime)obj).ToUniversalTime()
    .ToString("O")
);
```

To this:

```cs
return new CustomString(((DateTime)obj).ToUniversalTime()
    .ToString("yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffzzz")
);
```

I've held off from doing this myself as I rather like Sebastian's idea of being able to use Microsoft's Round-trip ("O", "o") Format Specifier. And it seems perverse that we should have to move away from using Microsoft's Round-trip Format Specifier purely because of (Microsoft's) IE! But it's a possibility to consider and so I put it out there. I would hope that MS will improve their JavaScript Date constructor with IE 10. A missed opportunity if they don't I think.

PPS Just when you thought it was over... IE 9 was right!#

Sebastian got in contact after I first published this post and generously pointed out that, contrary to my expectation, IE 9 technically had the correct implementation. According to the ECMAScript standard the Date constructor should not allow more than millisecond precision. In this case, Chrome and Firefox are being less strict - not more correct. On reflection this does rather make sense as the result of a JSON.stringify(new Date()) never results in an ISO date string to the 10 millionths of a second detail. Sebastian has himself stopped using Microsoft's Round-trip ("O", "o") Format Specifier in favour of this format string:

```cs
return new CustomString(((DateTime)obj).ToUniversalTime()
    .ToString("yyyy-MM-ddTHH:mm:ss.fffZ")
);
```

This results in date strings that comply perfectly with the ECMAScript spec. I suspect I'll switch to using this also now. Though I'll probably leave the first part of the post intact as I think the background remains interesting. Thanks again Sebastian!

JSHint - Customising your hurt feelings

As I've started making greater use of JavaScript to give a richer GUI experience the amount of JS in my ASP.NET apps has unsurprisingly ballooned. If I'm honest, I hadn't given much consideration to the code quality of my JavaScript in the past. However, if I was going to make increasing use of it (and given the way the web is going at the moment I'd say that's a given) I didn't think this was a tenable position to maintain.

A friend of mine works for Coverity which is a company that provides tools for analysing code quality. I understand, from conversations with him, that their tools provide static analysis for compiled languages such as C++ / C# / Java etc. I was looking for something similar for JavaScript. Like many, I have read and loved Douglas Crockford's "JavaScript: The Good Parts"; it is by some margin the most useful and interesting software related book I have read. So I was aware that Crockford had come up with his own JavaScript code quality tool called JSLint. JSLint is quite striking when you first encounter it:

It's the "Warning! JSLint will hurt your feelings." that grabs you. And it's not wrong. I've copied and pasted code that I've written into JSLint and then gasped at the reams of errors JSLint would produce. I subsequently tried JSLint-ing various well known JS libraries (jQuery etc) and saw that JSLint considered they were thoroughly problematic as well. This made me feel slightly better. It was when I started examining some of the "errors" JSLint reported that I took exception. Yes, I took exception to exceptions! (I'm *very* pleased with that!) Here's a few of the errors generated by JSLint when inspecting jquery-1.7.2.js: - Problem at line 16 character 10: Expected exactly one space between 'function' and '('.

  • Problem at line 25 character 1: Expected 'var' at column 13, not column 1.
  • Problem at line 31 character 5: Unexpected dangling '_' in '_jQuery'.

JSLint is, much like its creator, quite opinionated. Which is no bad thing. Many of Crockford's opinions are clearly worth their salt. It's just I didn't want all of them enforced upon me. As you can see above most of these "problems" are essentially complaints about a different style rather than bugs or potential issues. Now there are options in JSLint that you can turn on and off which looked quite promising. But before I got to investigating them I heard about JSHint, brainchild of Anton Kovalyov and Paul Irish. In their own words: "JSHint is a fork of JSLint, the tool written and maintained by Douglas Crockford. The project originally started as an effort to make a more configurable version of JSLint - one that doesn't enforce one particular coding style on its users - but then transformed into a separate static analysis tool with its own goals and ideals." This sounded right up my alley! So I thought I'd repeat my jQuery test. Here's a sample of what JSHint threw back at me, with its default settings in place:

  • Line 230: return num == null ? Expected '===' and instead saw '=='.
  • Line 352: if ( (options = arguments[ i ]) != null ) { Expected '!==' and instead saw '!='.
  • Line 354: for ( name in options ) { The body of a for in should be wrapped in an if statement to filter unwanted properties from the prototype.

These were much more the sort of "issues" I was interested in. Plus it seemed there was plenty of scope to tweak my options. Excellent. This was good. The icing on my cake would have been a plug-in for Visual Studio which would allow me to evaluate my JS files from within my IDE. Happily the world seems to be full of developers doing good turns for one another. I discovered an extension for VS called JSLint for Visual Studio 2010:

This was an extension that provided either JSLint *or* JSHint evaluation as you preferred from within Visual Studio. Fantastic! With this extension in play you could add JavaScript static code analysis to your compilation process and so learn of all the issues in your code at the same time, whether they lay in C# or JS or [insert language here]. You could control how JS problems were reported; as warnings, errors etc. You could straightforwardly exclude files from evaluation (essential if you're reliant on a number of 3rd party JS libraries which you are not responsible for maintaining). You could cater for predefined variables; allow for jQuery or DOJO. You could simply evaluate a single file in your solution by right clicking it and hitting the "JS Lint" option in the context menu. And it was simplicity itself to activate and deactivate the JSHint / JSLint extension as required. For a more exhaustive round up of the options available I advise taking a look here: http://jslint4vs2010.codeplex.com.

I would heartily recommend using JSHint if you're looking to improve your JS code quality. I'm grateful to Crockford for making JSHint possible by first writing JSLint. For my part though I think JSHint is the more pragmatic and useful tool and likely to be the one I stick with. For interest (and frankly sheer entertainment value at the crotchetiness of Crockford) it's definitely worth having a read up on how JSHint came to pass:

  • http://anton.kovalyov.net/2011/02/20/why-i-forked-jslint-to-jshint/
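
As a footnote on the "tweak my options" point: JSHint picks its rules up either from configuration or from comment directives at the top of a file. The option names below are the ones I believe were current at the time, so double check them against the JSHint docs:

```js
/*jshint eqeqeq:true, forin:true, curly:true */
/*global jQuery, $ */

function logOptionNames(options) {
    if (options !== null) {                     // eqeqeq: using != here would be flagged
        for (var name in options) {
            if (options.hasOwnProperty(name)) { // forin: the for-in body must be guarded
                console.log(name);
            }
        }
    }
}
```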

Using the PubSub / Observer pattern to emulate constructor chaining without cluttering up global scope

Yes the title of this post is *painfully* verbose. Sorry about that. Couple of questions for you:

  • Have you ever liked the way you can have base classes in C# which can then be inherited and subclassed in a different file / class?
  • Have you ever thought; gosh it'd be nice to do something like that in JavaScript...
  • Have you then looked at JavaScript's prototypical inheritance and thought "right.... I'm sure it's possible but this is going to end up like War and Peace"?
  • Have you then subsequently thought "and hold on a minute... even if I did implement this using the prototype and split things between different files / modules wouldn't I have to pollute the global scope to achieve that? And wouldn't that mean that my code was exposed to the vagaries of any other scripts on the page? Hmmm..."
  • Men! Are you skinny? Do bullies kick sand in your face? (Just wanted to see if you were still paying attention...)

The Problem#

Well, the above thoughts occurred to me just recently. I had a situation where I was working on an MVC project and needed to build up quite large objects within JavaScript representing various models. The models in question were already implemented on the server side using classes and made extensive use of inheritance because many of the properties were shared between the various models. That is to say we would have models which were implemented through the use of a class inheriting a base class which in turn inherits a further base class. With me? Good. Perhaps I can make it a little clearer with an example. Here are my 3 classes. First BaseReilly.cs:

```cs
public class BaseReilly
{
    public string LastName { get; set; }

    public BaseReilly()
    {
        LastName = "Reilly";
    }
}
```

Next BoyReilly.cs (which inherits from BaseReilly):

```cs
public class BoyReilly : BaseReilly
{
    public string Sex { get; set; }

    public BoyReilly()
        : base()
    {
        Sex = "It is a manchild";
    }
}
```

And finally JohnReilly.cs (which inherits from BoyReilly which in turn inherits from BaseReilly):

```cs
public class JohnReilly : BoyReilly
{
    public string FirstName { get; set; }

    public JohnReilly()
        : base()
    {
        FirstName = "John";
    }
}
```

Using the above I can create myself my very own "JohnReilly" like so:

```cs
var johnReilly = new JohnReilly();
```

And it will look like this:

I was looking to implement something similar on the client and within JavaScript. I was keen to ensure code reuse. And my inclination to keep things simple made me wary of making use of the prototype. It is undoubtedly powerful but I don't think even the mighty Crockford would consider it "simple". Also I had reservations about exposing my object to the global scope. So what to do? I had an idea....

The Big Idea#

For a while I've been making explicit use of the Observer pattern in my JavaScript, better known by most as the publish/subscribe (or "PubSub") pattern. There's a million JavaScript libraries that facilitate this and after some experimentation I finally settled on higgins' implementation as it's simple and I saw a JSPerf which demonstrated it as either the fastest or second fastest in class. Up until now my main use for it had been to facilitate loosely coupled GUI interactions. If I wanted one component on the screen to influence another's behaviour I simply needed to get the first component to publish out the relevant events and the second to subscribe to these self-same events. One of the handy things about publishing out events this way is that with them you can also include data. This data can be useful when driving the response in the subscribers. However, it occurred to me that it would be equally possible to pass an object when publishing an event. **And the subscribers could enrich that object with data as they saw fit.**

Now this struck me as a pretty useful approach. It's not rock solid secure as it's always possible that someone could subscribe to your events and get access to your object as you published out. However, that's pretty unlikely to happen accidentally; certainly far less likely than someone else's global object clashing with your global object.

What might this look like in practice?#
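
Before the modules themselves, it's worth showing the rough shape of the $.publish / $.subscribe API that the code below leans on. This is a minimal illustrative shim layered over jQuery's event system; it is emphatically not higgins' actual implementation (which is worth reading - it's tiny):

```js
(function ($) {
    // A jQuery object wrapping a plain object acts as a private event bus
    var bus = $({});

    // subscribe: drop jQuery's event argument so handlers receive just the published data
    $.subscribe = function (topic, fn) {
        bus.on(topic, function () {
            fn.apply(this, Array.prototype.slice.call(arguments, 1));
        });
    };

    // publish: data is an array of arguments handed on to each subscriber in turn
    $.publish = function (topic, data) {
        bus.trigger(topic, data);
    };
}(jQuery));
```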

So this is what it ended up looking like when I turned my 3 classes into JavaScript files / modules. First BaseReilly.js:

```js
$(function () {
    $.subscribe("PubSub.Inheritance.Emulation", function (obj) {
        obj.LastName = "Reilly";
    });
});
```

Next BoyReilly.js:

```js
$(function () {
    $.subscribe("PubSub.Inheritance.Emulation", function (obj) {
        obj.Sex = "It is a manchild";
    });
});
```

And finally JohnReilly.js:

```js
$(function () {
    $.subscribe("PubSub.Inheritance.Emulation", function (obj) {
        obj.FirstName = "John";
    });
});
```

If the above scripts have been included in a page I can create myself my very own "JohnReilly" in JavaScript like so:

```js
var oJohnReilly = {}; //Empty object
$.publish("PubSub.Inheritance.Emulation", [oJohnReilly]); //Empty object "published" so it can be enriched by subscribers
console.log(JSON.stringify(oJohnReilly)); //Show me this thing you call "JohnReilly"
```

And it will look like this:

And it works. Obviously the example I've given above is somewhat naive - in reality my object properties are driven by GUI components rather than hard-coded. But I hope this illustrates the point. This technique allows you to simply share functionality between different JavaScript files and so keep your codebase tight. I certainly wouldn't recommend it for all circumstances but when you're doing something as simple as building up an object to be used to pass data around (as I am) then it works very well indeed.

A Final Thought on Script Ordering#

A final thing that may be worth mentioning is script ordering. The order in which functions are called is driven by the order in which subscriptions are made. In my example I was registering the scripts in this order:

```html
<script src="/Scripts/PubSubInheritanceDemo/BaseReilly.js"></script>
<script src="/Scripts/PubSubInheritanceDemo/BoyReilly.js"></script>
<script src="/Scripts/PubSubInheritanceDemo/JohnReilly.js"></script>
```

So when my event was published out the functions in the above JS files would be called in this order:

1. BaseReilly.js
2. BoyReilly.js
3. JohnReilly.js

If you were so inclined you could use this to emulate inheritance in behaviour. Eg you could set a property in `BaseReilly.js` which was subsequently overridden in `JohnReilly.js` or `BoyReilly.js` if you so desired. I'm not doing that myself but it occurred to me as a possibility.

PS#

If you're interested in learning more about JavaScript stabs at inheritance you could do far worse than look at Bob Ince's in-depth StackOverflow [answer](http://stackoverflow.com/a/1598077/761388).

Striving for (JavaScript) Convention

Update#

The speed of change makes fools of us all. Since I originally wrote this post all of 3 weeks ago Visual Studio 11 beta has been released and the issues I was seeking to solve have pretty much been resolved by the new innovations found therein. It's nicely detailed in @carlbergenhem's blog post: My Top 5 Visual Studio 11 Designer Improvements for ASP.NET 4.5 Development. I've left the post in place below but much of what I said (particularly with regard to Hungarian Notation) I've now moved away from. That was originally my intention anyway so that's no bad thing. The one HN artefact that I've held onto is using "$" as a prefix for jQuery objects. I think that still makes sense.

I would have written my first line of JavaScript in probably 2000. It probably looked something like this: alert('hello world'). I know. Classy. As I've mentioned before it was around 2010 before I took JavaScript in any way seriously. Certainly it was then when I started to actively learn the language. Because up until this point I'd been studiously avoiding writing any JavaScript at all I'd never really given thought to forms and conventions. When I wrote any JavaScript I just used the same style and approaches as I used in my main development language (of C#). By and large I have been following the .net naming conventions which are ably explained by Pete Brown here. Over time I have started to move away from this approach. Without a deliberate intention to do so I have found myself adopting a different style for my JavaScript code as compared with anything else I write. I wouldn't go so far as to say I'm completely happy with the style I'm currently using. But I find it more helpful than not and thought it might be worth talking about. It was really 2 things that started me down the road of "rolling my own" convention: dynamic typing and the lack of safety nets. Let's take each in turn....

1. Dynamic typing#

Having grown up (in a development sense) using compiled and strongly-typed languages I was used to the IDE making it pretty clear what was what through friendly tooltips and the like:

JavaScript is loosely / dynamically typed (occasionally called "untyped" but let's not go there). This means that the IDE can't easily determine what's what. So no tooltips for you sunshine.

2. The lack of safety nets / running with scissors#

Now I've come to love it but what I realised pretty quickly when getting into JavaScript was this: you are running with scissors. If you're not careful and you don't take precautions things can get bloody quickly. If I'm writing C# I have a lot of safety nets. Not the least of which is "does it compile"? If I declare an integer and then subsequently try to assign a string value to it, it won't let me. But JavaScript is forgiving. Some would say too forgiving. Let's do something mad:

```js
var iAmANumber = 77;
console.log(iAmANumber); //Logs a number

iAmANumber = "It's a string";
console.log(iAmANumber); //Logs a string

iAmANumber = { description: "I am an object" };
console.log(iAmANumber); //Logs an object

iAmANumber = function (myVariable) {
    console.log(myVariable);
};
console.log(iAmANumber); //Logs a function

iAmANumber("I am not a number, I am a free man!"); //Calls a function which performs a log
```

Now if I were to attempt something similar in C#, fuggedaboudit. But JavaScript? No, I'm romping home free:

![](../static/blog/2012-03-12-striving-for-javascript-convention/Mad%2BStuff.png)
Now I'm not saying that you should ever do the above, and thinking about it I can't think of a situation where you'd want to (suggestions on a postcard). But the point is it's possible. And because it's possible to do this deliberately, it's doubly possible to do this accidentally. My point is this: it's easy to make bugs in JavaScript.

What ~~Katy~~ Johnny Did Next#
I'd started making more and more extensive use of JavaScript. I was beginning to move in the direction of using the [single-page application](<http://en.wikipedia.org/wiki/Single-page_application>) approach (*<sideNote>although more in the sense of giving application style complexity to individual pages rather than ensuring that entire applications ended up in a single page</sideNote>*). This meant that whereas in the past I'd had the occasional 2 lines of JavaScript I now had a multitude of functions which were all interacting in response to user input. All these functions would contain a number of different variables. As well as this I was making use of jQuery for both Ajax purposes and to smooth out the DOM inconsistencies between various browsers. This only added to the mix as variables in one of my functions could be any one of the following: - a number
- a string
- a boolean
- a date
- an object
- an array
- a function
- a jQuery object - not strictly a distinct JavaScript type obviously, but treated pretty much as one in the sense that it has particular functions / properties etc. associated with it
As I started doing this sort of work I made no changes to my coding style. Wherever possible I did \***exactly**\* what I would have been doing in C#, in JavaScript. And it worked fine. Until.... Okay, there is no "until" as such; it did work fine. But what I found was that I would do a piece of work, check it into source control, get users to test it, release the work into Production and promptly move onto the next thing. However, a little way down the line there would be a request to add a new feature, or perhaps a bug was reported, and I'd find myself back looking at the code. And, as is often the case, despite the comments I would realise that it wasn't particularly clear why something worked the way it did. (Happily it's not just me that has this experience; paranoia has led me to ask many a fellow developer and they have confessed to similar.)

When it came to bug hunting in particular, I found myself cursing the lack of friendly tooltips and the like. Each time I wanted to look at a variable I'd find myself tracking back through the function, looking for the initial use of the variable to determine the type. Then I'd be tracking forward through the function for each subsequent use to ensure that it conformed. Distressingly, I would find examples of where it looked like I'd forgotten the type of the variable towards the end of a function (for which I can only, regrettably, blame myself). Most commonly I would have a situation like this:

```js
var tableCell = $("#ItIsMostDefinitelyATableCell"); //I jest ;-)
/* ...THERE WOULD BE SOME CODE DOING SOMETHING HERE... */
tableCell.className = "makeMeProminent"; // Oh dear - not good.
```

You see what happened above? I forgot I had a jQuery object and instead treated it like it was a standard DOM element. Oh dear.

## Spinning my own safety net; Hungarian style

After I'd experienced a few of the situations described above, I decided that steps needed to be taken to minimise the risk of this. In this case, I decided that "steps" meant Hungarian Notation. I know. I bet you're wincing right now.

For those of you that don't remember, HN was pretty much the standard way of coding at one point (although by the time I started coding professionally it had already started to decline). It was adopted in simpler times, long before the modern IDEs that tell you what each variable is became the norm. Back when you couldn't be sure of the types you were dealing with. In short, kind of like my situation with JavaScript right now.

There's not much to it. By and large, HN simply means having a lowercase prefix of 1-3 characters on all your variables indicating type. It doesn't solve all your problems. It doesn't guarantee to stop bugs. But because each use of a variable implicitly indicates its type, it makes bugs more glaringly obvious. This means that when writing code I'm less likely to misuse a variable (eg `iNum = "JIKJ"`) because part of my brain would be bellowing: "that just looks wrong... pay better attention lad!". Likewise, if I'm scanning through some JavaScript and searching for a bug then this can make it more obvious. Here's some examples of different types of variables declared using the style I have adopted:

```js
var iInteger = 4;
var dDecimal = 10.50;
var sString = "I am a string";
var bBoolean = true;
var dteDate = new Date();
var oObject = { description: "I am an object" };
var aArray = [34, 77];
var fnFunction = function () {
  // Do something
};
var $jQueryObject = $("#ItIsMostDefinitelyATableCell");
```
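To see why I find this helpful, here's a sketch (reusing the earlier, entirely hypothetical table cell selector) of how the "$" prefix makes the earlier mistake stand out at the point of use:

```js
var $tableCell = $("#ItIsMostDefinitelyATableCell");
/* ...THERE WOULD BE SOME CODE DOING SOMETHING HERE... */
$tableCell.className = "makeMeProminent"; // the "$" shouts "jQuery object" - reaching for a raw DOM property now looks wrong
$tableCell.addClass("makeMeProminent");   // what I actually meant; addClass is jQuery's way of doing it
```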

Some of you will have read this and thought "hold on a minute... JavaScript doesn't have integers / decimals etc". You're quite right. My style is not specifically stating the type of a variable. Rather, it is seeking to provide a guide on how a variable should be used. JavaScript does not have integers, but oftentimes I'll be using a number variable which I will only ever want to treat as an integer. And so I'll name it accordingly.

## Spinning a better safety net; DOJO style
I would be the first to say that alternative approaches are available. And here's one I recently happened upon that I rather like the look of: look 2/3rds down at the parameters section of [the DOJO styleguide](<http://dojotoolkit.org/community/styleGuide>). Essentially they advise specifying parameter types through the use of prefixed comments. See the examples below:

```js
function(/*String*/ foo, /*int*/ bar)...
```

or

```js
function(/*String?*/ foo, /*int*/ bar, /*String[]?*/ baz)...
```
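To give a flavour of how that might look applied to the sort of code in this post, here's a quick sketch (the highlightCell function is hypothetical, not taken from the DOJO styleguide); the "?" marks an optional parameter:

```js
function highlightCell(/*String*/ selector, /*Boolean?*/ makeProminent) {
  var $cell = $(selector); // jQuery
  if (makeProminent) {
    $cell.addClass("makeMeProminent");
  }
  return $cell; // jQuery
}
```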

I really rather like this approach and I'm thinking about starting to adopt it. It's not possible in Hungarian Notation to be so clear about the purpose of a variable. At least not without starting to adopt all kinds of kooky conventions that take in all the possible permutations of variable types. And if you did that you'd really be defeating yourself anyway, as it would simply reduce the clarity of your code and make bugs more likely.

## Spinning a better safety net; unit tests
Despite being quite used to writing unit tests for all my server-side code, I have not yet fully embraced unit testing on the client. Partly I've been holding back because of the variety of JavaScript testing frameworks available; I wasn't sure which to start with. But given that it is so easy to introduce bugs into JavaScript, I have come to the conclusion that it's better to have some tests in place than none. Time to embrace the new.
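Just to make that concrete, here's a minimal sketch of what a first client-side test might look like, assuming QUnit as the framework; the addNumbers function under test is purely hypothetical:

```js
// A hypothetical function under test
function addNumbers(iNum1, iNum2) {
  return iNum1 + iNum2;
}

// QUnit 1.x style test
test("addNumbers adds two numbers", function () {
  equal(addNumbers(2, 3), 5, "2 + 3 should be 5");
  notEqual(addNumbers(2, "3"), 5, "mixing a number and a string concatenates instead");
});
```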
## Conclusion

I've found using Hungarian Notation useful whilst working in JavaScript. Not everyone will feel the same and I think that's fair enough; within reason, I think it's generally a good idea to go with what you find useful. However, I am giving genuine consideration to moving to the DOJO style and moving back to my more standard camel-cased variable names instead of Hungarian Notation. Particularly since I strive to keep my functions short, with the view that ideally each should do 1 thing well. Keep it simple etc... And so in a perfect world the situation of forgetting a variable's purpose shouldn't really arise... I think once I've got up and running with JavaScript unit tests I may make that move. Hungarian Notation may prove to have been just a stop-gap measure until better techniques were employed...

JavaScript - getting to know the beast...

So it's 2010 and I've started using jQuery. jQuery is a JavaScript library. This means that I'm writing JavaScript... Gulp! I should say that at this point in time I *hated* JavaScript (I have mentioned this previously). But what I know now is that I barely understood the language at all. All the JavaScript I knew was the result of copying and pasting after I'd hit "view source". I don't feel too bad about this - not because my ignorance was laudable but because I certainly wasn't alone in it. It seems that up until recently hardly anyone knew anything about JavaScript. It puzzles me now that I thought this was okay. I suppose, like many people, I didn't think JavaScript was capable of much and hence felt time spent researching it would be wasted. Just to illustrate where I was then, here is 2009 John's idea of some pretty "advanced" JavaScript:

```js
function GiveMeASum(iNum1, iNum2) {
  var dteDate = new Date();
  var iTotal = iNum1 + iNum2;
  return "This is your total: " + iTotal + ", at this time: " + dteDate.toString();
}

```

I know - I'm not too proud of it... Certainly if it was a horse you'd shoot it. Basically, at that point I knew the following:

- JavaScript had functions (but I knew only one way to use them - see above)

- It had some concept of numbers (but I had no idea of the type of numbers I was dealing with; integer / float / decimal / who knows?)
- It had some concept of strings
- It had a date object

This was about the limit of my knowledge. If I was right, and that's all there was to JavaScript, then my evaluation of it as utter rubbish would have been accurate. I was wrong. SOOOOOOOOOOOO WRONG!

I first realised how wrong I was when I opened up the jQuery source to have a read. Put simply, I had *no* idea what I was looking at. For a while I wondered if I was actually looking at JavaScript; the code was so different to what I was expecting that for a goodly period I considered jQuery to be some kind of strange black magic, written in a language I did not understand. I was half right. jQuery wasn't black magic. But it was written in a language I didn't understand; namely JavaScript. :-(

Here beginneth the lessons... I started casting around looking for information about JavaScript. Before very long I discovered one Elijah Manor, who had helpfully done a number of talks and blog posts about JavaScript directed at C# developers (which I was). My man!

- How good C# habits can encourage bad JavaScript habits part 1

For me this was all massively helpful. In my development life so far I had only ever dealt with strongly typed, compiled "classical" languages. I had little to no experience of functional, dynamic and loosely typed languages (essentially what JavaScript is). Elijah's work opened my eyes to some of the massive differences that exist. He also pointed me in the direction of the (never boring) Doug Crockford, author of the best programming book I have ever purchased: JavaScript: The Good Parts. Who could not like a book about JavaScript which starts each chapter with a quote from Shakespeare and still comes in at only 100 pages? It's also worth watching the man in person as he's a thoroughly engaging presence. There's loads of videos of him out there but this one is pretty good: Douglas Crockford: The JavaScript Programming Language.

I don't want to waste your time by attempting to rehash what these guys have done already. I think it's always best to go to the source, so I'd advise you to check them out for yourselves. That said, it's probably worth summarising some of the main points I took away from them (you can find better explanations of all of these by looking at their posts; a couple are also illustrated in the sketch below):

1. JavaScript has objects but has no classes. Instead it has (what I still consider to be) the weirdest type of inheritance going: prototypical inheritance.
2. JavaScript has the simplest and loveliest way of creating a new object out there: the "JavaScript Object Literal". Using this we can simply write `var myCar = { wheels: 4, colour: "blue" }` and ladies and gents we have ourselves a car! (object)
3. In JavaScript, functions are first-class objects. This means functions can be assigned to variables (as easily as you'd assign a string to a variable) and, crucially, you can pass them as parameters to a function and pass them back as a return type. Herein lies power!
4. JavaScript has 6 possible values (false, null, undefined, empty strings, 0 and NaN) which it evaluates as false. These are known as the "false-y" values. It's a bit weird, but on the plus side this can lead to some nicely terse code.
5. To perform comparisons in JavaScript you should avoid == and != and instead use === and !==. Before I discovered this I had been using == and != and then regularly puzzling over some truly odd behaviour. Small though it may sound, this may be the most important discovery of the lot, as it was this that led to me actually *trusting* the language. Prior to this I vaguely thought I was picking up on some kind of bug in the JavaScript language which I plain didn't understand. (After all, in any sane universe should this really evaluate to true?: 0 == "")
6. Finally, JavaScript has function scope rather than block scope. Interestingly, it "hoists" variable declarations to the top of each function, which can lead to some very surprising behaviour if you don't realise what is happening.
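Here's a small sketch (my own, not Elijah's or Doug's) showing points 4, 5 and 6 in action:

```js
// 4 & 5: "false-y" values and == vs ===
console.log(0 == "");  // true  - loose equality coerces both sides before comparing
console.log(0 === ""); // false - strict equality compares type as well as value
if (!"") {
  console.log("the empty string is false-y"); // this line runs
}

// 6: function scope and hoisting
function hoistingExample() {
  console.log(message); // undefined (not an error) - the declaration was hoisted, the assignment wasn't
  var message = "declared at the bottom, hoisted to the top";
  console.log(message); // "declared at the bottom, hoisted to the top"
}
hoistingExample();
```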

I now realise that JavaScript is a fantastic language because of its flexibility. It is also a deeply flawed language, in part due to its unreasonably forgiving nature (you haven't finished your line with a semi-colon? That's okay - I can see you meant to, so I'll stick one in. You haven't declared your variable? Not a problem - I won't tell you, but I'll create a new variable, stick it in global scope and off we go, etc). It is without question the easiest language with which to create a proper dog's breakfast. To get the best out of JavaScript we need to understand the quirks of the language and we need good patterns. If you're interested in getting to grips with it I really advise you to check out Elijah's and Doug's work - it really helped me.
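As a postscript, here's a minimal sketch (mine, for illustration only) of the two forgiving behaviours described above - missing semi-colons and undeclared variables - assuming it runs in a browser, outside strict mode:

```js
function addToTotal(amount) {
  runningTotal = amount // no "var", so "runningTotal" quietly becomes a global variable
  return runningTotal   // no semi-colons either - ASI silently inserts them for me
}

addToTotal(5);
console.log(window.runningTotal); // 5 - the variable has leaked into global scope without a word of complaint
```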