
53 posts tagged with "TypeScript"


· 6 min read

There's a debate to be had about whether using JavaScript or TypeScript leads to better outcomes when building a project. The option of using JSDoc annotations to type a JavaScript codebase adds a new dynamic to this discussion. This post will investigate what that looks like, and come to an (opinionated) conclusion.

title image reading "JSDoc JavaScript vs TypeScript" with a JavaScript logo and TypeScript logo


If you'd talked to me in 2018, I would have solidly recommended using TypeScript, and steering away from JavaScript. The rationale is simple: I'm exceedingly convinced of the value that static typing provides in terms of productivity / avoiding bugs in production. I appreciate this can be a contentious issue, but that is my settled opinion on the subject. Other opinions are available.

TypeScript has long had a good static typing story. JavaScript is dynamically typed, and so historically has not. Thanks to TypeScript's support for JSDoc, JavaScript can now be statically type checked.

What is JSDoc JavaScript?

JSDoc itself actually dates way back to 1999. According to the Wikipedia entry:

JSDoc is a markup language used to annotate JavaScript source code files. Using comments containing JSDoc, programmers can add documentation describing the application programming interface of the code they're creating.

The TypeScript team have taken JSDoc support and run with it. You can now use a variant of JSDoc annotations to provide type information in JavaScript files.

What does this look like? Well, to take a simple example, a TypeScript statement like so:

let myString: string;

Could become the equivalent JavaScript statement with a JSDoc annotation:

/** @type {string} */
let myString;

This is type-enhanced JavaScript which the TypeScript compiler can understand and type check.
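To make that concrete, here's a minimal sketch (the add function is invented for illustration). Adding // @ts-check at the top of a file opts it into checking, and the JSDoc annotations drive the types:

// @ts-check

/**
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function add(a, b) {
  return a + b;
}

// error TS2345: Argument of type 'string' is not assignable to parameter of type 'number'.
add(1, '2');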

Why use JSDoc JavaScript?

Why would you use JSDoc JavaScript instead of TypeScript? Well, there are a number of possible use cases.

Perhaps you're writing simple node scripts and you'd like a little type safety to avoid mistakes. Or perhaps you want to dip your project's toe in the waters of static type checking but without fully committing. JSDoc allows for that. Or perhaps your team simply prefers not having a compile step.

That, in fact, was the rationale of the webpack team. A little bit of history: webpack has always been a JavaScript codebase. As the codebase grew and grew, there was often discussion about using static typing. However, having a compilation step wasn't desired.

TypeScript had been quietly adding support for type checking JavaScript with the assistance of JSDoc for some time. Initial support arrived with the --checkJs compiler option in TypeScript 2.3.
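For context, switching that on in a project looks something like this minimal tsconfig.json sketch (the exact options shown are illustrative):

{
  "compilerOptions": {
    "allowJs": true,
    "checkJs": true,
    "noEmit": true
  }
}

allowJs lets the compiler see .js files, checkJs asks it to type check them, and noEmit stops it producing output files - type checking only.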

A community member by the name of Mohsen Azimi experimentally started out using this approach to type check the webpack codebase. His PR ended up being a test case that helped improve the type checking of JavaScript by TypeScript. TypeScript v2.9 shipped with a whole host of JSDoc improvements as a consequence of the webpack work. Being such a widely used project this also helped popularise the approach of using JSDoc to type check JavaScript codebases. It demonstrated that this approach could work on a significantly sized codebase.

These days, JSDoc type checking with TypeScript is extremely powerful. Whilst not quite on par with TypeScript (not all TypeScript syntax is supported in JSDoc), the gap in functionality is pretty small.
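To give a flavour of how far it stretches, even generics can be expressed in JSDoc (this find function is a made-up example):

/**
 * @template T
 * @param {T[]} items
 * @param {(item: T) => boolean} predicate
 * @returns {T | undefined}
 */
function find(items, predicate) {
  return items.find(predicate);
}

// result is inferred as string | undefined
const result = find(['a', 'b', 'c'], (item) => item === 'b');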

It's a completely legitimate choice to build a JavaScript codebase with all the benefits of static typing.

Why use TypeScript?

So if you were starting a project today, and you'd decided you wanted to make use of static typing, how do you choose? TypeScript or JavaScript with JSDoc?

Well, unless you've a compelling need to avoid a compilation step, I'm going to suggest that TypeScript may be the better choice for a number of reasons.

Firstly, the tooling support for using TypeScript directly is better than that for JSDoc JavaScript. At the time of writing, things like refactoring tools etc in your editor work more effectively with TypeScript than with JSDoc JavaScript. (Although these are improving as time goes by.)

Secondly, working with JSDoc is distinctly "noisier". It requires far more keystrokes to achieve the same level of type safety. Consider the following TypeScript:

function stringsStringStrings(
  p1: string,
  p2?: string,
  p3?: string,
  p4 = 'test'
): string {
  // ...
}

As compared to the equivalent JSDoc JavaScript:

/**
 * @param {string} p1
 * @param {string=} p2
 * @param {string} [p3]
 * @param {string} [p4="test"]
 * @return {string}
 */
function stringsStringStrings(p1, p2, p3, p4) {
  // ...
}

It may be my own familiarity with TypeScript speaking, but I find that the TypeScript is easier to read and comprehend as compared to the JSDoc JavaScript alternative. The fact that all JSDoc annotations live in comments, rather than directly in syntax, makes it harder to follow. (It certainly doesn't help that many VS Code themes present comments in a very faint colour.)

My final reason for favouring TypeScript comes down to falling into the "pit of success". You're cutting against the grain when it comes to static typing and JavaScript. You can have it, but you have to work that bit harder to ensure that you have statically typed code. On the other hand, you're cutting with the grain when it comes to static typing and TypeScript. You have to work hard to opt out of static typing. The TypeScript defaults tend towards static typing, whilst the JavaScript defaults tend away.

As someone who very much favours static typing, you can imagine how this is compelling to me!

It's your choice!

So in a way, I don't feel super strongly about whether people use JavaScript or TypeScript. But having static typing will likely be a benefit to new projects. Bottom line: I'm keen that people fall into the "pit of success", so my recommendation for a new project would be TypeScript.

I really like JSDoc myself, and will often use it on small projects. It's a fantastic addition to TypeScript's capabilities. For bigger projects, I'll likely go with TypeScript from the get-go. But really, this is a choice - and either is great.

This post was originally published on LogRocket.

· 9 min read

Google has a wealth of APIs which we can interact with. At the time of writing, there are more than two hundred available, including YouTube, Google Calendar and Gmail (alongside many others). To integrate with these APIs, it's necessary to authenticate and then use that credential with the API. This post will take you through how to do just that using TypeScript. It will also demonstrate how to use one of those APIs: the Google Calendar API.

Creating an OAuth 2.0 Client ID on the Google Cloud Platform

The first thing we need to do is go to the Google Cloud Platform to create a project. The name of the project doesn't matter particularly; although it can be helpful to name the project to align with the API you're intending to consume. That's what we'll do here as we plan to integrate with the Google Calendar API:

Screenshot of the Create Project screen in the Google Cloud Platform

The project is the container in which the OAuth 2.0 Client ID will be housed. Now we've created the project, let's go to the credentials screen and create an OAuth Client ID using the Create Credentials dropdown:

Screenshot of the Create Credentials dropdown in the Google Cloud Platform

You'll likely have to create an OAuth consent screen before you can create the OAuth Client ID. Going through the journey of doing that feels a little daunting as many questions have to be answered. This is because the consent screen can be used for a variety of purposes beyond the API authentication we're looking at today.

When challenged, you can generally accept the defaults and proceed. The user type you'll require will be "External":

Screenshot of the OAuth consent screen in the Google Cloud Platform

You'll also be required to create an app registration - all that's really required here is a name (which can be anything) and your email address:

Screenshot of the OAuth consent screen in the Google Cloud Platform

You don't need to worry about scopes. You can either plan to publish the app, or alternatively set yourself up as a test user - you'll need to do one of these in order to be able to authenticate with the app. Continuing to the end of the journey should provide you with the OAuth consent screen you need; with that in place you can create the OAuth Client ID.

Creating the OAuth Client ID is slightly confusing as the "Application type" required is "TVs and Limited Input devices".

Screenshot of the create OAuth Client ID screen in the Google Cloud Platform

We're using this type of application as we want to acquire a refresh token, which we'll be able to use in future to acquire access tokens, which in turn will be used to access the Google APIs.

Once it's created, you'll be able to download the Client ID from the Google Cloud Platform:

Screenshot of the create OAuth Client ID screen in the Google Cloud Platform

When you download it, it should look something like this:

{
  "installed": {
    "client_id": "CLIENT_ID",
    "project_id": "PROJECT_ID",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_secret": "CLIENT_SECRET",
    "redirect_uris": ["urn:ietf:wg:oauth:2.0:oob", "http://localhost"]
  }
}

You'll need the client_id, client_secret and redirect_uris - but keep them in a safe place and don't commit client_id and client_secret to source control!

Acquiring a refresh token

Now we've got our client_id and client_secret, we're ready to write a simple node command line application which we can use to obtain a refresh token. This is actually a multi-stage process that will end up looking like this:

  • Provide the Google authentication provider with the client_id and client_secret; in return it will provide an authentication URL.
  • Open the authentication URL in the browser and grant consent; the provider will hand over a code.
  • Provide the Google authentication provider with the client_id, client_secret and the code; it will acquire and return a refresh token.

Let's start coding. We'll initialise a TypeScript Node project like so:

mkdir src
cd src
npm init -y
npm install googleapis ts-node typescript yargs @types/yargs @types/node
npx tsc --init

We've added a number of dependencies that will allow us to write a TypeScript Node command line application. We've also added a dependency to the googleapis package which describes itself as:

Node.js client library for using Google APIs. Support for authorization and authentication with OAuth 2.0, API Keys and JWT tokens is included.

We're going to make use of the OAuth 2.0 part. We'll start our journey by creating a file called google-api-auth.ts:

import { getArgs, makeOAuth2Client } from './shared';

async function getToken() {
  const { clientId, clientSecret, code } = await getArgs();
  const oauth2Client = makeOAuth2Client({ clientId, clientSecret });

  if (code) await getRefreshToken(code);
  else getAuthUrl();

  async function getAuthUrl() {
    const url = oauth2Client.generateAuthUrl({
      // 'online' (default) or 'offline' (gets refresh_token)
      access_type: 'offline',

      // scopes are documented here: https://developers.google.com/identity/protocols/oauth2/scopes#calendar
      scope: [
        'https://www.googleapis.com/auth/calendar',
        'https://www.googleapis.com/auth/calendar.events',
      ],
    });

    console.log(`Go to this URL to acquire a refresh token:\n\n${url}\n`);
  }

  async function getRefreshToken(code: string) {
    const token = await oauth2Client.getToken(code);
    console.log(token);
  }
}

getToken();

And a common file named shared.ts which google-api-auth.ts imports and which we'll re-use later:

import { google } from 'googleapis';
import yargs from 'yargs/yargs';
const { hideBin } = require('yargs/helpers');

export async function getArgs() {
  const argv = await Promise.resolve(yargs(hideBin(process.argv)).argv);

  const clientId = argv['clientId'] as string;
  const clientSecret = argv['clientSecret'] as string;

  const code = argv.code as string | undefined;
  const refreshToken = argv.refreshToken as string | undefined;
  const test = argv.test as boolean;

  if (!clientId) throw new Error('No clientId');
  console.log('We have a clientId');

  if (!clientSecret) throw new Error('No clientSecret');
  console.log('We have a clientSecret');

  if (code) console.log('We have a code');
  if (refreshToken) console.log('We have a refreshToken');

  return { code, clientId, clientSecret, refreshToken, test };
}

export function makeOAuth2Client({
  clientId,
  clientSecret,
}: {
  clientId: string;
  clientSecret: string;
}) {
  return new google.auth.OAuth2(
    /* YOUR_CLIENT_ID */ clientId,
    /* YOUR_CLIENT_SECRET */ clientSecret,
    /* YOUR_REDIRECT_URL */ 'urn:ietf:wg:oauth:2.0:oob'
  );
}

The getToken function above does these things:

  1. If given a client_id and client_secret it will obtain an authentication URL.
  2. If given a client_id, client_secret and code it will obtain a refresh token (scoped to access the Google Calendar API).

We'll add an entry to our package.json which will allow us to run our console app:

    "google-api-auth": "ts-node google-api-auth.ts"

Now we're ready to acquire the refresh token. We'll run the following command (substituting in the appropriate values):

npm run google-api-auth -- --clientId CLIENT_ID --clientSecret CLIENT_SECRET

Click on the URL that is generated in the console; it should open up a consent screen in the browser which looks like this:

Screenshot of the consent screen

Authenticate and grant consent and you should get a code:

Screenshot of the generated code

Then (quickly) paste the acquired code into the following command:

npm run google-api-auth -- --clientId CLIENT_ID --clientSecret CLIENT_SECRET --code THISISTHECODE

The refresh_token (alongside much else) will be printed to the console. Grab it and put it somewhere secure. Again, no storing in source control!

It's worth taking a moment to reflect on what we've done. We've acquired a refresh token, which involved a certain amount of human interaction. We've had to run a console command, do some work in a browser and run another command. You wouldn't want to do this repeatedly because it involves human interaction. Intentionally, it cannot be automated. However, once you've acquired the refresh token, you can use it repeatedly until it expires (which may be never, or at least years in the future). So once you have the refresh token, and you've stored it securely, you have what you need to be able to automate an API interaction.

Accessing the Google Calendar API

Let's test out our refresh token by attempting to access the Google Calendar API. We'll create a calendar.ts file:

import { google } from 'googleapis';
import { getArgs, makeOAuth2Client } from './shared';

async function makeCalendarClient() {
  const { clientId, clientSecret, refreshToken } = await getArgs();
  const oauth2Client = makeOAuth2Client({ clientId, clientSecret });
  oauth2Client.setCredentials({
    refresh_token: refreshToken,
  });

  const calendarClient = google.calendar({
    version: 'v3',
    auth: oauth2Client,
  });
  return calendarClient;
}

async function getCalendar() {
  const calendarClient = await makeCalendarClient();

  const { data: calendars, status } = await calendarClient.calendarList.list();

  if (status === 200) {
    console.log('calendars', calendars);
  } else {
    console.log('there was an issue...', status);
  }
}

getCalendar();

The getCalendar function above uses the client_id, client_secret and refresh_token to access the Google Calendar API and retrieve the list of calendars.
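Listing calendars is only one of the methods the client exposes. As a further illustration - a hypothetical extension of the code above, reusing makeCalendarClient - we could list the next few events from the primary calendar:

// an illustrative sketch: list the next five upcoming events
async function getUpcomingEvents() {
  const calendarClient = await makeCalendarClient();

  const { data } = await calendarClient.events.list({
    calendarId: 'primary',
    timeMin: new Date().toISOString(),
    maxResults: 5,
    singleEvents: true,
    orderBy: 'startTime',
  });

  console.log('upcoming events', data.items);
}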

We'll add an entry to our package.json which will allow us to run this function:

    "calendar": "ts-node calendar.ts",

Now we're ready to test calendar.ts. We'll run the following command (substituting in the appropriate values):

npm run calendar -- --clientId CLIENT_ID --clientSecret CLIENT_SECRET --refreshToken REFRESH_TOKEN

When we run for the first time, we may encounter a self-explanatory message which tells us that we need to enable the Calendar API for our application:

(node:31563) UnhandledPromiseRejectionWarning: Error: Google Calendar API has not been used in project 77777777777777 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/calendar-json.googleapis.com/overview?project=77777777777777 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.

Once enabled, we can run successfully for the first time. Consequently we should see something like this showing up in the console:

Screenshot of calendars list response in the console

This demonstrates that we're successfully integrating with a Google API using our refresh token.

Today the Google Calendar API, tomorrow the (Google API) world!

What we've demonstrated here is integrating with the Google Calendar API. However, that is not the limit of what we can do. As we discussed earlier, Google has more than two hundred APIs we can interact with, and the key to that interaction is following the same steps for authentication that this post outlines.

Let's imagine that we want to integrate with the YouTube API or the Gmail API. We'd be able to follow the steps in this post, using different scopes for the refresh token appropriate to the API, and build an integration against that API. Take a look at the available APIs here.

The approach outlined by this post is the key to integrating with a multitude of Google APIs. Happy integrating!

The idea of this was sparked by Martin Fowler's post on the topic which comes from a Ruby angle.

This post was originally published on LogRocket.

· 5 min read

TypeScript has the ability to define classes as abstract. This means they cannot be instantiated directly; only non-abstract subclasses can be. Let's take a look at what this means when it comes to constructor usage.

Making a scratchpad

In order that we can dig into this, let's create ourselves a scratchpad project to work with. We're going to create a node project and install TypeScript as a dependency.

mkdir ts-abstract-constructors
cd ts-abstract-constructors
npm init --yes
npm install typescript @types/node --save-dev

We now have a package.json file set up. We need to initialise a TypeScript project as well:

npx tsc --init

This will give us a tsconfig.json file that will drive configuration of TypeScript. By default TypeScript transpiles to an older version of JavaScript that predates classes. So we'll update the config to target a newer version of the language that does include them:

    "target": "es2020",
"lib": ["es2020"],

Let's create ourselves a TypeScript file called index.ts. The name is not significant; we just need a file to develop in.

Finally we'll add a script to our package.json that compiles our TypeScript to JavaScript, and then runs the JS with node:

"start": "tsc --project \".\" && node index.js"

Making an abstract class

Now we're ready. Let's add an abstract class with a constructor to our index.ts file:

abstract class ViewModel {
  id: string;

  constructor(id: string) {
    this.id = id;
  }
}

Consider the ViewModel class above. Let's say we're building some kind of CRUD app; we'll have different views. Each of those views will have a corresponding viewmodel, which is a subclass of the ViewModel abstract class. The ViewModel class has a mandatory id parameter in the constructor. This is to ensure that every viewmodel has an id value. If this were a real app, id would likely be the value with which an entity was looked up in some kind of database.

Importantly, all subclasses of ViewModel should either:

  • not implement a constructor at all, leaving the base class constructor to become the default constructor of the subclass or

  • implement their own constructor which invokes the ViewModel base class constructor.

Taking our abstract class for a spin

Now we have it, let's see what we can do with our abstract class. First of all, can we instantiate our abstract class? We shouldn't be able to do this:

const viewModel = new ViewModel('my-id');

console.log(`the id is: ${viewModel.id}`);

And sure enough, running npm start results in the following error (which is also reported by our editor, VS Code).

index.ts:9:19 - error TS2511: Cannot create an instance of an abstract class.

const viewModel = new ViewModel('my-id');

Screenshot of "Cannot create an instance of an abstract class." error in VS Code

Tremendous. However, it's worth remembering that abstract is a TypeScript concept. When we compile our TS, although it's throwing a compilation error, it still transpiles an index.js file that looks like this:

'use strict';
class ViewModel {
  constructor(id) {
    this.id = id;
  }
}
const viewModel = new ViewModel('my-id');
console.log(`the id is: ${viewModel.id}`);

As we can see, there's no mention of abstract; it's just a straightforward class. In fact, if we directly execute the file with node index.js we can see an output of:

the id is: my-id

So the transpiled code is valid JavaScript even if the source code isn't valid TypeScript. This all reminds us that abstract is a TypeScript construct.
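If you wanted a runtime guard as well as the compile-time one, you could add one yourself. This is an optional, belt-and-braces sketch, not something TypeScript requires:

abstract class ViewModel {
  id: string;

  constructor(id: string) {
    // abstract is enforced at compile time only, so this (optional) check
    // also catches direct instantiation from plain JavaScript
    if (new.target === ViewModel) {
      throw new Error('ViewModel is abstract; it cannot be instantiated directly');
    }
    this.id = id;
  }
}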

Subclassing without a new constructor

Let's now create our first subclass of ViewModel and attempt to instantiate it:

class NoNewConstructorViewModel extends ViewModel {}

// error TS2554: Expected 1 arguments, but got 0.
const viewModel1 = new NoNewConstructorViewModel();

const viewModel2 = new NoNewConstructorViewModel('my-id');

Screenshot of "error TS2554: Expected 1 arguments, but got 0." error in VS Code

As the TypeScript compiler tells us, the second of these instantiations is legitimate as it relies upon the constructor from the base class as we'd hope. The first is not as there is no parameterless constructor.

Subclassing with a new constructor

Having done that, let's try subclassing and implementing a new constructor which has two parameters (to differentiate from the constructor we're overriding):

class NewConstructorViewModel extends ViewModel {
  data: string;
  constructor(id: string, data: string) {
    super(id);
    this.data = data;
  }
}

// error TS2554: Expected 2 arguments, but got 0.
const viewModel3 = new NewConstructorViewModel();

// error TS2554: Expected 2 arguments, but got 1.
const viewModel4 = new NewConstructorViewModel('my-id');

const viewModel5 = new NewConstructorViewModel('my-id', 'important info');

Screenshot of "error TS2554: Expected 1 arguments, but got 1." error in VS Code

Again, only one of the attempted instantiations is legitimate. viewModel3 is not as there is no parameterless constructor. viewModel4 is not as we have overridden the base class constructor with our new one that has two parameters. Hence viewModel5 is our "Goldilocks" instantiation; it's just right!

It's also worth noting that we're calling super in the NewConstructorViewModel constructor. This invokes the constructor of the ViewModel base (or "super") class. TypeScript enforces that we pass the appropriate arguments (in our case a single string).
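To see that enforcement in action, consider a (made-up) subclass that gets the super call wrong:

class BadSuperViewModel extends ViewModel {
  constructor() {
    // error TS2554: Expected 1 arguments, but got 0.
    super();
  }
}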

Wrapping it up

We've seen that TypeScript ensures correct usage of constructors when we have an abstract class. Importantly, all subclasses of abstract classes either:

  • do not implement a constructor at all, leaving the base class constructor (the abstract constructor) to become the default constructor of the subclass or

  • implement their own constructor which invokes the base (or "super") class constructor with the correct arguments.

This post was originally published on LogRocket.

· 4 min read

React 18 alpha has been released; but can we use it with TypeScript? The answer is "yes", but you need to do a couple of things to make that happen. This post will show you what to do.

Creating a React App with TypeScript

Let's create ourselves a vanilla React TypeScript app with Create React App:

yarn create react-app my-app --template typescript

Now let's upgrade the version of React to @next:

yarn add react@next react-dom@next

This will leave you with entries in the package.json which use React 18. They will likely look something like this:

    "react": "^18.0.0-alpha-e6be2d531",
"react-dom": "^18.0.0-alpha-e6be2d531",

If we run yarn start we'll find ourselves running a React 18 app. Exciting!

Using the new APIs

So let's try using the ReactDOM.createRoot API. It's this API that opts our application into using new features of React 18. We'll open up index.tsx and make this change:

-ReactDOM.render(
-  <React.StrictMode>
-    <App />
-  </React.StrictMode>,
-  document.getElementById('root')
-);
+const root = ReactDOM.createRoot(document.getElementById('root'));
+
+root.render(
+  <React.StrictMode>
+    <App />
+  </React.StrictMode>
+);

If we were running JavaScript alone, this would work. However, because we're using TypeScript as well, we're now confronted with an error:

Property 'createRoot' does not exist on type 'typeof import("/code/my-app/node_modules/@types/react-dom/index")'. TS2339

a screenshot of the Property 'createRoot' does not exist error

This is the TypeScript compiler complaining that it doesn't know anything about ReactDOM.createRoot. This is because the type definitions that are currently in place in our application don't have that API defined.

Let's upgrade our type definitions:

yarn add @types/react @types/react-dom

We might reasonably hope that everything should work now. Alas it does not. The same error is presenting. TypeScript is not happy.

Telling TypeScript about the new APIs

If we take a look at the PR that added support for the APIs, we'll find some tips. If you look at the next.d.ts file you'll find this info, courtesy of Sebastian Silbermann:

/**
* These are types for things that are present in the upcoming React 18 release.
*
* Once React 18 is released they can just be moved to the main index file.
*
* To load the types declared here in an actual project, there are three ways. The easiest one,
* if your `tsconfig.json` already has a `"types"` array in the `"compilerOptions"` section,
* is to add `"react/next"` to the `"types"` array.
*
* Alternatively, a specific import syntax can be used from a typescript file.
* This module does not exist in reality, which is why the {} is important:
*
* ```ts
* import {} from 'react/next'
* ```
*
* It is also possible to include it through a triple-slash reference:
*
* ```ts
* /// <reference types="react/next" />
* ```
*
* Either the import or the reference only needs to appear once, anywhere in the project.
*/

Let's try the first item on the list. We'll edit our tsconfig.json and add a new entry to the "compilerOptions" section:

    "types": ["react/next", "react-dom/next"]

If we restart our build with yarn start we're now presented with a different error:

Argument of type 'HTMLElement | null' is not assignable to parameter of type 'Element | Document | DocumentFragment | Comment'. Type 'null' is not assignable to type 'Element | Document | DocumentFragment | Comment'. TS2345

a screenshot of the null is not assignable error

Now this is actually nothing to do with issues with our new React type definitions. They are fine. This is TypeScript saying "it's not guaranteed that document.getElementById('root') returns something that is not null... since we're in strictNullChecks mode you need to be sure root is not null".

We'll deal with that by testing we do have an element in play before invoking ReactDOM.createRoot:

-const root = ReactDOM.createRoot(document.getElementById('root'));
+const rootElement = document.getElementById('root');
+if (!rootElement) throw new Error('Failed to find the root element');
+const root = ReactDOM.createRoot(rootElement);

Now that change is made, we have a working React 18 application, using TypeScript. Enjoy!

This post was originally published on LogRocket.

· 8 min read

The Service Now REST API is an API which allows you to interact with Service Now. It produces different shaped results based upon the sysparm_display_value query parameter. This post looks at how we can model these API results with TypeScript's conditional types. The aim is to minimise repetition whilst remaining strongly typed. This post is specifically about the Service Now API, but the principles around conditional type usage are generally applicable.

Service Now and TypeScript

The power of a query parameter

There is a query parameter named sysparm_display_value, which many endpoints in Service Now's Table API support. The docs describe it thus:

Data retrieval operation for reference and choice fields. Based on this value, retrieves the display value and/or the actual value from the database.

Valid values:

  • true: Returns the display values for all fields.
  • false: Returns the actual values from the database.
  • all: Returns both actual and display value

Let's see what that looks like when it comes to loading a Change Request. Consider the following curls:

# sysparm_display_value=all
curl "https://ourcompanyinstance.service-now.com/api/now/table/change_request?sysparm_query=number=CHG0122585&sysparm_limit=1&sysparm_display_value=all" --request GET --header "Accept:application/json" --user 'API_USERNAME':'API_PASSWORD' | jq '.result[0] | { state, sys_id, number, requested_by, reason }'

# sysparm_display_value=true
curl "https://ourcompanyinstance.service-now.com/api/now/table/change_request?sysparm_query=number=CHG0122585&sysparm_limit=1&sysparm_display_value=true" --request GET --header "Accept:application/json" --user 'API_USERNAME':'API_PASSWORD' | jq '.result[0] | { state, sys_id, number, requested_by, reason }'

# sysparm_display_value=false
curl "https://ourcompanyinstance.service-now.com/api/now/table/change_request?sysparm_query=number=CHG0122585&sysparm_limit=1&sysparm_display_value=false" --request GET --header "Accept:application/json" --user 'API_USERNAME':'API_PASSWORD' | jq '.result[0] | { state, sys_id, number, requested_by, reason }'

When executed, they each load the same Change Request from Service Now with a different value for sysparm_display_value. You'll notice there's some jq in the mix as well. This is because there's a lot of data in a Change Request. Rather than display everything, we're displaying a subset of fields. The first curl has a sysparm_display_value value of all, the second true and the third false. What do the results look like?

sysparm_display_value=all:

{
  "state": {
    "display_value": "Closed",
    "value": "3"
  },
  "sys_id": {
    "display_value": "4d54d7481b37e010d315cbb5464bcb95",
    "value": "4d54d7481b37e010d315cbb5464bcb95"
  },
  "number": {
    "display_value": "CHG0122595",
    "value": "CHG0122595"
  },
  "requested_by": {
    "display_value": "Sally Omer",
    "link": "https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999",
    "value": "b15cf3ebdbe11300f196f3651d961999"
  },
  "reason": {
    "display_value": null,
    "value": ""
  }
}

sysparm_display_value=true:

{
  "state": "Closed",
  "sys_id": "4d54d7481b37e010d315cbb5464bcb95",
  "number": "CHG0122595",
  "requested_by": {
    "display_value": "Sally Omer",
    "link": "https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999"
  },
  "reason": null
}

sysparm_display_value=false:

{
  "state": "3",
  "sys_id": "4d54d7481b37e010d315cbb5464bcb95",
  "number": "CHG0122595",
  "requested_by": {
    "link": "https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999",
    "value": "b15cf3ebdbe11300f196f3651d961999"
  },
  "reason": ""
}

As you can see, we have the same properties being returned each time, but with a different shape. Let's call out some interesting highlights:

  • requested_by is always an object which contains link. It may also contain value and display_value depending upon sysparm_display_value
  • state, sys_id, number and reason are objects containing value and display_value when sysparm_display_value is all. Otherwise, the value of value or display_value is surfaced up directly, not in an object.
  • most values are strings, even if they represent another data type. So state.value is always a stringified number. The only exception to this rule is reason.display_value which can be null

Type Definition time

We want to create type definitions for these API results. We could of course create three separate types, but that would involve duplication. Boo! It's worth bearing in mind that we're looking at a subset of five properties in this example. In reality, there are many, many properties on a Change Request. Whilst this example is for a subset, if we wanted to go on to create the full type definition, the duplication would become very impractical.

What can we do? Well, if all of the underlying properties were of the same type, we could use a generic and be done. But given the underlying types can vary, that's not going to work. We can, however, achieve this using a combination of generics and conditional types.

Let's begin by creating a string literal type of the possible values of sysparm_display_value:

export type DisplayValue = 'all' | 'true' | 'false';

Making a PropertyValue type

Next we need to create a type that models the object with display_value and value properties.

a type for state, sys_id, number and reason
export interface ValueAndDisplayValue<TValue = string, TDisplayValue = string> {
  display_value: TDisplayValue;
  value: TValue;
}

Note that this is a generic interface, with a default type of string for both display_value and value. Most of the time, string is the type in question, so it's great that TypeScript allows us to cut down on the amount of syntax we use.
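To illustrate those defaults (the type aliases here are invented for demonstration):

// with the defaults applied, these two types are equivalent
type Defaulted = ValueAndDisplayValue;
type Explicit = ValueAndDisplayValue<string, string>;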

Now we're going to create our first conditional type:

export type PropertyValue<
  TAllTrueFalse extends DisplayValue,
  TValue = string,
  TDisplayValue = string
> = TAllTrueFalse extends 'all'
  ? ValueAndDisplayValue<TValue, TDisplayValue>
  : TAllTrueFalse extends 'true'
  ? TDisplayValue
  : TValue;

The PropertyValue will either be a ValueAndDisplayValue, a TDisplayValue or a TValue, depending upon whether TAllTrueFalse is 'all', 'true' or 'false' respectively. That's hard to grok. Let's look at an example of each of those cases using the reason property, which allows a TValue of string and a TDisplayValue of string | null:

const reasonAll: PropertyValue<'all', string, string | null> = {
  display_value: null,
  value: '',
};
const reasonTrue: PropertyValue<'true', string, string | null> = null;
const reasonFalse: PropertyValue<'false', string, string | null> = '';

Consider the type on the left and the value on the right. We're successfully modelling our PropertyValues. I've deliberately picked an edge case example to push our conditional type to its limits.

Service Now Change Request States

Let's look at another usage. We'll create a type that represents the possible values of a Change Request's state in Service Now. Do take a moment to appreciate these values. Many engineers were lost in the numerous missions to obtain these rare and secret enums. Alas, the Service Now API docs have some significant gaps.

/** represents the possible Change Request "State" values in Service Now */
export const STATE = {
  NEW: '-5',
  ASSESS: '-4',
  SENT_FOR_APPROVAL: '-3',
  SCHEDULED: '-2',
  APPROVED: '-1',
  WAITING: '1',
  IN_PROGRESS: '2',
  COMPLETE: '3',
  ERROR: '4',
  CLOSED: '7',
} as const;

export type State = typeof STATE[keyof typeof STATE];

By combining State and PropertyValue, we can strongly type the state property of Change Requests. Consider:

const stateAll: PropertyValue<'all', State> = {
  display_value: 'Closed',
  value: '3',
};
const stateTrue: PropertyValue<'true', State> = 'Closed';
const stateFalse: PropertyValue<'false', State> = '3';

With that in place, let's turn our attention to our other natural type that the requested_by property demonstrates.

Making a LinkValue type

a type for requested_by

interface Link {
  link: string;
}

/** when TAllTrueFalse is 'false' */
export interface LinkAndValue extends Link {
  value: string;
}

/** when TAllTrueFalse is 'true' */
export interface LinkAndDisplayValue extends Link {
  display_value: string;
}

/** when TAllTrueFalse is 'all' */
export interface LinkValueAndDisplayValue
  extends LinkAndValue,
    LinkAndDisplayValue {}

The three types above model the different scenarios. Now we need a conditional type to make use of them:

export type LinkValue<TAllTrueFalse extends DisplayValue> =
  TAllTrueFalse extends 'all'
    ? LinkValueAndDisplayValue
    : TAllTrueFalse extends 'true'
    ? LinkAndDisplayValue
    : LinkAndValue;

This is hopefully simpler to read than the PropertyValue type, and if you look at the examples below you can see what usage looks like:

const requested_byAll: LinkValue<'all'> = {
  display_value: 'Sally Omer',
  link: 'https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999',
  value: 'b15cf3ebdbe11300f196f3651d961999',
};
const requested_byTrue: LinkValue<'true'> = {
  display_value: 'Sally Omer',
  link: 'https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999',
};
const requested_byFalse: LinkValue<'false'> = {
  link: 'https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999',
  value: 'b15cf3ebdbe11300f196f3651d961999',
};

Making our complete type

With these primitives in place, we can now build ourself a (cut-down) type that models a Change Request:

export interface ServiceNowChangeRequest<TAllTrueFalse extends DisplayValue> {
  state: PropertyValue<TAllTrueFalse, State>;
  sys_id: PropertyValue<TAllTrueFalse>;
  number: PropertyValue<TAllTrueFalse>;
  requested_by: LinkValue<TAllTrueFalse>;
  reason: PropertyValue<TAllTrueFalse, string, string | null>;
  // there are *way* more properties in reality
}

This is a generic type which will accept 'all', 'true' or 'false' and will use that type to drive the type of the properties inside the object. And now we have successfully typed our Service Now Change Request, thanks to TypeScript's conditional types.

To test it out, let's take the JSON responses we got back from our curls at the start, and see if we can make ServiceNowChangeRequests with them.

const changeRequestFalse: ServiceNowChangeRequest<'false'> = {
  state: '3',
  sys_id: '4d54d7481b37e010d315cbb5464bcb95',
  number: 'CHG0122595',
  requested_by: {
    link: 'https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999',
    value: 'b15cf3ebdbe11300f196f3651d961999',
  },
  reason: '',
};

const changeRequestTrue: ServiceNowChangeRequest<'true'> = {
  state: 'Closed',
  sys_id: '4d54d7481b37e010d315cbb5464bcb95',
  number: 'CHG0122595',
  requested_by: {
    display_value: 'Sally Omer',
    link: 'https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999',
  },
  reason: null,
};

const changeRequestAll: ServiceNowChangeRequest<'all'> = {
  state: {
    display_value: 'Closed',
    value: '3',
  },
  sys_id: {
    display_value: '4d54d7481b37e010d315cbb5464bcb95',
    value: '4d54d7481b37e010d315cbb5464bcb95',
  },
  number: {
    display_value: 'CHG0122595',
    value: 'CHG0122595',
  },
  requested_by: {
    display_value: 'Sally Omer',
    link: 'https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999',
    value: 'b15cf3ebdbe11300f196f3651d961999',
  },
  reason: {
    display_value: null,
    value: '',
  },
};

We can! Do take a look at this in the TypeScript playground.

· 7 min read

ts-loader has just released v9.0.0. This post goes through what this release is all about, and what it took to ship this version. For intrigue, it includes a brief scamper into my mental health along the way. Some upgrades go smoothly - this one had some hiccups. But we'll get into that.


One big pull request

As of v8, ts-loader supported webpack 4 and webpack 5. However, the webpack 5 support was best efforts, and not protected by any automated tests. ts-loader has two test packs:

  1. A comparison test pack that compares transpilation and webpack compilation output with known outputs.
  2. An execution test pack that executes Karma test packs written in TypeScript using ts-loader.

The test packs were tightly coupled to webpack 4 (and in the case of the comparison test pack, that's unavoidable). The mission was to port ts-loader to be built against (and have an automated test pack that ran against) webpack 5.

This ended up being a very big pull request. Work on it started back in February 2021 and we're shipping now in April of 2021. I'd initially expected it would take a couple of days at most. I had underestimated.

A number of people collaborated on this PR, either with code, feedback, testing or even just responding to questions. So I'd like to say thank you to:

What's changed

Let's go through what's different in v9. There's two breaking changes:

  • The minimum webpack version supported is now webpack 5. This simplifies the codebase, which previously had to if/else the various API registrations based on the version of webpack being used.
  • The minimum node version supported is now node 12. Node 10 reaches end of life status at the end of April 2021.

An interesting aspect of migrating to building against webpack 5 was dropping the dependency upon @types/webpack in favour of the types that now ship with webpack 5 itself. This was a mostly great experience; however we discovered some missing pieces.

Most notably, the LoaderContext wasn't strongly typed. LoaderContext is the value of this in the context of a running loader function. So it is probably the most interesting and important type from the perspective of a loader author.

Historically we used our own definition which had been adapted from the one in @types/webpack. I've looked into the possibility of a type being exposed in webpack itself. However, it turns out, it's complicated - with the LoaderContext type being effectively created across two packages. The type is initially created in webpack and then augmented later in loader-runner, prior to being supplied to loaders. You can read more on that here.

For now we've opted to stick with keeping an interface in ts-loader that models what arrives in the loader when executed. We have freshened it up somewhat, to model the webpack 5 world.
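To give a rough sense of the shape involved, here's a trimmed, illustrative sketch of the kind of members such an interface models - this is not ts-loader's actual definition, which is considerably larger:

interface LoaderContextSketch {
  /** absolute path of the file currently being loaded */
  resourcePath: string;
  /** marks the loader as async; returns the callback used to signal completion */
  async(): (err: Error | null, content?: string, sourceMap?: string) => void;
  /** registers a file to be watched for changes */
  addDependency(file: string): void;
  /** emits a file into the output directory */
  emitFile(name: string, content: string): void;
}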

Alongside these changes, a number of dependencies were upgraded.

The hole

By the 19th of February most of the work was done. However, we were experiencing different behaviour between Linux and Windows in our comparison test pack.

As far as I was aware, we were doing all the appropriate work to ensure ts-loader and our test packs worked cross platform. But we were still experiencing problems whenever we ran the test pack on Windows. I'd done no end of tweaking but nothing worked. I couldn't explain it. I couldn't fix it. I was finding that tough to deal with.

I really want to be transparent about the warts and all aspect of open source software development. It is like all other types of software development; sometimes things go wrong and it can be tough to work out why. Right then, I was really quite unhappy. Things weren't working code-wise and I was at a loss to say why. This is not something that I dig.

I also wasn't sleeping amazingly at this point. It was winter and we'd been in lockdown in the UK for three months; as the COVID-19 pandemic ground relentlessly on. I love my family dearly. I really do. With that said, having my children around whilst I attempted to work was remarkably tough. I love those guys but, woah, was it stressful.

I was feeling at a low ebb. And I wasn't sure what to do next. So, feeling tired and pretty fed up, I took a break.

"Anybody down there?"

Time passed. In March, Alexander Akait checked in to see how things were going and volunteered to help. He also suggested what turned out to be the fix: namely, replacing usage of '\' with '/' in the assets supplied back to webpack. But crucially, I implemented this wrong. Observe this commit:

const assetPath = path
  .relative(compilation.compiler.outputPath, outputFile.name)
  // According to @alexander-akait we should always '/' https://github.com/TypeStrong/ts-loader/pull/1251#issuecomment-799606985
  .replace(/\//g, '/');

If you look closely at the replace you'll see that I'm globally replacing '/' with '/' rather than globally replacing '\' with '/'. The wasted time this caused... I could weep.
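For the record, the fix amounts to escaping the backslash in the regular expression - something like this:

const assetPath = path
  .relative(compilation.compiler.outputPath, outputFile.name)
  // globally replace '\' with '/' so webpack receives forward-slashed asset names
  .replace(/\\/g, '/');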

I generally thrashed around for a bit after this. Going in circles, like a six-year-old swimming wearing one armband. Then Tobias kindly volunteered to help. This much I've learned from a career in software: if talented people offer their assistance, grab it with both hands!

I'd been trying to be as "learn in public" as possible about the issues I was facing on the pull request. The idea being to surface the problems in a public forum where others could read and advise. And also to attempt a textual kind of rubber duck debugging.

When Tobias pitched in, I wanted to make it as easy as possible for him to help. So I wrote up a full description of what had changed. What the divergent behaviour in test packs looked like. I shared my speculation for what might be causing the issue (I was wrong by the way). Finally I provided a simple way to get up and running with the broken code. The easier I could make it for others to collaborate on this, I figured, the greater the likelihood of an answer. Tobias got to an answer quickly:

The problem is introduced due to some normalization logic in the test case: see #1273

While the PR fixes the problem, I think the paths should be normalized earlier in the pipeline to make this normalization code unnecessary. Note that asset names should have only / as they are filenames and not paths. Only absolute paths have \.

Tobias had raised a PR which introduced a workaround to resolved things in the test pack. This made me happy. More than that, he also identified that the issue lay in ts-loader itself. This caused me to look again at the changes I'd made, including my replace addition. With fresh eyes, I now realised this was a bug, and fixed it.

I found then that I could revert Tobias' workaround and still have passing tests. Result!

Release details

Now that we've got there; we've shipped. You can get the latest version of ts-loader on npm and you can find the release details on GitHub.

Thanks everyone - I couldn't have done it without your help. 🌻❤️

· 9 min read

Generating clients for APIs is a tremendous way to reduce the amount of work you have to do when you're building a project. Why handwrite that code when it can be auto-generated for you quickly and accurately by a tool like NSwag? To quote the docs:

The NSwag project provides tools to generate OpenAPI specifications from existing ASP.NET Web API controllers and client code from these OpenAPI specifications. The project combines the functionality of Swashbuckle (OpenAPI/Swagger generation) and AutoRest (client generation) in one toolchain.

There are some great posts out there that show you how to generate the clients with NSwag using an nswag.json file directly from a .NET project.

However, what if you want to use NSwag purely for its client generation capabilities? You may have an API written with another language / platform that exposes a Swagger endpoint, which you simply wish to create a client for. How do you do that? Also, if you want to do some special customisation of the clients you're generating, you may find yourself struggling to configure that in nswag.json. In that case, it's possible to hook into NSwag directly with a simple .NET console app.

This post will:

  • Create a .NET API which exposes a Swagger endpoint. (Alternatively, you could use any other Swagger endpoint; for example an Express API.)
  • Create a .NET console app which can create both TypeScript and CSharp clients from a Swagger endpoint.
  • Create a script which, when run, creates a TypeScript client.
  • Consume the API using the generated client in a simple TypeScript application.

You will need both Node.js and the .NET SDK installed.

Create an API

We'll now create an API which exposes a Swagger / Open API endpoint. Whilst we're doing that we'll create a TypeScript React app which we'll use later on. We'll drop to the command line and enter the following commands which use the .NET SDK, node and the create-react-app package:

mkdir src
cd src
npx create-react-app client-app --template typescript
mkdir server-app
cd server-app
dotnet new webapi -o API
cd API
dotnet add package NSwag.AspNetCore

We now have a .NET API with a dependency on NSwag. We'll start to use it by replacing the Startup.cs that's been generated with the following:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace API
{
    public class Startup
    {
        const string ALLOW_DEVELOPMENT_CORS_ORIGINS_POLICY = "AllowDevelopmentSpecificOrigins";
        const string LOCAL_DEVELOPMENT_URL = "http://localhost:3000";

        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllers();

            services.AddCors(options => {
                options.AddPolicy(name: ALLOW_DEVELOPMENT_CORS_ORIGINS_POLICY,
                    builder => {
                        builder.WithOrigins(LOCAL_DEVELOPMENT_URL)
                            .AllowAnyMethod()
                            .AllowAnyHeader()
                            .AllowCredentials();
                    });
            });

            // Register the Swagger services
            services.AddSwaggerDocument();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                app.UseExceptionHandler("/Error");
                app.UseHsts();
                app.UseHttpsRedirection();
            }

            app.UseDefaultFiles();
            app.UseStaticFiles();

            app.UseRouting();

            app.UseAuthorization();

            // Register the Swagger generator and the Swagger UI middlewares
            app.UseOpenApi();
            app.UseSwaggerUi3();

            if (env.IsDevelopment())
                app.UseCors(ALLOW_DEVELOPMENT_CORS_ORIGINS_POLICY);

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllers();
            });
        }
    }
}

The significant changes in the above Startup.cs are:

  1. Exposing a Swagger endpoint with UseOpenApi and UseSwaggerUi3. NSwag will automagically create Swagger endpoints in your application for all your controllers. The .NET template ships with a WeatherForecastController.
  2. Allowing Cross-Origin Requests (CORS) which is useful during development (and will facilitate a demo later).

Back in the root of our project we're going to initialise an npm project. We're going to use this to put in place a number of handy npm scripts that will make our project easier to work with. So we'll npm init and accept all the defaults.

Now we're going to add some dependencies which our scripts will use: npm install cpx cross-env npm-run-all start-server-and-test

We'll also add ourselves some scripts to our package.json:

"scripts": {
"postinstall": "npm run install:client-app && npm run install:server-app",
"install:client-app": "cd src/client-app && npm install",
"install:server-app": "cd src/server-app/API && dotnet restore",
"build": "npm run build:client-app && npm run build:server-app",
"build:client-app": "cd src/client-app && npm run build",
"postbuild:client-app": "cpx \"src/client-app/build/**/*.*\" \"src/server-app/API/wwwroot/\"",
"build:server-app": "cd src/server-app/API && dotnet build --configuration release",
"start": "run-p start:client-app start:server-app",
"start:client-app": "cd src/client-app && npm start",
"start:server-app": "cross-env ASPNETCORE_URLS=http://*:5000 ASPNETCORE_ENVIRONMENT=Development dotnet watch --project src/server-app/API run --no-launch-profile"
}

Let's walk through what the above scripts provide us with:

  • Running npm install in the root of our project will not only install dependencies for our root package.json, thanks to our postinstall, install:client-app and install:server-app scripts it will install the React app and .NET app dependencies as well.
  • Running npm run build will build our client and server apps.
  • Running npm run start will start both our React app and our .NET app. Our React app will be started at http://localhost:3000. Our .NET app will be started at http://localhost:5000 (some environment variables are passed to it with cross-env).

Once npm run start has been run, you will find a Swagger endpoint at http://localhost:5000/swagger:

swagger screenshot

The client generator project

Now we've scaffolded our Swagger-ed API, we want to put together the console app that will generate our typed clients.

cd src/server-app
dotnet new console -o APIClientGenerator
cd APIClientGenerator
dotnet add package NSwag.CodeGeneration.CSharp
dotnet add package NSwag.CodeGeneration.TypeScript
dotnet add package NSwag.Core

We now have a console app with dependencies on the code generation portions of NSwag. Now let's change up Program.cs to make use of this:

using System;
using System.IO;
using System.Threading.Tasks;
using NJsonSchema;
using NJsonSchema.CodeGeneration.TypeScript;
using NJsonSchema.Visitors;
using NSwag;
using NSwag.CodeGeneration.CSharp;
using NSwag.CodeGeneration.TypeScript;

namespace APIClientGenerator
{
    class Program
    {
        static async Task Main(string[] args)
        {
            if (args.Length != 3)
                throw new ArgumentException("Expecting 3 arguments: URL, generatePath, language");

            var url = args[0];
            var generatePath = Path.Combine(Directory.GetCurrentDirectory(), args[1]);
            var language = args[2];

            if (language != "TypeScript" && language != "CSharp")
                throw new ArgumentException("Invalid language parameter; valid values are TypeScript and CSharp");

            if (language == "TypeScript")
                await GenerateTypeScriptClient(url, generatePath);
            else
                await GenerateCSharpClient(url, generatePath);
        }

        async static Task GenerateTypeScriptClient(string url, string generatePath) =>
            await GenerateClient(
                document: await OpenApiDocument.FromUrlAsync(url),
                generatePath: generatePath,
                generateCode: (OpenApiDocument document) =>
                {
                    var settings = new TypeScriptClientGeneratorSettings();

                    settings.TypeScriptGeneratorSettings.TypeStyle = TypeScriptTypeStyle.Interface;
                    settings.TypeScriptGeneratorSettings.TypeScriptVersion = 3.5M;
                    settings.TypeScriptGeneratorSettings.DateTimeType = TypeScriptDateTimeType.String;

                    var generator = new TypeScriptClientGenerator(document, settings);
                    var code = generator.GenerateFile();

                    return code;
                }
            );

        async static Task GenerateCSharpClient(string url, string generatePath) =>
            await GenerateClient(
                document: await OpenApiDocument.FromUrlAsync(url),
                generatePath: generatePath,
                generateCode: (OpenApiDocument document) =>
                {
                    var settings = new CSharpClientGeneratorSettings
                    {
                        UseBaseUrl = false
                    };

                    var generator = new CSharpClientGenerator(document, settings);
                    var code = generator.GenerateFile();
                    return code;
                }
            );

        private async static Task GenerateClient(OpenApiDocument document, string generatePath, Func<OpenApiDocument, string> generateCode)
        {
            Console.WriteLine($"Generating {generatePath}...");

            var code = generateCode(document);

            await System.IO.File.WriteAllTextAsync(generatePath, code);
        }
    }
}

We've created ourselves a simple .NET console application that creates TypeScript and CSharp clients for a given Swagger URL. It expects three arguments:

  • url - the url of the swagger.json file to generate a client for.
  • generatePath - the path where the generated client file should be placed, relative to this project.
  • language - the language of the client to generate; valid values are "TypeScript" and "CSharp".

To create a TypeScript client with it then we'd use the following command:

dotnet run --project src/server-app/APIClientGenerator http://localhost:5000/swagger/v1/swagger.json src/client-app/src/clients.ts TypeScript

However, for this to run successfully, we'll first have to ensure the API is running. It would be great if we had a single command we could run that would:

  • bring up the API
  • generate a client
  • bring down the API

Let's make that.

Building a "make a client" script

In the root of the project we're going to add the following scripts:

"generate-client:server-app": "start-server-and-test generate-client:server-app:serve http-get://localhost:5000/swagger/v1/swagger.json generate-client:server-app:generate",
"generate-client:server-app:serve": "cross-env ASPNETCORE_URLS=http://*:5000 ASPNETCORE_ENVIRONMENT=Development dotnet run --project src/server-app/API --no-launch-profile",
"generate-client:server-app:generate": "dotnet run --project src/server-app/APIClientGenerator http://localhost:5000/swagger/v1/swagger.json src/client-app/src/clients.ts TypeScript",

Let's walk through what's happening here. Running npm run generate-client:server-app will:

  • Use the start-server-and-test package to spin up our server-app by running the generate-client:server-app:serve script.
  • start-server-and-test waits for the Swagger endpoint to start responding to requests. When it does start responding, start-server-and-test runs the generate-client:server-app:generate script, which runs our APIClientGenerator console app and provides it with the URL where our swagger can be found, the path of the file to generate and the language, "TypeScript".

If you were wanting to generate a C# client (say if you were writing a Blazor app) then you could change the generate-client:server-app:generate script as follows:

"generate-client:server-app:generate": "dotnet run --project src/server-app/APIClientGenerator http://localhost:5000/swagger/v1/swagger.json clients.cs CSharp",

Consume our generated API client

Let's run the npm run generate-client:server-app command. It creates a clients.ts file which nestles nicely inside our client-app. We're going to exercise that in a moment. First of all, let's enable proxying from our client-app to our server-app following the instructions in the Create React App docs and adding the following to our client-app/package.json:

"proxy": "http://localhost:5000"

Now let's start our apps with npm run start. We'll then replace the contents of App.tsx with:

import React from "react";
import "./App.css";
import { WeatherForecast, WeatherForecastClient } from "./clients";

function App() {
  const [weather, setWeather] = React.useState<WeatherForecast[] | null>();
  React.useEffect(() => {
    async function loadWeather() {
      const weatherClient = new WeatherForecastClient(/* baseUrl */ "");
      const forecast = await weatherClient.get();
      setWeather(forecast);
    }
    loadWeather();
  }, [setWeather]);

  return (
    <div className="App">
      <header className="App-header">
        {weather ? (
          <table>
            <thead>
              <tr>
                <th>Date</th>
                <th>Summary</th>
                <th>Centigrade</th>
                <th>Fahrenheit</th>
              </tr>
            </thead>
            <tbody>
              {weather.map(({ date, summary, temperatureC, temperatureF }) => (
                <tr key={date}>
                  <td>{new Date(date).toLocaleDateString()}</td>
                  <td>{summary}</td>
                  <td>{temperatureC}</td>
                  <td>{temperatureF}</td>
                </tr>
              ))}
            </tbody>
          </table>
        ) : (
          <p>Loading weather...</p>
        )}
      </header>
    </div>
  );
}

export default App;

Inside the React.useEffect above you can see we create a new instance of the auto-generated WeatherForecastClient. We then call weatherClient.get() which sends the GET request to the server to acquire the data and provides it in a strongly typed fashion (get() returns an array of WeatherForecast). This is then displayed on the page like so:

load data from server

As you can see, we're loading data from the server using our auto-generated client. We're reducing the amount of code we have to write and we're reducing the likelihood of errors.

This post was originally posted on LogRocket.

· 4 min read

Create React App is a fantastic way to get up and running building a web app with React. It also supports using TypeScript with React. Simply entering the following:

npx create-react-app my-app --template typescript

Will give you a great TypeScript React project to get building with. There are two parts to the TypeScript support:

  1. Transpilation, AKA "turning our TypeScript into JavaScript". Ever since Babel 7 launched, Babel has enjoyed great support for transpiling TypeScript into JavaScript. Create React App leverages this, using the Babel webpack loader, babel-loader, for transpilation.
  2. Type checking, AKA "seeing if our code compiles". Create React App uses the fork-ts-checker-webpack-plugin to run the TypeScript type checker on a separate process and report any issues that may exist. (There's a small sketch illustrating this split just after this list.)
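
The split matters: transpilation and type checking are independent operations. Here's a minimal sketch (not taken from the CRA codebase) of what that means in practice; Babel simply strips the annotations and emits JavaScript regardless, and it's the separate type checking process that reports the problem:

// Babel-style transpilation happily emits JavaScript for this...
const answer: number = 'forty-two';
// ...it's the type checker that complains:
// error TS2322: Type 'string' is not assignable to type 'number'.
console.log(answer);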

This is a great setup and works very well for the majority of use cases. However, what if we'd like to tweak this setup? What if we'd like to swap out babel-loader for ts-loader for compilation purposes? Can we do that?

Yes you can! And that's what we're going to do using a tool named CRACO - the pithy shortening of "Create React App Configuration Override". This is a tool that allows us to:

Get all the benefits of create-react-app and customization without using 'eject' by adding a single craco.config.js file at the root of your application and customize your eslint, babel, postcss configurations and many more.

babel-loader ts-loader

So let's do the swap. First of all we're going to need to add CRACO and ts-loader to our project:

npm install @craco/craco ts-loader --save-dev

Then we'll swap over our various scripts in our package.json to use CRACO:

"start": "craco start",
"build": "craco build",
"test": "craco test",

Finally we'll add a craco.config.js file to the root of our project. This is where we swap out babel-loader for ts-loader:

const {
  addAfterLoader,
  removeLoaders,
  loaderByName,
  getLoaders,
  throwUnexpectedConfigError,
} = require('@craco/craco');

const throwError = (message) =>
  throwUnexpectedConfigError({
    packageName: 'craco',
    githubRepo: 'gsoft-inc/craco',
    message,
    githubIssueQuery: 'webpack',
  });

module.exports = {
  webpack: {
    configure: (webpackConfig, { paths }) => {
      const { hasFoundAny, matches } = getLoaders(
        webpackConfig,
        loaderByName('babel-loader')
      );
      if (!hasFoundAny) throwError('failed to find babel-loader');

      console.log('removing babel-loader');
      const { hasRemovedAny, removedCount } = removeLoaders(
        webpackConfig,
        loaderByName('babel-loader')
      );
      if (!hasRemovedAny) throwError('no babel-loader to remove');
      if (removedCount !== 2)
        throwError('had expected to remove 2 babel loader instances');

      console.log('adding ts-loader');

      const tsLoader = {
        test: /\.(js|mjs|jsx|ts|tsx)$/,
        include: paths.appSrc,
        loader: require.resolve('ts-loader'),
        options: { transpileOnly: true },
      };

      const { isAdded: tsLoaderIsAdded } = addAfterLoader(
        webpackConfig,
        loaderByName('url-loader'),
        tsLoader
      );
      if (!tsLoaderIsAdded) throwError('failed to add ts-loader');
      console.log('added ts-loader');

      console.log('adding non-application JS babel-loader back');
      const { isAdded: babelLoaderIsAdded } = addAfterLoader(
        webpackConfig,
        loaderByName('ts-loader'),
        matches[1].loader // babel-loader
      );
      if (!babelLoaderIsAdded)
        throwError('failed to add back babel-loader for non-application JS');
      console.log('added non-application JS babel-loader back');

      return webpackConfig;
    },
  },
};

So what's happening here? The script looks for babel-loader usages in the default Create React App config. There will be two: one for TypeScript / JavaScript application code (we want to replace this) and one for non-application JavaScript code. I'm actually not too clear what non-application JavaScript code there is or can be, but we'll leave it in place; it may be important.

You cannot remove a single loader using CRACO, so instead we'll remove both and add back the non-application JavaScript babel-loader. We'll also add ts-loader with the transpileOnly: true option set (to ensure ts-loader doesn't do type checking).

Now the next time we run npm start we'll have Create React App running using ts-loader and without having ejected. If we want to adjust the options of ts-loader further then we're completely at liberty to do so, adjusting the options in our craco.config.js.

If you value debugging your original source code rather than the transpiled JavaScript, remember to set the "sourceMap": true property in your tsconfig.json.

Finally, if we wanted to go even further, we could remove the fork-ts-checker-webpack-plugin and move ts-loader to use transpileOnly: false so it performs type checking also. However, it's generally better to stay with the setup this post outlines, for performance reasons.
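
If you fancied trying that anyway, a rough sketch might look like the following, inside the configure function above. (This assumes CRACO's removePlugins and pluginByName helpers, and that the plugin registers itself under the name ForkTsCheckerWebpackPlugin; treat it as a starting point rather than a tested recipe.)

const { removePlugins, pluginByName } = require('@craco/craco');

// drop the separate type checking process...
removePlugins(webpackConfig, pluginByName('ForkTsCheckerWebpackPlugin'));

// ...and when constructing tsLoader, have it type check as well as transpile:
// options: { transpileOnly: false },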

· 4 min read

Never neglect the possibilities of a code review. There are times when you raise a PR and all you want is for everyone to hit approve so you can merge, merge and ship, ship! This can be a missed opportunity. For as much as I'd like to imagine my code is perfect, it's patently not. There's always scope for improvement.

"What's this?"

This week afforded me that opportunity. I was walking through a somewhat complicated PR on a call and someone said "what's this?". They'd spotted an expression much like this in my code:

const myValues = [...new Set(allTheValuesSupplied)];

What is that? Well, it's a number of things:

  1. It's a way to get the unique values in a collection.
  2. It's a pro-tip and a coding BMX trick.

What do I mean? Well, this is indeed a technique for getting the unique values in a collection. But it relies upon you knowing a bunch of things:

  • Set contains unique values. If you add multiple identical values, only a single value will be stored.
  • The Set constructor takes iterable objects. This means we can new up a Set with an array that we want to "unique-ify" and we will have a Set that contains those unique values.
  • If you want to go on to do filtering / mapping etc on your unique values, you'll need to get them out of the Set. This is because (regrettably) ECMAScript iterables don't implicitly support these operations, and neither are methods such as these part of the Set API. The easiest way to do that is to spread into a new array which you can then operate upon. (There's a small sketch of all three points just after this list.)
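
To make those facts concrete, here's a tiny sketch:

const set = new Set([1, 1, 2]); // duplicates collapse: the Set holds just 1 and 2
// set.map((x) => x * 2);       // not possible: Set has no map / filter methods
const doubled = [...set].map((x) => x * 2); // spread into an array first: [2, 4]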

I have this knowledge. Lots of people have this knowledge. But whilst this may be the case, using this technique goes against what I would generally consider to be a good tenet of programming: comprehensibility. When you read this code above, it doesn't immediately tell you what it's doing. This is a strike against it.

Further to that, it's "noisy". Even if the reader does have this knowledge, as they digest the code, they have to mentally unravel it. "Oh it's a Set, we're passing in values, then spreading it out, it's probably intended to get the unique values.... Right, cool, cool.... Continue!"

Margarida Pereira explicitly called this out and I found myself agreeing. Let's make a uniq function!

uniq v1

I wrote a very simple uniq function which looked like this:

/**
 * Return the unique values found in the passed iterable
 */
function uniq<TElement>(iterableToGetUniqueValuesOf: Iterable<TElement>) {
  return [...new Set(iterableToGetUniqueValuesOf)];
}

Usage of this was simple:

uniq([1, 1, 1, 3, 1, 1, 2]); // produces [1, 3, 2]
uniq(['John', 'Guida', 'Ollie', 'Divya', 'John']); // produces ["John", "Guida", "Ollie", "Divya"]

And I thought this was tremendous. I committed and pushed. I assumed there was no more to be done. Guida (Margarida) then made this very helpful comment:

BTW, I found a big bold warning that new Set() compares objects by reference (unless they're primitives) so it might be worth adding a comment to warn people that uniq/distinct compares objects by reference: https://codeburst.io/javascript-array-distinct-5edc93501dc4

She was right! If a caller was to, say, pass a collection of objects to uniq then they'd end up highly disappointed. Consider:

uniq([{ name: 'John' }, { name: 'John' }]); // produces [{ name: "John" }, { name: "John" }]

We can do better!

uniq v2

I like compilers shouting at me. Or more accurately, I like compilers telling me when something isn't valid / supported / correct. I wanted uniq to mirror the behaviour of Set - to only support primitives such as string, number etc. So I made a new version of uniq that hardened up the generic constraints:

/**
 * Return the unique values found in the passed iterable
 */
function uniq<TElement extends string | number | bigint | boolean | symbol>(
  iterableToGetUniqueValuesOf: Iterable<TElement>
) {
  return [...new Set(iterableToGetUniqueValuesOf)];
}

With this in place, the compiler started shouting in the most helpful way. When I re-attempted [{ name: "John" }, { name: "John" }] the compiler hit me with:

Argument of type '{ name: string; }[]' is not assignable to parameter of type 'Iterable<string | number | bigint | boolean | symbol>'.

Take a look.

This is good. This is descriptive code that only allows legitimate inputs. It should lead to less confusion and a reduced likelihood of issues in Production. It's also a nice example of how code review can result in demonstrably better code. Thanks Guida!

· 10 min read

JavaScript is famously single threaded. However, if you're developing for the web, you may well know that this is not quite accurate. There are Web Workers:

A worker is an object created using a constructor (e.g. Worker()) that runs a named JavaScript file — this file contains the code that will run in the worker thread; workers run in another global context that is different from the current window.

Given that there is a way to use other threads for background processing, why doesn't this happen all the time? Well there's a number of reasons; not the least of which is the ceremony involved in interacting with Web Workers. Consider the following example that illustrates moving a calculation into a worker:

// main.js
function add2NumbersUsingWebWorker() {
  const myWorker = new Worker('worker.js');

  myWorker.postMessage([42, 7]);
  console.log('Message posted to worker');

  myWorker.onmessage = function (e) {
    console.log('Message received from worker', e.data);
  };
}

add2NumbersUsingWebWorker();

// worker.js
onmessage = function (e) {
  console.log('Worker: Message received from main script');
  const result = e.data[0] + e.data[1];
  if (isNaN(result)) {
    postMessage('Please write two numbers');
  } else {
    const workerResult = 'Result: ' + result;
    console.log('Worker: Posting message back to main script');
    postMessage(workerResult);
  }
};

This is not simple. It's hard to understand what's happening. Also, this approach only supports a single method call. I'd much rather write something that looked more like this:

// main.js
function add2NumbersUsingWebWorker() {
  const myWorker = new Worker('worker.js');

  const total = myWorker.add2Numbers(42, 7);
  console.log('Message received from worker', total);
}

add2NumbersUsingWebWorker();

// worker.js
export function add2Numbers(firstNumber, secondNumber) {
  const result = firstNumber + secondNumber;
  return isNaN(result) ? 'Please write two numbers' : 'Result: ' + result;
}

There's a way to do this using a library made by Google called comlink. This post will demonstrate how we can use this. We'll use TypeScript and webpack. We'll also examine how to integrate this approach into a React app.

A use case for a Web Worker

Let's make ourselves a TypeScript web app. We're going to use create-react-app for this:

npx create-react-app webworkers-comlink-typescript-react --template typescript

Create a takeALongTimeToDoSomething.ts file alongside index.tsx:

export function takeALongTimeToDoSomething() {
  console.log('Start our long running job...');
  const seconds = 5;
  const start = new Date().getTime();
  const delay = seconds * 1000;

  while (true) {
    if (new Date().getTime() - start > delay) {
      break;
    }
  }
  console.log('Finished our long running job');
}

To index.tsx add this code:

import { takeALongTimeToDoSomething } from './takeALongTimeToDoSomething';

// ...

console.log('Do something');
takeALongTimeToDoSomething();
console.log('Do another thing');

When our application runs we see this behaviour:

The app starts and logs Do something and Start our long running job... to the console. It then blocks the UI until the takeALongTimeToDoSomething function has completed running. During this time the screen is empty and unresponsive. This is a poor user experience.

To start using comlink we're going to need to eject our create-react-app application. The way create-react-app works is by giving you a setup that handles a high percentage of the needs for a typical web app. When you encounter an unsupported use case, you can run the yarn eject command to get direct access to the configuration of your setup.

Web Workers are not that commonly used in day to day development at present. Consequently there isn't yet a "plug'n'play" solution for workers supported by create-react-app. There's a number of potential ways to support this use case and you can track the various discussions happening against create-react-app that cover this. For now, let's eject with:

yarn eject

Then let's install the packages we're going to be using:

  • worker-plugin - this webpack plugin automatically compiles modules loaded in Web Workers
  • comlink - this library provides the RPC-like experience that we want from our workers

yarn add comlink worker-plugin

We now need to tweak our webpack.config.js to use the worker-plugin:

const WorkerPlugin = require('worker-plugin');

// ....

plugins: [
  new WorkerPlugin(),

  // ....

Do note that there's a number of plugins statements in the webpack.config.js. You want the top level one; look out for the new HtmlWebpackPlugin statement and place your new WorkerPlugin(), before that.

Workerize our slow process

Now we're ready to take our long running process and move it into a worker. Inside the src folder, create a new folder called my-first-worker. Our worker is going to live in here. Into this folder we're going to add a tsconfig.json file:

{
  "compilerOptions": {
    "strict": true,
    "target": "esnext",
    "module": "esnext",
    "lib": [
      "webworker",
      "esnext"
    ],
    "moduleResolution": "node",
    "noUnusedLocals": true,
    "sourceMap": true,
    "allowJs": false,
    "baseUrl": "."
  }
}

This file exists to tell TypeScript that this is a Web Worker. Do note the "webworker" entry in "lib", which does exactly that.

Alongside the tsconfig.json file, let's create an index.ts file. This will be our worker:

import { expose } from 'comlink';
import { takeALongTimeToDoSomething } from '../takeALongTimeToDoSomething';

const exports = {
  takeALongTimeToDoSomething,
};
export type MyFirstWorker = typeof exports;

expose(exports);

There's a number of things happening in our small worker file. Let's go through this statement by statement:

import { expose } from 'comlink';

Here we're importing the expose method from comlink. Comlink's goal is to make exposed values from one thread available in the other. The expose method can be viewed as the comlink equivalent of export. It is used to export the RPC style signature of our worker. We'll see its use later.

import { takeALongTimeToDoSomething } from '../takeALongTimeToDoSomething';

Here we're going to import our takeALongTimeToDoSomething function that we wrote previously, so we can use it in our worker.

const exports = {
  takeALongTimeToDoSomething,
};

Here we're creating the public facing API that we're going to expose.

export type MyFirstWorker = typeof exports;

We're going to want our worker to be strongly typed. This line creates a type called MyFirstWorker which is derived from our exports object literal.

expose(exports);

Finally we expose the exports using comlink. We're done; that's our worker finished. Now let's consume it. Let's change our index.tsx file to use it. Replace our import of takeALongTimeToDoSomething:

import { takeALongTimeToDoSomething } from './takeALongTimeToDoSomething';

With an import of wrap from comlink that creates a local takeALongTimeToDoSomething function that wraps interacting with our worker:

import { wrap } from 'comlink';

function takeALongTimeToDoSomething() {
const worker = new Worker('./my-first-worker', {
name: 'my-first-worker',
type: 'module',
});
const workerApi = wrap<import('./my-first-worker').MyFirstWorker>(worker);
workerApi.takeALongTimeToDoSomething();
}

Now we're ready to demo our application using our function offloaded into a Web Worker. It now behaves like this:

There's a number of exciting things to note here:

  1. The application is now non-blocking. Our long running function is now not preventing the UI from updating
  2. The functionality is lazily loaded via a my-first-worker.chunk.worker.js that has been created by the worker-plugin and comlink.

Using Web Workers in React

The example we've shown so far demonstrates how you could use Web Workers and why you might want to. However, it's a far cry from a real world use case. Let's take the next step and plug our Web Worker usage into our React application. What would that look like? Let's find out.

We'll return index.tsx back to its initial state. Then we'll make a simple adder function that takes some values and returns their total. To our takeALongTimeToDoSomething.ts module let's add:

export function takeALongTimeToAddTwoNumbers(number1: number, number2: number) {
  console.log('Start to add...');
  const seconds = 5;
  const start = new Date().getTime();
  const delay = seconds * 1000;
  while (true) {
    if (new Date().getTime() - start > delay) {
      break;
    }
  }
  const total = number1 + number2;
  console.log('Finished adding');
  return total;
}

Let's start using our long running calculator in a React component. We'll update our App.tsx to use this function and create a simple adder component:

import React, { useState } from 'react';
import './App.css';
import { takeALongTimeToAddTwoNumbers } from './takeALongTimeToDoSomething';

const App: React.FC = () => {
  const [number1, setNumber1] = useState(1);
  const [number2, setNumber2] = useState(2);

  const total = takeALongTimeToAddTwoNumbers(number1, number2);

  return (
    <div className="App">
      <h1>Web Workers in action!</h1>

      <div>
        <label>Number to add: </label>
        <input
          type="number"
          onChange={(e) => setNumber1(parseInt(e.target.value))}
          value={number1}
        />
      </div>
      <div>
        <label>Number to add: </label>
        <input
          type="number"
          onChange={(e) => setNumber2(parseInt(e.target.value))}
          value={number2}
        />
      </div>
      <h2>Total: {total}</h2>
    </div>
  );
};

export default App;

When you try it out you'll notice that entering a single digit locks the UI for 5 seconds whilst it adds the numbers. From the moment the cursor stops blinking to the moment the screen updates the UI is non-responsive:

So far, so classic. Let's Web Workerify this!

We'll update our my-first-worker/index.ts to import this new function:

import { expose } from 'comlink';
import {
  takeALongTimeToDoSomething,
  takeALongTimeToAddTwoNumbers,
} from '../takeALongTimeToDoSomething';

const exports = {
  takeALongTimeToDoSomething,
  takeALongTimeToAddTwoNumbers,
};
export type MyFirstWorker = typeof exports;

expose(exports);

Alongside our App.tsx file let's create an App.hooks.ts file.

import { wrap, releaseProxy } from 'comlink';
import { useEffect, useState, useMemo } from 'react';

/**
 * Our hook that performs the calculation on the worker
 */
export function useTakeALongTimeToAddTwoNumbers(
  number1: number,
  number2: number
) {
  // We'll want to expose a wrapping object so we know when a calculation is in progress
  const [data, setData] = useState({
    isCalculating: false,
    total: undefined as number | undefined,
  });

  // acquire our worker
  const { workerApi } = useWorker();

  useEffect(() => {
    // We're starting the calculation here
    setData({ isCalculating: true, total: undefined });

    workerApi
      .takeALongTimeToAddTwoNumbers(number1, number2)
      .then((total) => setData({ isCalculating: false, total })); // We receive the result here
  }, [workerApi, setData, number1, number2]);

  return data;
}

function useWorker() {
  // memoise a worker so it can be reused; create one worker up front
  // and then reuse it subsequently; no creating new workers each time
  const workerApiAndCleanup = useMemo(() => makeWorkerApiAndCleanup(), []);

  useEffect(() => {
    const { cleanup } = workerApiAndCleanup;

    // cleanup our worker when we're done with it
    return () => {
      cleanup();
    };
  }, [workerApiAndCleanup]);

  return workerApiAndCleanup;
}

/**
 * Creates a worker, a cleanup function and returns it
 */
function makeWorkerApiAndCleanup() {
  // Here we create our worker and wrap it with comlink so we can interact with it
  const worker = new Worker('./my-first-worker', {
    name: 'my-first-worker',
    type: 'module',
  });
  const workerApi = wrap<import('./my-first-worker').MyFirstWorker>(worker);

  // A cleanup function that releases the comlink proxy and terminates the worker
  const cleanup = () => {
    workerApi[releaseProxy]();
    worker.terminate();
  };

  const workerApiAndCleanup = { workerApi, cleanup };

  return workerApiAndCleanup;
}

The useWorker and makeWorkerApiAndCleanup functions make up the basis of a shareable worker hooks approach. It would take very little work to parameterise them so this could be used elsewhere. That's outside the scope of this post but would be extremely straightforward to accomplish.
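
To give a flavour of what that parameterisation might look like, here's a hypothetical (untested) sketch. The worker factory is passed in by the caller so that worker-plugin can still statically spot the new Worker(...) call at each call site; the useWorkerApi name and shape are mine, not comlink's:

import { wrap, releaseProxy } from 'comlink';
import { useEffect, useMemo } from 'react';

// Hypothetical generic hook: T is the exposed worker API type.
// NB createWorker must be a stable reference (e.g. declared at module
// scope); otherwise a fresh worker gets created on every render.
export function useWorkerApi<T>(createWorker: () => Worker) {
  const workerApiAndCleanup = useMemo(() => {
    const worker = createWorker();
    const workerApi = wrap<T>(worker);
    const cleanup = () => {
      workerApi[releaseProxy]();
      worker.terminate();
    };
    return { workerApi, cleanup };
  }, [createWorker]);

  // terminate the worker and release the comlink proxy when we're done
  useEffect(() => workerApiAndCleanup.cleanup, [workerApiAndCleanup]);

  return workerApiAndCleanup.workerApi;
}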

Time to test! We'll change our App.tsx to use the new useTakeALongTimeToAddTwoNumbers hook:

import React, { useState } from 'react';
import './App.css';
import { useTakeALongTimeToAddTwoNumbers } from './App.hooks';

const App: React.FC = () => {
  const [number1, setNumber1] = useState(1);
  const [number2, setNumber2] = useState(2);

  const total = useTakeALongTimeToAddTwoNumbers(number1, number2);

  return (
    <div className="App">
      <h1>Web Workers in action!</h1>

      <div>
        <label>Number to add: </label>
        <input
          type="number"
          onChange={(e) => setNumber1(parseInt(e.target.value))}
          value={number1}
        />
      </div>
      <div>
        <label>Number to add: </label>
        <input
          type="number"
          onChange={(e) => setNumber2(parseInt(e.target.value))}
          value={number2}
        />
      </div>
      <h2>
        Total:{' '}
        {total.isCalculating ? (
          <em>Calculating...</em>
        ) : (
          <strong>{total.total}</strong>
        )}
      </h2>
    </div>
  );
};

export default App;

Now our calculation takes place off the main thread and the UI is no longer blocked!

This post was originally published on LogRocket.

The source code for this project can be found here.

· 48 min read

I'd like to tell you a story. It's the tale of the ecosystem that grew up around a language: TypeScript. TypeScript is, for want of a better description, JavaScript after a trip to Savile Row. Essentially the same language, but a little more together, a little less wild west. JS with a decent haircut and a new suit. These days, the world seems to be written in TypeScript. And when you pause to consider just how young the language is, well, that's kind of amazing.

Who could have predicted it would end up like this? When I was a boy I remember coming down the stairs in my childhood home. Shuffling to the edge of each step on my bottom before thumping down to the one beneath. When I look at those same stairs now they're so small. I barely notice the difference between one step and the next. But back then each step seemed giant, each one so far apart. Definitely Typed had any number of steps in its evolution. They all seemed so significant then; whereas now they're just a memory. Let's remember together…

A title image that reads "Definitely Typed: The Movie"

Prolog(ue)

When it was first unveiled to the world by Anders Hejlsberg back in 2012, there was nothing to suggest TypeScript was going to be seismic in its effects. The language brought two important things to the table. First of all, the ability to write JavaScript with optional static typing (imagine this as "belts and braces" for JS). The second feature was interoperability with existing JavaScript.

The reason TypeScript has the traction that it does, is a consequence of the latter feature. The JavaScript ecosystem was already a roaring success by 2012. Many useful libraries were out there, authored in vanilla JavaScript. jQuery, Backbone, Knockout were all going concerns. People were building things.

Wisely, having TypeScript able to work with existing JavaScript libraries was a goal of the language right from the off. This made sense; otherwise it would have been like unveiling Netflix to the world whilst saying "sorry you can't use a television set to watch this". Remember, JS was great as is - people wanted static typing so they could be more productive and so they could sleep better at night. ("Oh wait, did I write that unit test to check all the properties? Dammit, it's 3am!") If TypeScript had hove onto the scene requiring that everything was written in TypeScript then I would not be writing this. It didn't.

Interoperability was made possible by the concept of "type definitions". Analogous to header files in C, these are TypeScript files with a .d.ts suffix that tell the compiler about an existing JavaScript library which is in scope. This means you can write TypeScript and use jQuery or [insert your favourite library name here]. Even though they are not written in TypeScript.
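
To make that concrete, a declaration file contains declarations without implementations. A contrived sketch (mylib.d.ts here is hypothetical, not a real library) might look like this:

// mylib.d.ts - describes the shape of an existing JavaScript library
// so the compiler can type check code that uses it
declare function greet(name: string): string;
declare const version: string;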

At the time of the initial TypeScript announcement (v0.8.1) there was no concept of a repository of type definitions. I mean, there was every chance that TypeScript wasn't going to be a big deal. Success wasn't guaranteed. But it happened. You're reading this in a world where Definitely Typed is one of the most popular repos on GitHub and where type definitions from it are published out to npm for consumption by developers greedy for static types. A world where the TypeScript team has pretty much achieved its goal of "types on every desk".

I want to tell you the story of the history of type definitions in the TypeScript world. I'm pretty well placed to do this since I've been involved since the early days. Others involved have been kind enough to give me their time and tell me their stories. There's likely to be errors and omissions, and that's on me. It's an amazing tale though; I'm fortunate to get to tell it.

The First Type Definition

I was hanging out for something like TypeScript. I'd been busily developing rich client applications in JS and, whilst I loved the language, I was dearly missing static typing. All the things broke all of the time and I wanted help. I wanted a compiler to take me by the hand and say "hey John, you just did a silly thing. Don't do it John; you'll only be filled with regret...". The TypeScript team wrote that compiler.

When TypeScript was announced, it was important that the world could see that interop with JS was a first class citizen. Accordingly, a jQuery type definition was demonstrated as well. At the time, jQuery was the number one JavaScript library downloaded on the internet. So naturally it was the obvious choice for a demo. The type definition was fairly rough and ready but it worked. You can see Anders Hejlsberg showing off the jQuery definition 45 minutes into this presentation introducing TypeScript.

Consumption was straightforward, if perhaps quirky. You took the jquery.d.ts file, copied it into your project location. Back then, to let the compiler know that a JS library had come to the party you had to use a kind of comment pragma in the header of your TypeScript files. For example: /// <reference path="jquery/jquery.d.ts" />. This let TypeScript know that the type definition living at that path was relevant for the current script and it should scope it in.

There was no discussion of “how do we type the world?”. Even if they wanted to, the TypeScript team didn't really have the resources at that point to support this. They'd got as far as they had on the person power of four or five developers and some testers as well. There was a problem clearly waiting to be solved. As luck would have it, in Bulgaria a man named Boris Yankov had been watching the TypeScript announcement.

Boris Yankov

photograph of Boris Yankov looking mean, moody and magnificent

Boris Yankov was a handsome thirty year old man, living in the historic Bulgarian city of Plovdiv. He was swarthy with dark hair; like Ben Affleck if he had been hanging out in Eastern Europe for a couple of years.

Boris was a backend developer who'd found himself doing more and more frontend. More JavaScript. He was accustomed to C# on the backend with static typing a-gogo. From his point of view JS was brittle. It was super easy to break things and have no idea until runtime that you'd done so. It seemed so backward. He was ready for something TypeScript shaped.

"What people forget is how different it was back then. Microsoft made this announcement, but probably most of the people that were listening were part of the MS ecosystem. I certainly was. Remember, back then if you had a Mac or did Linux you probably didn't think about MS too much."

Boris thought TypeScript just seemed like this interesting and weird thing that Microsoft were doing. He was excited by types; he was missing them and there was a real need there. A problem to solve. There were already people trying to address this. But the attempts so far had been underwhelming. Boris had encountered Google Closure Compiler: a tool built by Google which, amongst other things, introduces some measure of type safety to JavaScript by reading annotations in JSDoc format. Boris viewed GCC as a tentative first step. One which led the way for things like TypeScript and Flow to follow.

The other aspect of TypeScript that excited Boris was transpilation. Transpilation is the term coined to describe what TypeScript does when it comes to emit output. It takes in TypeScript and pumps out JavaScript. The question is: what sort of JavaScript? One choice the TypeScript team could have made was to have the compiler simply strip out types from the codebase. If it worked that way then you'd get out the JavaScript equivalent of the TypeScript you wrote. You wrote a class? TypeScript emits a class; just one shorn of types and interfaces.

The TypeScript team made a different choice. They wrote the compiler such that the user could write ES6 style TypeScript syntax and have the TypeScript compiler transpile that down to ES5 or even ES3 syntax. This made TypeScript a much more interesting proposition than it already was, for a couple of reasons.

ES6 had been in the works for some time at this point. The release was shaping up to be the biggest incremental change to JavaScript that had so far happened. Or that would ever happen. Prior to this, JavaScript had experienced no small amount of tension and disagreement as it sought to evolve and develop. These played out in the form of the abandoned fourth edition of the language. There were arguments, harsh words, public disagreements and finally a failure to ship ECMAScript 4. In an alternate universe this was the end of the road for JavaScript. However, in our universe JavaScript got another throw of the dice.

It's telling that ES5 was for a long time known also as ES3.1; reflecting that it was initially planned to be the stepping stone between ES3 and ES4. In reality it ended up being the stepping stone between ES3 and ES6. As it turned out, it was a vital one too; it allowed TC39 to recalibrate after a very public shelving of plans.

The band was back together (albeit with a new rhythm section) and ES6 was going to be massive. JavaScript was going to get new constructs such as Map, Set, new scoping possibilities with let and const, Promises which paved the way for new kinds of async programming, the contentious classes…. And who can forget where they were when they first heard about "fat" arrow functions?

People salivated at the idea of it all. Such new shiny toys! But how could we use them? Whilst all this new hotness was on the way, where could you actually run your new style code? Complete browser implementations of ES6 wouldn't start to materialise until 2018. Given the slowness of people to upgrade and the need to support the lowest common denominator of browser this could have meant that all the excitement was trapped in a never tomorrow situation.

Back to TypeScript. The team had a solution for this issue. In their wisdom, the TypeScript team allowed us to write ES6 TypeScript and the compiler could (with some limitations) transpile it down to ES3 JavaScript. The audacity of this was immense. The TypeScript team brought the future back to the past. What's more, they made it work in Internet Explorer 6. Now that's rock'n'roll. It's nothing short of miraculous!

The significance of transpilation to TypeScript cannot be overstated.

You might be thinking to yourself, "that's just Babel, right?" Right. It's just that Babel didn't exist then. 6to5 was still an idea waiting for Sebastian McKenzie to think of. Even if you were kind of "meh" on types, the attraction of using a tool which allowed you to use new JavaScript constructs without breaking your customers was a significant draw. People may have come for types, but once they'd experienced the joy of a lexically bound this in a fat arrow function they were never going back.
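
If you've never had that joy, here's a small sketch of it. Inside an arrow function, this is the this of the enclosing scope, so the old var self = this; dance disappears:

class Counter {
  count = 0;
  start() {
    // the arrow function captures `this` lexically, so `this.count`
    // refers to the Counter instance even though setInterval is the
    // one invoking the callback
    setInterval(() => {
      this.count++;
    }, 1000);
  }
}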

Success has many parents. TypeScript is a successful project. One reason for this is that it's an excellent product that fills a definite need. Another reason is one that can't be banked upon; timing. TypeScript has enjoyed phenomenal timing. Appearing just when JavaScript was going off like a rocket and having the twin benefits of types and future JS today when nothing else offered anything close, that's perfect timing. It got people's curiosity. Now it got Boris's attention.

Definitely Typed

The Definitely Typed logo

Boris had been feeling unproductive. He would build applications in JS and watch them unaccountably break as he made simple tweaks to them. He was constantly changing things, breaking them, fixing them and hoping he hadn't broken something else along the way. It was exhausting. He saw the promise in what TypeScript was offering and decided to give it a go.

It was great. He fired up Visual Studio and converted a .js file to end with the mystical TypeScript suffix of .ts. In front of his eyes, red squiggly lines started to appear here and there in his code. As he looked at the visual noise he could see this was TypeScript delivering on its promise. It was finding the bugs he hadn't spotted. These migrations were also addictive; the more information you could feed the compiler, the more problems it found. Boris felt it was time to start writing type definitions, whatever they were.

Boris quickly learned how to write a type definition and set to work. Most libraries weren't well documented and so he found himself reading the source code of libraries he used in order that he could write the definitions. At first, the definitions were just files dropped in his ASP.NET MVC projects that he copied around. That wasn't going to scale; there needed to be somewhere he could go to grab type definitions when he needed them. And so on October 5th 2012 he created a repository under his profile at GitHub called "DefinitelyTyped": https://github.com/DefinitelyTyped/DefinitelyTyped/commit/647369a322be470d84f8d226e297267a7d1a0796

Boris took his type definitions and put them into this repository. Were you ever curious what the first definition added was? Close your eyes and think... You might imagine it was the (then number one JavaScript library on the web) jQuery. In fact it was Modernizr. Then Underscore followed, and then jQuery. Take a look:

https://github.com/DefinitelyTyped/DefinitelyTyped/commits?after=4a4cf23ff4301835a45bb138bbb62bf5f0759255+699&author=borisyankov

A screenshot of the initial commits to Definitely Typed on GitHub

It wasn't complicated; it was just a folder with subfolders underneath; each folder representing a project. One for jQuery, one for jQuery UI, one for Knockout.... You get the idea. It's not so different now.

Boris had laid simple but dependable foundations. Definitely Typed had been born.

How Do You Test a Type Definition?

Boris was careful too. Right from the first type definition he added tests alongside them. Now tests for a type definition were a conundrum. How do you write a test for interfaces that don't exist in the runtime environment? Code that is expunged as part of the compilation process. Well, the answer Boris came to was this: a compilation test.

Someone once said: compilation is the first unit test... But it's a doozy. They're right. The value you get from compilation, from a computer checking the assertions your code makes, is significant. Simply put, it takes a large amount of tests to get the same level of developer confidence. Computers are wonderful at attention to detail in a way that puts even the most anally retentive human being to shame.

So if Boris had written a definition called mylib.d.ts, he'd write a file that exercises this type definition. A mylib.tests.ts if you will. This file would contain code that exercises the type definition in the way that it should correctly be used. This is code that will never be executed in the way that tests normally are; a test program is never actually run. Rather these tests exist solely for compilation time. (In much the same way that TypeScript types only exist for compilation time.) Boris's plan was this: no compilation errors in mylib.tests.ts represent passing tests. Compilation errors in mylib.tests.ts represent failing tests. It was functional, brutal and also beautiful in its simplicity.

So, imagine your definition looked like this:

declare function turnANumberIntoAString(numberToMakeStringOutOf: number): string;

You might write a compilation test that looks like this:

const itIsAString: string = turnANumberIntoAString(42);

This test ensures that you can use your function in the way you'd expect. It returns the types you'd desire (a string in this case) and it accepts the parameters you'd expect (a single number for this example). If someone changed the definition in future, such that a different type was returned or a different set of parameters was required it would break the test. The test code wouldn't compile anymore. That's the nature of our "test". It's blunt but effective.

This is the very first test committed to Definitely Typed; a test for Modernizr.

https://github.com/DefinitelyTyped/DefinitelyTyped/blob/a976315cacbcfc151d5f57b25f4325dc7deca6f2/Tests/modernizr.ts

This idea represents what tests look like throughout Definitely Typed today. They're now "run" as part of Continuous Integration and type definitions are tested in concert with one another to ensure that a change to one type definition doesn't break another. But there is nothing fundamentally different in place today to what Boris originally came up with.
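
For a flavour of the modern incarnation: dtslint-style tests layer assertions on top of the same idea, with the expectation expressed as a comment that the tooling verifies at compile time. Using the earlier example, roughly:

const itIsAString = turnANumberIntoAString(42); // $ExpectType string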

Independence

Very quickly, Definitely Typed became a known project. People like Steve Fenton (author of the first book about TypeScript) were vocal supporters of the project. The TypeScript team talked up the project and were entirely supportive of its existence. In fact, at every given opportunity Anders Hejlsberg would sing its praises. For a while you could guarantee that any TypeScript talk by Anders would include a variant of "this guy called Boris started a project called Definitely Typed". The impression he gave was that he was kind of amazed, and thoroughly delighted, the project existed.

The TypeScript team were completely uninvolved with Definitely Typed. That in itself is worth considering. The perception of Microsoft by developers generally in 2012 was, at best, highly suspicious. "Embrace, extend, extinguish" - a strategy attributed to MS - was very much a current perspective. This was borne out in online comments and conversations at meetups. The Hacker News comments on the TypeScript release were a mixed bag. The reaction on social media was rather less generous. Certainly it was harsh enough to prompt Scott Hanselman to write something of a defence of TypeScript's right to exist.

Given that TypeScript had arrived with the promise of transforming the JavaScript developer experience, the developer community was understandably cautious. Was Microsoft doing good or ill? Could they be trusted? There were already signs that MS was changing. For example, it had been shipping open source libraries such as jQuery with ASP.NET MVC for some time. Microsoft was starting to engage with the world of open source software.

How Microsoft interacted with the (very open source driven) JS community was going to be key to the success (or not) of TypeScript. What happened with the establishment of Definitely Typed very much indicated TypeScript's direction of travel.

On day one of its existence, Boris took type definitions written by Microsoft and made them available via Definitely Typed. A ballsy move. It would have been completely possible for MS to object to this. They didn't.

People like Diullei Gomes started submitting pull requests to improve the existing definitions and add new ones. Diullei even wrote the first command line tooling which allowed people to install type definitions: TSD. Within a surprisingly short period, DT had become the default home of type definitions on the web. There were briefly alternative Definitely Typed styled collections of type definitions elsewhere on GitHub but they didn't last.

This all happened completely independently of the TypeScript team. Definitely Typed existing actually allowed TypeScript itself to prosper. It was worth persevering with this bleeding edge language because of the interoperability Definitely Typed was providing to the community. So the hands off attitude of MS was both surprising and encouraging. It showed trust of the community; something that hadn't hitherto been a commonly noted characteristic of MS.

Boris started adding contributors to Definitely Typed to help him with the work. Definitely Typed was no longer a one man band, it had taken an important step. It was built and maintained by an increasing number of creative and generous people. All motivated by a simple aim: the best developer experience when working with TypeScript and existing JS libraries.

Basarat Ali Syed

A photograph of Basarat

Basarat Ali Syed was a 27 year old who had recently moved to Melbourne, Australia from Pakistan. You might know of him for a number of reasons, not least being the TypeScript equivalent of Jon Skeet. That, incidentally, is not a coincidence. Basarat had watched Jon Skeet's impressive work setting the gold standard in C# answers and thought "there's something worth emulating here".

Bas was working for a startup that had a JS frontend. About six months before TypeScript was announced to the world he watched Anders Hejlsberg do a presentation on JavaScript which included Anders saying to the audience "don't you just wish you had type safety?" with a twinkle in his eye. TypeScript was of course well underway by this time; just not yet public. Bas remembered the comment and, when TypeScript was announced, he was ready. He made it his personal mission to be the go-to person answering questions about TypeScript on Stack Overflow.

In those early days of TypeScript, if you put a question about TypeScript onto Stack Overflow there was a very good chance that Bas would answer it. And Bas was more helpful than your typical SO answerer. Not only would he provide helpful commentary and useful guidance, he would often find himself answering "yeah, the problem isn't your code, it's the type definition. It needs improvement. In fact, I've raised a PR to fix it here…"

Boris saw the drip, drip of Basarat PRs turning into a flood. So, very quickly, he invited Basarat to join Definitely Typed. Now Bas could not just suggest changes, he could ensure they were made. Step by step the quality of type definitions improved.

Basarat describes himself as a "serial OSS contributor and mover on-er". It's certainly true. As well as his Stack Overflow work, he's been someone involved in the early days of any number of open source projects. Not just Definitely Typed. Bas also worked on the TypeScript port of the JavaScript task runner: Grunt TS. He met up with Pete Hunt (he of React) at a Decompress conference and together they hacked together a POC webpack TypeScript loader. (That POC ultimately led to James Brantly creating ts-loader, which I maintain.) Bas wrote the atom-typescript plugin which offers first class support for TypeScript in Atom. Not content with that he went on to write a full blown editor of his own called alm-tools.

This is not an exhaustive list of his achievements and already I'm tired. Besides this he wrote the TypeScript Deep Dive book and the VS Code TypeScript God extension. And more.

Bas had the level of self knowledge required to realise that getting others involved was key to the success of open source projects. Particularly given that he knew he had a predilection to eventually move on, to work on other things. So Bas kept his eyes open and welcomed in new maintainers for projects he was working on. Bas' actions in particular were to be crucial. Bas grew the Definitely Typed team; he invited others in, he got people involved.

On December 28th 2013 Basarat decided that a regular contributor to Definitely Typed might be a potential team member. Bas opened up Twitter and sent a Direct Message to John Reilly.

A screenshot of direct message Basarat sent to John Reilly in Twitter

John Reilly

A photograph of John Reilly

That's me. Or johnny_reilly on Twitter and johnnyreilly on GitHub (as John Papa and I have learned to our chagrin; GitHub don't support the "_" character in usernames). Relatively few people call me Johnny. I'm named that online because back when I applied for an email address, someone had already bagsied john_[email protected] So rather than sully my handle with a number or a middle name I settled for johnny_reilly. I haven't looked back and have generally tried to keep that nom de plume wherever I lay my hat online.

In contrast to others I was a relatively late starter to TypeScript. I was intrigued right from the initial announcement, but held off from properly getting my hands dirty until generics were added to the language in 0.9. (This predisposition towards generics in a language perhaps explains why I didn't get too far with Golang.)

At that point I was working in London for a private equity house. It was based in the historic and affluent area of St James. St James is an interesting part of London, caught midway between the Government, Buckingham Palace and the heart of the West End. It's old fashioned, dripping with money and physically delightful. It's the sort of place film crews dash towards when they're called upon to show old fashioned London in all its pomp. It rocks.

My team hated JavaScript. Absolutely loathed it. I was the solo voice saying "but it's really cool!" whilst they all but burned effigies of Brendan Eich in each code review. However, to my delight (and their abject horror) the project we were working on could only be implemented using JS. Essentially the house wanted an application offering rich interactivity which had to be a web app. So… JS. We were coding then with a combination of jQuery and Knockout JS. And, in large part due to the majority of the team being unfamiliar with JS, we were shipping bugs. The kind of bugs that could be caught by a compiler. By static typing. Not to put too fine a point on it; by TypeScript.

So I proposed an experiment: "Let's take one screen and develop it with TypeScript. Let's leave the rest of the app as is; JavaScript as usual. And then once we're done with that screen let's see how we feel about it. TypeScript might not be that great. But that's fine, if it isn't we'll take the generated JS, keep that and throw away the TypeScript. Deal?"

The team were on board and, one sprint review later, we decided that all future JS functionality would be implemented with TypeScript. We were in!

From day one of using TypeScript I was in love. I had the functionality of JavaScript, the future semantics of JavaScript and I was making fewer mistakes. Our team had become more productive. We were shipping faster and more reliably with fewer errors. People were noticing; our reputation as a team was improving, in part due to our usage of TypeScript. We had a jetpack.

However. I wasn't satisfied. As I tapped away at my keyboard I found type definitions to be… imperfect. And that niggled. Did it ever niggle. By then Jason Jarrett had wired up Definitely Typed packages to be published out to Nuget. Devs using ASP.NET MVC 4 (as I then was) were busily installing type definitions alongside AutoFac and other dependencies. Whilst most of those dependencies arrived like polished diamonds, finished products ready to be plugged into the project and start adding value, the type definitions by contrast felt very beta. And of course, they were. TypeScript was beta. The definitions reflected the newness of the language.

I could make it better.

I started submitting pull requests. The first problem I decided to solve was IntelliSense. I wanted IntelliSense for jQuery. If you went to https://api.jquery.com there was rich documentation for every method jQuery exposed. I wanted to see that documentation inside Visual Studio as I coded. If I keyed in $.appendTo( I wanted VS to be filled with the content from https://api.jquery.com/appendTo/ . That was my mission. For each overload of the method I'd add something akin to this to the type definition file:

/**
 * Insert every element in the set of matched elements to the end of the target.
 *
 * @param target A selector, element, HTML string, array of elements, or jQuery
 *               object; the matched set of elements will be inserted at the end
 *               of the element(s) specified by this parameter.
 */
appendTo(target: string): JQuery;

It was a tedious task plugging it all in, but the pleasure I got from having rich IntelliSense in VS more than made up for it. Along the way I added and fixed sections of the jQuery API that hadn't been implemented, or had been implemented incorrectly. It got to a point where jQuery was a good example of what a type definition should look like. That remains the case to this day; surprisingly few type definitions enjoy the JSDoc richness of jQuery. I have tried to encourage more use of this with blog posts, code reviews and the like, but it's never got the traction I'd hoped.

I'm fairly relentless when I put my mind to something. I work very hard to make things come to pass. What this meant at one point was the Definitely Typed maintainers receiving multiple PRs a day. Which prompted Bas to wonder "I wonder if he'd like to join us?"

I happily accepted Bas' invitation and soon found myself reading this email:

From: Bas

Sent: 28 December 2013 11:47

To: Boris Yankov; johnny_[email protected]; Bas; vvakame; Bart van der Schoor; Diullei Gomes; steve fenton; Jason Jarret

Subject: DefinitelyTyped team introduction

Dear All,

Meet John Reilly (github : https://github.com/johnnyreilly , twitter : https://twitter.com/johnny_reilly) who will be helping with Definitely Typed definitions.

Boris manages the project and he can add you as a collaborator.

Additional team member introductions:

Admin : Boris Yankov

TSD package manager : https://github.com/DefinitelyTyped/tsd : Diullei / Bart van der Schoor

NUGET: https://github.com/DefinitelyTyped/NugetAutomation : Json Jarret

Passionate TypeScript users like yourself: Wakame, Myself and SteveFenton .

Cheers, Bas (Basarat)

Some of those names you'll recognise; some perhaps not. Jason Jarrett wrote the Nuget distribution mechanism for type definitions that ended up existing for far longer than anyone (least of all Jason) anticipated. Steve Fenton was largely a cheerleader for Definitely Typed in its early days. Diullei and Bart, amongst other things, worked on the initial command line tooling for DT: TSD.

After being powered up in Definitely Typed, my contributions only increased. Anything that I was using in my day to day work, I wanted to have an amazing TypeScript experience. I wanted the language to thrive and I was pretty sure I could help by trying to get users the best-in-class developer experience as they used JS libraries. I've always found good developer experience a strong motivation; the idea being, if someone loves their tools, they'll do great work. The end customer (of whatever they're building) gets a better product sooner. Great developer experience is a force multiplier for building software.

Policy time

TypeScript was now at version 0.9.1. Still very much beta. Back then every release was breaking. Breaking. Very much with a capital "B".

TypeScript had, since the very early days, made a commitment to track the ECMAScript standard. All JavaScript is valid TypeScript. However, there was briefly a period where this might not have been so. One of the things people most remember from the initial release is that they could now write classes. These were already the standardised classes of ES6 but it almost wasn't to be. For a brief period there had been consideration of doing something subtly different. In fact Anders would describe the TypeScript team's journey towards embracing the standards as a tale tinged with regret. In doing so they'd had to say goodbye to a different implementation of classes which he'd preferred but which they'd ditched because it wasn't standard.

Alongside differences like this there were other delineations. Types had different names in the past which, as time went by, were renamed to align with standards. boolean was originally bool, for instance; likely a reflection of Anders' involvement with C#.

These sorts of changes, alongside any number of others, meant that each release of TypeScript sometimes entirely broke the definitions in Definitely Typed. Most notable was the 0.9.1 -> 0.9.5 migration. This was both an exercise in serious pain endurance and a testament to the already strong commitment to TypeScript that existed. The reason people were willing to put the effort in to keep these migrations going was because they believed it was worth it. They believed in TypeScript. This PR is testament to that: https://github.com/DefinitelyTyped/DefinitelyTyped/pull/1385

A level of flux meant that for a long time Definitely Typed committed only to support the latest version of TypeScript and the latest version of packages therein. These days it's not so brutal, but then it had to be as a matter of necessity.

The compiler was changing too fast and there were too few people involved to allow for any realistic alternative. As is often the case in software development, it was "good enough". Any other choice would probably have increased the workload of maintainers to a point where the project would no longer be a going concern. It was a choice with downsides; trade-offs. But it was the choice that best served the future of Definitely Typed and TypeScript.

Masahiro Wakame

A photograph of Masahiro Wakame

Time passed. Autumn turned into winter, winter into spring. TypeScript reached 1.0. It wasn't beta anymore. As each release came, the changes in the compiler became more gradual. This was a blessing for the Definitely Typed team. The project's popularity was ticking up and up. New definitions were added each day. The trickle of issues and PRs had become a stream, then a river. A river very much ready to burst its banks.

It was taking its toll. Inside Definitely Typed roles were shifting. Boris was starting to step back from day to day reviewing of PRs. New members were joining the project, like Igor Oleinikov. But the pace was insatiable.

Some people left the project entirely, burned out by the never ending issues and PRs. Basarat started contributing less, beginning to turn his attention to one of his many sidejams. Fortunately, it turned out that before Basarat stepped back, he had done a very fine thing. In Tokyo, Japan was a 28 year old developer named Masahiro Wakame.

Mas was using JS to build the web applications he worked on. But ECMAScript 5 wasn't hitting the mark for him. For a time Masahiro used CoffeeScript (Jeremy Ashkenas' Ruby-style JS alternative). He liked it, but, as he put it: "I was shooting my foot everyday". Looking out for that elusive solution he landed on Dart. It looked amazing. But it wasn't ECMAScript. Masahiro worried he'd be locked in. He'd built some libraries and a testing framework using Dart. But he didn't feel he could suggest that his company adopt it; it was too different and only he knew it. He was left with the "what if I go under a bus?" problem. If he left the company, his colleagues would find it hard to move away from using Dart. This made him very hesitant. He didn't feel he could justify the choice.

Then Masahiro heard about TypeScript. Like Goldilocks and the three bears, this third language sounded just right. He loved the type safety. It also had a compelling proposition: the transpiled JS that TypeScript generated was human readable and idiomatic. Generating idiomatic JS as opposed to some kind of strange byte code was a goal of the language from the early days, as Anders Hejlsberg would repeatedly explain. This generation of "real JS" made test driving TypeScript a low risk proposition. One that appealed to the likes of Masahiro. No lock-in. You decide TypeScript isn't for you? Fine. Take the generated JS files and shake the TypeScript dust off your sandals. Masahiro consequently went all in on TypeScript. This was his bet. And he was going to cover his bet by trying to make the ecosystem even stronger.

Masahiro started out trying to improve the testing framework in DT; sending in pull requests. Before too long, Basarat messaged him to say "do you wanna become a committer?" Masahiro became a committer.

It turned out that Masahiro had a special quality that DT was going to sorely need: he was willing and able to review PR after PR, day after day. His stamina was incredible.

Whilst it may not have been obvious from the outside, by now Definitely Typed was a slightly troubled project. The speed at which issues and PRs landed was relentless. Anyone who had once set GitHub to "watching" for Definitely Typed soon unsubscribed. It was becoming unmanageable. And whilst almost everyone else in the project was in the process of burning out / moving on / stepping back and similar, Masahiro kept going. He kept showing up. He kept reviewing. He kept merging. At his peak he was spending 2 hours a day, every day, glued to his screen in Tokyo, reviewing PRs on GitHub. The pulse of Definitely Typed may have slowed. But Masahiro kept the heart beating.

As Masahiro kept the lights on, in a hotel room in Buenos Aires an Australian named Blake Embrey was making plans...

Blake Embrey

A photograph of Blake Embrey

Blake was a 21 year old Australian. He was a nomadic developer, travelling around the world and working remotely. He travelled from country to country armed with a suitcase and his trusty MacBook in search of WiFi. He found himself dialing into standups from cafés in Vietnam at 1am to provide Jira updates, coding from airports as he criss-crossed the globe. It was an unusual life.

A friend showed Blake TypeScript somewhere around the TypeScript 1.2 era. He was interested. He was mostly working on backend NodeJS at the time. He could see the potential that TypeScript had to help him. Around the TypeScript 1.5 era Blake started to take a really good look at what was possible. From his vantage point, there was good and there was also bad. And he thought he could help.

As a module author and developer, he loved TypeScript. It allowed him to write, publish and consume 100% type-safe JavaScript. Features like autocompletion, type validation and ES6 features became part of his typical workflow. It was so good!

However, one step in this development lifecycle was broken, to his mind. The problem was module shaped. Yeah, modules. Wade into the controversy!

Thanks again to Basarat, Blake soon became a DT contributor. Of all the contributors to Definitely Typed, Blake was the first one who was looking hard at the module problem. This was because whilst he wanted TypeScript to solve the same problems as everyone else, he wanted to solve them in a world of package dependencies. He wanted to solve for Node.

Now it's worth taking a moment to draw a comparison between web development then, and web development now. Because it's changed. The phrase "web development fatigue" exists with good reason. Web development in 2014 as compared to web development in 2019 is a very different proposition. Historically, JavaScript has not had a good story around modularisation. The language meandered forwards without ever gaining an official approach to modularisation until ES6. So for twenty years, if you wanted to write a large JS application you had to think hard about how to solve this problem. And even when modules were nailed down it was longer still until module loading was standardised.

But that didn't stop us. JavaScript apps were still being built on the frontend and the backend. On the frontend an approach to modularisation emerged called the Asynchronous Module Definition. It had some adoption but in the main that wasn't how people rolled. The frontend was generally a sea of global variables. People would write programs that depended upon global variables and worked hard to make sure that they didn’t collide. Everything did this. Everything. Underscore? It was a global variable called _. jQuery? It was two global variables: $ and jQuery. That's just what people did. I'm a person. I did that. If you were there you probably did too.

On the server side, in Node JS land, a different standard had emerged: CommonJS. Unlike AMD, CommonJS was simply how the Node JS community worked. Everything was a CommonJS module. Alongside Node, npm was growing and growing. Exposing Node developers to a rich ecosystem of modules or packages that they could drop into their apps with merely a tap tap tap of npm install super-cool-package and then var scp = require('super-cool-package').

And therein, as the Bard would have it, lay the rub. You see, in the frontend it was simpler. Uglier but simpler. By and large, the global variables were fine. They weren't beautiful but they were functional. It may have impaired the development of frontend apps, but it certainly didn't stop it.

And since a design goal of TypeScript was to meet JavaScript developers where they were and try and make their lives better, the initial focus of Definitely Typed was necessarily types that existed in the global namespace. So jquery.d.ts would declare global $ and jQuery variables and underneath them all the jQuery methods and variables that were implemented. Alongside jQuery, maybe an application would have jQuery UI which would extend the $ variable and add extra functionality. In addition maybe there'd be a couple of jQuery plugins in play too. (It's worth saying that jQuery was the crack cocaine of web development back in the day. People just couldn't get enough.)

TypeScript catered for this world by allowing type definitions to extend interfaces created by other definition files. The focus of most of the Definitely Typed contributors up to this point was frontend and hence DT was an ocean of global type definitions.
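As a simplified sketch (emphatically not the real jquery.d.ts), the global style looked something like this, with a jQuery UI style definition reopening the same interface via declaration merging:

// jquery.d.ts (global style, heavily simplified sketch)
interface JQueryStatic {
  (selector: string): JQuery;
}
interface JQuery {
  text(value: string): this;
}
declare var jQuery: JQueryStatic;
declare var $: JQueryStatic;

// jquery.ui.d.ts could then extend the very same interface from its own file:
interface JQuery {
  draggable(): this;
}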

Of course, this is not what the frontend world looks like these days. The frontend now is all about npm thanks to tools like Browserify, webpack, Rollup and the like. Client and server side development is mighty similar these days. Or at least, it's swimming in more of the same waters. There's a good TypeScript story to tell about this as well. But there wasn't always. Back to Blake.

Blake had published a bunch of modules on npm. But no one had ever been able to consume the type definitions from them. Why was that? Well, without delving into great detail it comes down to type definitions of a package generally conflicting with type definitions that a user installs themselves.

This essentially came down to how TSD worked and what Definitely Typed contained. TSD was a pretty simple tool; by and large it worked by copying files from Definitely Typed into a user's project. The files copied would contain type definitions which contained global types. So even though you cared solely about external modules, because of Definitely Typed you found yourself installing globals alongside, which led to conflicts between different type definitions. Different type definitions punching it out whilst the TypeScript compiler stood in between ineffectually shouting "leave it alone mate - it's not worth it!"

How could we have a world where external modules and globals were treated distinctly? Blake had ideas… Plan one was to rewrite TSD to support external modules; the type of modules that were standard in Node land. After working hard on that for some time, Blake came to the conclusion that solving global variables alongside external modules was a hard problem. A very hard problem. And perhaps just running with external modules, a new start if you will, represented the best way forwards.

Typings

A screenshot of the Typings project

Blake made Typings. Typings was a number of things: it was a new command line tool to replace TSD, it was a new approach to distributing type definitions and it was a registry. But it was a registry which pointed out to the web. Typings installation was entirely decentralised and the typings themselves could be downloaded from almost anywhere - GitHub, npm, Bower and even over HTTP or the filesystem. Those type definitions could be external modules or globals.

It was radical. From centralisation to decentralisation. As Blake described it:

This decentralization solves the biggest pain point I see with maintaining DefinitelyTyped. How does an author of one typings package maintain their file in DefinitelyTyped when they get notifications on thousands of others? How do you make sure typings maintain quality when you have 1000s to review? The solution in typings is you don’t, the community does. If typings are incorrect, I can just write and install my own from wherever I want, something that TSD doesn’t really allow. There’s no merge or review process you need to wait for (300+ open pull requests!).

However, decentralization comes with the cost of discoverability. To solve this, a registry exists that maintains locations of where the best typing can currently be installed from, for any version. If there’s a newer typing, patches, or the old typing author has somehow disappeared, you can replace the entry with your own so people will be directed to your typings from now on.

The world started to use Typings as the default CLI for type definitions. typings.json files started appearing in people's repos. Typings allowed consumption of types both from the Typings registry and from Definitely Typed and so there was an easy on ramp for people to start using Typings.

Little by little, people started consuming type definitions that came from the typings registry rather than from Definitely Typed. Typings began to thrive whilst DT continued to choke. The community was beginning to diverge.

The TypeScript Team

A photograph of the TypeScript team

Over in Seattle, the TypeScript team was thinking hard about the type definition ecosystem. About Definitely Typed and Typings. And about tooling and distribution.

At this point, there wasn't a dedicated registry for type definitions. There was GitHub. By and large, all type definitions lived in GitHub. Since GitHub is a git based source control provider it was possible for it to be used as a makeshift registry. So that's exactly what Definitely Typed and Typings were doing; piggy backing on GitHub and MacGyvering "infrastructure". It worked.

There wasn't a great versioning story. Definitely Typed just didn't do versioning. The latest and greatest was supported. Nothing else. The Typings approach was more nuanced. It did have an approach for versioning. It supported it by dint of allowing a version number in the registry to point to a specific git hash in a repo. It was an elegant and smart approach. Blake Embrey was one sharp cat.

Innovative though it was, the decentralised Typings approach presented potential security risks: because it pointed out to the web, auditing was harder. Alongside this, the TypeScript team was pondering ways they could reduce friction for developers that wanted to use TypeScript.

By now, the JavaScript ecosystem had started to coalesce around npm as the registry du jour. Bower and jspm were starting to fade in popularity. NuGet (the .NET package manager) was no longer being encouraged as a place to house JS. npm was standard. TypeScript users found themselves using npm to install jQuery and reaching for tsd or Typings to install the associated type definitions. That's two commands. With two package managers. Each with subtly different syntax. And then perhaps you had to fiddle with the tsconfig.json to get the compiler looking in the right places. It worked. But it didn't feel… idiomatic. It didn't feel like TypeScript was meeting their users where they were.

The likes of Daniel Rosenwasser, Mohamed Hegazy and Ryan Cavanaugh found themselves pondering the problem. Alongside this, they were thinking more about what a first class module support experience in TypeScript would look like, motivated in part by the critical mass around npm, which was entirely module / package based.

That wasn’t the only thing on their minds; there was also the testing story. Definitely Typed had a straightforward testing story due to being a real mega-repo. Everything lived together and could be tested together. Thanks to the hard work of the Definitely Typed team this was already in place; every PR spun up Travis and tested all the type definitions individually and in concert with one another. Typings didn’t have this. What’s more, it would be hard to build. The decentralised nature of Typings meant that you’d need to build infrastructure to crawl the Typings registry, download the type definitions and then perform the tests. It was non-trivial and unlikely to be speedy.

There was one more factor in play. The TypeScript team were aware that for the longest time they'd been working on the language. But they'd become distant from one of the most significant aspects of how the language was used. They weren't well enough informed about the rough edges in the type definition space. They weren't feeling their users' pain. They needed to address this and really there was only one thing to do... It was time for the TypeScript team to start eating their own dogfood.

A Plan Emerges

The TypeScript team reached out to Blake Embrey and started to talk about ways forward. They started collaborating over Slack.

A screenshot of the collaboration on Slack

The TypeScript team had also been in contact with the Definitely Typed team. They were, at this point, aware that Definitely Typed was being kept going mainly due to the hard graft of Masahiro Wakame. As Daniel observed “vvakame was a champ”.

At this point I have to stick my own hand up and confess to thinking that Definitely Typed was not long for this world. Steve Ognibene (another DT member) and others were all feeling similarly. It seemed inevitable.

A photograph of Steve Ognibene

The TypeScript team were about to change that. After talking, thinking, thinking and talking they put together a plan. It was going to change TypeScript and change Definitely Typed. It was also going to effectively end Typings.

It’s worth saying at this point that the TypeScript team didn’t enter into this lightly. They were hugely impressed by Typings. It was, to quote Daniel Rosenwasser, “an impressive piece of work”. It also had the most amazing command line experience. Everyone on the team felt that it was an incredible endeavour and had their proverbial hats off to Blake Embrey. But Definitely Typed had critical mass and, whilst it had known problems, they were problems that could be likely solved (or ameliorated) through automation. The Typings approach was very innovative, but it presented other issues which seemed harder to solve. The TypeScript team made a bet. They placed their money on Definitely Typed.

To remove friction in the type acquisition space they decided to change the compiler. It would now look out for a special scoped namespace on npm named @types. Type definitions from Definitely Typed would be published out to @types. They would land as type definition packages matching the corresponding non-@types packages. This meant that TypeScript was now sharing the same infrastructure as the rest of the JS ecosystem: npm. And consequently, installation of a package like jQuery in a TypeScript workflow now looked like this: npm install jquery @types/jquery. One command, one tool, one registry.

They published their plans here: https://github.com/Microsoft/TypeScript/issues/9184

There was more. The TypeScript team had really enjoyed knowing that this open source project which ran completely independently from the TypeScript team existed. And whilst they were focused directly on the language that was reasonable. But with the changes that were being planned, TypeScript was about to start explicitly depending upon Definitely Typed. It had been unofficially true up until that point. But now it was different; the TypeScript team were going to automate publishing Definitely Typed packages to the special @types scope in npm which the TypeScript compiler gave preference to. TypeScript and Definitely Typed were going from dating to being engaged.

It was time for the TypeScript team to get involved.

The team committed to doing weekly rotations of a TypeScript team member working on Definitely Typed. Reviewing PRs, merging them and, crucially, helping with automation and testing.

TypeScript was now part of Definitely Typed. Definitely Typed was part of TypeScript.

TypeScript 2.0 / Definitely Typed 2.0

Blake was immensely disappointed. He'd put his heart and soul into Typings. It was a massive amount of work and he'd not only started a project, he'd started a community that he felt responsible for.

Although that work had arguably kickstarted the discussion of what the future of type acquisition in TypeScript should look like, Typings wouldn't be coming along for the ride. It was a burner rocket, carrying the good ship TypeScript into outer orbit, dropping back to Earth once its job was done.

Blake very much had in mind all the people that had contributed to Typings, and that all their work was going to be abandoned. He felt a sense of responsibility. It was both frustrating and heartbreaking.

When TypeScript 2.0 shipped, in the release announcement was the following statement: https://devblogs.microsoft.com/typescript/announcing-typescript-2-0/

We’d like to thank Blake Embrey for his work on Typings and helping us bring this solution forward.

Blake really appreciated the recognition. In years to come Blake would come to feel that the decisions made were the right ones. That they led to TypeScript's continued success and served the community well. But he has regrets. He says now "I am disappointed we didn't get to integrate the two philosophies for managing types. It hurt Typings registry contributors without a story in place, I didn't want to let down and alienate potential contributors of type definitions."

A young Australian man had helped change the direction of TypeScript. It was time for him to take a well earned rest.

In the meantime, the TypeScript team was starting to get stuck into the work of giving Definitely Typed a make-over.

Screenshot of the rota for Definitely Typed work for the TypeScript team

At this point, Definitely Typed had more than 500 open pull requests. Most of which had been open for a very long time. The most urgent and pressing problem was getting that down. The TypeScript team committed to, in perpetuity, a weekly rotation where one team member would review PRs. This would, in future, mean that PRs were handled in a timely fashion and that the number of open PRs was generally kept beneath 100.

Alongside this, changes were being made to the TypeScript compiler. In large part these related to enabling automatic type acquisition through the @types scope. To make that work, the TypeScript team realised pretty quickly that many of the type definitions would not work as is. Ryan wrote up this report:

Ryan Cavanagh&#39;s report on what to do in Definitely Typed

At this point in time there were around 1700 type definitions. Pretty much all of them required some massaging. Roughly speaking, with TS 2.0, the language was going to move from a name based type acquisition approach to a file based one. New features were added to TypeScript 2.0, such as the export as namespace syntax, to allow a type definition to be used both in modules (where there are imports / exports) and in script files (where there aren't).
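To illustrate export as namespace, here's a minimal sketch of a UMD-style definition; the library name and function are hypothetical:

// nifty-util.d.ts
// Module consumers: import { nifty } from "nifty-util";
export function nifty(input: string): string;

// Script-file consumers get the same API as a global variable, NiftyUtil:
export as namespace NiftyUtil;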

Ryan Cavanaugh put together scripts that migrated 1200 of the type definitions to TypeScript 2.0 syntax. The remaining 500 were delicately transitioned by hand by diligent TypeScript team members. It was a task of utter drudgery that still sparks flickers of PTSD in those who were involved. It was like being in the digital equivalent of a Dickensian workhouse.

This was one of the reasons why going with the centralised approach of Definitely Typed instead of the decentralised one of Typings was necessary. Because the TypeScript team were involved in DT they could help make things happen. They could do the hard work. In a decentralised world that wouldn't be possible; everything would constantly be held up, waiting.

It took a long time to get the types 2.0 branch to a point where CI went green. All this time, merges were taking place between the master branch and the future one. It was hard, unglamorous work. As Ryan put it, "I partied hard when CI went green for the first time on types 2.0."

Screenshot of @types going green

The first and most obvious addition was the automation of TypeScript definitions being published out to npm.

Next came a solution for the "notification flood" issue. It was no longer feasible for a user to have Definitely Typed set up as "watching" in GitHub. That way led to an unstoppable deluge of information about issues and pull requests. The result was that users were generally unaware of changes / issues and so on. People, as much as they wanted to be, were becoming disconnected from the type definitions they were interested in in DT.

The solution for this problem was, as with so many problems, a bot. It would send notifications to the users who had historically worked on a type definition when someone sent a PR. This was hugely useful. It made it possible for people to become effective stewards of the type definitions they knew about. It meant people could effectively remain involved with DT; giving them targeted information. It was the solution to a communications problem.

Looking back upon TypeScript's story, Ryan Cavanaugh had this to say: "Definitely Typed is the best thing that could exist from our perspective".

He was speaking from the perspective of a TypeScript team member. He could as well be speaking for the developer world at large. Definitely Typed is an organic monster of open source goodness; bringing types to the world thanks to nearly 10,000 contributors. Each of whom has donated at least an hour of their time for the greater good. Far more than that in many cases. It's incredible. I'm glad I get to be part of it. I never would have guessed it would have turned out like this.

· 2 min read

With TypeScript 3.4, a new behaviour landed and a magical new file type appeared: .tsbuildinfo

TypeScript 3.4 introduces a new flag called --incremental which tells TypeScript to save information about the project graph from the last compilation. The next time TypeScript is invoked with --incremental, it will use that information to detect the least costly way to type-check and emit changes to your project.

...

These .tsbuildinfo files can be safely deleted and don’t have any impact on our code at runtime - they’re purely used to make compilations faster.
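Opting in is a matter of tsconfig.json configuration; a minimal sketch:

{
  "compilerOptions": {
    // save information about the project graph between compilations
    "incremental": true,
    // optional: control where the build info file lands
    "tsBuildInfoFile": "./.tsbuildinfo"
  }
}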

This was all very exciting, but until the release of TypeScript 3.6 there were no APIs available to allow third party tools like ts-loader to hook into them. The wait is over! Because with TypeScript 3.6 the APIs landed: https://www.typescriptlang.org/docs/handbook/release-notes/typescript-3-6.html#apis-to-support---build-and---incremental

This was the handiwork of the very excellent @sheetalkamat of the TypeScript team - you can see her PR here: https://github.com/microsoft/TypeScript/pull/31432

What's more, Sheetal took the PR for a test drive using ts-loader, and her hard work has just shipped with v6.2.0: https://github.com/TypeStrong/ts-loader/releases/tag/v6.2.0

If you're a ts-loader user and you're using TypeScript 3.6+, then you can get the benefit of this now. That is, if you make use of the experimentalWatchApi: true option (there's a sketch after this list). With this set:

  1. ts-loader will both emit and consume the .tsbuildinfo artefact.

  2. This applies when a project has the tsconfig.json option composite or incremental set to true.

  3. The net result of people using this should be faster cold-start build times where a previous compilation has taken place.
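Putting that together, a minimal webpack.config.js sketch (trimmed to the relevant option) might look like this:

module.exports = {
  module: {
    rules: [
      {
        test: /\.ts$/,
        loader: 'ts-loader',
        options: {
          // opt in to the TypeScript watch / incremental build APIs
          experimentalWatchApi: true,
        },
      },
    ],
  },
};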

ts-loader v7.0.0

We would love for you to take this new functionality for a spin. Partly because we think it will make your life better. And partly because we're planning to make using the watch API the default behaviour of ts-loader when we come to ship v7.0.0.

If you can take this for a spin before we make that change we'd be so grateful. Thanks so much to Sheetal for persevering away on this feature. It's amazing work and so very appreciated.

· One min read

A long time ago (well, 2012) in a galaxy far, far away (okay; Plovdiv, Bulgaria)....

Definitely Typed began!

This is a project that set out to provide type definitions for every JavaScript library that lacked them. An ambitious goal. Have you ever wondered what the story that lay behind it was?

Perhaps you know that the project was started by a shadowy figure named "Boris Yankov". And maybe you know that the TypeScript team is now part of the Definitely Typed team. There's a lot more to tell.

This autumn, I'd like to tell you the story of how Definitely Typed came to be what it is. From an individual commit in a repo that Boris created in 2012 to the number 10 project by contributions on GitHub in 2018. I'm part of that story. Basarat Ali Syed is part of that story. Masahiro Wakame too. Blake Embrey. Steve Fenton. Igor Oleinikov. It's an amazing and unexpected tale. One that turns upon the actions of individuals. They changed your life and I'd love you to learn how.

So, coming soon to a blog post near you, is the story of Definitely Typed. It's very exciting! Stay tuned...

· 6 min read

I did ponder calling this post "how to enable a good TypeScript developer experience for npm modules that aren't written in TypeScript"... Not exactly pithy though.

Definitely Typed is the resource which allows developers to use TypeScript with existing JavaScript libraries that ship without their own type definitions.

DT began as a way to enable interop between JS and TS. When DT started, everything on npm was JavaScript. Over time it has become more common for libraries (e.g. MobX / Angular) to be written (or rewritten) in TypeScript. For publishing, they are compiled down to JS, with type definitions generated from the TypeScript source shipping alongside the compiled JavaScript. These libraries do not need to exist in Definitely Typed anymore.

Another pattern that has emerged over time is that of type definitions being removed from Definitely Typed to live and be maintained alongside the libraries they support. An example of this is MomentJS.
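The mechanics here are simple: the library's package.json points at the bundled definitions via the types (or typings) field. A rough, illustrative sketch:

{
  "name": "moment",
  "main": "./moment.js",
  "types": "./moment.d.ts"
}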

This week, I think for the first time, there emerged another approach. Kent C Dodds' react-testing-library had started out with the MomentJS approach of hosting type definitions alongside the JavaScript source code. Alex Krolick raised a PR which proposed removing the type definitions from the RTL repo in favor of having the community maintain them at DefinitelyTyped.

I'll directly quote Kent's explanation of the motivation for this:

We were getting a lot of drive-by contributions to the TypeScript typings and many pull requests would either sit without being reviewed by someone who knows TypeScript well enough, or be merged by a maintainer who just hoped the contributor knew what they were doing. This resulted in a poor experience for TypeScript users who could experience type definition churn and delays, and it became a burden on project maintainers as well (most of us don't know TypeScript very well). Moving the type definitions to DefinitelyTyped puts the maintenance in much more capable hands.

I have to admit I was reluctant about this idea in the first place. I like the idea that types ship with the package they support. It's a good developer experience; users install your package and it works with TypeScript straight out of the box. However Alex's PR addressed a real issue: what do you do when the authors of a package aren't interested / equipped / don't have the time to support TypeScript? Or don't want to deal with the noise of TypeScript related PRs which aren't relevant to them. What then?

Alex was saying, let's not force it. Let the types and the library be maintained separately. This can and is done well already; React is a case in point. The React team does not work on the type definitions for React, that's done (excellently) by a crew of dedicated React lovers in Definitely Typed.

It's a fair point. The thing that was sad about this move was that the developer experience was going to have more friction. Users would have to yarn add -D @testing-library/react and then subsequently yarn add -D @types/testing-library__react to get the types.

This two step process isn't the end of the world, but it does make it marginally harder for TypeScript users to get up and running. It reduces the developer joy. As a side note, this is made more unlovely by @testing-library/react being a scoped package. Types for a scoped package have a quirky convention for publishing. A fictional scoped package of @foo/bar would be published to npm as: @types/foo__bar. This is functional but non-obvious; it's tricky to discover. A two step process instead of a one step process is a needless friction that it would be great to eliminate.
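In other words, using the fictional scoped package from above:

yarn add @foo/bar
yarn add -D @types/foo__bar   # scope and name joined by a double underscore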

Fortunately, Kent and Daniel K had one of those moments:

Kent suggested that at the same time as dropping the type definitions that were shipped with the library, we try making @types/testing-library__react a dependency of @testing-library/react. This would mean that people installing @testing-library/react would get @types/testing-library__react installed automatically. So from the developers point of view, it's as though the type definitions shipped with the package directly.
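Concretely, the idea looks roughly like this in @testing-library/react's package.json (a sketch; the version numbers are illustrative, drawn from those mentioned below):

{
  "name": "@testing-library/react",
  "version": "9.1.2",
  "dependencies": {
    "@types/testing-library__react": "^9.1.0"
  }
}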

To cut a long story short reader, that's what happened. If you're using @testing-library/react from 9.1.2 you're getting Definitely Typed under the covers. This was nicely illustrated by Kent showing what the TypeScript consumption experience looked like before the Definitely Typed switch:

And here's what it looked like after:

Identical! i.e. it worked. I grant you this is one of the more boring before / after comparisons there is… But hopefully you can see it demonstrates that this is giving us exactly what we need.

To quote Kent once more:

By adding the type definitions to the dependencies of React Testing Library, the experience for users is completely unchanged. So it's a huge improvement for the maintenance of the type definitions without any breaking changes for the users of those definitions.

This is clearly an approach that's useful; it adds value. It would be tremendous to see other libraries that aren't written in TypeScript but would like to enable a good TypeScript experience for those people that do use TS also adopting this approach.

Update: Use a Loose Version Range in package.json

When I tweeted this article it prompted this helpful response from Andrew Branch of the TypeScript team:

> use a loose version range This is my advice as well and should probably be mentioned in the article TBH.

— Kent C. Dodds (@kentcdodds) August 18, 2019

Andrew makes the useful point that if you are adding support for TypeScript via an @types/... dependency then it's wise to do so with a loose version range. In the case of RTL we did it like this:

"@types/testing-library__react": "^9.1.0"

i.e. any type definition with a version of 9.1.0 or greater (whilst still lower than 10.0.0) is considered valid. You could go even looser than that. If you really don't want to think about TypeScript beyond adding the dependency then a completely loose version range would do:

"@types/testing-library__react": "*"

This will always install the latest version of the @types/testing-library__react dependency and (importantly) allow users to override if there's a problematic @types/testing-library__react out there. This level of looseness is not really advised though. Consider the scenario where a library (and its associated type definitions) does a major release: users of the old major would get the wrong definitions by default when installing or upgrading (in range).

Probably the most helpful approach is the approach followed by RTL; fixing the major version but allowing all minor and patch releases inside a major version.

Update 2: Further Discussions!

The technique used in this blog post sparked an interesting conversation with members of the TypeScript team when it was applied to https://github.com/testing-library/jest-dom. The conversation can be read here.

· 5 min read

The fork-ts-checker-webpack-plugin has, since its inception, performed two classes of checking:

  1. Compilation errors which the TypeScript compiler surfaces up
  2. Linting issues which TSLint reports

You may have caught the announcement that TSLint is being deprecated and ESLint is the future of linting in the TypeScript world. This plainly has a bearing on linting in fork-ts-checker-webpack-plugin.

I've been beavering away at adding support for ESLint to the fork-ts-checker-webpack-plugin. I'm happy to say, the plugin now supports ESLint. Do you want to get your arms all around ESLint with fork-ts-checker-webpack-plugin? Read on!

How do you migrate from TSLint to ESLint?

Well, first of all you need the latest and greatest fork-ts-checker-webpack-plugin. Support for ESLint shipped with v1.4.0.

You need to change the options you supply to the plugin in your webpack.config.js to look something like this:

new ForkTsCheckerWebpackPlugin({ eslint: true });

You'll also need to add the various ESLint related packages to your package.json:

yarn add eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin --dev
  • eslint
  • @typescript-eslint/parser: The parser that will allow ESLint to lint TypeScript code
  • @typescript-eslint/eslint-plugin: A plugin that contains ESLint rules that are TypeScript specific

If you want, you can pass options to ESLint using the eslintOptions option as well. These will be passed through to the underlying ESLint CLI Engine when it is instantiated. Docs on the supported options are documented here.
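For example, a sketch (cache is one of the options the ESLint CLI Engine understands):

new ForkTsCheckerWebpackPlugin({
  eslint: true,
  eslintOptions: {
    cache: true, // forwarded to the underlying ESLint CLI Engine
  },
});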

Go Configure

Now you're ready to use ESLint, you just need to give it some configuration. Typically, an .eslintrc.js is what you want here.

const path = require('path');

module.exports = {
  parser: '@typescript-eslint/parser', // Specifies the ESLint parser
  plugins: ['@typescript-eslint'],
  env: {
    browser: true,
    jest: true,
  },
  extends: [
    'plugin:@typescript-eslint/recommended', // Uses the recommended rules from the @typescript-eslint/eslint-plugin
  ],
  parserOptions: {
    project: path.resolve(__dirname, './tsconfig.json'),
    tsconfigRootDir: __dirname,
    ecmaVersion: 2018, // Allows for the parsing of modern ECMAScript features
    sourceType: 'module', // Allows for the use of imports
  },
  rules: {
    // Place to specify ESLint rules. Can be used to overwrite rules specified from the extended configs
    // e.g. "@typescript-eslint/explicit-function-return-type": "off",
    '@typescript-eslint/explicit-function-return-type': 'off',
    '@typescript-eslint/no-unused-vars': 'off',
  },
};

If you're a React person (and I am!) then you'll also need: yarn add eslint-plugin-react. Then enrich your .eslintrc.js a little:

const path = require('path');

module.exports = {
  parser: '@typescript-eslint/parser', // Specifies the ESLint parser
  plugins: [
    '@typescript-eslint',
    'react',
    // 'prettier' commented as we don't want to run prettier through eslint because performance
  ],
  env: {
    browser: true,
    jest: true,
  },
  extends: [
    'plugin:@typescript-eslint/recommended', // Uses the recommended rules from the @typescript-eslint/eslint-plugin
    'prettier/@typescript-eslint', // Uses eslint-config-prettier to disable ESLint rules from @typescript-eslint/eslint-plugin that would conflict with prettier
    // 'plugin:react/recommended', // Uses the recommended rules from @eslint-plugin-react
    'prettier/react', // disables react-specific linting rules that conflict with prettier
    // 'plugin:prettier/recommended' // Enables eslint-plugin-prettier and displays prettier errors as ESLint errors. Make sure this is always the last configuration in the extends array.
  ],
  parserOptions: {
    project: path.resolve(__dirname, './tsconfig.json'),
    tsconfigRootDir: __dirname,
    ecmaVersion: 2018, // Allows for the parsing of modern ECMAScript features
    sourceType: 'module', // Allows for the use of imports
    ecmaFeatures: {
      jsx: true, // Allows for the parsing of JSX
    },
  },
  rules: {
    // Place to specify ESLint rules. Can be used to overwrite rules specified from the extended configs
    // e.g. "@typescript-eslint/explicit-function-return-type": "off",
    '@typescript-eslint/explicit-function-return-type': 'off',
    '@typescript-eslint/no-unused-vars': 'off',

    // These rules don't add much value, are better covered by TypeScript and good definition files
    'react/no-direct-mutation-state': 'off',
    'react/no-deprecated': 'off',
    'react/no-string-refs': 'off',
    'react/require-render-return': 'off',

    'react/jsx-filename-extension': [
      'warn',
      {
        extensions: ['.jsx', '.tsx'], // also want to use with ".tsx"
      },
    ],
    'react/prop-types': 'off', // Is this incompatible with TS props type?
  },
  settings: {
    react: {
      version: 'detect', // Tells eslint-plugin-react to automatically detect the version of React to use
    },
  },
};

You can add Prettier into the mix too. You can see how it would be used in the above code sample. But given the impact that has on performance I wouldn't recommend it; hence it's commented out. There's a good piece by Rob Cooper with more details on setting up Prettier and VS Code with TypeScript and ESLint.

Performance and Power Tools

It's worth noting that support for TypeScript in ESLint is still brand new. As such, the rule of "Make it Work, Make it Right, Make it Fast" applies.... ESLint with TypeScript still has some performance issues which should be ironed out in the fullness of time. You can track them here.

This is important to bear in mind as, when I converted a large codebase over to using ESLint, I discovered that initial linting performance was terribly slow. Something that's worth doing right now is identifying which rules are costing you the most time and tweaking based on whether you think they're earning their keep.

The TIMING environment variable can be used to provide a report on the relative cost performance wise of running each rule. A nice way to plug this into your workflow is to add the cross-env package to your project: yarn add cross-env -D and then add 2 scripts to your package.json:

"lint": "eslint ./",
"lint-rule-timings": "cross-env TIMING=1 yarn lint"
  • lint - just runs the linter standalone
  • lint-rule-timings - does the same but with the TIMING environment variable set to 1 so a report will be generated.

I'd advise making use of lint-rule-timings to identify which rules are costing you performance and then turning off rules as you need to. Remember, different rules have different value.

Finally, if you'd like to see how it's done, here's an example of porting from TSLint to ESLint.

· 6 min read

Yarn PnP is an innovation by the Yarn team designed to speed up module resolution by node. To quote the (excellent) docs:

Plug’n’Play is an alternative installation strategy unveiled in September 2018...

The way regular installs work is simple: Yarn generates a node_modules directory that Node is then able to consume. In this context, Node doesn’t know the first thing about what a package is: it only reasons in terms of files. “Does this file exist here? No? Let’s look in the parent node_modules then. Does it exist here? Still no? Too bad… parent folder it is!” - and it does this until it matches something that matches one of the possibilities. That’s vastly inefficient.

When you think about it, Yarn knows everything about your dependency tree - it even installs it! So why is Node tasked with locating your packages on the disk? Why don’t we simply query Yarn, and let it tell us where to look for a package X required by a package Y? That’s what Plug’n’Play (abbreviated PnP) is. Instead of generating a node_modules directory and leaving the resolution to Node, we now generate a single .pnp.js file and let Yarn tell us where to find our packages.

Yarn PnP has been worked upon by, amongst others, the excellent Maël Nison. You can hear him talking about it in person in this talk at JSConfEU.

Thanks particularly to Maël's work, it's possible to use Yarn PnP with TypeScript using webpack with ts-loader and fork-ts-checker-webpack-plugin. This post intends to show you just how simple it is to convert a project that uses either to work with Yarn PnP.

Vanilla ts-loader

Your project is built using standalone ts-loader; i.e. a simple setup that handles both transpilation and type checking.

First things first, add this property to your package.json: (this is only required if you are using Yarn 1; this tag will be optional starting from the v2, where projects will switch to PnP by default.)

{
  "installConfig": {
    "pnp": true
  }
}

Also, because this is webpack, we're going to need to add an extra dependency in the form of pnp-webpack-plugin:

yarn add -D pnp-webpack-plugin

To quote the excellent docs, make the following amendments to your webpack.config.js:

const PnpWebpackPlugin = require(`pnp-webpack-plugin`);

module.exports = {
  module: {
    rules: [
      {
        test: /\.ts$/,
        loader: require.resolve('ts-loader'),
        options: PnpWebpackPlugin.tsLoaderOptions(),
      },
    ],
  },
  resolve: {
    plugins: [PnpWebpackPlugin],
  },
  resolveLoader: {
    plugins: [PnpWebpackPlugin.moduleLoader(module)],
  },
};

If you have any options you want to pass to ts-loader, just pass them as a parameter to pnp-webpack-plugin's tsLoaderOptions function and it will take care of forwarding them properly. Behind the scenes the tsLoaderOptions function is providing ts-loader with the options necessary to switch into Yarn PnP mode.

Congratulations; you now have ts-loader functioning with Yarn PnP support!

fork-ts-checker-webpack-plugin with ts-loader

You may well be using fork-ts-checker-webpack-plugin to handle type checking whilst ts-loader gets on with the transpilation. This workflow is also supported using pnp-webpack-plugin. You'll have needed to follow the same steps as the ts-loader setup. It's just the webpack.config.js tweaks that will be different.

const PnpWebpackPlugin = require(`pnp-webpack-plugin`);
const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');

module.exports = {
  plugins: [
    new ForkTsCheckerWebpackPlugin(
      PnpWebpackPlugin.forkTsCheckerOptions({
        useTypescriptIncrementalApi: false, // not possible to use this until: https://github.com/microsoft/TypeScript/issues/31056
      })
    ),
  ],
  module: {
    rules: [
      {
        test: /\.ts$/,
        loader: require.resolve('ts-loader'),
        options: PnpWebpackPlugin.tsLoaderOptions({ transpileOnly: true }),
      },
    ],
  },
  resolve: {
    plugins: [PnpWebpackPlugin],
  },
  resolveLoader: {
    plugins: [PnpWebpackPlugin.moduleLoader(module)],
  },
};

Again, if you have any options you want to pass to ts-loader, just pass them as a parameter to pnp-webpack-plugin's tsLoaderOptions function. As we're using fork-ts-checker-webpack-plugin, we want to stop ts-loader doing type checking, with the transpileOnly: true option.

We're now initialising fork-ts-checker-webpack-plugin with pnp-webpack-plugin's forkTsCheckerOptions function. Behind the scenes the forkTsCheckerOptions function is providing the fork-ts-checker-webpack-plugin with the options necessary to switch into Yarn PnP mode.

And that's it! You now have ts-loader and fork-ts-checker-webpack-plugin functioning with Yarn PnP support!

Living on the Bleeding Edge

Whilst you can happily develop and build using Yarn PnP, it's worth bearing in mind that this is a new approach. As such, there are some rough edges right now.

If you're interested in Yarn PnP, it's worth taking the v2 of Yarn (Berry) for a spin. You can find it here: https://github.com/yarnpkg/berry. It's where most of the Yarn PnP work happens, and it includes zip loading - two birds, one stone!

Because there isn't first class support for Yarn PnP in TypeScript itself yet, you cannot make use of the Watch API through fork-ts-checker-webpack-plugin. (You can read about that issue here)

As you've likely noticed, the webpack configuration required makes for a noisy webpack.config.js. Further to that, VS Code (which is powered by TypeScript remember) has no support for Yarn PnP yet and so will present resolution errors to you. If you can ignore the sea of red squigglies all over your source files in the editor and just look at your webpack build you'll be fine.

There is a tool called PnPify that adds support for PnP to TypeScript (in particular tsc). You can find more information here: https://yarnpkg.github.io/berry/advanced/pnpify. For tsc it would be:

$> yarn pnpify tsc [...]

The gist is that it simulates the existence of node_modules by leveraging the data from the PnP file. As such it's not a perfect fix (pnp-webpack-plugin is a better integration), but it's a very useful tool to have to unblock yourself when using a project that doesn't support it.

PnPify actually allows us to use TypeScript in VSCode with PnP! Its documentation is here: https://yarnpkg.github.io/berry/advanced/pnpify#vscode-support

All of these hindrances should hopefully be resolved in future. Ideally, one day a good developer experience can be the default experience. In the meantime, you can still dev - just be prepared for the rough edges. Here are some useful resources to track the future of support:

This last one would be nice because:

  • We'd stop having to patch require
  • We probably wouldn't have to use yarn node if Node itself was able to find the loader somehow (such as if it was listed in the package.json metadata)

Thanks to Maël for his tireless work on Yarn. To my mind Maël is certainly a candidate for the hardest worker in open source. I've been shamelessly borrowing his excellent docs for this post - thanks for writing so excellently Maël!

· 3 min read

I'm one of the maintainers of the fork-ts-checker-webpack-plugin. Hi there!

Recently, various issues have been raised against create-react-app (which uses fork-ts-checker-webpack-plugin) as well as against the plugin itself. They've been related to the level of CPU usage in watch mode on idle; i.e. it's high!

Why High?

Now, under the covers, the fork-ts-checker-webpack-plugin uses the TypeScript watch API.

The marvellous John (not me - another John) did some digging and discovered the root cause came down to the way that the TypeScript watch API watches files:

TS uses internally the fs.watch and fs.watchFile API functions of nodejs for their watch mode. The latter function is even not recommended by nodejs documentation for performance reasons, and urges to use fs.watch instead.

NodeJS doc:

Using fs.watch() is more efficient than fs.watchFile and fs.unwatchFile. fs.watch should be used instead of fs.watchFile and fs.unwatchFile when possible.

"there is another"

John also found that there are other file watching behaviours offered by TypeScript. What's more, the file watching behaviour is configurable with an environment variable. That's right, if an environment variable called TSC_WATCHFILE is set, it controls the file watching approach used. Big news!

John did some rough benchmarking of the performance of the different options that can be set, on his PC running 64-bit Linux. Here's how it came out:

| Value | CPU usage on idle |
| --- | --- |
| TS default (TSC_WATCHFILE not set) | 7.4% |
| UseFsEventsWithFallbackDynamicPolling | 0.2% |
| UseFsEventsOnParentDirectory | 0.2% |
| PriorityPollingInterval | 6.2% |
| DynamicPriorityPolling | 0.5% |
| UseFsEvents | 0.2% |

As you can see, the default performs poorly. On the other hand, an option like UseFsEventsWithFallbackDynamicPolling is, comparatively, greased lightning.

workaround!

To get this better experience into your world now, you could just set an environment variable on your machine. However, that doesn't scale; let's instead look at introducing the environment variable into your project explicitly.

We're going to do this in a cross platform way using cross-env (https://github.com/kentcdodds/cross-env). This is a mighty useful utility by Kent C Dodds which allows you to set environment variables in a way that will work on Windows, Mac and Linux. Imagine it as the jQuery of the environment variables world :-)

Let's add it as a devDependency:

yarn add -D cross-env

Then take a look at your package.json. You've probably got a start script that looks something like this:

"start": "webpack-dev-server --progress --color --mode development --config webpack.config.development.js",

Or if you're a create-react-app user maybe this:

"start": "react-scripts start",

Prefix your start script with cross-env TSC_WATCHFILE=UseFsEventsWithFallbackDynamicPolling. This will, when run, initialise an environment variable called TSC_WATCHFILE with the value UseFsEventsWithFallbackDynamicPolling. Then it will start your development server as it did before. When TypeScript is fired up by webpack it will see this environment variable and use it to configure the file watching behaviour to one of the more performant options.

So, in the case of a create-react-app user, your finished start script would look like this:

"start": "cross-env TSC_WATCHFILE=UseFsEventsWithFallbackDynamicPolling react-scripts start",

The Future

There's a possibility that the default watch behaviour may change in TypeScript in future. It's currently under discussion, you can read more here.

· 2 min read

It's time for the first major version of fork-ts-checker-webpack-plugin. It's been a long time coming :-)

A Little History

The fork-ts-checker-webpack-plugin was originally the handiwork of Piotr Oleś. He raised an issue with ts-loader suggesting it could be the McCartney to ts-loader's Lennon:

Hi everyone!

I've created webpack plugin: fork-ts-checker-webpack-plugin that plays nicely with ts-loader. The idea is to compile project with transpileOnly: true and check types on separate process (async). With this approach, webpack build is not blocked by type checker and we have semantic check with fast incremental build. More info on github repo :)

So if you like it and you think it would be good to add some info in README.md about this plugin, I would be greatful.

Thanks :)

We did like it. We did think it would be good. We took him up on his kind offer.

Since that time many people have had their paws on the fork-ts-checker-webpack-plugin codebase. We love them all.

One Point Oh

We could have had our first major release a long time ago. The idea first occurred when webpack 5 alpha appeared. "Huh, look at that, a major version number.... Maybe we should do that?" "Great idea chap - do it!" So here it is; fresh out the box: v1.0.0

There are actually no breaking changes that we're aware of; users of 0.x fork-ts-checker-webpack-plugin should be able to upgrade without any drama.

Incremental Watch API on by Default

Users of TypeScript 3+ may notice a performance improvement as by default the plugin now uses the incremental watch API in TypeScript.

Should this prove problematic you can opt out of using it by supplying useTypescriptIncrementalApi: false. We are aware of an issue with Vue and the incremental API. We hope it will be fixed soon - a generous member of the community is taking a look. In the meantime, we will not default to using the incremental watch API when in Vue mode.

Compatibility

As it stands, the plugin supports webpack 2, 3, 4 and 5 alpha. It is compatible with TypeScript 2.1+ and TSLint 4+.

Right that's it - enjoy it! And thanks everyone for contributing - we really dig your help. Much love.

· 3 min read

All I ask for is a compiler and a tight feedback loop. Narrowing the gap between making a change to a program and seeing the effect of that is a productivity boon. The TypeScript team are wise cats and dig this. They've taken strides to improve the developer experience of TypeScript users by introducing a "watch" API which can be leveraged by other tools. To quote the docs:

TypeScript 2.7 introduces two new APIs: one for creating "watcher" programs that provide set of APIs to trigger rebuilds, and a "builder" API that watchers can take advantage of... This can speed up large projects with many files.

Recently the wonderful 0xorial opened a PR to add support for the watch API to the fork-ts-checker-webpack-plugin.

I took this PR for a spin on a large project that I work on. With my machine, I was averaging 12 seconds between incremental builds. (I will charitably describe the machine in question as "challenged"; hobbled by one of the most aggressive virus checkers known to mankind. Fist bump InfoSec 🤜🤛😉) Switching to using the watch API dropped this to a mere 1.5 seconds!

You Can Watch Too

0xorial's PR was merged toot suite and has been released as fork-ts-checker-webpack-plugin@next. If you'd like to take this for a spin then you can. Just:

  1. Up your version of the plugin to fork-ts-checker-webpack-plugin@next in your package.json
  2. Add useTypescriptIncrementalApi: true to the plugin when you initialise it in your webpack.config.js.

That's it.
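To make those two steps concrete, here's a minimal sketch (everything not shown is elided):

package.json

"devDependencies": {
  "fork-ts-checker-webpack-plugin": "next"
}

webpack.config.js

const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');

module.exports = {
  // ...
  plugins: [
    // opt in to the TypeScript incremental watch API
    new ForkTsCheckerWebpackPlugin({ useTypescriptIncrementalApi: true }),
  ],
};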

Mary Poppins

Sorry, I was trying to paint a word picture of something you might watch that was also comforting. Didn't quite work...

Anyway, you might be thinking "wait, just hold on a minute.... he said @next - I am not that bleeding edge." Well, it's not like that. Don't be scared.

fork-ts-checker-webpack-plugin has merely been updated for webpack 5 (which is in alpha) and the @next reflects that. To be clear, the @next version of the plugin still supports (remarkably!) webpack 2, 3 and 4 as well as 5 alpha. Users of current and historic versions of webpack should feel safe using the @next version; for webpack 2, 3 and 4 expect stability. webpack 5 users should expect potential changes to align with webpack 5 as it progresses.

Roadmap

This is available now and we'd love for you to try it out. As you can see, at the moment it's opt-in. You have to explicitly choose to use the new behaviour. Depending upon how testing goes, we may look to make this the default behaviour for the plugin in future (assuming users are running a high enough version of TypeScript). It would be great to hear from people if they have any views on that, or feedback in general.

Much ❤️ y'all. And many thanks to the very excellent 0xorial for the hard work.

· 4 min read

So project references eh? They shipped with TypeScript 3. We've just shipped initial support for project references in ts-loader v5.2.0. All the hard work was done by the amazing Andrew Branch. In fact I'd recommend taking a gander at the PR. Yay Andrew!

This post will take us through the nature of the support for project references in ts-loader now and what we hope the future will bring. It ~~rips off shamelessly~~ borrows from the README.md documentation that Andrew wrote as part of the PR. Because I am not above stealing.

TL;DR

Using project references currently requires building referenced projects outside of ts-loader. We don’t want to keep it that way, but we’re releasing what we’ve got now. To try it out, you’ll need to pass projectReferences: true to loaderOptions.
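In webpack.config.js terms, that's a sketch along these lines (all other options elided):

module.exports = {
  // ...
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        loader: 'ts-loader',
        options: {
          // opt in to ts-loader's project references support
          projectReferences: true,
        },
      },
    ],
  },
};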

Like tsc, but not like tsc --build

ts-loader has partial support for project references in that it will load dependent composite projects that are already built, but will not currently build/rebuild those upstream projects. The best way to explain exactly what this means is through an example. Say you have a project with a project reference pointing to the lib/ directory:

tsconfig.json
app.ts
lib/
  tsconfig.json
  niftyUtil.ts

And we’ll assume that the root tsconfig.json has { "references": [{ "path": "lib" }] }, which means that any import of a file that’s part of the lib sub-project is treated as a reference to another project, not just a reference to a TypeScript file. Before discussing how ts-loader handles this, it’s helpful to review at a really basic level what tsc itself does here. If you were to run tsc on this tiny example project, the build would fail with the error:

error TS6305: Output file 'lib/niftyUtil.d.ts' has not been built from source file 'lib/niftyUtil.ts'.

Using project references actually instructs tsc not to build anything that’s part of another project from source, but rather to look for any .d.ts and .js files that have already been generated from a previous build. Since we’ve never built the project in lib before, those files don’t exist, so building the root project fails. Still just thinking about how tsc works, there are two options to make the build succeed: either run tsc -p lib/tsconfig.json first, or simply run tsc --build, which will figure out that lib hasn’t been built and build it first for you.
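For context, the lib sub-project would be marked as composite in its own tsconfig.json; that's what makes it referenceable as a project and causes the .d.ts and .js output to be generated. A minimal sketch (the real example project may set more options than this):

lib/tsconfig.json

{
  "compilerOptions": {
    "composite": true,
    "declaration": true
  }
}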

Ok, so how is that relevant to ts-loader? Because the best way to think about what ts-loader does with project references is that it acts like tsc, but not like tsc --build. If you run ts-loader on a project that’s using project references, and any upstream project hasn’t been built, you’ll get the exact same error TS6305 that you would get with tsc. If you modify a source file in an upstream project and don’t rebuild that project, ts-loader won’t have any idea that you’ve changed anything—it will still be looking at the output from the last time you built that file.

“Hey, don’t you think that sounds kind of useless and terrible?”

Well, sort of. You can consider it a work-in-progress. It’s true that on its own, as of today, ts-loader doesn’t have everything you need to take advantage of project references in webpack. In practice, though, consuming upstream projects and building upstream projects are somewhat separate concerns. Building them will likely come in a future release. For background, see the original issue.

outDir Windows problemo.

At the moment, composite projects built using the outDir compiler option cannot be consumed using ts-loader on Windows. If you try to, ts-loader throws a "has not been built from source file" error. You can see Andrew and I puzzling over it in the PR. We don't know why yet; it's possible there's a bug in tsc. It's more likely there's a bug in ts-loader. Hopefully it's going to get solved at some point. (Hey, maybe you're the one to solve it!) Either way, we didn't want to hold back from releasing. So if you're building on Windows then avoid building composite projects using outDir.

· 5 min read

This is a tale of things that are and things that aren't. It's a tale of semantic versioning, the lack thereof and heartbreak. It's a story of terror and failing builds. But it has a bittersweet ending wherein our heroes learn a lesson and understand the need for compromise. We all come out better and wiser people. Hopefully there's something for everybody; let's start with an exciting opener and see where it goes...

Definitely Typed

This is often the experience people have of using type definitions from Definitely Typed:

Ivan Drago saying "I must break you"

Specifically, people are used to the idea of semantic versioning and expect it from types published to npm by Definitely Typed. They wait in vain. I've written before about the Definitely Typed / @types semantic version compromise. And I wanted to talk about it a little further as (watching the issues raised on DT) I don't think the message has quite got out there. To summarise:

  1. npm is built on top of semantic versioning and they take it seriously. When a package is published it should be categorised as a major release (breaking changes), a minor release (extra functionality which is backwards compatible) or a patch release (backwards compatible bug fixes).

  2. Definitely Typed publishes type definitions to npm under the @types namespace.

  3. To make consumption of type definitions easier, the versioning of a type definition package will seek to emulate the versioning of the npm package it supports. For example, right now react-router's latest version is 4.3.1. The corresponding type definition @types/react-router's latest version is 4.0.31. (It's fairly common for type definition versions to lag behind the package they type.)

    If there's a breaking change to the react-router type definition then the new version published will have a version number that begins "4.0.". If you are relying on semantic versioning this will break you.

I Couldn't Help But Notice Your Pain

If you're reading this and can't quite believe that @types would be so inconsiderate as to break the conventions of the ecosystem it lives in, I understand. But hopefully you can see there are reasons for this. In the end, being able to use npm as a delivery mechanism for versioned type definitions associated with another package has a cost; that cost is semantic versioning for the type definitions themselves. It wasn't a choice taken lightly; it's a pragmatic compromise.

"But what about my failing builds? Fine, people are going to change type definitions, but why should I burn because of their choices?"

Excellent question. Truly. Well here's my advice: don't expect semantic versioning where there is none. Use specific package versions. You can do that directly with your package.json. For example replace something like this: "@types/react-router": "^4.0.0" with a specific version number: "@types/react-router": "4.0.31". With this approach it's a specific activity to upgrade your type definitions. A chore if you will; but a chore that guarantees builds will not fail unexpectedly due to changing type defs.
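As a sketch, the relevant fragment of your package.json goes from floating to pinned like this:

{
  "devDependencies": {
    "@types/react-router": "4.0.31"
  }
}

No caret, no tilde; the version only moves when you move it.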

My own personal preference is yarn. Mother, I'm in love with a yarn.lock file. It is the alternative npm client that came out of Facebook. It pins the exact versions of all packages used in your yarn.lock file and guarantees to install the same versions each time. Problem solved; and it even allows me to keep the semantic versioning in my package.json as is.

This has some value in that when I upgrade I probably want to upgrade to a newer version following the semantic versioning convention. I should just expect that I'll need to check valid compilation when I do so. yarn even has its own built-in utility that tells you when things are out of date: yarn outdated:

Screenshot of outdated dependencies in yarn

So lovely.

You Were Already Broken - I Just Showed You How

Before I finish I wanted to draw out one reason why breaking changes can be a reason for happiness. Because sometimes your code is wrong. An update to a type definition may highlight that. This is analogous to when the TypeScript compiler ships a new version. When I upgrade to a newer version of TypeScript it lights up errors in my codebase that I hadn't spotted. Yay compiler!

An example of this is a PR I submitted to DefinitelyTyped earlier this week. This PR changed how react-router models the parameters of a Match. Until now, an object was expected; the user could define any object they liked. However, react-router will only produce string values for a parameter. If you look at the underlying code it's nothing more than an exec on a regular expression.

My PR enforces this at type level by changing this:

export interface match<P> {
  params: P;
  ...
}

To this:

export interface match<Params extends { [K in keyof Params]?: string } = {}> {
  params: Params;
  ...
}
So any object definition supplied must have string values (and you don't actually need to supply an object definition; that's optional now).

I expected this PR to break people and it did. But this is a useful break. If they were relying upon their parameters to be types other than strings they would be experiencing some unexpected behaviour. In fact, it's exactly this that prompted my PR in the first place. A colleague had defined his parameters as numbers and couldn't understand why they weren't behaving like numbers. Because they weren't numbers! And wonderfully, this will now be caught at compile time; not runtime. Yay!
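To illustrate the kind of break (and catch) involved, here's a sketch; the interface names are mine, invented for the example:

import { match } from 'react-router';

interface GoodParams {
  id?: string; // fine: react-router only ever produces string values
}

interface BadParams {
  id: number; // previously allowed; never actually behaved as a number
}

declare const good: match<GoodParams>; // compiles
// declare const bad: match<BadParams>; // now a compile error: number is not assignable to string | undefined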

· 3 min read

This post shows how you can use TypeScript with webpack alias to move away from using relative paths in your import statements.

Long relative paths

I write a lot of TypeScript. Because I like modularity, I split up my codebases into discrete modules and import from them as necessary.

Take a look at this import:

import * as utils from '../../../../../../../shared/utils';

Now take a look at this import:

import * as utils from 'shared/utils';

Which do you prefer? If the answer was "the first" then read no further. You have all you need, go forth and be happy. If the answer was "the second" then stick around; I can help!

TypeScript

There's been a solution for this in TypeScript-land for some time. You can read the detail in the "path mapping" docs here.

Let's take a slightly simpler example; we have a folder structure that looks like this:

projectRoot
├── components
│   └── page.tsx (imports '../shared/utils')
├── shared
│   ├── folder1
│   └── folder2
│       └── utils.ts
└── tsconfig.json

We would like page.tsx to import 'shared/utils' instead of '../shared/utils'. We can, if we augment our tsconfig.json with the following properties:

{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "components/*": ["components/*"],
      "shared/*": ["shared/*"]
    }
  }
}

Then we can use option 2. We can happily write:

import * as utils from 'shared/utils';

My code compiles, yay.... Ship it!

Let's not get over-excited. Actually, we're only part-way there; you can compile this code with the TypeScript compiler.... But is that enough?

I bundle my TypeScript with ts-loader and webpack. If I try and use my new exciting import statement above with my build system then disappointment is in my future. webpack will be all like "import whuuuuuuuut?"

You see, webpack doesn't know what we told the TypeScript compiler in the tsconfig.json. Why would it? It was our little secret.

webpack resolve.alias to the rescue!

This same functionality has existed in webpack for a long time; actually much longer than it has existed in TypeScript. It's the resolve.alias functionality.

So, looking at that I should be able to augment my webpack.config.js like so:

const path = require('path');

module.exports = {
  //...
  resolve: {
    alias: {
      components: path.resolve(process.cwd(), 'components/'),
      shared: path.resolve(process.cwd(), 'shared/'),
    },
  },
};

And now both webpack and TypeScript are up to speed with how to resolve modules.

DRY with the tsconfig-paths-webpack-plugin

When I look at the tsconfig.json and the webpack.config.js something occurs to me: I don't like to repeat myself. As well as that, I don't like to repeat myself. It's so... Repetitive.

The declarations you make in the tsconfig.json are re-stated in the webpack.config.js. Who wants to maintain two sets of code where one would do? Not me.

Fortunately, you don't have to. There's the tsconfig-paths-webpack-plugin for webpack which will do the job for you. You can replace your verbose resolve.alias with this:

const TsconfigPathsPlugin = require('tsconfig-paths-webpack-plugin');

module.exports = {
  //...
  resolve: {
    plugins: [
      new TsconfigPathsPlugin({
        /* configFile: "./path/to/tsconfig.json" */
      }),
    ],
  },
};

This does the hard graft of reading your tsconfig.json and translating path mappings into webpack aliases. From this point forward, you need only edit the tsconfig.json and everything else will just work.

Thanks to Jonas Kello, author of the plugin; it's tremendous! Thanks also to Sean Larkin and Stanislav Panferov (of awesome-typescript-loader) who together worked on the original plugin that I understand the tsconfig-paths-webpack-plugin is based on. Great work!

· 10 min read

Most applications I write have some need for authentication and perhaps authorisation too. In fact, most apps most people write fall into that bracket. Here's the thing: Auth done well is a *big* chunk of work. And the minute you start thinking about that you almost invariably lose focus on the thing you actually want to build and ship.

So this Christmas I decided it was time to take a look into offloading that particular problem onto someone else. I knew there were third parties who provided Auth-As-A-Service - time to give them a whirl. On the recommendation of a friend, I made Auth0 my first port of call. Lest you be expecting a full breakdown of the various players in this space, let me stop you now; I liked Auth0 so much I strayed no further. Auth0 kicks AAAS. (I'm so sorry)

What I wanted to build

My criteria for "auth success" was this:

  • I want to build a SPA, specifically a React SPA. Ideally, I shouldn't need a back end of my own at all
  • I want to use TypeScript on my client.

But, for when I do implement a back end:

  • I want that to be able to use the client side's Auth tokens to allow access to Auth routes on my server.
  • I want to be able to identify the user, given the token, to provide targeted data
  • Oh, and I want to use .NET Core 2 for my server.

And in achieving all of this I want to add minimal code to my app. Not War and Peace. My code should remain focused on doing what it does.

Boil a Plate

I ended up with unqualified ticks for all my criteria, but it took some work to find out. I will say that Auth0 do travel the extra mile in terms of getting you up and running. When you create a new Client in Auth0 you're given the option to download a quick start using the technology of your choice.

This was a massive plus for me. I took the quickstart provided and ran with it to get me to the point of meeting my own criteria. You can use this boilerplate for your own ends. Herewith, a walkthrough:

The Walkthrough

Fork and clone the repo at this location: https://github.com/johnnyreilly/auth0-react-typescript-asp-net-core.

What have we got? 2 folders: ClientApp contains the React app, Web contains the ASP.NET Core app. Now we need to get set up with Auth0 and customise our config.

Setup Auth0

Here's how to get the app set up with Auth0; you're going to need to sign up for a (free) Auth0 account. Then log in to Auth0 and go to the management portal.

Client

  • Create a Client with the name of your choice and use the Single Page Web Applications template.
  • From the new Client Settings page take the Domain and Client ID and update the similarly named properties in the appsettings.Development.json and appsettings.Production.json files with these settings.
  • To the Allowed Callback URLs setting add the URLs: http://localhost:3000/callback,http://localhost:5000/callback - the first of these facilitates running in Debug mode, the second in Production mode. If you were to deploy this you'd need to add other callback URLs in here too.

API

  • Create an API with the name of your choice (I recommend the same as the Client to avoid confusion), an identifier which can be anything you like; I like to use the URL of my app but it's your call.
  • From the new API Settings page take the Identifier and update the Audience property in the appsettings.Development.json and appsettings.Production.json files with that value (see the sketch below).
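For orientation, both appsettings files end up shaped like this; the property names are inferred from the Startup.cs and UserController.cs we'll see later, and the values are placeholders for your own settings:

appsettings.Development.json

{
  "Auth0": {
    "Domain": "your-tenant.auth0.com",
    "ClientId": "your-client-id",
    "Audience": "your-api-identifier"
  }
}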

Running the App

Production build

Build the client app with yarn build in the ClientApp folder. (Don't forget to yarn install first.) Then, in the Web folder dotnet restore, dotnet run and open your browser to http://localhost:5000

Debugging

Run the client app using webpack-dev-server using yarn start in the ClientApp folder. Fire up VS Code in the root of the repo and hit F5 to debug the server. Then open your browser to http://localhost:3000

The Tour

When you fire up the app you're presented with a "you are not logged in!" message and the option to login. Do it; it'll take you to the Auth0 "lock" screen where you can sign up / login. Once you do that you'll be asked to confirm access:

All this is powered by Auth0's auth0-js npm package. (Excellent type definition files are available from Definitely Typed; I'm using the @types/auth0-js package DT publishes.) Usage is super simple; it exposes an authorize method that, when called, triggers the Auth0 lock screen. Once you've "okayed" you'll be taken back to the app, which will use the parseHash method to extract the access token that Auth0 has provided. Take a look at how our authStore makes use of auth0-js (don't be scared; it uses mobx - but you could use anything):

authStore.ts

import { Auth0UserProfile, WebAuth } from 'auth0-js';
import { action, computed, observable, runInAction } from 'mobx';
import { IAuth0Config } from '../../config';
import { StorageFacade } from '../storageFacade';

interface IStorageToken {
  accessToken: string;
  idToken: string;
  expiresAt: number;
}

const STORAGE_TOKEN = 'storage_token';

export class AuthStore {
  @observable.ref auth0: WebAuth;
  @observable.ref userProfile: Auth0UserProfile;
  @observable.ref token: IStorageToken;

  constructor(config: IAuth0Config, private storage: StorageFacade) {
    this.auth0 = new WebAuth({
      domain: config.domain,
      clientID: config.clientId,
      redirectUri: config.redirectUri,
      audience: config.audience,
      responseType: 'token id_token',
      scope: 'openid email profile do:admin:thing', // the do:admin:thing scope is custom and defined in the scopes section of our API in the Auth0 dashboard
    });
  }

  initialise() {
    const token = this.parseToken(this.storage.getItem(STORAGE_TOKEN));
    if (token) {
      this.setSession(token);
    }
    this.storage.addEventListener(this.onStorageChanged);
  }

  parseToken(tokenString: string) {
    const token = JSON.parse(tokenString || '{}');
    return token;
  }

  onStorageChanged = (event: StorageEvent) => {
    if (event.key === STORAGE_TOKEN) {
      this.setSession(this.parseToken(event.newValue));
    }
  };

  @computed get isAuthenticated() {
    // Check whether the current time is past the
    // access token's expiry time
    return this.token && new Date().getTime() < this.token.expiresAt;
  }

  login = () => {
    this.auth0.authorize();
  };

  handleAuthentication = () => {
    this.auth0.parseHash((err, authResult) => {
      if (authResult && authResult.accessToken && authResult.idToken) {
        const token = {
          accessToken: authResult.accessToken,
          idToken: authResult.idToken,
          // Set the time that the access token will expire at
          expiresAt: authResult.expiresIn * 1000 + new Date().getTime(),
        };

        this.setSession(token);
      } else if (err) {
        // tslint:disable-next-line:no-console
        console.log(err);
        alert(`Error: ${err.error}. Check the console for further details.`);
      }
    });
  };

  @action
  setSession(token: IStorageToken) {
    this.token = token;
    this.storage.setItem(STORAGE_TOKEN, JSON.stringify(token));
  }

  getAccessToken = () => {
    const accessToken = this.token.accessToken;
    if (!accessToken) {
      throw new Error('No access token found');
    }
    return accessToken;
  };

  @action
  loadProfile = async () => {
    const accessToken = this.token.accessToken;
    if (!accessToken) {
      return;
    }

    this.auth0.client.userInfo(accessToken, (err, profile) => {
      if (err) {
        throw err;
      }

      if (profile) {
        runInAction(() => (this.userProfile = profile));
        return profile;
      }

      return undefined;
    });
  };

  @action
  logout = () => {
    // Clear access token and ID token from local storage
    this.storage.removeItem(STORAGE_TOKEN);

    this.token = null;
    this.userProfile = null;
  };
}

Once you're logged in, the app offers you more in the way of navigation options. A "Profile" screen shows you the details your React app has retrieved from Auth0 about you. This is backed by the client.userInfo method on auth0-js. There's also a "Ping" screen which is where your React app talks to your ASP.NET Core server. The screenshot below illustrates the result of hitting the "Get Private Data" button:

The "Get Server to Retrieve Profile Data" button is interesting as it illustrates that the server can get access to your profile data as well. There's nothing insecure here; it gets the details using the access token retrieved from Auth0 by the ClientApp and passed to the server. It's the API we set up in Auth0 that is in play here. The app uses the Domain and the access token to talk to Auth0 like so:

UserController.cs

// Retrieve the access_token claim which we saved in the OnTokenValidated event
var accessToken = User.Claims.FirstOrDefault(c => c.Type == "access_token").Value;

// If we have an access_token, then retrieve the user's information
if (!string.IsNullOrEmpty(accessToken))
{
    var domain = _config["Auth0:Domain"];
    var apiClient = new AuthenticationApiClient(domain);
    var userInfo = await apiClient.GetUserInfoAsync(accessToken);

    return Ok(userInfo);
}

We can also access the sub claim, which uniquely identifies the user:

UserController.cs

// We're not doing anything with this, but hey! It's useful to know where the user id lives
var userId = User.Claims.FirstOrDefault(c => c.Type == System.Security.Claims.ClaimTypes.NameIdentifier).Value; // our userId is the sub value

The reason our ASP.NET Core app works with Auth0 and that we have access to the access token here in the first place is because of our startup code:

Startup.cs

public void ConfigureServices(IServiceCollection services)
{
    var domain = $"https://{Configuration["Auth0:Domain"]}/";
    services.AddAuthentication(options =>
    {
        options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    }).AddJwtBearer(options =>
    {
        options.Authority = domain;
        options.Audience = Configuration["Auth0:Audience"];
        options.Events = new JwtBearerEvents
        {
            OnTokenValidated = context =>
            {
                if (context.SecurityToken is JwtSecurityToken token)
                {
                    if (context.Principal.Identity is ClaimsIdentity identity)
                    {
                        identity.AddClaim(new Claim("access_token", token.RawData));
                    }
                }

                return Task.FromResult(0);
            }
        };
    });

    // ....
}

Authorization

We're pretty much done now; just one magic button to investigate: "Get Admin Data". If you presently try and access the admin data you'll get a 403 Forbidden. It's forbidden because that endpoint relies on the "do:admin:thing" scope in our claims:

UserController.cs

[Authorize(Scopes.DoAdminThing)]
[HttpGet("api/userDoAdminThing")]
public IActionResult GetUserDoAdminThing()
{
    return Ok("Admin endpoint");
}

Scopes.cs

public static class Scopes
{
    // the do:admin:thing scope is custom and defined in the scopes section of our API in the Auth0 dashboard
    public const string DoAdminThing = "do:admin:thing";
}

This is wired up in our ASP.NET Core app like so:

Startup.cs

services.AddAuthorization(options =>
{
    options.AddPolicy(Scopes.DoAdminThing, policy => policy.Requirements.Add(new HasScopeRequirement(Scopes.DoAdminThing, domain)));
});

// register the scope authorization handler
services.AddSingleton<IAuthorizationHandler, HasScopeHandler>();

HasScopeHandler.cs

public class HasScopeHandler : AuthorizationHandler<HasScopeRequirement>
{
    protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, HasScopeRequirement requirement)
    {
        // If user does not have the scope claim, get out of here
        if (!context.User.HasClaim(c => c.Type == "scope" && c.Issuer == requirement.Issuer))
            return Task.CompletedTask;

        // Split the scopes string into an array
        var scopes = context.User.FindFirst(c => c.Type == "scope" && c.Issuer == requirement.Issuer).Value.Split(' ');

        // Succeed if the scope array contains the required scope
        if (scopes.Any(s => s == requirement.Scope))
            context.Succeed(requirement);

        return Task.CompletedTask;
    }
}

The reason we're 403ing at present is that when our HasScopeHandler executes, requirement.Scope has the value of "do:admin:thing" and our scopes do not contain that value. To add it, go to your API in the Auth0 management console and add it:

Note that you can control how this scope is acquired using "Rules" in the Auth0 management portal.

You won't be able to access the admin endpoint yet because you're still rocking with the old access token; pre-newly-added scope. But when you next login to Auth0 you'll see a prompt like this:

Which demonstrates that you're being granted an extra scope. With your new shiny access token you can now access the oh-so-secret Admin endpoint.

I had some more questions about Auth0 as I'm still new to it myself. To see my question (and the very helpful answer!) go here: https://community.auth0.com/questions/13786/get-user-data-server-side-what-is-a-good-approach

· 3 min read

2017 is drawing to a close, and it's been a big, big year in webpack-land. It's been a big year for ts-loader too. At the start of the year v1.3.3 was the latest version available, officially supporting webpack 1. (Old school!) We end the year with ts-loader sitting pretty at v3.2.0 and supporting webpack 2 and 3.

Many releases were shipped and that was down to a whole bunch of folk. People helped out with bug fixes, features, advice and docs improvements. All of these help. ts-loader wouldn't be where it is without you, so thanks to everyone that helped out - you rock!

I'm really grateful to all of you. Thanks so much! (Apologies if I've missed anyone out - I know there are more of you still.)

fork-ts-checker-webpack-plugin build speed improvements

Alongside others' direct contributions to ts-loader, other projects improved the experience of using ts-loader. Piotr Oleś dropped his fork-ts-checker-webpack-plugin this year, which nicely increased build speed when used with ts-loader.

That opened up the possibility of adding HappyPack support. I had the good fortune to work with webpack's Tobias Koppers and ExtraHop's Alex Birmingham on improving TypeScript build speed further.

So what does the future hold?

ts-loader 4.0 (Live webpack or Die Hard)

The web marches on and webpack gallops alongside. Here's what's in the pipeline for ts-loader in 2018:

Start using the new watch API

A new watch API is being made available in the TypeScript API. We have a PR from the amazing Sheetal Nandi which adds support to ts-loader. Given that's quite a big PR we want to merge that before anything else lands. The watch API is still being finalised but once it lands in TypeScript we'll look to merge the PR and ship a new version of ts-loader.

Drop custom module resolution

Historically ts-loader has had its own module resolution mechanism in place. We're going to look to move to use the TypeScript mechanism instead. The old module resolution will be deprecated but will remain available behind a flag for a time. In future we'll look to drop the old mechanism entirely.

Drop support for TypeScript 2.3 and below

The codebase can be made simpler if we drop support for older versions of TypeScript so that's what we plan to do with our next breaking changes release.

webpack v4 is in alpha now

If any changes need to happen to ts-loader to support webpack 4 then they will be. Personally I'm planning to help out with fork-ts-checker-webpack-plugin as there will likely be some changes required there.

contextAsConfigBasePath will be replaced with a context option

The option that landed in the last month doesn't quite achieve the aims of the original PR's author Christian Tinauer. Consequently it's going to be replaced with a new option. This is queued up and ready to go here.

reportFiles option to be added

Michel Rasschaert is presently working on adding a reportFiles option to ts-loader. You can see the PR in progress here.
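To give a flavour of the option under discussion (the final shape may change before the PR lands), the idea is that errors are only reported for files matching the globs you supply in the ts-loader options:

options: {
  // only report errors for files under src; ignore everything else
  reportFiles: ['src/**/*.{ts,tsx}'],
},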

Merry Christmas!

You can expect to see the first releases of ts-loader 4.0 in 2018. In the meantime, I'd like to wish you Merry Christmas and a Happy New Year! And once more, thanks and thanks again to all you generous people who help build ts-loader. You're wonderful and so I'm glad you do what you do... joyeux Noel!

· 4 min read

So, there you sit, conflicted. You've got a lovely build setup; it's a thing of beauty. Precious, polished like a diamond, sharpened like a circular saw. There at the core of your carefully crafted setup sits webpack. Heaving, mysterious... powerful.

There's more. Not only are you sold on webpack, you're all in TypeScript too. But now you've heard tell of "Progressive Web Applications" and "Service Workers".... And you want to be dealt in. You want to build web apps that work offline. It can't work can it? Your build setup's going to be like the creature in the episode where they've taken one too many jumps and it's gone into the foetal position.

So this is the plan kids. Let's take a simple TypeScript, webpack setup and make it a PWA. Like Victoria Wood said...

Let's Do It Tonight

How to begin? Well first comes the plagiarism; here's a simple TypeScript webpack setup. Rob it. Stick a gun to its head and order it onto your hard drive. yarn install to pick up your dependencies and then yarn start to see what you've got. Something like this:

Beautiful right? And if we yarn build we end up with a simple output:

To test what we've built out we want to use a simple web server to serve up the dist folder. I've got the npm package http-server installed globally for just such an eventuality. So let's http-server ./dist and I'm once again looking at our simple app; it looks exactly the same as when I yarn start. Smashing. What would we see if we were offline? Well thanks to the magic of Chrome DevTools we can find out. Offline and refresh our browser...

Not very user friendly. Once we're done, we should be able to refresh and still see our app.

Work(box) It

Workbox is a project that makes the setting up of Service Workers (aka the magic that powers PWAs) easier. It supports webpack use cases through the workbox-webpack-plugin; so let's give it a whirl. Incidentally, there's a cracking example on the Workbox site.

yarn add workbox-webpack-plugin --dev adds the plugin to our project. To make use of it, punt your way over to the webpack.production.config.js and add an entry for the plugin. We also need to set the hash parameter of the html-webpack-plugin to be false; if it's true it'll cause problems for the ServiceWorker.

const WorkboxPlugin = require('workbox-webpack-plugin');

//...

module.exports = {
  //...

  plugins: [
    //...

    new HtmlWebpackPlugin({
      hash: false,
      inject: true,
      template: 'src/index.html',
      minify: {
        removeComments: true,
        collapseWhitespace: true,
        removeRedundantAttributes: true,
        useShortDoctype: true,
        removeEmptyAttributes: true,
        removeStyleLinkTypeAttributes: true,
        keepClosingSlash: true,
        minifyJS: true,
        minifyCSS: true,
        minifyURLs: true,
      },
    }),

    new WorkboxPlugin({
      // we want our service worker to cache the dist directory
      globDirectory: 'dist',
      // these are the sorts of files we want to cache
      globPatterns: ['**/*.{html,js,css,png,svg,jpg,gif,json}'],
      // this is where we want our ServiceWorker to be created
      swDest: path.resolve('dist', 'sw.js'),
      // these options encourage the ServiceWorkers to get in there fast
      // and not allow any straggling "old" SWs to hang around
      clientsClaim: true,
      skipWaiting: true,
    }),
  ],

  //...
};

With this in place, yarn build will generate a ServiceWorker. Now to alter our code to register it. Open up index.tsx and add this to the end of the file:

if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/sw.js')
      .then((registration) => {
        // tslint:disable:no-console
        console.log('SW registered: ', registration);
      })
      .catch((registrationError) => {
        console.log('SW registration failed: ', registrationError);
      });
  });
}

Put it together and...

What Have We Got?

Let's yarn build again.

Oooohh look! A service worker is with us. Does it work? Let's find out... http-server ./dist Browse to http://localhost:8080 and let's have a look at the console.

Looks very exciting. So now the test; let's go offline and refresh:

You are looking at the 200s of success. You're now running with webpack and TypeScript and you have built a Progressive Web Application. Feel good about life.

· 4 min read

A funny thing happened on the way to the registry the other day. Something changed in an npm package I was using and confusion arose. You can read my unfiltered confusion here but here's the slightly clearer explanation.

The TL;DR

When modules are imported, your loader decides which module format it wants to use: CommonJS, AMD etc. It's important that the export is of the same "shape" regardless of the module format, for 2 reasons:

  1. You want to be able to reliably use the module regardless of the choice that your loader has made for which export to use.
  2. Because when it comes to writing type definition files for modules, there is support for a single external definition. Not one for each module format.

The DR

Once upon a time we decided to use big.js in our project. It's popular and my old friend Steve Ognibene apparently originally wrote the type definitions which can be found here. Then the definitions were updated by Miika Hänninen. And then there was pain.

UMD / CommonJS **and** Global exports oh my!

My usage code was as simple as this:

import * as BigJs from 'big.js';
const lookABigJs = new BigJs(1);

If you execute it in a browser it works. It makes me a Big. However the TypeScript compiler is **not** happy. No siree. Nope. It's bellowing at me:

[ts] Cannot use 'new' with an expression whose type lacks a call or construct signature.

So I think: "Huh! I guess Miika just missed something off when he updated the definition files. No bother. I'll fix it." I take a look at how big.js exposes itself to the outside world. At the time, thusly:

// AMD.
if (typeof define === 'function' && define.amd) {
  define(function () {
    return Big;
  });

// Node and other CommonJS-like environments that support module.exports.
} else if (typeof module !== 'undefined' && module.exports) {
  module.exports = Big;
  module.exports.Big = Big;

// Browser.
} else {
  global.Big = Big;
}

Now, we were using webpack as our script bundler / loader. webpack is supersmart; it can take all kinds of module formats. So although it's more famous for supporting CommonJS, it can roll with AMD. That's exactly what's happening here. When webpack encounters the above code, it goes with the AMD export. So at runtime, import * as BigJs from 'big.js'; lands up resolving to the return Big; above.

Now this turns out to be super-relevant. I took a look at the relevant portion of the definition file and found this:

export const Big: BigConstructor;

Which tells me that Big is being exported as a subproperty of the module. That makes sense; that lines up with the module.exports.Big = Big; statement in the big.js source code. There's a "gotcha" coming; can you guess what it is?

The problem is that our type definition is not exposing Big as a default export. So even though it's there; TypeScript won't let us use it. What's killing us further is that webpack is loading the AMD export which doesn't have Big as a subproperty of the module. It only has it as a default.

Kitson Kelly expressed the problem well when he said:

there is a different shape depending on which loader is being used and I am not sure that makes a huge amount of sense. The AMD shape is different than the CommonJS shape. While that is technically possible, that feels like that is an issue.

One Definition to Rule Them All

He's right; it is an issue. From a TypeScript perspective there is no way to write a definition file that allows for different module "shapes" depending upon the module type. If you really wanted to do that you're reduced to writing multiple definition files. That's a blind alley anyway; what you want is a module to expose itself with the same "shape" regardless of the module type. What you want is this:

AMD === CommonJS === Global

And that's what we now have! Thanks to Michael McLaughlin, author of big.js, version 4.0 unified the export shape of the package. Miika Hänninen submitted another PR which fixed up the type definitions. And once again the world is a beautiful place!
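To illustrate the happy ending, consumption now looks something like this (a sketch; depending on your tsconfig you may need allowSyntheticDefaultImports or similar for the default-style import):

import Big from 'big.js';

// the same shape whichever flavour your loader picks: AMD, CommonJS or global
const lookABigJs = new Big(1);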

· 7 min read

This post also featured as a webpack Medium publication.

If you're like me then you'll like TypeScript and you'll like module bundling with webpack. You may also like speedy builds. That's completely understandable. The fact of the matter is, you sacrifice a bit of build speed to have webpack in the mix. Wouldn't it be great if we could even up the difference?

I'm the primary maintainer of ts-loader, a TypeScript loader for webpack. Just recently a couple of PRs were submitted that said, in other words: ts-loader is like this:

But it could be like this:

Apologies for the image quality above; there appear to be no high quality pictures out there of KITT in Super Pursuit Mode for me to defame with Garan Jenkin's atrocious puns.

fork-ts-checker-webpack-plugin

"Faster type checking with forked process" read the enticing name of the issue. It turned out to be Piotr Oleś (@OlesDev) telling the world about his beautiful creation. He'd put together a mighty fine plugin that can be used alongside ts-loader called the fork-ts-checker-webpack-plugin. The name is a bit of a mouthful but the purpose is mouth-watering. To quote the README, it is a:

Webpack plugin that runs typescript type checker on a separate process.

What does this mean and how does this fit with ts-loader? Well, ts-loader does 2 jobs:

  1. It transpiles your TypeScript into JavaScript and hands it off to webpack
  2. It collects any TypeScript compilation errors and reports them to webpack

What this plugin does is say, "forget about #2 - we've got this." It removes the responsibility for type checking from ts-loader, so the only work ts-loader does is transpilation. In the meantime, the all-important type checking is still happening; there would be little reason to recommend this approach otherwise. The difference is that fork-ts-checker-webpack-plugin does the heavy lifting in a separate process. This provides a nice performance boost to your workflow. ts-loader is doing less and that's a good thing.

The approach used here is similar to that employed by awesome-typescript-loader. ATL is another TypeScript loader for webpack by the excellent Stanislav Panferov. ATL also has a technique for performing typechecking in a forked process. fork-ts-checker-webpack-plugin was an effort by Piotr to implement something similar but with improved incremental build performance.

How do we use it? Add fork-ts-checker-webpack-plugin as a devDependency of your project and then amend the webpack.config.js to set ts-loader into transpileOnly mode and drop the plugin into the mix:

var ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');

var webpackConfig = {
  // other config...
  context: __dirname, // to automatically find tsconfig.json
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        loader: 'ts-loader',
        options: {
          // disable type checker - we will use it in fork plugin
          transpileOnly: true,
        },
      },
    ],
  },
  plugins: [new ForkTsCheckerWebpackPlugin()],
};

If you'd like to see an example of how to use the plugin then take a look at a simple example and a more involved one.

HappyPack

Not so long ago I didn't know what ~~happyness~~ HappyPack was. "Happiness in the form of faster webpack build times." That's what it is.

HappyPack makes webpack builds faster by allowing you to transform multiple files in parallel.

It does this by spinning up multiple threads, each with their own loaders inside. We wanted to do this with ts-loader; to have multiple instances of ts-loader running. Work can then be divided up across these separate loaders. Isn't multi-threading great?

ts-loader did not initially play nicely with HappyPack; essentially this is because ts-loader touches parts of webpack's API that HappyPack replaces. The entirely wonderful Artem Kozlov submitted a PR which added HappyPack support to ts-loader. Support essentially amounts to switching ts-loader to run in transpileOnly mode and ensuring that there is no attempt to talk to parts of the webpack API that HappyPack removes.

It would be hard to recommend using HappyPack as is because, as with transpileOnly mode, you lose all typechecking. Where it becomes worthwhile is where it is combined with the fork-ts-checker-webpack-plugin so you keep the typechecking.

Enough with the chitter chatter; how can we achieve this? Add HappyPack as a devDependency of your project and then amend the webpack.config.js as follows:

var HappyPack = require('happypack');
var ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');

module.exports = {
  // other config...
  context: __dirname, // to automatically find tsconfig.json
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        exclude: /node_modules/,
        loader: 'happypack/loader?id=ts',
      },
    ],
  },
  plugins: [
    new HappyPack({
      id: 'ts',
      threads: 2,
      loaders: [
        {
          path: 'ts-loader',
          query: { happyPackMode: true },
        },
      ],
    }),
    new ForkTsCheckerWebpackPlugin({ checkSyntacticErrors: true }),
  ],
};

Note that the ts-loader options are now configured via the HappyPack query and that we're running ts-loader with the happyPackMode option set.

There's one other thing to note which is important; we're now passing the checkSyntacticErrors option to the fork plugin. This ensures that the plugin checks for both syntactic errors (eg const array = [{} {}];) and semantic errors (eg const x: number = '1';). By default the plugin only checks for semantic errors. This is because when ts-loader is used with transpileOnly set, ts-loader will still report syntactic errors. But when used in happyPackMode it does not.

If you'd like to see an example of how to use HappyPack then once again we have a simple example and a more involved one.

thread-loader + cache-loader

You might have some reservations about using HappyPack. First of all, the quirky configuration required makes your webpack config rather less comprehensible. Also, HappyPack is not officially blessed by webpack. It is a side project developed externally from webpack and there are no guarantees that new versions of webpack won't break it. Neither of these are reasons not to use HappyPack but they are things to bear in mind.

What if there were a way to parallelise our builds which dealt with these issues? Well, there is! By using thread-loader and cache-loader in combination you can both feel happy that you're using an official webpack workflow and you can have a config that's less confusing.

What would that config look like? This:

var ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');

module.exports = {
  // other config...
  context: __dirname, // to automatically find tsconfig.json
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        use: [
          { loader: 'cache-loader' },
          {
            loader: 'thread-loader',
            options: {
              // there should be 1 cpu for the fork-ts-checker-webpack-plugin
              workers: require('os').cpus().length - 1,
            },
          },
          {
            loader: 'ts-loader',
            options: {
              happyPackMode: true, // IMPORTANT! use happyPackMode to speed up compilation and reduce errors reported to webpack
            },
          },
        ],
      },
    ],
  },
  plugins: [new ForkTsCheckerWebpackPlugin({ checkSyntacticErrors: true })],
};

As you can see the configuration is much cleaner than with HappyPack. Interestingly ts-loader still needs to run in "happyPackMode" and that's because thread-loader is essentially behaving in the same fashion as with HappyPack and so ts-loader needs to behave in the same way. Probably ts-loader should have a more generic flag name than "happyPackMode". (Famously, naming things is hard; so if you've a good idea, tell me!)

These loaders are new and so tread carefully. My own experiences have been pretty positive but your mileage may vary. Do note that, as with HappyPack, the thread-loader is highly configurable.

If you'd like to see an example of how to use thread-loader and cache-loader then once again we have a simple example and a more involved one.

All This Could Be Yours...

Wow! It looks like we can cut our build time by 4 minutes! #webpack @typescriptlang // cc @johnny_reilly pic.twitter.com/gjvy9SLBAT

— Donald Pipowitch (@PipoPeperoni) June 23, 2017

In this post we're improving build speeds with TypeScript and webpack in 3 ways:

  1. fork-ts-checker-webpack-plugin - with this plugin in play, ts-loader only performs transpilation. ts-loader is doing less, so the build is faster.
  2. HappyPack - with HappyPack in the mix, the build is parallelised. That parallelisation means the build is faster.
  3. thread-loader / cache-loader - with thread-loader and cache-loader, again the build is parallelised and the build is faster.

· 6 min read

One of the most exciting features to ship with TypeScript 2.4 was support for the dynamic import expression. To quote the release blog post:

Dynamic import expressions are a new feature in ECMAScript that allows you to asynchronously request a module at any arbitrary point in your program. These modules come back as Promises of the module itself, and can be await-ed in an async function, or can be given a callback with .then.

...

Many bundlers have support for automatically splitting output bundles (a.k.a. “code splitting”) based on these import() expressions, so consider using this new feature with the esnext module target. Note that this feature won’t work with the es2015 module target, since the feature is anticipated for ES2018 or later.

As the post makes clear, this adds support for a very bleeding edge ECMAScript feature. This is not fully standardised yet; it's currently at stage 3 on the TC39 proposals list. That means it's at the Candidate stage and is unlikely to change further. If you'd like to read more about it then take a look at the official proposal here.

Whilst this is super-new, we are still able to use this feature. We just have to jump through a few hoops first.
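To give a feel for what we're enabling, here's a sketch of the syntax; './greeter' is a hypothetical module with a sayHello export:

// a dynamic import resolves to a Promise of the module,
// so it can be await-ed in an async function
async function greet() {
  const greeter = await import('./greeter');
  greeter.sayHello();
}

// or given a callback with .then
import('./greeter').then((greeter) => greeter.sayHello());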

TypeScript Setup

First of all, you need to install TypeScript 2.4. With that in place you need to make some adjustments to your tsconfig.json in order that the relevant compiler switches are flipped. What do you need? First of all you need to be targeting ECMAScript 2015 as a minimum. That's important specifically because ES2015 contained Promises, which is what dynamic imports produce. The second thing you need is to target the module type of esnext. You're likely targeting es2015 now; esnext is that plus dynamic imports.

Here's a tsconfig.json I made earlier which has the relevant settings set:

{
  "compilerOptions": {
    "allowSyntheticDefaultImports": true,
    "lib": ["dom", "es2015"],
    "target": "es2015",
    "module": "esnext",
    "moduleResolution": "node",
    "noImplicitAny": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "removeComments": false,
    "preserveConstEnums": true,
    "sourceMap": true,
    "skipLibCheck": true
  }
}

Babel Setup

At the time of writing, browser support for dynamic import is non-existent. This will likely be the case for some time but it needn't hold us back. Babel can step in here and compile our super-new JS into JS that will run in our browsers today.

You'll need to decide for yourself how much you want Babel to do for you. In my case I'm targeting old school browsers which don't yet support ES2015. You may not need to. However, the one thing that you'll certainly need is the Syntax Dynamic Import plugin. It's this that allows Babel to process dynamic import statements.

These are the options I'm passing to Babel:

var babelOptions = {
  plugins: ['syntax-dynamic-import'],
  presets: [
    [
      'es2015',
      {
        modules: false,
      },
    ],
  ],
};

You're also going to need something that actually executes the imports. In my case I'm using webpack...

webpack

webpack 2 supports import(). So if you have webpack set up with ts-loader (or awesome-typescript-loader etc), chaining into babel-loader, you should find you have a setup that supports dynamic import. That means a webpack.config.js that looks something like this:

var path = require('path');
var webpack = require('webpack');

var babelOptions = {
  plugins: ['syntax-dynamic-import'],
  presets: [
    [
      'es2015',
      {
        modules: false,
      },
    ],
  ],
};

module.exports = {
  entry: './app.ts',
  output: {
    filename: 'bundle.js',
  },
  module: {
    rules: [
      {
        test: /\.ts(x?)$/,
        exclude: /node_modules/,
        use: [
          {
            loader: 'babel-loader',
            options: babelOptions,
          },
          {
            loader: 'ts-loader',
          },
        ],
      },
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: [
          {
            loader: 'babel-loader',
            options: babelOptions,
          },
        ],
      },
    ],
  },
  resolve: {
    extensions: ['.ts', '.tsx', '.js'],
  },
};

ts-loader example

I'm one of the maintainers of ts-loader which is a TypeScript loader for webpack. When support for dynamic imports landed I wanted to add a test to cover usage of the new syntax with ts-loader.

We have 2 test packs for ts-loader, one of which is our "execution" test pack. It is so named because it works by spinning up webpack with ts-loader and then using karma to execute a set of tests. Each "test" in our execution test pack is actually a mini-project with its own test suite (generally jasmine, but that's entirely configurable). Each comes complete with its own webpack.config.js, karma.conf.js and either a typings.json or package.json for bringing in dependencies. So it's a full test of whether code slung with ts-loader and webpack actually executes when the output is plugged into a browser.

This is the test pack for dynamic imports:

import a from "../src/a";
import b from "../src/b";

describe("app", () => {
  it("a to be 'a' and b to be 'b' (classic)", () => {
    expect(a).toBe("a");
    expect(b).