Saturday, 13 July 2019

Using TypeScript and ESLint with webpack (fork-ts-checker-webpack-plugin new feature!)

The fork-ts-checker-webpack-plugin has, since its inception, performed two classes of checking:

  1. Compilation errors which the TypeScript compiler surfaces
  2. Linting issues which TSLint reports

You may have caught the announcement that TSLint is being deprecated and ESLint is the future of linting in the TypeScript world. This plainly has a bearing on linting in fork-ts-checker-webpack-plugin.

I've been beavering away at adding support for ESLint to the fork-ts-checker-webpack-plugin. I'm happy to say, the plugin now supports ESLint. Do you want to get your arms all around ESLint with fork-ts-checker-webpack-plugin? Read on!

How do you migrate from TSLint to ESLint?

Well, first of all you need the latest and greatest fork-ts-checker-webpack-plugin. Support for ESLint shipped with v1.4.0.

You need to change the options you supply to the plugin in your webpack.config.js to look something like this:

new ForkTsCheckerWebpackPlugin({ eslint: true })
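
For context, here's a minimal sketch of how that might sit in a full webpack.config.js alongside ts-loader (the entry point, paths and loader setup here are illustrative rather than prescriptive):

const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');

module.exports = {
    entry: './src/index.tsx',
    module: {
        rules: [
            {
                test: /\.tsx?$/,
                loader: 'ts-loader',
                // leave type checking (and now linting) to the plugin
                options: { transpileOnly: true }
            }
        ]
    },
    resolve: {
        extensions: ['.ts', '.tsx', '.js']
    },
    plugins: [new ForkTsCheckerWebpackPlugin({ eslint: true })]
};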

You'll also need to add the various ESLint related packages to your package.json:

yarn add eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin --dev

  • eslint: The ESLint linter itself
  • @typescript-eslint/parser: The parser that will allow ESLint to lint TypeScript code
  • @typescript-eslint/eslint-plugin: A plugin that contains ESLint rules that are TypeScript specific

If you want, you can pass options to ESLint using the eslintOptions option as well. These will be passed through to the underlying ESLint CLIEngine when it is instantiated. The supported options are documented here.
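
For instance, something like this (cache and extensions are standard CLIEngine options, picked purely for illustration):

new ForkTsCheckerWebpackPlugin({
    eslint: true,
    eslintOptions: {
        cache: true, // cache lint results to speed up subsequent runs
        extensions: ['.ts', '.tsx'] // file extensions ESLint should look at
    }
})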

Go Configure

Now you're ready to use ESLint; you just need to give it some configuration. Typically, an .eslintrc.js is what you want here.

const path = require('path');
module.exports = {
    parser: '@typescript-eslint/parser', // Specifies the ESLint parser
    plugins: [
        '@typescript-eslint',
    ],
    env: {
        browser: true,
        jest: true
    },
    extends: [
        'plugin:@typescript-eslint/recommended', // Uses the recommended rules from the @typescript-eslint/eslint-plugin
    ],
    parserOptions: {
        project: path.resolve(__dirname, './tsconfig.json'),
        tsconfigRootDir: __dirname,
        ecmaVersion: 2018, // Allows for the parsing of modern ECMAScript features
        sourceType: 'module', // Allows for the use of imports
    },
    rules: {
        // Place to specify ESLint rules. Can be used to overwrite rules specified from the extended configs
        // e.g. "@typescript-eslint/explicit-function-return-type": "off",
        '@typescript-eslint/explicit-function-return-type': 'off',
        '@typescript-eslint/no-unused-vars': 'off',
    },
};

If you're a React person (and I am!) then you'll also need eslint-plugin-react: yarn add eslint-plugin-react --dev. Then enrich your .eslintrc.js a little:

const path = require('path');
module.exports = {
    parser: '@typescript-eslint/parser', // Specifies the ESLint parser
    plugins: [
        '@typescript-eslint',
        'react'
        // 'prettier' commented as we don't want to run prettier through eslint because performance
    ],
    env: {
        browser: true,
        jest: true
    },
    extends: [
        'plugin:@typescript-eslint/recommended', // Uses the recommended rules from the @typescript-eslint/eslint-plugin
        'prettier/@typescript-eslint', // Uses eslint-config-prettier to disable ESLint rules from @typescript-eslint/eslint-plugin that would conflict with prettier
        // 'plugin:react/recommended', // Uses the recommended rules from @eslint-plugin-react
        'prettier/react', // disables react-specific linting rules that conflict with prettier
        // 'plugin:prettier/recommended' // Enables eslint-plugin-prettier and displays prettier errors as ESLint errors. Make sure this is always the last configuration in the extends array.
    ],
    parserOptions: {
        project: path.resolve(__dirname, './tsconfig.json'),
        tsconfigRootDir: __dirname,
        ecmaVersion: 2018, // Allows for the parsing of modern ECMAScript features
        sourceType: 'module', // Allows for the use of imports
        ecmaFeatures: {
            jsx: true // Allows for the parsing of JSX
        }
    },
    rules: {
        // Place to specify ESLint rules. Can be used to overwrite rules specified from the extended configs
        // e.g. "@typescript-eslint/explicit-function-return-type": "off",
        '@typescript-eslint/explicit-function-return-type': 'off',
        '@typescript-eslint/no-unused-vars': 'off',

        // These rules don't add much value, are better covered by TypeScript and good definition files
        'react/no-direct-mutation-state': 'off',
        'react/no-deprecated': 'off',
        'react/no-string-refs': 'off',
        'react/require-render-return': 'off',

        'react/jsx-filename-extension': [
            'warn',
            {
                extensions: ['.jsx', '.tsx']
            }
        ], // also want to use with ".tsx"
        'react/prop-types': 'off' // Is this incompatible with TS props type?
    },
    settings: {
        react: {
            version: 'detect' // Tells eslint-plugin-react to automatically detect the version of React to use
        }
    }
};

You can add Prettier into the mix too. You can see how it would be used in the above code sample. But given the impact that has on performance I wouldn't recommend it; hence it's commented out. There's a good piece by Rob Cooper with more details on setting up Prettier and VS Code with TypeScript and ESLint.

Performance and Power Tools

It's worth noting that support for TypeScript in ESLint is still brand new. As such, the rule of "Make it Work, Make it Right, Make it Fast" applies.... ESLint with TypeScript still has some performance issues which should be ironed out in the fullness of time. You can track them here.

This is important to bear in mind because, when I converted a large codebase over to using ESLint, I discovered that the initial performance of linting was terribly slow. Something that's worth doing right now is identifying which rules are costing you the most time and tweaking your setup based on whether you think they're earning their keep.

The TIMING environment variable can be used to produce a report on the relative performance cost of running each rule. A nice way to plug this into your workflow is to add the cross-env package to your project: yarn add cross-env -D and then add two scripts to your package.json:

"lint": "eslint ./", 
"lint-rule-timings": "cross-env TIMING=1 yarn lint"
  • lint - just runs the linter standalone
  • lint-rule-timings - does the same but with the TIMING environment variable set to 1 so a report will be generated.

I'd advise making use of lint-rule-timings to identify which rules are costing you performance, and then turning rules off as you need to. Remember, different rules have different value.
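
For illustration, the TIMING report lists the longest-running rules and looks something like this (the rule names and numbers below are invented):

Rule                         | Time (ms) | Relative
:----------------------------|----------:|--------:
@typescript-eslint/indent    |   402.103 |    28.5%
react/no-typos               |   205.761 |    14.6%
no-multi-spaces              |    52.472 |     3.7%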

Finally, if you'd like to see how it's done, here's an example of porting from TSLint to ESLint.

Friday, 7 June 2019

TypeScript / webpack - "you down with PnP? Yarn, you know me!"

Yarn PnP is an innovation by the Yarn team designed to speed up module resolution by Node. To quote the (excellent) docs:

Plug’n’Play is an alternative installation strategy unveiled in September 2018...

The way regular installs work is simple: Yarn generates a node_modules directory that Node is then able to consume. In this context, Node doesn’t know the first thing about what a package is: it only reasons in terms of files. “Does this file exist here? No? Let’s look in the parent node_modules then. Does it exist here? Still no? Too bad… parent folder it is!” - and it does this until it matches something that matches one of the possibilities. That’s vastly inefficient.

When you think about it, Yarn knows everything about your dependency tree - it even installs it! So why is Node tasked with locating your packages on the disk? Why don’t we simply query Yarn, and let it tell us where to look for a package X required by a package Y? That’s what Plug’n’Play (abbreviated PnP) is. Instead of generating a node_modules directory and leaving the resolution to Node, we now generate a single .pnp.js file and let Yarn tell us where to find our packages.
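
To make that concrete: once Yarn has generated the .pnp.js file, you can ask it directly where a package lives. Here's a rough illustration using the resolveRequest function exposed by the generated PnP runtime (lodash is just an example dependency):

// run from the root of a PnP-enabled project
const pnp = require('./.pnp.js');
pnp.setup(); // patches Node's resolution to use the PnP data

// "where does lodash live, as required from this file?"
const resolved = pnp.resolveRequest('lodash', __filename);
console.log(resolved); // -> a path into Yarn's cache; no node_modules in sight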

Yarn has been worked on by, amongst others, the excellent Maël Nison. You can hear him talking about it in person in this talk at JSConfEU.

Thanks particularly to Maël's work, it's possible to use Yarn PnP with TypeScript using webpack with ts-loader and fork-ts-checker-webpack-plugin. This post intends to show you just how simple it is to convert a project that uses either to work with Yarn PnP.

Vanilla ts-loader

Say your project is built using standalone ts-loader; i.e. a simple setup where ts-loader handles both transpilation and type checking.

First things first, add this property to your package.json (this is only required if you are using Yarn 1; the setting will be optional starting from v2, where projects will switch to PnP by default):

{
    "installConfig": {
        "pnp": true 
    }
}

Also, because this is webpack, we're going to need to add an extra dependency in the form of pnp-webpack-plugin:

yarn add -D pnp-webpack-plugin

To quote the excellent docs, make the following amends to your webpack.config.js:

const PnpWebpackPlugin = require(`pnp-webpack-plugin`);

module.exports = { 
    module: { 
        rules: [{ 
            test: /\.ts$/, 
            loader: require.resolve('ts-loader'), 
            options: PnpWebpackPlugin.tsLoaderOptions(), 
        }], 
    },
    resolve: { 
        plugins: [ PnpWebpackPlugin, ], 
    },
    resolveLoader: { 
        plugins: [ PnpWebpackPlugin.moduleLoader(module), ],
    },
};

If you have any options you want to pass to ts-loader, just pass them as a parameter to pnp-webpack-plugin's tsLoaderOptions function and it will take care of forwarding them properly. Behind the scenes the tsLoaderOptions function is providing ts-loader with the options necessary to switch into Yarn PnP mode.
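
By way of example, forwarding a regular ts-loader option might look like this (silent is an ordinary ts-loader option, picked purely for illustration):

options: PnpWebpackPlugin.tsLoaderOptions({
    silent: true // any regular ts-loader options go here; the PnP resolver settings are merged in for you
}),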

Congratulations; you now have ts-loader functioning with Yarn PnP support!

fork-ts-checker-webpack-plugin with ts-loader

You may well be using fork-ts-checker-webpack-plugin to handle type checking whilst ts-loader gets on with the transpilation. This workflow is also supported using pnp-webpack-plugin. You'll need to follow the same steps as for the ts-loader setup; it's just the webpack.config.js tweaks that differ.

const PnpWebpackPlugin = require(`pnp-webpack-plugin`);
const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');

module.exports = {
    plugins: [
        new ForkTsCheckerWebpackPlugin(PnpWebpackPlugin.forkTsCheckerOptions({
            useTypescriptIncrementalApi: false, // not possible to use this until: https://github.com/microsoft/TypeScript/issues/31056
        })),
    ],
    module: { 
        rules: [{ 
            test: /\.ts$/, 
            loader: require.resolve('ts-loader'), 
            options: PnpWebpackPlugin.tsLoaderOptions({ transpileOnly: true }), 
        }], 
    },
    resolve: { 
        plugins: [ PnpWebpackPlugin, ], 
    },
    resolveLoader: { 
        plugins: [ PnpWebpackPlugin.moduleLoader(module), ],
    },
};

Again, if you have any options you want to pass to ts-loader, just pass them as a parameter to pnp-webpack-plugin's tsLoaderOptions function. As we're using fork-ts-checker-webpack-plugin, we want to stop ts-loader doing type checking itself, with the transpileOnly: true option.

We're now initialising fork-ts-checker-webpack-plugin with pnp-webpack-plugin's forkTsCheckerOptions function. Behind the scenes the forkTsCheckerOptions function is providing the fork-ts-checker-webpack-plugin with the options necessary to switch into Yarn PnP mode.

And that's it! You now have ts-loader and fork-ts-checker-webpack-plugin functioning with Yarn PnP support!

Living on the Bleeding Edge

Whilst you can happily develop and build using Yarn PnP, it's worth bearing in mind that this is a new approach. As such, there are some rough edges right now.

If you're interested in Yarn PnP, it's worth taking the v2 of Yarn (Berry) for a spin. You can find it here: https://github.com/yarnpkg/berry. It's where most of the Yarn PnP work happens, and it includes zip loading - two birds, one stone!

Because there isn't first class support for Yarn PnP in TypeScript itself yet, you cannot make use of the Watch API through fork-ts-checker-webpack-plugin. (You can read about that issue here)

As you've likely noticed, the webpack configuration required makes for a noisy webpack.config.js. Further to that, VS Code (which, remember, is powered by TypeScript) has no support for Yarn PnP yet and so will present resolution errors to you. If you can ignore the sea of red squigglies all over your source files in the editor and just look at your webpack build, you'll be fine.

There is a tool called PnPify that adds support for PnP to TypeScript (in particular tsc). You can find more information here: https://yarnpkg.github.io/berry/advanced/pnpify. For tsc it would be:


$> yarn pnpify tsc [...]

The gist is that it simulates the existence of node_modules by leveraging the data from the PnP file. As such it's not a perfect fix (pnp-webpack-plugin is a better integration), but it's a very useful tool to have to unblock yourself when using a project that doesn't support it.

PnPify actually allows us to use TypeScript in VSCode with PnP! Its documentation is here: https://yarnpkg.github.io/berry/advanced/pnpify#vscode-support

All of these hindrances should hopefully be resolved in future. Ideally, one day a good developer experience can be the default experience. In the meantime, you can still dev - just be prepared for the rough edges. Here are some useful resources for tracking the future of support:

This last one would be nice because:

  • We'd stop having to patch require
  • We probably wouldn't have to use yarn node if Node itself was able to find the loader somehow (such as if it was listed in the package.json metadata)

Thanks to Maël for his tireless work on Yarn. To my mind Maël is certainly a candidate for the hardest worker in open source. I've been shamelessly borrowing his excellent docs for this post - thanks for writing so excellently Maël!

Thursday, 23 May 2019

TypeScript and high CPU usage - watch don't stare!

I'm one of the maintainers of the fork-ts-checker-webpack-plugin. Hi there!

Recently, various issues have been raised against create-react-app (which uses fork-ts-checker-webpack-plugin) as well as against the plugin itself. They've been related to the level of CPU usage in watch mode on idle; i.e. it's high!

Why High?

Now, under the covers, the fork-ts-checker-webpack-plugin uses the TypeScript watch API.

The marvellous John (not me - another John) did some digging and discovered the root cause came down to the way that the TypeScript watch API watches files:

TS uses internally the fs.watch and fs.watchFile API functions of nodejs for their watch mode. The latter function is even not recommended by nodejs documentation for performance reasons, and urges to use fs.watch instead.

NodeJS doc:

Using fs.watch() is more efficient than fs.watchFile and fs.unwatchFile. fs.watch should be used instead of fs.watchFile and fs.unwatchFile when possible.
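
To illustrate the difference, here's plain Node (nothing TypeScript-specific going on):

const fs = require('fs');

// fs.watchFile polls: it wakes up on an interval and stats the file,
// costing CPU even when nothing has changed
fs.watchFile('example.txt', { interval: 250 }, (curr, prev) => {
  console.log(`changed (poll): ${curr.mtime}`);
});

// fs.watch subscribes to native file system events: the OS tells us
// when something happened, so the idle cost is minimal
fs.watch('example.txt', (eventType, filename) => {
  console.log(`changed (event): ${eventType} ${filename}`);
});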

"there is another"

John also found that there are other file watching behaviours offered by TypeScript. What's more, the file watching behaviour is configurable with an environment variable. That's right, if an environment variable called TSC_WATCHFILE is set, it controls the file watching approach used. Big news!

John did some rough benchmarking of the performance of the different options that can be set, on his PC running 64-bit Linux. Here's how it came out:

Value                                      CPU usage on idle
TS default (TSC_WATCHFILE not set)         7.4%
UseFsEventsWithFallbackDynamicPolling      0.2%
UseFsEventsOnParentDirectory               0.2%
PriorityPollingInterval                    6.2%
DynamicPriorityPolling                     0.5%
UseFsEvents                                0.2%

As you can see, the default performs poorly. On the other hand, an option like UseFsEventsWithFallbackDynamicPolling is comparatively greased lightning.

workaround!

To get this better experience into your world now, you could just set an environment variable on your machine. However, that doesn't scale; let's instead look at introducing the environment variable into your project explicitly.

We're going to do this in a cross platform way using cross-env. This is a mighty useful utility by Kent C Dodds which allows you to set environment variables in a way that will work on Windows, Mac and Linux. Imagine it as the jQuery of the environment variables world :-)

Let's add it as a devDependency:

yarn add -D cross-env

Then take a look at your package.json. You've probably got a start script that looks something like this:

"start": "webpack-dev-server --progress --color --mode development --config webpack.config.development.js",

Or if you're a create-react-app user maybe this:

"start": "react-scripts start",

Prefix your start script with cross-env TSC_WATCHFILE=UseFsEventsWithFallbackDynamicPolling. This will, when run, initialise an environment variable called TSC_WATCHFILE with the value UseFsEventsWithFallbackDynamicPolling. Then it will start your development server as it did before. When TypeScript is fired up by webpack it will see this environment variable and use it to configure the file watching behaviour to one of the more performant options.

So, in the case of a create-react-app user, your finished start script would look like this:

"start": "cross-env TSC_WATCHFILE=UseFsEventsWithFallbackDynamicPolling react-scripts start",

The Future

There's a possibility that the default watch behaviour may change in TypeScript in future. It's currently under discussion; you can read more here.

Saturday, 27 April 2019

react-select with less typing lag

This is going out to all those people using react-select with 1000+ items to render. To those people typing into the select and saying out loud "it's so laggy.... This can't be... It's 2019... I mean, right?" To the people who read this GitHub issue top to bottom 30 times and still came back unsure of what to do. This is for you.

I'm lying. Mostly this goes out to me. I have a select box. I need it to render 2000+ items. I want it to be lovely. I want my users to be delighted as they use it. I want them to type in and (this is the crucial part!) for the control to feel responsive. Not laggy. Not like each keypress is going to Jupiter and back before it renders to the screen.

Amongst the various gems on the GitHub issue are shared CodeSandboxes illustrating ways to integrate react-select with react-window. That's great and they do improve things. However, they don't do much to improve the laggy typing feel. There's brief mention of a props tweak you can make to react-select; this:

filterOption={createFilter({ ignoreAccents: false })}

What does this do? Well, this improves the typing lag experience massively. For why? Well, if you look at the code you find that the default value is ignoreAccents: true. This default makes react-select invoke an expensive (and scary sounding) function called stripDiacritics. Not once but twice. Ouchy. And this kills performance.
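
To picture the cost, here's a rough sketch of the shape of the default filter - not react-select's actual source, just an illustration of where the two stripDiacritics calls land:

// stand-in for react-select's stripDiacritics; the real implementation
// maps a large table of accented characters, which is where the cost lies
const stripDiacritics = str =>
  str.normalize('NFD').replace(/[\u0300-\u036f]/g, '');

// sketch of createFilter: with ignoreAccents: true (the default), both the
// candidate text and the input get stripped on every keystroke, for every
// one of your 2000+ options
const createFilter = ({ ignoreAccents }) => (candidate, rawInput) => {
  let candidateText = `${candidate.label} ${candidate.value}`.toLowerCase();
  let input = rawInput.toLowerCase();
  if (ignoreAccents) {
    candidateText = stripDiacritics(candidateText); // expensive call #1
    input = stripDiacritics(input); // expensive call #2
  }
  return candidateText.indexOf(input) > -1;
};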

But if you're okay with accents not being ignored (and spoiler: I am) then this is the option for you.

Here's a CodeSandbox which also includes the ignoreAccents: false tweak. Enjoy!


import React, { Component } from "react";
import ReactDOM from "react-dom";
import Select, { createFilter } from "react-select";
import { FixedSizeList as List } from "react-window";

import "./styles.css";

const options = [];
for (let i = 0; i < 2500; i = i + 1) {
  options.push({ value: i, label: `Option ${i}` });
}

const height = 35;

class MenuList extends Component {
  render() {
    const { options, children, maxHeight, getValue } = this.props;
    const [value] = getValue();
    const initialOffset = options.indexOf(value) * height;

    return (
      <List
        height={maxHeight}
        itemCount={children.length}
        itemSize={height}
        initialScrollOffset={initialOffset}
      >
        {({ index, style }) => <div style={style}>{children[index]}</div>}
      </List>
    );
  }
}

const App = () => (
  <Select
    filterOption={createFilter({ ignoreAccents: false })} // this makes all the difference!
    components={{ MenuList }}
    options={options}
  />
);

ReactDOM.render(<App />, document.getElementById("root"));

Sunday, 24 March 2019

Template Tricks for a Dainty DOM

I'm somewhat into code golf. Placing restrictions on what you're "allowed" to do in code and seeing what happens as a result. I'd like to share with you something that came out of some recent dabblings.

Typically I spend a good amount of time playing with TypeScript. Either working on build tools or making web apps with it. (Usually with a portion of React on the side.) This is something different.

I have a side project on the go which is essentially a mini analytics dashboard. For the purposes of this piece let's call it "StatsDash". When I was starting it I thought: let's try something different. Let's build StatsDash with HTML only. The actual HTML is hand cranked by me and generated in ASP.Net Core / C# using a combination of LINQ and string interpolation. (Who needs Razor? 😎) I'll say it's pretty fun - but the back end is not what I want to focus on.

I got something up and running pretty quickly in pure HTML. The first lesson I learned was this: HTML alone is hella ugly. So I relaxed my criteria; I allowed CSS to come play as long as I didn't have to write any / much myself. There followed some experimentation with different CSS frameworks. For a while I rolled with Bootstrap (old school!), then Bulma and finally I settled on Materialize. Materialize is heavily inspired by Google's Material Design and is hence quite beautiful. With my HTML and Materialize's CSS we were rolling. Beautiful stats - no JS.

"Oh All Right; Just a Splash"

Lovely as things were, StatsDash quickly got to the point where there was too much information on the screen. It was time to make some changes. If data is to convey a message, it must first be comprehensible.

I needed a way to hide and show data as people interacted with StatsDash. I wanted to achieve this without starting to render on the client side and also without going back to the server each time.

If you want interactions in your UI all roads lead to JS. It's certainly possible to do some tricks with CSS but that's a round of code golf I'm ill equipped to play. So, I took a look at what Materialize had to offer. Usefully it has a Modal component. With that in play I'd be able to separate the detailed information into different modals which the users could show and hide as required. Perfect!

It required a little JS. What's a line or two between friends? Dear reader, I compromised once more.

The DOM Bunker

With my handy modals, StatsDash was now a one stop shop for a great deal of information. Info which took the form of DOM nodes. Lots of them. And by "lots of them" I want you to think along the lines of "space is big, really big...".

This was impacting users. Clicking to open a modal resulted in a noticeable lag. It would take 2+ seconds for the browser to respond. Users found themselves clicking multiple times; wondering why nothing seemed to occur. In the end the modal would shuffle into view. However, this wasn't the best experience. The lack of responsiveness was getting in the way of users enjoying all StatsDash had to offer.

Running an audit of StatsDash in Chrome DevTools, there was no doubt we had a DOM problem.

What to do? I still didn't want to go back to the server on each click in StatsDash. And I didn't want to start writing rendering code on the client as well either. I have in the past mixed client and server side rendering and I know well that it's a first class ticket to a confusing codebase.

Smuggling DOM in Templates

There's a mechanism that supports this use case directly: the <template> element. To quote MDN:

The HTML Content Template (<template>) element is a mechanism for holding client-side content that is not to be rendered when a page is loaded but may subsequently be instantiated during runtime using JavaScript.

Think of a template as a content fragment that is being stored for subsequent use in the document.

This is exactly what I'm after. I can keep my rendering server side, but instead wrap content that isn't immediately visible to users inside a <template> element and render that only when users need it.

So in the case of my modals (where most of my DOM lives), I can tuck the contents of each modal into a <template> element. Then, when the user clicks to open a modal we move that template content into the DOM so they can see it. Likewise, as they close a modal we can clear out the modal's DOM content to ease the load on the dear old browser.

"That Sounds Complicated..."

It's not. Let me show you how easily this is accomplished. First of all, wrap all your modal contents into <template> elements. They should look a little something like this:

<div>
    <button data-target="modalId" class="btn modal-trigger">Open the Modal!</button>

    <template>
        <!--
        loads of DOM nodes
        -->
    </template>

    <div id="modalId" class="modal modal-fixed-footer"></div>
</div>

Next, where you initialise your modals you need to make a little tweak:

document.addEventListener('DOMContentLoaded', function() {
    M.Modal.init(document.querySelectorAll('.modal'), {
        onOpenStart: modalDiv => {
            const template = modalDiv.parentNode.querySelector('template');

            modalDiv.appendChild(document.importNode(template.content, true));
        },
        onCloseEnd: modalDiv => {
            while (modalDiv.firstChild) {
                modalDiv.removeChild(modalDiv.firstChild);
            }
        }
    });
});

That's it! As you can see, before we open our modals, the onOpenStart callback will fire which creates the actual DOM elements based upon the template. And when the modals finish closing the onCloseEnd callback runs to remove those DOM elements once more.

For this minimal change, the client gets a dramatically different user experience. StatsDash went from super laggy to satisfyingly fast. Using templates, the number of initial DOM nodes dropped from more than 20,000 to 200. That's right: 💯 times smaller!

Do It Yourself

The code examples above rely upon the Materialize modals. However the principles used here are broadly applicable. It's easy for you to take the approach outlined here and apply it in a different situation.

If you're interested in some of the other exciting things you can do with templates then I recommend Eric Bidelman's post on the topic.

Friday, 22 March 2019

Google Analytics API and ASP.Net Core

Some of my posts are meaningful treatises on the nature of software development. Some are detailed explanations of approaches you can use. Some are effectively code dumps. This is one of those.

I recently had need to be able to access the API for Google Analytics from ASP.Net Core. Getting this up and running turned out to be surprisingly tough because of an absence of good examples. So here it is; an example of how you can access a simple page access stat using the API:

async Task<SomeKindOfDataStructure[]> GetUsageFromGoogleAnalytics(DateTime startAtThisDate, DateTime endAtThisDate)
{
    // Create the DateRange object. Here we want data from last week.
    var dateRange = new DateRange
    {
        StartDate = startAtThisDate.ToString("yyyy-MM-dd"),
        EndDate = endAtThisDate.ToString("yyyy-MM-dd")
    };
    // Create the Metrics and dimensions object.
    // var metrics = new List<Metric> { new Metric { Expression = "ga:sessions", Alias = "Sessions" } };
    // var dimensions = new List<Dimension> { new Dimension { Name = "ga:pageTitle" } };
    var metrics = new List<Metric> { new Metric { Expression = "ga:uniquePageviews" } };
    var dimensions = new List<Dimension> { 
        new Dimension { Name = "ga:date" },
        new Dimension { Name = "ga:dimension1" } 
    };

    // The View Id of the Google Analytics view to query; "[VIEWID]" is a placeholder
    var viewId = "ga:[VIEWID]";

    // Create the Request object.
    var reportRequest = new ReportRequest
    {
        DateRanges = new List<DateRange> { dateRange },
        Metrics = metrics,
        Dimensions = dimensions,
        FiltersExpression = "ga:pagePath==/index.html",
        ViewId = viewId
    };

    var getReportsRequest = new GetReportsRequest {
        ReportRequests = new List<ReportRequest> { reportRequest }
    };
        
    //Invoke Google Analytics API call and get report
    var analyticsService = GetAnalyticsReportingServiceInstance();
    var response = await (analyticsService.Reports.BatchGet(getReportsRequest)).ExecuteAsync();

    var logins = response.Reports[0].Data.Rows.Select(row => new SomeKindOfDataStructure {
        Date = new DateTime(
            year: Convert.ToInt32(row.Dimensions[0].Substring(0, 4)), 
            month: Convert.ToInt32(row.Dimensions[0].Substring(4, 2)), 
            day: Convert.ToInt32(row.Dimensions[0].Substring(6, 2))),
        NumberOfLogins = Convert.ToInt32(row.Metrics[0].Values[0])
    })
    .OrderByDescending(login => login.Date)
    .ToArray();

    return logins;
}

/// <summary>
/// Initializes and returns an Analytics Reporting Service instance
/// </summary>
AnalyticsReportingService GetAnalyticsReportingServiceInstance() {
    var googleAuthFlow = new GoogleAuthorizationCodeFlow(new GoogleAuthorizationCodeFlow.Initializer {
        ClientSecrets = new ClientSecrets {
            ClientId = "[CLIENTID]",
            ClientSecret = "[CLIENTSECRET]"
        }
    });

    var responseToken = new TokenResponse {
        AccessToken = "[ANALYTICSTOKEN]",
        RefreshToken = "[REFRESHTOKEN]",
        Scope = AnalyticsReportingService.Scope.AnalyticsReadonly, //Read-only access to Google Analytics,
        TokenType = "Bearer",
    };

    var credential = new UserCredential(googleAuthFlow, "", responseToken);

    // Create the Analytics service.
    return new AnalyticsReportingService(new BaseClientService.Initializer {
        HttpClientInitializer = credential,
        ApplicationName = "my-super-application",
    });
}

You can see above that you need various credentials to be able to use the API. You can acquire these by logging into GA. Enjoy!

Wednesday, 6 March 2019

The Big One Point Oh

It's time for the first major version of fork-ts-checker-webpack-plugin. It's been a long time coming :-)

A Little History

The fork-ts-checker-webpack-plugin was originally the handiwork of Piotr Oleś. He raised an issue with ts-loader suggesting it could be the McCartney to ts-loader's Lennon:

Hi everyone!

I've created webpack plugin: fork-ts-checker-webpack-plugin that plays nicely with ts-loader. The idea is to compile project with transpileOnly: true and check types on separate process (async). With this approach, webpack build is not blocked by type checker and we have semantic check with fast incremental build. More info on github repo :)

So if you like it and you think it would be good to add some info in README.md about this plugin, I would be greatful.

Thanks :)

We did like it. We did think it would be good. We took him up on his kind offer.

Since that time many people have had their paws on the fork-ts-checker-webpack-plugin codebase. We love them all.

One Point Oh

We could have had our first major release a long time ago. The idea first occurred when webpack 5 alpha appeared. "Huh, look at that, a major version number.... Maybe we should do that?" "Great idea chap - do it!" So here it is; fresh out the box: v1.0.0

There are actually no breaking changes that we're aware of; users of 0.x fork-ts-checker-webpack-plugin should be able to upgrade without any drama.

Incremental Watch API on by Default

Users of TypeScript 3+ may notice a performance improvement as by default the plugin now uses the incremental watch API in TypeScript.

Should this prove problematic you can opt out of using it by supplying useTypescriptIncrementalApi: false. We are aware of an issue with Vue and the incremental API. We hope it will be fixed soon - a generous member of the community is taking a look. In the meantime, we will not default to using the incremental watch API when in Vue mode.

Compatibility

As it stands, the plugin supports webpack 2, 3, 4 and 5 alpha. It is compatible with TypeScript 2.1+ and TSLint 4+.

Right that's it - enjoy it! And thanks everyone for contributing - we really dig your help. Much love.

Friday, 22 February 2019

WhiteList Proxying with ASP.Net Core

Once upon a time there lived a young team who were building a product. They were ready to go live with their beta and so they set off on a journey to a mystical land they had heard tales of. This magical kingdom was called "Production". However, Production was a land with walls and but one gate. That gate was jealously guarded by a defender named "InfoSec". InfoSec was there to make sure that only the right people, noble of thought and pure of deed, were allowed into the promised land. InfoSec would ask questions like "are you serving over HTTPS?" and "what are you doing about cross site scripting?"

The team felt they had good answers to InfoSec's questions. However, just as they were about to step through the gate, InfoSec held up their hand and said "your application wants to access a database... database access needs to take place on our own internal network. Not over the publicly accessible internet. You shall not pass."

The team, with one foot in the air, paused. They swallowed and said "can you give us five minutes?"

The Proxy Regroup

And so it came to pass that the team's product (which took the form of an ASP.Net Core web application) had to be changed. Where once there had been a single application, there would now be two: one that lived on the internet (the web app) and one that lived on the company's private network (the API app). The API app would do all the database access. In fact the product team opted to move all significant operations into the API as well. This left the web app with two purposes:

  1. the straightforward serving of HTML, CSS, JS and images
  2. the proxying of API calls through to the API app

Proxy Part 1

In the early days of this proxying the team reached for AspNetCore.Proxy. It's a great open source project that allows you to proxy HTTP requests. It gives you complete control over the construction of proxy requests, so that you can have a request come into your API and end up proxying it to a URL with a completely different path on the proxy server.

Proxy Part 2

The approach offered by AspNetCore.Proxy is fantastically powerful in terms of control. However, we didn't actually need that level of configurability. In fact, it resulted in us writing a great deal of boilerplate code. You see in our case we'd opted to proxy path for path, changing only the server name on each proxied request. So if a GET request came in going to https://web.app.com/api/version then we would want to proxy it to a GET request to https://api.app.com/api/version. You see? All we did was swap https://web.app.com for https://api.app.com. Nothing more. We did that as a rule. We knew we always wanted to do just this.

So we ended up spinning up our own solution which allowed just the specification of paths we wanted to proxy with their corresponding HTTP verbs. Let's talk through it. Usage of our approach ended up as a middleware within our web app's Startup.cs:

        public void Configure(IApplicationBuilder app) {
            // ...

            app.UseProxyWhiteList(
                // where ServerToProxyToBaseUrl is the server you want requests to be proxied to
                proxyAddressTweaker: (requestPath) => $"{ServerToProxyToBaseUrl}{requestPath}",
                // hook for tweaking each proxied request before it is sent; null means "no tweaks"
                preSendProxyRequestAction: null,
                whiteListProxyRoutes: new [] {
                    // An anonymous request
                    WhiteListProxy.AnonymousRoute("api/version", HttpMethod.Get),

                    // An authenticated request; to send this we must know who the user is
                    WhiteListProxy.Route("api/account/{accountId:int}/all-the-secret-info", HttpMethod.Get, HttpMethod.Post),
            });


            app.UseMvc();
   
            // ...
        }

If you look at the code above you can see that we are proxying all our requests to a single server: ServerToProxyToBaseUrl. We're proxying two different requests:

  1. GET requests to api/version are proxied through as anonymous GET requests.
  2. GET and POST requests to api/account/{accountId:int}/all-the-secret-info are proxied through as GET and POST requests. These requests require that a user be authenticated first.

The WhiteListProxy proxy class we've been using looks like this:

using System;
using System.Collections.Generic;
using System.Net.Http;

namespace My.Web.Proxy {
    public class WhiteListProxy {
        public string Path { get; set; }
        public IEnumerable<HttpMethod> Methods { get; set; }
        public bool IsAnonymous { get; set; }

        private WhiteListProxy(string path, bool isAnonymous, params HttpMethod[] methods) {
            if (methods == null || methods.Length == 0)
                throw new ArgumentException($"You need at least a single HttpMethod to be specified for {path}");

            Path = path;
            IsAnonymous = isAnonymous;
            Methods = methods;
        }

        public static WhiteListProxy Route(string path, params HttpMethod[] methods) => new WhiteListProxy(path, isAnonymous : false, methods: methods);
        public static WhiteListProxy AnonymousRoute(string path, params HttpMethod[] methods) => new WhiteListProxy(path, isAnonymous : true, methods: methods);
    }

}

The middleware for proxying (our UseProxyWhiteList) looks like this:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Net.Http;
using System.Reflection;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.DependencyModel;
using Microsoft.Extensions.DependencyInjection;
using Serilog;

namespace My.Web.Proxy {
    public static class ProxyRouteExtensions {
        /// <summary>
        /// Middleware which proxies the supplied whitelist routes
        /// </summary>
        public static void UseProxyWhiteList(
            this IApplicationBuilder app,
            Func<string, string> proxyAddressTweaker,
            Action<HttpContext, HttpRequestMessage> preSendProxyRequestAction,
            IEnumerable<WhiteListProxy> whiteListProxyRoutes
        ) {
            app.UseRouter(builder => {
                foreach (var whiteListProxy in whiteListProxyRoutes) {
                    foreach (var method in whiteListProxy.Methods) {
                        builder.MapMiddlewareVerb(method.ToString(), whiteListProxy.Path, proxyApp => {
                            proxyApp.UseProxy_Challenge(whiteListProxy.IsAnonymous);
                            proxyApp.UseProxy_Run(proxyAddressTweaker, preSendProxyRequestAction);
                        });
                    }
                }
            });
        }

        private static void UseProxy_Challenge(this IApplicationBuilder app, bool allowAnonymous) {
            app.Use((context, next) =>
            {
                var routePath = context.Request.Path.Value;

                var weAreAuthenticatedOrWeDontNeedToBe =
                    context.User.Identity.IsAuthenticated || allowAnonymous;
                if (weAreAuthenticatedOrWeDontNeedToBe)
                    return next();

                return context.ChallengeAsync();
            });
        }

        private static void UseProxy_Run(
            this IApplicationBuilder app,
            Func<string, string> proxyAddressTweaker,
            Action<HttpContext, HttpRequestMessage> preSendProxyRequestAction
            )
        {
            app.Run(async context => {
                var proxyAddress = "";
                try {
                    proxyAddress = proxyAddressTweaker(context.Request.Path.Value);
                    
                    var proxyRequest = context.Request.CreateProxyHttpRequest(proxyAddress);

                    if (preSendProxyRequestAction != null)
                        preSendProxyRequestAction(context, proxyRequest);

                    var httpClients = context.RequestServices.GetService<IHttpClients>(); // IHttpClients is just a wrapper for HttpClient - insert your own here

                    var proxyResponse = await httpClients.SendRequestAsync(proxyRequest,
                            HttpCompletionOption.ResponseHeadersRead, context.RequestAborted)
                        .ConfigureAwait(false);

                    await context.CopyProxyHttpResponse(proxyResponse).ConfigureAwait(false);
                }
                catch (OperationCanceledException ex) {
                    if (ex.CancellationToken.IsCancellationRequested)
                        return;

                    if (!context.Response.HasStarted)
                    {
                        context.Response.StatusCode = 408;
                        await context.Response
                            .WriteAsync("Request timed out.");
                    }
                }
                catch (Exception e) {
                    if (!context.Response.HasStarted)
                    {
                        context.Response.StatusCode = 500;
                        await context.Response
                            .WriteAsync(
                                $"Request could not be proxied.\n\n{e.Message}\n\n{e.StackTrace}.");
                    }
                }
            });
        }

        public static void AddOrReplaceHeader(this HttpRequestMessage request, string headerName, string headerValue) {
            // It's possible for there to be multiple headers with the same name; we only want a single header to remain.  Our one.
            while (request.Headers.TryGetValues(headerName, out var existingAuthorizationHeader)) {
                request.Headers.Remove(headerName);
            }
            request.Headers.TryAddWithoutValidation(headerName, headerValue);
        }

        public static HttpRequestMessage CreateProxyHttpRequest(this HttpRequest request, string uriString) {
            var uri = new Uri(uriString + request.QueryString);

            var requestMessage = new HttpRequestMessage();
            var requestMethod = request.Method;
            if (!HttpMethods.IsGet(requestMethod) &&
                !HttpMethods.IsHead(requestMethod) &&
                !HttpMethods.IsDelete(requestMethod) &&
                !HttpMethods.IsTrace(requestMethod)) {
                var streamContent = new StreamContent(request.Body);
                requestMessage.Content = streamContent;
            }

            // Copy the request headers; anything that can't sit on the request itself
            // (e.g. content headers) is copied onto the content instead.
            foreach (var header in request.Headers)
                if (!requestMessage.Headers.TryAddWithoutValidation(header.Key, header.Value.ToArray()))
                    requestMessage.Content?.Headers.TryAddWithoutValidation(header.Key, header.Value.ToArray());

            requestMessage.Headers.Host = uri.Authority;
            requestMessage.RequestUri = uri;
            requestMessage.Method = new HttpMethod(request.Method);

            return requestMessage;
        }

        public static async Task CopyProxyHttpResponse(this HttpContext context, HttpResponseMessage responseMessage) {
            var response = context.Response;

            response.StatusCode = (int) responseMessage.StatusCode;
            foreach (var header in responseMessage.Headers) {
                response.Headers[header.Key] = header.Value.ToArray();
            }

            if (responseMessage.Content != null) {
                foreach (var header in responseMessage.Content.Headers) {
                    response.Headers[header.Key] = header.Value.ToArray();
                }
            }

            response.Headers.Remove("transfer-encoding");

            using(var responseStream = await responseMessage.Content.ReadAsStreamAsync().ConfigureAwait(false)) {
                await responseStream.CopyToAsync(response.Body, 81920, context.RequestAborted).ConfigureAwait(false);
            }
        }
    }
}

Sunday, 13 January 2019

TypeScript and webpack: "Watch" It

All I ask for is a compiler and a tight feedback loop. Narrowing the gap between making a change to a program and seeing the effect of that is a productivity boon. The TypeScript team are wise cats and dig this. They've taken strides to improve the developer experience of TypeScript users by introducing a "watch" API which can be leveraged by other tools. To quote the docs:

TypeScript 2.7 introduces two new APIs: one for creating "watcher" programs that provide set of APIs to trigger rebuilds, and a "builder" API that watchers can take advantage of... This can speed up large projects with many files.

Recently the wonderful 0xorial opened a PR to add support for the watch API to the fork-ts-checker-webpack-plugin.

I took this PR for a spin on a large project that I work on. On my machine, I was averaging 12 seconds between incremental builds. (I will charitably describe the machine in question as "challenged"; hobbled by one of the most aggressive virus checkers known to mankind. Fist bump InfoSec 🤜🤛😉) Switching to using the watch API dropped this to a mere 1.5 seconds!

You Can Watch Too

0xorial's PR was merged toot suite and has been released under the plugin's @next tag. If you'd like to take this for a spin then you can. Just:

  1. Up your version of the plugin to the @next release of fork-ts-checker-webpack-plugin in your package.json
  2. Add useTypescriptIncrementalApi: true to the plugin when you initialise it in your webpack.config.js.

That's it.
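
By way of illustration, the plugin initialisation in your webpack.config.js might then look something like this (a minimal sketch; the rest of your configuration stays as it was):

const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');

module.exports = {
    // ...the rest of your webpack configuration
    plugins: [
        new ForkTsCheckerWebpackPlugin({
            useTypescriptIncrementalApi: true // opt in to the TypeScript watch API
        })
    ]
};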

Mary Poppins

Sorry, I was trying to paint a word picture of something you might watch that was also comforting. Didn't quite work...

Anyway, you might be thinking "wait, just hold on a minute.... he said @next - I am not that bleeding edge." Well, it's not like that. Don't be scared.

fork-ts-checker-webpack-plugin has merely been updated for webpack 5 (which is in alpha) and the @next reflects that. To be clear, the @next version of the plugin still supports (remarkably!) webpack 2, 3 and 4 as well as 5 alpha. Users of current and historic versions of webpack should feel safe using the @next version; for webpack 2, 3 and 4 expect stability. webpack 5 users should expect potential changes to align with webpack 5 as it progresses.

Roadmap

This is available now and we'd love for you to try it out. As you can see, at the moment it's opt-in. You have to explicitly choose to use the new behaviour. Depending upon how testing goes, we may look to make this the default behaviour for the plugin in future (assuming users are running a high enough version of TypeScript). It would be great to hear from people if they have any views on that, or feedback in general.

Much ❤️ y'all. And many thanks to the very excellent 0xorial for the hard work.

Saturday, 5 January 2019

GitHub Actions and Yarn

I'd been meaning to automate the npm publishing of ts-loader for the longest time. I had attempted to use Travis to do this in the same way as fork-ts-checker-webpack-plugin. Alas using secure environment variables in Travis has unfortunate implications for ts-loader's test pack.

Be not afeard. I've heard there's a new shiny thing from GitHub that I could use instead... It's a sign; I must use it!

GitHub Actions are still in beta. Technically Actions are code run in Docker containers in response to events. This didn't mean a great deal to me until I started thinking about what I wanted to do with ts-loader's publishing flow.

Automate What?

Each time I publish a release of ts-loader I execute the following commands by hand:

  1. yarn install - to install ts-loader's dependencies
  2. yarn build - to build ts-loader
  3. yarn test - to run ts-loader's test packs
  4. npm publish - to publish the release of ts-loader to npm

Having read up on GitHub Actions it seemed like they were born to handle this sort of task.

GitHub Action for npm

I quickly discovered that someone out there who loves me had already written a GitHub Action for npm.

The example in the README.md could be easily tweaked to meet my needs with one caveat: I had to use npm in place of yarn. I didn't want to switch from yarn. What to do?

Well, remember when I said actions are code run in Docker containers? Another way to phrase that is to say: GitHub Actions are Docker images. Let's look under the covers of the npm GitHub Action. As we peer inside the Dockerfile what do we find?

FROM node:10-slim

Hmmmm.... Interesting. The base image of the npm GitHub Action is node:10-slim. Looking it up, it seems the -slim Docker images come with yarn included. Which means we should be able to use yarn inside the npm GitHub Action. Nice!
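
You can check this for yourself if you have Docker to hand:

$> docker run --rm node:10-slim yarn --version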

GitHub Action for npm for yarn

Using yarn from the GitHub Action for npm is delightfully simple. Here's what running npm install looks like:

# install with npm
action "install" {
  uses = "actions/[email protected]"
  args = "install"
}

Pivoting to use yarn install instead of npm install is as simple as:

# install with yarn
action "install" {
  uses = "actions/[email protected]"
  runs = "yarn"
  args = "install"
}

You can see we've introduced the runs = "yarn" and after that the args are whatever you need them to be.

Going With The Workflow

A GitHub Workflow that implements the steps I need would look like this:

workflow "build, test and publish on release" {
  on = "push"
  resolves = "publish"
}

# install with yarn
action "install" {
  uses = "actions/[email protected]"
  runs = "yarn"
  args = "install"
}

# build with yarn
action "build" {
  needs = "install"
  uses = "actions/[email protected]"
  runs = "yarn"
  args = "build"
}

# test with yarn
action "test" {
  needs = "build"
  uses = "actions/[email protected]"
  runs = "yarn"
  args = "test"
}

# filter for a new tag
action "check for new tag" {
  needs = "Test"
  uses = "actions/bin/[email protected]"
  args = "tag"
}

# publish with npm
action "publish" {
  needs = "check for new tag"
  uses = "actions/[email protected]"
  args = "publish"
  secrets = ["NPM_AUTH_TOKEN"]
}

As you can see, this is a direct automation of steps 1-4 I listed earlier. Since all these actions share the same workspace, we can skip from yarn to npm with gay abandon.

What's absolutely amazing is, when I got access to GitHub Actions, my hand-crafted workflow looked like it should work first time! I know, right? Don't you love it when that happens? Alas, there's presently a problem with filters in GitHub Actions. But that's by the by; if you're just looking to use a GitHub Action with yarn instead of npm then you are home free.

You Don't Actually Need the npm GitHub Action

You heard me right. Docker containers be Docker containers. You don't actually need to use this:

  uses = "actions/[email protected]"

You can use any Docker container which has node / npm installed! So if you'd like to use say node 11 instead you could just do this:

  uses = "docker://node:11"

Which would use the node 11 image on Docker Hub.

Which is pretty cool. You know what's even more incredible? Inside a workflow you can switch uses mid-workflow and keep the output. That's right; you can have a workflow with, say, three actions running uses = "docker://node:11" and then a fourth running uses = "actions/npm@master". That's so flexible and powerful!

Thanks to Matt Colyer and Landon Schropp for schooling me on the intricacies of GitHub Actions. Much ❤