Saturday, 27 April 2019

react-select with less typing lag

This is going out to all those people using react-select with 1000+ items to render. To those people typing into the select and saying out loud "it's so laggy.... This can't be... It's 2019... I mean, right?" To the people who read this GitHub issue top to bottom 30 times and still came back unsure of what to do. This is for you.

I'm lying. Mostly this goes out to me. I have a select box. I need it to render 2000+ items. I want it to be lovely. I want my users to be delighted as they use it. I want them to type in and (this is the crucial part!) for the control to feel responsive. Not laggy. Not like each keypress is going to Jupiter and back before it renders to the screen.

Amongst the various gems on the GitHub issue are shared CodeSandboxes illustrating ways to integrate react-select with react-window. That's great and they do improve things. However, they don't do much to improve the laggy typing feel. There's a brief mention of a props tweak you can make to react-select; this:

filterOption={createFilter({ ignoreAccents: false })}

What does this do? Well, this improves the typing lag experience massively. For why? Well, if you look at the code you find that the default value is ignoreAccents: true. This default makes react-select invoke an expensive (and scary sounding) function called stripDiacritics. Not once but twice. Ouchy. And this kills performance.

But if you're okay with accents not being ignored (and spoiler: I am) then this is the option for you.
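
For a sense of what's going on, here's a rough sketch of the shape of a createFilter-style filter (illustrative only - this is not react-select's actual source, and the real stripDiacritics uses a big hand-rolled character map rather than String.prototype.normalize):

const stripDiacritics = str =>
  str.normalize("NFD").replace(/[\u0300-\u036f]/g, "");

const createFilter = ({ ignoreCase = true, ignoreAccents = true } = {}) =>
  (option, rawInput) => {
    let input = rawInput;
    let candidate = `${option.label} ${option.value}`;
    if (ignoreCase) {
      input = input.toLowerCase();
      candidate = candidate.toLowerCase();
    }
    if (ignoreAccents) {
      // the expensive bit: stripDiacritics runs on the input *and* the
      // candidate - for every option, on every keystroke
      input = stripDiacritics(input);
      candidate = stripDiacritics(candidate);
    }
    return candidate.indexOf(input) > -1;
  };

With 2000+ options, that pair of calls adds up fast; ignoreAccents: false skips it entirely.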

Here's a CodeSandbox which also includes the ignoreAccents: false tweak. Enjoy!

import React, { Component } from "react";
import ReactDOM from "react-dom";
import Select, { createFilter } from "react-select";
import { FixedSizeList as List } from "react-window";

import "./styles.css";

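// build enough options to make an un-virtualised menu feel the strain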
const options = [];
for (let i = 0; i < 2500; i = i + 1) {
  options.push({ value: i, label: `Option ${i}` });
}

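// the pixel height of each option row in the virtualised menu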
const height = 35;

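// swaps react-select's default MenuList for a react-window list that only renders the visible rows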
class MenuList extends Component {
  render() {
    const { options, children, maxHeight, getValue } = this.props;
    const [value] = getValue();
    const initialOffset = options.indexOf(value) * height;

    return (
      <List
        height={maxHeight}
        itemCount={children.length}
        itemSize={height}
        initialScrollOffset={initialOffset}
      >
        {({ index, style }) => <div style={style}>{children[index]}</div>}
      </List>
    );
  }
}

const App = () => (
  <Select
    filterOption={createFilter({ ignoreAccents: false })} // this makes all the difference!
    components={{ MenuList }}
    options={options}
  />
);

ReactDOM.render(<App />, document.getElementById("root"));

Sunday, 24 March 2019

Template Tricks for a Dainty DOM

I'm somewhat into code golf. Placing restrictions on what you're "allowed" to do in code and seeing what happens as a result. I'd like to share with you something that came out of some recent dabblings.

Typically I spend a good amount of time playing with TypeScript. Either working on build tools or making web apps with it. (Usually with a portion of React on the side.) This is something different.

I have a side project on the go which is essentially a mini analytics dashboard. For the purposes of this piece let's call it "StatsDash". When I was starting it I thought: let's try something different. Let's build StatsDash with HTML only. The actual HTML is hand cranked by me and generated in ASP.Net Core / C# using a combination of LINQ and string interpolation. (Who needs Razor? 😎) I'll say it's pretty fun - but the back end is not what I want to focus on.

I got something up and running pretty quickly in pure HTML. The first lesson I learned was this: HTML alone is hella ugly. So I relaxed my criteria; I allowed CSS to come play as long as I didn't have to write any / much myself. There followed some experimentation with different CSS frameworks. For a while I rolled with Bootstrap (old school!), then Bulma and finally I settled on Materialize. Materialize is heavily inspired by Google's Material Design and is hence quite beautiful. With my HTML and Materialize's CSS we were rolling. Beautiful stats - no JS.

"Oh All Right; Just a Splash"

Lovely as things were, StatsDash quickly got to the point where there was too much information on the screen. It was time to make some changes. If data is to convey a message, it must first be comprehensible.

I needed a way to hide and show data as people interacted with StatsDash. I wanted to achieve this without starting to render on the client side and also without going back to the server each time.

If you want interactions in your UI all roads lead to JS. It's certainly possible to do some tricks with CSS but that's a round of code golf I'm ill equipped to play. So, I took a look at what Materialize had to offer. Usefully it has a Modal component. With that in play I'd be able to separate the detailed information into different modals which the users could show and hide as required. Perfect!

It required a little JS. What's a line or two between friends? Dear reader, I compromised once more.

The DOM Bunker

With my handy modals, StatsDash was now a one stop shop for a great deal of information. Info which took the form of DOM nodes. Lots of them. And by "lots of them" I want you to think along the lines of "space is big, really big...".

This was impacting users. Clicking to open a modal resulted in a noticeable lag. It would take 2+ seconds for the browser to respond. Users found themselves clicking multiple times; wondering why nothing seemed to occur. In the end the modal would shuffle into view. However, this wasn't the best experience. The lack of responsiveness was getting in the way of users enjoying all StatsDash had to offer.

Running an audit of StatsDash in Chrome DevTools left no doubt: we had a DOM problem.

What to do? I still didn't want to go back to the server on each click in StatsDash. And I didn't want to start writing rendering code on the client either. I have in the past mixed client and server side rendering and I know well that it's a first class ticket to a confusing codebase.

Smuggling DOM in Templates

There's a mechanism that supports this use case directly: the <template> element. To quote MDN:

The HTML Content Template (<template>) element is a mechanism for holding client-side content that is not to be rendered when a page is loaded but may subsequently be instantiated during runtime using JavaScript.

Think of a template as a content fragment that is being stored for subsequent use in the document.

This is exactly what I'm after. I can keep my rendering server side, but instead wrap content that isn't immediately visible to users inside a <template> element and render that only when users need it.

So in the case of my modals (where most of my DOM lives), I can tuck the contents of each modal into a <template> element. Then, when the user clicks to open a modal we move that template content into the DOM so they can see it. Likewise, as they close a modal we can clear out the modal's DOM content to ease the load on the dear old browser.

"That Sounds Complicated..."

It's not. Let me show you how easily this is accomplished. First of all, wrap all your modal contents into <template> elements. They should look a little something like this:

<div>
    <button data-target="modalId" class="btn modal-trigger">Open the Modal!</button>

    <template>
        <!--
        loads of DOM nodes
        -->
    </template>

    <div id="modalId" class="modal modal-fixed-footer"></div>
</div>

Next, where you initialise your modals you need to make a little tweak:

document.addEventListener('DOMContentLoaded', function() {
    M.Modal.init(document.querySelectorAll('.modal'), {
        onOpenStart: modalDiv => {
            const template = modalDiv.parentNode.querySelector('template');

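            // deep-clone the template's content and mount it inside the modal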
            modalDiv.appendChild(document.importNode(template.content, true));
        },
        onCloseEnd: modalDiv => {
            while (modalDiv.firstChild) {
                modalDiv.removeChild(modalDiv.firstChild);
            }
        }
    });
});

That's it! As you can see, before we open our modals, the onOpenStart callback will fire which creates the actual DOM elements based upon the template. And when the modals finish closing the onCloseEnd callback runs to remove those DOM elements once more.

For this minimal change, the client gets a dramatically different user experience. StatsDash went from super laggy to satisfyingly fast. Using templates, the number of initial DOM nodes dropped from more than 20,000 to 200. That's right: 💯 times smaller!

Do It Yourself

The code examples above rely upon the Materialize modals. However the principles used here are broadly applicable. It's easy for you to take the approach outlined here and apply it in a different situation.
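
For instance, here's a hand-rolled sketch of the same trick with no Materialize in sight (the button / template / target wiring is made up to mirror the markup above):

document.querySelectorAll("button[data-target]").forEach(trigger => {
    const panel = document.getElementById(trigger.dataset.target);
    const template = trigger.parentNode.querySelector("template");

    trigger.addEventListener("click", () => {
        if (panel.hasChildNodes()) {
            // "closing": throw the nodes away again to keep the DOM dainty
            while (panel.firstChild) {
                panel.removeChild(panel.firstChild);
            }
        } else {
            // "opening": instantiate the template content on demand
            panel.appendChild(document.importNode(template.content, true));
        }
    });
});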

If you're interested in some of the other exciting things you can do with templates then I recommend Eric Bidelman's post on the topic.

Friday, 22 March 2019

Google Analytics API and ASP.Net Core

Some of my posts are meaningful treatises on the nature of software development. Some are detailed explanations of approaches you can use. Some are effectively code dumps. This is one of those.

I recently had need to be able to access the API for Google Analytics from ASP.Net Core. Getting this up and running turned out to be surprisingly tough because of an absence of good examples. So here it is; an example of how you can access a simple page access stat using the API:

async Task<SomeKindOfDataStructure[]> GetUsageFromGoogleAnalytics(DateTime startAtThisDate, DateTime endAtThisDate)
{
    // Create the DateRange object for the supplied start and end dates.
    var dateRange = new DateRange
    {
        StartDate = startAtThisDate.ToString("yyyy-MM-dd"),
        EndDate = endAtThisDate.ToString("yyyy-MM-dd")
    };
    // Create the Metrics and dimensions object.
    // var metrics = new List<Metric> { new Metric { Expression = "ga:sessions", Alias = "Sessions" } };
    // var dimensions = new List<Dimension> { new Dimension { Name = "ga:pageTitle" } };
    var metrics = new List<Metric> { new Metric { Expression = "ga:uniquePageviews" } };
    var dimensions = new List<Dimension> { 
        new Dimension { Name = "ga:date" },
        new Dimension { Name = "ga:dimension1" } 
    };

    // Get required View Id from configuration
    var viewId = $"ga:{"[VIEWID]"}";

    // Create the Request object.
    var reportRequest = new ReportRequest
    {
        DateRanges = new List<DateRange> { dateRange },
        Metrics = metrics,
        Dimensions = dimensions,
        FiltersExpression = "ga:pagePath==/index.html",
        ViewId = viewId
    };

    var getReportsRequest = new GetReportsRequest {
        ReportRequests = new List<ReportRequest> { reportRequest }
    };
        
    //Invoke Google Analytics API call and get report
    var analyticsService = GetAnalyticsReportingServiceInstance();
    var response = await (analyticsService.Reports.BatchGet(getReportsRequest)).ExecuteAsync();

    var logins = response.Reports[0].Data.Rows.Select(row => new SomeKindOfDataStructure {
        Date = new DateTime(
            year: Convert.ToInt32(row.Dimensions[0].Substring(0, 4)), 
            month: Convert.ToInt32(row.Dimensions[0].Substring(4, 2)), 
            day: Convert.ToInt32(row.Dimensions[0].Substring(6, 2))),
        NumberOfLogins = Convert.ToInt32(row.Metrics[0].Values[0])
    })
    .OrderByDescending(login => login.Date)
    .ToArray();

    return logins;
}

/// <summary>
/// Initializes and returns an Analytics Reporting Service instance
/// </summary>
AnalyticsReportingService GetAnalyticsReportingServiceInstance() {
    var googleAuthFlow = new GoogleAuthorizationCodeFlow(new GoogleAuthorizationCodeFlow.Initializer {
        ClientSecrets = new ClientSecrets {
            ClientId = "[CLIENTID]",
            ClientSecret = "[CLIENTSECRET]"
        }
    });

    var responseToken = new TokenResponse {
        AccessToken = "[ANALYTICSTOKEN]",
        RefreshToken = "[REFRESHTOKEN]",
        Scope = AnalyticsReportingService.Scope.AnalyticsReadonly, // read-only access to Google Analytics
        TokenType = "Bearer",
    };

    var credential = new UserCredential(googleAuthFlow, "", responseToken);

    // Create the  Analytics service.
    return new AnalyticsReportingService(new BaseClientService.Initializer {
        HttpClientInitializer = credential,
        ApplicationName = "my-super-applicatio",
    });
}

You can see above that you need various credentials to be able to use the API. You can acquire these by logging into GA. Enjoy!

Wednesday, 6 March 2019

The Big One Point Oh

It's time for the first major version of fork-ts-checker-webpack-plugin. It's been a long time coming :-)

A Little History

The fork-ts-checker-webpack-plugin was originally the handiwork of Piotr Oleś. He raised an issue with ts-loader suggesting it could be the McCartney to ts-loader's Lennon:

Hi everyone!

I've created webpack plugin: fork-ts-checker-webpack-plugin that plays nicely with ts-loader. The idea is to compile project with transpileOnly: true and check types on separate process (async). With this approach, webpack build is not blocked by type checker and we have semantic check with fast incremental build. More info on github repo :)

So if you like it and you think it would be good to add some info in README.md about this plugin, I would be greatful.

Thanks :)

We did like it. We did think it would be good. We took him up on his kind offer.

Since that time many people have had their paws on the fork-ts-checker-webpack-plugin codebase. We love them all.

One Point Oh

We could have had our first major release a long time ago. The idea first occurred when webpack 5 alpha appeared. "Huh, look at that, a major version number.... Maybe we should do that?" "Great idea chap - do it!" So here it is; fresh out the box: v1.0.0

There are actually no breaking changes that we're aware of; users of 0.x fork-ts-checker-webpack-plugin should be able to upgrade without any drama.

Incremental Watch API on by Default

Users of TypeScript 3+ may notice a performance improvement, as the plugin now uses TypeScript's incremental watch API by default.

Should this prove problematic you can opt out of using it by supplying useTypescriptIncrementalApi: false. We are aware of an issue with Vue and the incremental API. We hope it will be fixed soon - a generous member of the community is taking a look. In the meantime, we will not default to using the incremental watch API when in Vue mode.

Compatibility

As it stands, the plugin supports webpack 2, 3, 4 and 5 alpha. It is compatible with TypeScript 2.1+ and TSLint 4+.

Right that's it - enjoy it! And thanks everyone for contributing - we really dig your help. Much love.

Friday, 22 February 2019

WhiteList Proxying with ASP.Net Core

Once upon a time there lived a young team who were building a product. They were ready to go live with their beta and so they set off on a journey to a mystical land they had heard tales of. This magical kingdom was called "Production". However, Production was a land with walls and but one gate. That gate was jealously guarded by a defender named "InfoSec". InfoSec was there to make sure that only the right people, noble of thought and pure of deed, were allowed into the promised land. InfoSec would ask questions like "are you serving over HTTPS?" and "what are you doing about cross-site scripting?"

The team felt they had good answers to InfoSec's questions. However, just as they were about to step through the gate, InfoSec held up their hand and said "your application wants to access a database... database access needs to take place on our own internal network. Not over the publicly accessible internet. You shall not pass."

The team, with one foot in the air, paused. They swallowed and said "can you give us five minutes?"

The Proxy Regroup

And so it came to pass that the team's product (which took the form of an ASP.Net Core web application) had to be changed. Where once there had been a single application, there would now be two; one that lived on the internet (the web app) and one that lived on the company's private network (the API app). The API app would do all the database access. In fact the product team opted to move all significant operations into the API as well. This left the web app with two purposes:

  1. the straightforward serving of HTML, CSS, JS and images
  2. the proxying of API calls through to the API app

Proxy Part 1

In the early days of this proxying the team reached for AspNetCore.Proxy. It's a great open source project that allows you to proxy HTTP requests. It gives you complete control over the construction of proxy requests, so that you can have a request come into your app and end up proxying it to a URL with a completely different path on the proxy server.

Proxy Part 2

The approach offered by AspNetCore.Proxy is fantastically powerful in terms of control. However, we didn't actually need that level of configurability. In fact, it resulted in us writing a great deal of boilerplate code. You see in our case we'd opted to proxy path for path, changing only the server name on each proxied request. So if a GET request came in going to https://web.app.com/api/version then we would want to proxy it to a GET request to https://api.app.com/api/version. You see? All we did was swap https://web.app.com for https://api.app.com. Nothing more. We did that as a rule. We knew we always wanted to do just this.

So we ended up spinning up our own solution, which allowed us to specify just the paths we wanted to proxy, together with their corresponding HTTP verbs. Let's talk through it. Usage of our approach ended up as a middleware within our web app's Startup.cs:

        public void Configure(IApplicationBuilder app) {
            // ...

            app.UseProxyWhiteList(
                // where ServerToProxyToBaseUrl is the server you want requests to be proxied to
                proxyAddressTweaker: (requestPath) => $"{ServerToProxyToBaseUrl}{requestPath}",

                // hook for tweaking the outgoing request before it is sent; we don't need one here
                preSendProxyRequestAction: null,

                whiteListProxyRoutes: new [] {
                    // An anonymous request
                    WhiteListProxy.AnonymousRoute("api/version", HttpMethod.Get),

                    // An authenticated request; to send this we must know who the user is
                    WhiteListProxy.Route("api/account/{accountId:int}/all-the-secret-info", HttpMethod.Get, HttpMethod.Post),
                });


            app.UseMvc();
   
            // ...
        }

If you look at the code above you can see that we are proxying all our requests to a single server: ServerToProxyToBaseUrl. We're proxying two different requests:

  1. GET requests to api/version are proxied through as anonymous GET requests.
  2. GET and POST requests to api/account/{accountId:int}/all-the-secret-info are proxied through as GET and POST requests. These requests require that a user be authenticated first.

The WhiteListProxy proxy class we've been using looks like this:

using System;
using System.Collections.Generic;
using System.Net.Http;

namespace My.Web.Proxy {
    public class WhiteListProxy {
        public string Path { get; set; }
        public IEnumerable<HttpMethod> Methods { get; set; }
        public bool IsAnonymous { get; set; }

        private WhiteListProxy(string path, bool isAnonymous, params HttpMethod[] methods) {
            if (methods == null || methods.Length == 0)
                throw new ArgumentException($"You need at least a single HttpMethod to be specified for {path}");

            Path = path;
            IsAnonymous = isAnonymous;
            Methods = methods;
        }

        public static WhiteListProxy Route(string path, params HttpMethod[] methods) => new WhiteListProxy(path, isAnonymous : false, methods: methods);
        public static WhiteListProxy AnonymousRoute(string path, params HttpMethod[] methods) => new WhiteListProxy(path, isAnonymous : true, methods: methods);
    }

}

The middleware for proxying (our UseProxyWhiteList) looks like this:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Net.Http;
using System.Reflection;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.DependencyModel;
using Microsoft.Extensions.DependencyInjection;
using Serilog;

namespace My.Web.Proxy {
    public static class ProxyRouteExtensions {
        /// <summary>
        /// Middleware which proxies the supplied whitelist routes
        /// </summary>
        public static void UseProxyWhiteList(
            this IApplicationBuilder app,
            Func<string, string> proxyAddressTweaker,
            Action<HttpContext, HttpRequestMessage> preSendProxyRequestAction,
            IEnumerable<WhiteListProxy> whiteListProxyRoutes
        ) {
            app.UseRouter(builder => {
                foreach (var whiteListProxy in whiteListProxyRoutes) {
                    foreach (var method in whiteListProxy.Methods) {
                        builder.MapMiddlewareVerb(method.ToString(), whiteListProxy.Path, proxyApp => {
                            proxyApp.UseProxy_Challenge(whiteListProxy.IsAnonymous);
                            proxyApp.UseProxy_Run(proxyAddressTweaker, preSendProxyRequestAction);
                        });
                    }
                }
            });
        }

        private static void UseProxy_Challenge(this IApplicationBuilder app, bool allowAnonymous) {
            app.Use((context, next) =>
            {
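                // anonymous routes pass straight through; otherwise the user must be authenticated or they get challenged (e.g. redirected to log in)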

                var weAreAuthenticatedOrWeDontNeedToBe =
                    context.User.Identity.IsAuthenticated || allowAnonymous;
                if (weAreAuthenticatedOrWeDontNeedToBe)
                    return next();

                return context.ChallengeAsync();
            });
        }

        private static void UseProxy_Run(
            this IApplicationBuilder app,
            Func<string, string> proxyAddressTweaker,
            Action<HttpContext, HttpRequestMessage> preSendProxyRequestAction
            )
        {
            app.Run(async context => {
                var proxyAddress = "";
                try {
                    proxyAddress = proxyAddressTweaker(context.Request.Path.Value);
                    
                    var proxyRequest = context.Request.CreateProxyHttpRequest(proxyAddress);

                    if (preSendProxyRequestAction != null)
                        preSendProxyRequestAction(context, proxyRequest);

                    var httpClients = context.RequestServices.GetService<IHttpClients>(); // IHttpClients is just a wrapper for HttpClient - insert your own here

                    var proxyResponse = await httpClients.SendRequestAsync(proxyRequest,
                            HttpCompletionOption.ResponseHeadersRead, context.RequestAborted)
                        .ConfigureAwait(false);

                    await context.CopyProxyHttpResponse(proxyResponse).ConfigureAwait(false);
                }
                catch (OperationCanceledException ex) {
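                    // the caller gave up on the request; there's no-one left to respond to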
                    if (ex.CancellationToken.IsCancellationRequested)
                        return;

                    if (!context.Response.HasStarted)
                    {
                        context.Response.StatusCode = 408;
                        await context.Response
                            .WriteAsync("Request timed out.");
                    }
                }
                catch (Exception e) {
                    if (!context.Response.HasStarted)
                    {
                        context.Response.StatusCode = 500;
                        await context.Response
                            .WriteAsync(
                                $"Request could not be proxied.\n\n{e.Message}\n\n{e.StackTrace}.");
                    }
                }
            });
        }

        public static void AddOrReplaceHeader(this HttpRequestMessage request, string headerName, string headerValue) {
            // It's possible for there to be multiple headers with the same name; we only want a single header to remain.  Our one.
            while (request.Headers.TryGetValues(headerName, out var existingAuthorizationHeader)) {
                request.Headers.Remove(headerName);
            }
            request.Headers.TryAddWithoutValidation(headerName, headerValue);
        }

        public static HttpRequestMessage CreateProxyHttpRequest(this HttpRequest request, string uriString) {
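            // preserve the caller's query string on the proxied URL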
            var uri = new Uri(uriString + request.QueryString);

            var requestMessage = new HttpRequestMessage();
            var requestMethod = request.Method;
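            // only verbs that can carry a body get the original body streamed across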
            if (!HttpMethods.IsGet(requestMethod) &&
                !HttpMethods.IsHead(requestMethod) &&
                !HttpMethods.IsDelete(requestMethod) &&
                !HttpMethods.IsTrace(requestMethod)) {
                var streamContent = new StreamContent(request.Body);
                requestMessage.Content = streamContent;
            }

            // Copy the request headers. Content headers (e.g. Content-Type) can only
            // live on the message content, so fall back to that when the request
            // headers reject them.
            foreach (var header in request.Headers)
                if (!requestMessage.Headers.TryAddWithoutValidation(header.Key, header.Value.ToArray()))
                    requestMessage.Content?.Headers.TryAddWithoutValidation(header.Key, header.Value.ToArray());

            requestMessage.Headers.Host = uri.Authority;
            requestMessage.RequestUri = uri;
            requestMessage.Method = new HttpMethod(request.Method);

            return requestMessage;
        }

        public static async Task CopyProxyHttpResponse(this HttpContext context, HttpResponseMessage responseMessage) {
            var response = context.Response;

            response.StatusCode = (int) responseMessage.StatusCode;
            foreach (var header in responseMessage.Headers) {
                response.Headers[header.Key] = header.Value.ToArray();
            }

            if (responseMessage.Content != null) {
                foreach (var header in responseMessage.Content.Headers) {
                    response.Headers[header.Key] = header.Value.ToArray();
                }
            }

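            // SendAsync strips chunking from the response body; remove the header so the client doesn't expect a chunked response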
            response.Headers.Remove("transfer-encoding");

            using(var responseStream = await responseMessage.Content.ReadAsStreamAsync().ConfigureAwait(false)) {
                await responseStream.CopyToAsync(response.Body, 81920, context.RequestAborted).ConfigureAwait(false);
            }
        }
    }
}

Sunday, 13 January 2019

TypeScript and webpack: "Watch" It

All I ask for is a compiler and a tight feedback loop. Narrowing the gap between making a change to a program and seeing the effect of that is a productivity boon. The TypeScript team are wise cats and dig this. They've taken strides to improve the developer experience of TypeScript users by introducing a "watch" API which can be leveraged by other tools. To quote the docs:

TypeScript 2.7 introduces two new APIs: one for creating "watcher" programs that provide set of APIs to trigger rebuilds, and a "builder" API that watchers can take advantage of... This can speed up large projects with many files.

Recently the wonderful 0xorial opened a PR to add support for the watch API to the fork-ts-checker-webpack-plugin.

I took this PR for a spin on a large project that I work on. On my machine, I was averaging 12 seconds between incremental builds. (I will charitably describe the machine in question as "challenged"; hobbled by one of the most aggressive virus checkers known to mankind. Fist bump InfoSec 🤜🤛😉) Switching to using the watch API dropped this to a mere 1.5 seconds!

You Can Watch Too

0xorial's PR was merged toot suite and has been released as fork-ts-checker-webpack-plugin@next. If you'd like to take this for a spin then you can. Just:

  1. Up your version of the plugin to fork-ts-checker-webpack-plugin@next in your package.json
  2. Add useTypescriptIncrementalApi: true to the plugin when you initialise it in your webpack.config.js.
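
In webpack.config.js terms that looks a little like this (a minimal sketch; everything else in the config is elided and will vary per project):

const ForkTsCheckerWebpackPlugin = require("fork-ts-checker-webpack-plugin");

module.exports = {
  // ...your entry, output, ts-loader rules etc
  plugins: [
    new ForkTsCheckerWebpackPlugin({
      useTypescriptIncrementalApi: true // opt in to the incremental watch API
    })
  ]
};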

That's it.

Mary Poppins

Sorry, I was trying to paint a word picture of something you might watch that was also comforting. Didn't quite work...

Anyway, you might be thinking "wait, just hold on a minute.... he said @next - I am not that bleeding edge." Well, it's not like that. Don't be scared.

fork-ts-checker-webpack-plugin has merely been updated for webpack 5 (which is in alpha) and the @next reflects that. To be clear, the @next version of the plugin still supports (remarkably!) webpack 2, 3 and 4 as well as 5 alpha. Users of current and historic versions of webpack should feel safe using the @next version; for webpack 2, 3 and 4 expect stability. webpack 5 users should expect potential changes to align with webpack 5 as it progresses.

Roadmap

This is available now and we'd love for you to try it out. As you can see, at the moment it's opt-in. You have to explicitly choose to use the new behaviour. Depending upon how testing goes, we may look to make this the default behaviour for the plugin in future (assuming users are running a high enough version of TypeScript). It would be great to hear from people if they have any views on that, or feedback in general.

Much ❤️ y'all. And many thanks to the very excellent 0xorial for the hard work.

Saturday, 5 January 2019

GitHub Actions and Yarn

I'd been meaning to automate the npm publishing of ts-loader for the longest time. I had attempted to use Travis to do this in the same way as fork-ts-checker-webpack-plugin. Alas using secure environment variables in Travis has unfortunate implications for ts-loader's test pack.

Be not afeard. I've heard there's a new shiny thing from GitHub that I could use instead... It's a sign; I must use it!

GitHub Actions are still in beta. Technically Actions are code run in Docker containers in response to events. This didn't mean a great deal to me until I started thinking about what I wanted to do with ts-loader's publishing flow.

Automate What?

Each time I publish a release of ts-loader I execute the following node commands by hand:

  1. yarn install - to install ts-loader's dependencies
  2. yarn build - to build ts-loader
  3. yarn test - to run ts-loader's test packs
  4. npm publish - to publish the release of ts-loader to npm

Having read up on GitHub Actions it seemed like they were born to handle this sort of task.

GitHub Action for npm

I quickly discovered that someone out there who loves me had already written a GitHub Action for npm.

The example in the README.md could be easily tweaked to meet my needs with one caveat: I had to use npm in place of yarn. I didn't want to switch from yarn. What to do?

Well, remember when I said actions are code run in Docker containers? Another way to phrase that is to say: GitHub Actions are Docker images. Let's look under the covers of the npm GitHub Action. As we peer inside the Dockerfile what do we find?

FROM node:10-slim

Hmmmm.... Interesting. The base image of the npm GitHub Action is node:10-slim. Looking it up, it seems the -slim Docker images come with yarn included. Which means we should be able to use yarn inside the npm GitHub Action. Nice!

GitHub Action for npm for yarn

Using yarn from the GitHub Action for npm is delightfully simple. Here's what running npm install looks like:

# install with npm
action "install" {
  uses = "actions/[email protected]"
  args = "install"
}

Pivoting to use yarn install instead of npm install is as simple as:

# install with yarn
action "install" {
  uses = "actions/[email protected]"
  runs = "yarn"
  args = "install"
}

You can see we've introduced runs = "yarn"; after that the args are whatever you need them to be.

Going With The Workflow

A GitHub Workflow that implements the steps I need would look like this:

workflow "build, test and publish on release" {
  on = "push"
  resolves = "publish"
}

# install with yarn
action "install" {
  uses = "actions/[email protected]"
  runs = "yarn"
  args = "install"
}

# build with yarn
action "build" {
  needs = "install"
  uses = "actions/[email protected]"
  runs = "yarn"
  args = "build"
}

# test with yarn
action "test" {
  needs = "build"
  uses = "actions/[email protected]"
  runs = "yarn"
  args = "test"
}

# filter for a new tag
action "check for new tag" {
  needs = "Test"
  uses = "actions/bin/[email protected]"
  args = "tag"
}

# publish with npm
action "publish" {
  needs = "check for new tag"
  uses = "actions/[email protected]"
  args = "publish"
  secrets = ["NPM_AUTH_TOKEN"]
}

As you can see, this is a direct automation of steps 1-4 I listed earlier. Since all these actions share the same workspace (the filesystem persists from action to action), we can skip from yarn to npm with gay abandon.

What's absolutely amazing is, when I got access to GitHub Actions my hand crafted workflow looked like it should work first time! I know, right? Don't you love it when that happens? Alas there's presently a problem with filters in GitHub Actions. But that's by the by, if you're just looking to use a GitHub Action with yarn instead of npm then you are home free.

You Don't Actually Need the npm GitHub Action

You heard me right. Docker containers be Docker containers. You don't actually need to use this:

  uses = "actions/[email protected]"

You can use any Docker container which has node / npm installed! So if you'd like to use say node 11 instead you could just do this:

  uses = "docker://node:11"

Which would use the node 11 image on Docker Hub.

Which is pretty cool. You know what's even more incredible? Inside a workflow you can switch uses mid-workflow and keep the output. That's right; you can have a workflow with say three actions running uses = "docker://node:11" and then a fourth running uses = "actions/npm@master". That's so flexible and powerful!

Thanks to Matt Colyer and Landon Schropp for schooling me on the intricacies of GitHub Actions. Much ❤