Thursday, 21 May 2020

Autofac, WebApplicationFactory and integration tests

This is one of those occasions where I'm not writing up my own work so much as documenting what I discovered after some in-depth googling.

Integration tests with ASP.NET Core are the best. They spin up an in-memory version of your application and let you fire requests at it. They've gone through a number of iterations since ASP.NET Core has been around; you may be familiar with the TestServer approach of earlier versions. For some time now, the advised approach has been to use WebApplicationFactory.

What makes this approach particularly useful / powerful is that you can swap out dependencies of your running app with fakes / stubs etc. Just like unit tests! But potentially more useful because they run your whole app and hence give you a greater degree of confidence. What does this mean? Well, imagine you changed a piece of middleware in your application; this could potentially break functionality. Unit tests would probably not reveal this. Integration tests would.

There is a fly in the ointment. A hair in the gazpacho. ASP.NET Core ships with dependency injection in the box. It has its own Inversion of Control (IoC) container which is perfectly fine. However, many people are accustomed to using other IoC containers such as Autofac.

What's the problem? Well, swapping out dependencies registered using ASP.NET Core's IoC container requires a hook called ConfigureTestServices. There's an equivalent hook for swapping out services registered using a custom IoC container: ConfigureTestContainer. Unfortunately, there is a bug in ASP.NET Core as of version 3.0: "When using GenericHost, in tests ConfigureTestContainer is not executed".

This means you cannot swap out dependencies that have been registered with Autofac and the like. According to the tremendous David Fowler of the ASP.NET team, this will hopefully be resolved.

In the meantime, there's a workaround thanks to various commenters on the thread. Instead of using WebApplicationFactory directly, subclass it and create a custom AutofacWebApplicationFactory (the name is not important). This custom class overrides the behaviour of ConfigureWebHost and CreateHost, plugging in a CustomServiceProviderFactory:

using System;
using System.Collections.Generic;
using Autofac.Extensions.DependencyInjection;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace My.Web.Tests.Helpers {
    /// <summary>
    /// Based upon https://github.com/dotnet/AspNetCore.Docs/tree/master/aspnetcore/test/integration-tests/samples/3.x/IntegrationTestsSample
    /// </summary>
    /// <typeparam name="TStartup"></typeparam>
    public class AutofacWebApplicationFactory<TStartup> : WebApplicationFactory<TStartup> where TStartup : class {
        protected override void ConfigureWebHost(IWebHostBuilder builder) {
            builder.ConfigureServices(services => {
                    services.AddSingleton<IAuthorizationHandler>(new PassThroughPermissionedRolesHandler());
                })
                .ConfigureTestServices(services => {
                })
                .ConfigureTestContainer<Autofac.ContainerBuilder>(containerBuilder => {
                    // called after Startup.ConfigureContainer
                });
        }

        protected override IHost CreateHost(IHostBuilder builder) {
            builder.UseServiceProviderFactory(new CustomServiceProviderFactory());
            return base.CreateHost(builder);
        }
    }

    /// <summary>
    /// Based upon https://github.com/dotnet/aspnetcore/issues/14907#issuecomment-620750841 - only necessary because of an issue in ASP.NET Core
    /// </summary>
    public class CustomServiceProviderFactory : IServiceProviderFactory<CustomContainerBuilder> {
        public CustomContainerBuilder CreateBuilder(IServiceCollection services) => new CustomContainerBuilder(services);

        public IServiceProvider CreateServiceProvider(CustomContainerBuilder containerBuilder) =>
            new AutofacServiceProvider(containerBuilder.CustomBuild());
    }

    public class CustomContainerBuilder : Autofac.ContainerBuilder {
        private readonly IServiceCollection services;

        public CustomContainerBuilder(IServiceCollection services) {
            this.services = services;
            this.Populate(services);
        }

        public Autofac.IContainer CustomBuild() {
            var sp = this.services.BuildServiceProvider();
#pragma warning disable CS0612 // Type or member is obsolete
            var filters = sp.GetRequiredService<IEnumerable<IStartupConfigureContainerFilter<Autofac.ContainerBuilder>>>();
#pragma warning restore CS0612 // Type or member is obsolete

            foreach (var filter in filters) {
                // Each filter wraps the "next" action we pass in and returns an
                // Action<ContainerBuilder>, which we then invoke against this builder
                filter.ConfigureContainer(b => { })(this);
            }

            return this.Build();
        }
    }
}

I'm going to level with you; I don't understand all of this code. I'm not au fait with the inner workings of ASP.NET Core or Autofac, but I can tell you what it allows. With this custom WebApplicationFactory in play, you get ConfigureTestContainer back in the mix! You get to write code like this:

using System;
using System.Net;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using FakeItEasy;
using FluentAssertions;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.DependencyInjection;
using Xunit;
using Microsoft.Extensions.Options;
using Autofac;
using System.Net.Http;
using Newtonsoft.Json;

namespace My.Web.Tests.Controllers
{
    public class MyControllerTests : IClassFixture<AutofacWebApplicationFactory<My.Web.Startup>> {
        private readonly AutofacWebApplicationFactory<My.Web.Startup> _factory;

        public MyControllerTests(
            AutofacWebApplicationFactory<My.Web.Startup> factory
        ) {
            _factory = factory;
        }

        [Fact]
        public async Task My() {
            var fakeSomethingService = A.Fake<IMySomethingService>();
            var fakeConfig = Options.Create(new MyConfiguration {
                SomeConfig = "Important thing",
                OtherConfigMaybeAnEmailAddress = "[email protected]"
            });

            A.CallTo(() => fakeSomethingService.DoSomething(A<string>.Ignored))
                .Returns(Task.FromResult(true));

            void ConfigureTestServices(IServiceCollection services) {
                services.AddSingleton(fakeConfig);
            }

            void ConfigureTestContainer(ContainerBuilder builder) {
                builder.RegisterInstance(fakeSomethingService);
            }

            var client = _factory
                .WithWebHostBuilder(builder => {
                    builder.ConfigureTestServices(ConfigureTestServices);
                    builder.ConfigureTestContainer<Autofac.ContainerBuilder>(ConfigureTestContainer);
                })
                .CreateClient();

            // Act
            var request = new StringContent("{\"sommat\":\"to see\"}");
            request.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json");
            var response = await client.PostAsync("/something/submit", request);

            // Assert
            response.StatusCode.Should().Be(HttpStatusCode.OK);

            A.CallTo(() => fakeSomethingService.DoSomething(A<string>.Ignored))
                .MustHaveHappened();
        }

    }
}

Sunday, 10 May 2020

From react-window to react-virtual

The tremendous Tanner Linsley recently released react-virtual. react-virtual provides "hooks for virtualizing scrollable elements in React".

I was already using the (also excellent) react-window for this purpose. react-window does the virtualising job and does it very well indeed. However, I was intrigued by the lure of the new shiny thing, and I've never been the biggest fan of react-window's API. So I tried switching over from react-window to react-virtual as an experiment. To my delight, the experiment went so well I didn't look back!

What did I get out of the switch?

  • Simpler code / nicer developer ergonomics. The API for react-virtual allowed me to simplify my code and lose a layer of components.
  • TypeScript support in the box
  • Improved perceived performance. I didn't run any specific tests to quantify this, but I can say that the same functionality now feels snappier.

I tweeted my delight at this and Tanner asked if there was a commit diff I could share. I couldn't, as it's a private codebase, but I thought it could form the basis of a blog post.

In case you hadn't guessed, this is that blog post...

Make that change

So what does the change look like? Well first remove react-window from your project:

yarn remove react-window @types/react-window

Add the dependency to react-virtual:

yarn add react-virtual

Change your imports from:

import { FixedSizeList, ListChildComponentProps } from 'react-window';

to:

import { useVirtual } from 'react-virtual';

Change your component code from:

type ImportantDataListProps = {
    classes: ReturnType<typeof useStyles>;
    importants: ImportantData[];
};

const ImportantDataList: React.FC<ImportantDataListProps> = React.memo(props => (
    <FixedSizeList
        height={400}
        width={'100%'}
        itemSize={80}
        itemCount={props.importants.length}
        itemData={props}
    >
        {RenderRow}
    </FixedSizeList>
));

type ListItemProps = {
    classes: ReturnType<typeof useStyles>;
    importants: ImportantData[];
};

function RenderRow(props: ListChildComponentProps) {
    const { index, style } = props;
    const { importants, classes } = props.data as ListItemProps;
    const important = importants[index];

    return (
        <ListItem button style={style} key={index}>
            <ImportantThing classes={classes} important={important} />
        </ListItem>
    );
}

Of the above, you can delete the ListItemProps type and the associated RenderRow function; you won't need them again! There's no longer a need to pass data down to the child element and then extract it for usage; it all comes down to a single, simpler component.

Replace the ImportantDataList component with this:

const ImportantDataList: React.FC<ImportantDataListProps> = React.memo(props => {
    const parentRef = React.useRef<HTMLDivElement>(null);

    const rowVirtualizer = useVirtual({
        size: props.importants.length,
        parentRef,
        estimateSize: React.useCallback(() => 80, []), // This is just a best guess
        overscan: 5
    });

    return (
            <div
                ref={parentRef}
                style={{
                    width: `100%`,
                    height: `500px`,
                    overflow: 'auto'
                }}
            >
                <div
                    style={{
                        height: `${rowVirtualizer.totalSize}px`,
                        width: '100%',
                        position: 'relative'
                    }}
                >
                    {rowVirtualizer.virtualItems.map(virtualRow => (
                        <div
                            key={virtualRow.index}
                            ref={virtualRow.measureRef}
                            className={props.classes.hoverRow}
                            style={{
                                position: 'absolute',
                                top: 0,
                                left: 0,
                                width: '100%',
                                height: `${virtualRow.size}px`,
                                transform: `translateY(${virtualRow.start}px)`
                            }}
                        >
                            <ImportantThing
                                classes={props.classes}
                                important={props.importants[virtualRow.index]}
                            />
                        </div>
                    ))}
                </div>
            </div>
    );
});

And you are done! Thanks Tanner for this tremendous library!

Saturday, 4 April 2020

Up to the clouds!

These last four months have been quite the departure for me. Most typically I find myself building applications; for this last period of time I've been taking the platform that I work on and migrating it from running on our on premise servers to running in the cloud.

This turned out to be much more difficult than I'd expected and for reasons that often surprised me. We knew where we wanted to get to, but not all of what we'd need to do to get there. So many things you can only learn by doing. Whilst these experiences are still fresh in my mind I wanted to document some of the challenges we faced.

The mission

At the start of January, the team decided to make a concerted effort to take our humble ASP.NET Core application and migrate it to the cloud. We sat down with some friends from the DevOps team who are part of our organisation. We're fortunate in that these marvellous people are very talented engineers indeed. It was going to be a collaboration between our two teams of budding cloudmongers that would make this happen.

Now our application is young. It is not much more than a year old. However it is growing fast. And as we did the migration from on premise to the cloud, that wasn't going to stop. Development of the application was to continue as is, shipping new versions daily. Without impeding that, we were to try and get the application migrated to the cloud.

I would liken it to boarding a speeding train, fighting your way to the front, taking the driver hostage and then diverting the train onto a different track. It was challenging. Really, really challenging.

So many things had to change for us to get from on premise servers to the cloud, all the while keeping our application a going (and shipping) concern. Let's go through them one by one.

Kubernetes and Docker

Our application was built using ASP.NET Core. A technology that is entirely cloud friendly (that's one of the reasons we picked it). We were running on a collection of hand installed, hand configured Windows servers. That had to change. We wanted to move our application to run on Kubernetes, so we didn't have to manually configure servers; rather, k8s would manage the provisioning and deployment of containers running our application. Worth saying now: I knew nothing about Kubernetes. Or nearly nothing. I learned a bunch along the way but, as I've said, this was a collaboration between our team and the mighty site reliability engineers of the DevOps team. They knew a lot about this k8s stuff and more often than not, our team stood back and let them work their magic.

In order that we could migrate to running in k8s, we first needed to containerise our application. We needed a Dockerfile. There followed a good amount of experimentation as we worked out how to build ourselves images. There's an art to building an optimal Docker image.

So that we can cover a lot of ground, this post will remain relatively high level. Here's a number of things that we encountered along the way that are worth considering:

  • Multi-stage builds were an absolute necessity for us. We'd build the front end of our app (React / TypeScript) using one stage with a Node base image. Then we'd build our app using a .NET Core SDK base image. Finally, we'd use an ASP.NET Core runtime image to run the app, copying in the output of previous stages. (There's a sketch of this after the list.)
  • Our application accesses various SQL Server databases. We struggled to get our application to connect to them. The issue related to the SSL configuration of our runner image. The fix was simple but frustrating: use a -bionic image as it has the configuration you need. We found that gem here.
  • Tests. Automated tests. We want to run them in our build; but how? Once more multi-stage builds to the rescue. We'd build our application, then in a separate stage we'd run the tests; copying in the app from the build stage. If the tests failed, the build failed. If they passed then the intermediate stage containing the tests would be discarded by Docker. No unnecessary bloat of the image; all that testing goodness still; now in containerised form!
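To make the multi-stage idea concrete, here's a minimal sketch of the kind of Dockerfile described above. The image tags, paths and project names are illustrative guesses rather than our actual setup (and note that the classic Docker builder runs every stage in order, whereas BuildKit would skip an unreferenced test stage unless it's targeted):

# Stage 1: build the React / TypeScript front end
FROM node:12 AS client-build
WORKDIR /src/ClientApp
COPY ClientApp/ ./
RUN yarn install && yarn build

# Stage 2: build and publish the ASP.NET Core app
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS server-build
WORKDIR /src
COPY . .
RUN dotnet publish My.Web -c Release -o /app/publish

# Stage 3: run the tests; if they fail the build fails,
# if they pass this stage is discarded and adds no bloat
FROM server-build AS tests
RUN dotnet test My.Web.Tests

# Stage 4: the runtime image; the -bionic variant has the SSL
# configuration needed to talk to SQL Server
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-bionic
WORKDIR /app
COPY --from=server-build /app/publish .
COPY --from=client-build /src/ClientApp/build ./wwwroot
ENTRYPOINT ["dotnet", "My.Web.dll"]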

Jenkins

Our on premise world used TeamCity for our continuous integration needs and Octopus for deployment. We liked these tools well enough; particularly Octopus. However, the DevOps team were very much of the mind that we should use Jenkins instead. And Pipeline. It was here that we initially struggled. To quote the docs:

Jenkins Pipeline (or simply "Pipeline" with a capital "P") is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.

Whilst continuous delivery is super cool, and is something our team was interested in, we weren't ready for it yet. We didn't yet have the kind of automated testing in place that gave us the confidence that we'd need to move to it. One day, but not today. For now there was still some manual testing done on each release, prior to shipping. Octopus suited us very well here as it allowed us to deploy, on demand, a build of our choice to a given environment. So the question was: what to do? Fortunately the immensely talented Aby Egea came up with a mechanism that supported that very notion. A pipeline that would, optionally, deploy our build to a specified environment. So we were good!

One thing we got to really appreciate about Jenkins was that the build is scripted with a Jenkinsfile. This was in contrast to our TeamCity world where it was all manually configured. Configuration as code is truly a wonderful thing as your build pipeline becomes part of your codebase; open for everyone to see and understand. If anyone wants to change the build pipeline it has to get code reviewed like everything else. It was as code in our Jenkinsfile that the deployment mechanism lived.
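I can't share our actual Jenkinsfile, but a declarative pipeline with an optional, on-demand deployment in the spirit of the mechanism described above might look something like this. The parameter, stage names and deploy command are invented for illustration:

pipeline {
    agent any
    parameters {
        // 'none' means "just build"; anything else deploys the build on demand
        choice(name: 'DEPLOY_TO', choices: ['none', 'test', 'staging', 'production'], description: 'Optionally deploy this build to an environment')
    }
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t my-app:${BUILD_NUMBER} .'
            }
        }
        stage('Deploy') {
            // Only runs when a deployment target was requested
            when { expression { params.DEPLOY_TO != 'none' } }
            steps {
                sh "helm upgrade --install my-app ./chart --namespace ${params.DEPLOY_TO}"
            }
        }
    }
}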

Vault

Another thing that we used Octopus for was secrets. Applications run on configuration: settings that drive the behaviour of your application. A subset of configuration is "secrets". Secrets are configuration that can't be stored in source code; it would represent a risk if they were. For instance, a database connection string. We'd been merrily using Octopus for this; as Octopus deploys an application to a server it enriches the appsettings.json file with any required secrets.

Without Octopus in the mix, how were we to handle our secrets? The answer is with HashiCorp Vault. We'd store our secrets in there and, thanks to clever work by Robski of the DevOps team, when our container was brought up by Kubernetes, it would mount an appsettings.Vault.json file into the filesystem, which we read thanks to our trusty friend .AddJsonFile with optional: true. (As the file didn't exist in our development environment.)
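The application-side wiring for that is pleasingly small. Here's a sketch of the relevant host setup; the file name matches the one above, while the rest is standard generic host boilerplate rather than our exact code:

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

public class Program {
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((context, config) => {
                // Mounted into the container by Kubernetes; doesn't exist in
                // development, hence optional: true
                config.AddJsonFile("appsettings.Vault.json", optional: true);
            })
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
}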

Hey presto! Safe secrets in k8s.

Networking

Our on premise servers sat on the company network. They could see everything that there was to see. All the other servers around them on the network, bleeping and blooping. The opposite was true in AWS. There was nothing to see. Nothing to access. As it should be. It's safer that way should a machine become compromised. For each database and each API our application depended upon, we needed to specifically whitelist access.

Kerberos

There's always a fly in the ointment. A nasty surprise on a dark night. Ours was realising that our application depended upon an API that was secured using Windows Authentication. Our application was accessing it by running under a service account which had been permissioned to access it. However, in AWS, our application wasn't running under a service account on the company network. Disappointingly, in the short term the API was not going to support an alternate authentication mechanism.

What to do? Honestly it wasn't looking good. We were considering proxying through one of our Windows servers just to get access to that API. I was tremendously disappointed. At this point our hero arrived; one JMac hacked together a Kerberos sidecar approach one weekend. You can see a similar approach here. This got us to a point that allowed us to access the API we needed to.

I'm kind of amazed that there isn't better documentation out there around having a Kerberos sidecar in a k8s setup. Tragically, Windows Authentication is a widely used authentication mechanism. That being the case, good docs showing how to get a Kerberos sidecar in place would likely greatly advance the ability of enterprises to migrate to the cloud. The best docs I've found are here. It is super hard though. So hard!

Hangfire

We were using Hosted Services to perform background task running in our app. The nature of our background tasks meant that it was important to only run a single instance of a background task at a time, or bad things would happen. This was going to become a problem since we had ambitions to horizontally scale our application; to add new pods running our app as demand determined.

So we started to use Hangfire to perform task running in our app. With Hangfire, when a job is picked up it gets locked so other servers can't pick it up. That's what we need.

Hangfire is pretty awesome. However it turns out that there's quirks when you move to a containerised environment. We have a number of recurring jobs that are scheduled to run at certain dates and times. In order that Hangfire can ascertain what time it is, it needs a timezone. It turns out that timezones on Windows != timezones in Docker / Linux.

This was a problem because, as we limbered up for the great migration, we were trying to run our cloud implementation side by side with our on premise one. And Windows picked a fight with Linux over timezones. You can see others bumping into this condition here. We learned this the hard way; jobs mysteriously stopping due to timezone related errors, Windows Hangfire not able to recognise Linux Hangfire timezones and vice versa.

The TL;DR is that we had to do a hard switch with Hangfire; it couldn't run side by side. Not the end of the world, but surprising.
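For what it's worth, one way to reduce the scope for this kind of mismatch (an option I'm suggesting here, not necessarily what we did) is to pin recurring jobs to UTC when you register them, so Windows and Linux agree on what the schedule means:

// A sketch only; "nightly-cleanup" and cleanupService are hypothetical
RecurringJob.AddOrUpdate(
    "nightly-cleanup",
    () => cleanupService.Run(),
    Cron.Daily(2),          // 02:00 every day...
    TimeZoneInfo.Utc);      // ...interpreted as UTC on every platform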

Azure Active Directory Single Sign-On

Historically our application had used two modes of authentication: Windows Authentication and cookies. Windows Authentication doesn't generally play nicely with Docker. It's doable, but it's not the hill you want to die on. So we didn't; we swapped out Windows Authentication for Azure AD SSO and didn't look back.

We also made some changes so our app would support cookies auth alongside Azure AD auth; I've written about this previously.

Do the right thing and tell people about it

We're there now; we've made the move. It was a difficult journey but one worth making; it sets up our platform for where we want to take it in the future. Having infrastructure as code makes all kinds of approaches possible that weren't before. Here's some things we're hoping to get out of the move:

  • blue green deployments - shipping without taking down our platform
  • provision environments on demand - currently we have a highly contended situation when it comes to test environments. With k8s and AWS we can look at spinning up environments as we need them and throwing them away also
  • autoscaling for need - we can start to look at spinning up new containers in times of high load and removing excessive containers in times of low load

We've also become more efficient as a team. We are no longer maintaining servers, renewing certificates, installing software, RDPing onto boxes. All that time and effort we can plough back into making awesome experiences for our users.

There's a long list of other benefits and it's very exciting indeed! It's not enough for us to have done this though. It's important that we tell the story of what we've done and how and why we've done it. That way people have empathy for the work. Also they can start to think about how they could start to reap similar benefits themselves. By talking to others about the road we've travelled, we can save them time and help them to travel a similar road. This is good for them and it's good for us; it helps our relationships and it helps us all to move forwards together.

A rising tide lifts all boats. By telling others about our journey, we raise the water level. Up to the clouds!

Sunday, 29 March 2020

Offline storage in a PWA

When you are building any kind of application it's typical to want to store information which persists beyond a single user session. Sometimes that will be information that you'll want to live in some kind of centralised database, but not always.

Also, you may want that data to still be available if your user is offline. Even if they can't connect to the network, the user may still be able to use the app to do meaningful tasks; but the app will likely require a certain amount of data to drive that.

How can we achieve this in the context of a PWA?

The problem with localStorage

If you were building a classic web app you'd probably be reaching for Window.localStorage at this point. Window.localStorage is a long existing API that stores data beyond a single session. It has a simple API and is very easy to use. However, it has a couple of problems:

  1. Window.localStorage is synchronous. Not a tremendous problem for every app, but if you're building something that has significant performance needs then this could become an issue.
  2. Window.localStorage cannot be used in the context of a Worker or a ServiceWorker. The APIs are not available there.
  3. Window.localStorage stores only strings. Given JSON.stringify and JSON.parse that's not a big problem, but it's an inconvenience (see the snippet after this list).
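To illustrate that third point, here's the round-tripping you end up doing with structured data (a trivial sketch; 'prefs' is just an example key):

// Everything must be serialised to a string on the way in...
localStorage.setItem('prefs', JSON.stringify({ darkModeOn: true }));

// ...and parsed on the way out
const prefs = JSON.parse(localStorage.getItem('prefs') ?? '{}');
console.log(prefs.darkModeOn); // true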

The second point here is the significant one. If we've a need to access our offline data in the context of a ServiceWorker (and if you're offline you'll be using a ServiceWorker) then what do you do?

IndexedDB to the rescue?

Fortunately, localStorage is not the only game in town. There's an alternative offline storage mechanism available in browsers with the curious name of IndexedDB. To quote the docs:

IndexedDB is a transactional database system, like an SQL-based RDBMS. However, unlike SQL-based RDBMSes, which use fixed-column tables, IndexedDB is a JavaScript-based object-oriented database. IndexedDB lets you store and retrieve objects that are indexed with a key; any objects supported by the structured clone algorithm can be stored. You need to specify the database schema, open a connection to your database, and then retrieve and update data within a series of transactions.

It's clear that IndexedDB is very powerful. But it doesn't sound very simple. A further look at the MDN example of how to interact with IndexedDB does little to remove that thought.

We'd like to be able to access data offline, but in a simple fashion; like we could with localStorage, which has a wonderfully straightforward API. If only someone would build an abstraction on top of IndexedDB to make our lives easier...

Someone did.

IDB-Keyval to the rescue!

The excellent Jake Archibald of Google has written IDB-Keyval which is:

A super-simple-small promise-based keyval store implemented with IndexedDB

The API is essentially equivalent to localStorage with a few lovely differences:

  1. The API is promise based; all functions return a Promise; this makes it a non-blocking API.
  2. The API is not restricted to strings as localStorage is. To quote the docs: this is IDB-backed, you can store anything structured-clonable (numbers, arrays, objects, dates, blobs etc)
  3. Because this is an abstraction built on top of IndexedDB, it can be used both in the context of a typical web app and also in a Worker or a ServiceWorker if required.

Simple usage

Let's take a look at what usage of IDB-Keyval might be like. For that we're going to need an application. It would be good to be able to demonstrate both simple usage and also how usage in the context of an application might look.

Let's spin up a TypeScript React app with Create React App:

npx create-react-app offline-storage-in-a-pwa --template typescript

This creates us a simple app. Now let's add IDB-Keyval to it:

yarn add idb-keyval

Then, let's update the index.tsx file to add a function that tests using IDB-Keyval:

import React from 'react';
import ReactDOM from 'react-dom';
import { set, get } from 'idb-keyval';
import './index.css';
import App from './App';
import * as serviceWorker from './serviceWorker';

ReactDOM.render(<App />, document.getElementById('root'));

serviceWorker.register();

async function testIDBKeyval() {
    await set('hello', 'world');
    const whatDoWeHave = await get('hello');
    console.log(`When we queried idb-keyval for 'hello', we found: ${whatDoWeHave}`);
}

testIDBKeyval();

As you can see, we've added a testIDBKeyval function which does the following:

  1. Adds a value of 'world' to IndexedDB using IDB-Keyval for the key of 'hello'
  2. Queries IndexedDB using IDB-Keyval for the key of 'hello' and stores it in the variable whatDoWeHave
  3. Logs out what we found.

You'll also note that testIDBKeyval is an async function. This is so that we can use await when we're interacting with IDB-Keyval. Given that its API is Promise based, it is await friendly. Where you're performing more than a single asynchronous operation at a time, it's often valuable to use async / await to increase the readability of your codebase.

What happens when we run our application with yarn start? Let's do that and take a look at the devtools:

We successfully wrote something into IndexedDB, read it back and printed that value to the console. Amazing!

Usage in React

What we've done so far is slightly abstract. It would be good to implement a real-world use case. Let's create an application which gives users the choice between using a "Dark mode" version of the app or not. To do that we'll replace our App.tsx with this:

import React, { useState } from "react";
import "./App.css";

const sharedStyles = {
  height: "30rem",
  fontSize: "5rem",
  textAlign: "center"
} as const;

function App() {
  const [darkModeOn, setDarkModeOn] = useState(true);
  const handleOnChange = ({ target }: React.ChangeEvent<HTMLInputElement>) => setDarkModeOn(target.checked);

  const styles = {
    ...sharedStyles,
    ...(darkModeOn
      ? {
          backgroundColor: "black",
          color: "white"
        }
      : {
          backgroundColor: "white",
          color: "black"
        })
  };

  return (
    <div style={styles}>
      <input
        type="checkbox"
        value="darkMode"
        checked={darkModeOn}
        id="darkModeOn"
        name="darkModeOn"
        style={{ width: "3rem", height: "3rem" }}
        onChange={handleOnChange}
      />
      <label htmlFor="darkModeOn">Use dark mode?</label>
    </div>
  );
}

export default App;

When you run the app you can see how it works:

Looking at the code you'll be able to see that this is implemented using React's useState hook. So any user preference selected will be lost on a page refresh. Let's see if we can take this state and move it into IndexedDB using IDB-Keyval.

We'll change the code like so:

import React, { useState, useEffect } from "react";
import { set, get } from "idb-keyval";
import "./App.css";

const sharedStyles = {
  height: "30rem",
  fontSize: "5rem",
  textAlign: "center"
} as const;

function App() {
  const [darkModeOn, setDarkModeOn] = useState<boolean | undefined>(undefined);

  useEffect(() => {
    get<boolean>("darkModeOn").then(value =>
      // If a value is retrieved then use it; otherwise default to true
      setDarkModeOn(value ?? true)
    );
  }, [setDarkModeOn]);

  const handleOnChange = ({ target }: React.ChangeEvent<HTMLInputElement>) => {
    setDarkModeOn(target.checked);

    set("darkModeOn", target.checked);
  };

  const styles = {
    ...sharedStyles,
    ...(darkModeOn
      ? {
          backgroundColor: "black",
          color: "white"
        }
      : {
          backgroundColor: "white",
          color: "black"
        })
  };

  return (
    <div style={styles}>
      {darkModeOn === undefined ? (
        <>Loading preferences...</>
      ) : (
        <>
          <input
            type="checkbox"
            value="darkMode"
            checked={darkModeOn}
            id="darkModeOn"
            name="darkModeOn"
            style={{ width: "3rem", height: "3rem" }}
            onChange={handleOnChange}
          />
          <label htmlFor="darkModeOn">Use dark mode?</label>
        </>
      )}
    </div>
  );
}

export default App;

The changes here are:

  1. darkModeOn is now initialised to undefined and the app displays a loading message until darkModeOn has a value.
  2. The app attempts to load a value from IDB-Keyval with the key 'darkModeOn' and set darkModeOn with the retrieved value. If no value is retrieved then it sets darkModeOn to true.
  3. When the checkbox is changed, the corresponding value is both applied to darkModeOn and saved to IDB-Keyval with the key 'darkModeOn'

As you can see, this means that we are persisting preferences beyond page refresh in a fashion that will work both online and offline!

Usage as a React hook

Finally it's time for bonus points. Wouldn't it be nice if we could move this functionality into a reusable React hook? Let's do it!

Let's create a new usePersistedState.ts file:

import { useState, useEffect, useCallback } from "react";
import { set, get } from "idb-keyval";

export function usePersistedState<TState>(keyToPersistWith: string, defaultState: TState) {
    const [state, setState] = useState<TState | undefined>(undefined);

    useEffect(() => {
        get<TState>(keyToPersistWith).then(retrievedState =>
            // If a value is retrieved then use it; otherwise default to defaultValue
            setState(retrievedState ?? defaultState));
    }, [keyToPersistWith, setState, defaultState]);
    
    const setPersistedValue = useCallback((newValue: TState) => {
        setState(newValue);
        set(keyToPersistWith, newValue);
    }, [keyToPersistWith, setState]);
    
    return [state, setPersistedValue] as const;
}

This new hook is modelled after the API of useState and is named usePersistedState. It requires that a key be supplied, which is the key that will be used to save the data. It also requires a default value to use in the case that nothing is found during the lookup.

It returns (just like useState) a stateful value, and a function to update it. Finally, let's switch over our App.tsx to use our shiny new hook:

import React from "react";
import "./App.css";
import { usePersistedState } from "./usePersistedState";

const sharedStyles = {
  height: "30rem",
  fontSize: "5rem",
  textAlign: "center"
} as const;

function App() {
  const [darkModeOn, setDarkModeOn] = usePersistedState<boolean>("darkModeOn", true);

  const handleOnChange = ({ target }: React.ChangeEvent<HTMLInputElement>) =>
    setDarkModeOn(target.checked);

  const styles = {
    ...sharedStyles,
    ...(darkModeOn
      ? {
        backgroundColor: "black",
        color: "white"
      }
      : {
        backgroundColor: "white",
        color: "black"
      })
  };

  return (
    <div style={styles}>
      {darkModeOn === undefined ? (
        <>Loading preferences...</>
      ) : (
          <>
            <input
              type="checkbox"
              value="darkMode"
              checked={darkModeOn}
              id="darkModeOn"
              name="darkModeOn"
              style={{ width: "3rem", height: "3rem" }}
              onChange={handleOnChange}
            />
            <label htmlFor="darkModeOn">Use dark mode?</label>
          </>
        )}
    </div>
  );
}

export default App;

Conclusion

This post has demonstrated how a web application or a PWA can easily and safely store data that persists between sessions using native browser capabilities. IndexedDB powered the solution we built. We used IDB-Keyval for the delightful and familiar abstraction it offers over IndexedDB; it allowed us to come up with a solution with a similarly lovely API. It's worth knowing that there are alternatives to IDB-Keyval available, such as localForage. If you are building for older browsers which may lack good IndexedDB support then this would be a good choice. But be aware that with greater backwards compatibility comes greater download size. Do consider this and make the tradeoffs that make sense for you.

Finally, I've finished this post by illustrating what usage would look like in a React context. Do be aware that there's nothing React specific about our offline storage mechanism; so if you're rolling with Vue, Angular or something else entirely, this is for you too! Offline storage is a feature that enables much better user experiences. Please do consider making use of it in your applications.

This post was originally published on LogRocket.

The source code for this project can be found here.

Sunday, 22 March 2020

Dual boot authentication with ASP.NET Core

This is a post about having two kinds of authentication working at the same time in ASP.NET Core, choosing which authentication method to use dynamically at runtime based upon the criteria of your choice.

Already this sounds complicated; let's fix that. Perhaps I should describe my situation to you. I've an app which has two classes of user. One class, let's call them "customers" (because... uh... they're customers). The customers access our application via a public facing website. Traffic rolls through Cloudflare and into our application. The public facing URL is something fancy like https://mega-app.com. That's one class of user.

The other class of user we'll call "our peeps"; because they are us. We use the app that we build. Traffic from "us" comes from a different hostname; only addressable on our network. So URLs from requests that we make are more along the lines of https://strictly4mypeeps.io.

So far, so uncontroversial. Now it starts to get interesting. Our customers log into our application using their super secret credentials. It's cookie based authentication. But for our peeps we do something different. Having to enter your credentials each time you use the app is friction. It gets in the way. So for us we have Azure AD in the mix. Azure AD is how we authenticate ourselves; and that means we don't spend 5% of each working day entering credentials.

Let us speak of the past

Now our delightful little application grew up in a simpler time. A time where you went to the marketplace, picked out some healthy looking servers, installed software upon them, got them attached to the internet, deployed an app onto them and said "hey presto, we're live!".

Way back when, we had some servers on the internet; that's how our customers got to our app. Our peeps, us, went to other servers that lived on our network. So we had multiple instances of our app, deployed to different machines. The ones on the internet were configured to use cookie based auth; the ones on our internal network, Azure AD.

As I said, a simpler time.

A new hope

We've been going through the process of cloudifying our app. Bye, bye servers, hello Docker and Kubernetes. So exciting! As we change the way our app is built and deployed; we've been thinking about whether the choices we make still make sense.

When it came to authentication, my initial thoughts were to continue the same road we're travelling; just in containers and pods. So where we had "internal" servers, we'd have "internal" pods, and where we'd have "external" servers we'd have external pods. I had the good fortune to be working with the amazingly talented Robski. Robski knows far more about K8s and networking than I'm ever likely to. He'd regularly say things like "ingress" and "MTLS" whilst I stared blankly at him. He definitely knows stuff.

Robski challenged my plans. "We don't need it. Have one pod that does both sorts of auth. If you do that, your implementation is simpler and scaling is more straightforward. You'll only need half the pods because you won't need internal and external ones; one pod can handle both sets of traffic. You'll save money."

I loved the idea but I didn't think that ASP.NET Core supported it. "It's just not a thing Robski; ASP.NET Core doesn't support it." Robski didn't believe me. That turned out to be a very good thing. There followed a period of much googling and experimentation. One day of hunting in, I was still convinced there was no way to do it that would allow me to look in the mirror without self loathing. Then Robski sent me this:

It was a link to the amazing David Fowler talking about some API I'd never heard of called SchemeSelector. It turned out that this was the starting point for exactly what we needed; a way to dynamically select an authentication scheme at runtime.

Show me the code

This API did end up landing in ASP.NET Core, but with the name ForwardDefaultSelector. Not the most descriptive of names, and I've struggled to find any documentation on it at all. What I did discover was an answer on StackOverflow by the marvellous Barbara Post. I was able to take the approach Barbara laid out and use it to my own ends. I ended up with this snippet of code added to my Startup.ConfigureServices:

services
    .AddAuthentication(sharedOptions => {
        sharedOptions.DefaultScheme = "WhichAuthDoWeUse";
        sharedOptions.DefaultAuthenticateScheme = "WhichAuthDoWeUse";
        sharedOptions.DefaultChallengeScheme = "WhichAuthDoWeUse";
    })
    .AddPolicyScheme("WhichAuthDoWeUse", "Azure AD or Cookies", options => {
        options.ForwardDefaultSelector = context => {
            var (isExternalRequest, requestUrl) = context.Request.GetIsExternalRequestAndDomain();
            if (isExternalRequest) {
                _logger.LogInformation(
                    "Request ({RequestURL}) has come from external domain ({Domain}) so using Cookie Authentication",
                    requestUrl, AuthExtensions.ExternalBaseUrl);

                return CookieAuthenticationDefaults.AuthenticationScheme;
            }

            _logger.LogInformation(
                "Request ({RequestURL}) has not come from external domain ({Domain}) so using Azure AD Authentication",
                requestUrl, AuthExtensions.ExternalBaseUrl);

            return AzureADDefaults.AuthenticationScheme;
        };
    })
    .AddAzureAD(options => {
        Configuration.Bind("AzureAd", options);
    })
    .AddCookie(options => {
        options.Cookie.SecurePolicy = CookieSecurePolicy.Always;
        options.Cookie.SameSite = SameSiteMode.Strict;
        options.Cookie.HttpOnly = true;
        options.Events.OnRedirectToAccessDenied = (context) => {
            context.Response.StatusCode = Microsoft.AspNetCore.Http.StatusCodes.Status401Unauthorized;
            return Task.CompletedTask;
        };

        options.Events.OnRedirectToLogin = (context) => {
            context.Response.StatusCode = Microsoft.AspNetCore.Http.StatusCodes.Status401Unauthorized;
            return Task.CompletedTask;
        };
    });

If you look at this code it's doing these things:

  1. Registers three authentication schemes: Cookies, Azure AD and "WhichAuthDoWeUse".
  2. Registers "WhichAuthDoWeUse" as the default scheme.

"WhichAuthDoWeUse" is effectively an if statement that says, "if this is an external Request use Cookies authentication, otherwise use Azure AD". Given that "WhichAuthDoWeUse" is the default scheme, this code runs for each request, to determine which authentication method to use.

Alongside this mechanism I added these extension methods:

using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Extensions;

namespace My.App.Auth {
    public static class AuthExtensions {
        public const string ExternalBaseUrl = "https://mega-app.com";
        public const string InternalBaseUrl = "https://strictly4mypeeps.io";

        /// <summary>
        /// Determines if a request is an "external" URL (eg begins "https://mega-app.com")
        /// or an "internal" URL (eg begins "https://strictly4mypeeps.io")
        /// </summary>
        public static (bool, string) GetIsExternalRequestAndDomain(this HttpRequest request) {
            var (requestUrl, domain) = GetRequestUrlAndDomain(request);

            var isExternalUrl = domain == ExternalBaseUrl;

            var isUnknownPath = domain == null; // This scenario is extremely unlikely but has been observed once during testing so we will cater for it

            var isExternalRequest = isExternalUrl || isUnknownPath; // If unknown we'll treat as "external" for a safe fallback

            return (isExternalRequest, requestUrl);
        }

        /// <summary>
        /// Determines if a request is an "external" URL (eg begins "https://mega-app.com")
        /// or an "internal" URL (eg begins "https://strictly4mypeeps.io")
        /// </summary>
        public static (bool, string) GetIsInternalRequestAndDomain(this HttpRequest request) {
            var (requestUrl, domain) = GetRequestUrlAndDomain(request);

            var isInternalRequest = domain == InternalBaseUrl;

            return (isInternalRequest, requestUrl);
        }

        private static (string, string) GetRequestUrlAndDomain(HttpRequest request) {
            string requestUrl = null;
            string domain = null;
            if (request.Host.HasValue) {
                requestUrl = request.GetEncodedUrl();
                domain = new Uri(requestUrl).GetLeftPart(UriPartial.Authority);
            }

            return (requestUrl, domain);
        }
    }
}

Finally, I updated the SpaController.cs (which serves initial requests to our Single Page Application) to cater for having two types of Auth in play:

        /// <summary>
        /// ASP.NET will try and load the index.html using the FileServer if we don't have a route
        /// here to match `/`. These attributes can't be on Index or the spa fallback doesn't work
        /// Note: this is almost perfect except that if someone actually calls /index.html they'll get
        /// the FileServer one, not the one from this file.
        /// </summary>
        [HttpGet("/")]
        [AllowAnonymous]
        public async Task<IActionResult> SpaFallback([FromQuery] string returnUrl) {
            var redirectUrlIfUserIsInternalAndNotAuthenticated = GetRedirectUrlIfUserIsInternalAndNotAuthenticated(returnUrl);

            if (redirectUrlIfUserIsInternalAndNotAuthenticated != null)
                return LocalRedirect(redirectUrlIfUserIsInternalAndNotAuthenticated);

            return await Index(); // Index just serves up our SPA index.html
        }

        /// <summary>
        /// SPA landing with authorisation - this endpoint will typically not be directly navigated to by a user; 
        /// rather it will be redirected to from the IndexWithoutAuthorisation and SpaFallback actions above
        /// in the case where a user is *not* authenticated but has come from an internal URL eg https://strictly4mypeeps.io
        /// </summary>
        [HttpGet("/login-with-azure-ad")]
        [Authorize]
        public async Task<IActionResult> IndexWithAuthorisation()
        {
            return await Index(); // Index just serves up our SPA index.html
        }

        /// <summary>
        /// This method returns a RedirectURL if a request is coming from an internal URL
        /// eg https://strictly4mypeeps.io and is not authenticated.  In this case
        /// we likely want to trigger authentication by redirecting to an authorized endpoint
        /// </summary>
        string GetRedirectUrlIfUserIsInternalAndNotAuthenticated(string returnUrl)
        {
            // If a user is authenticated then we don't need to trigger authentication
            var isAuthenticated = User?.Identity?.Name != null;
            if (isAuthenticated)
                return null;

            // Determine whether the request has come via our internal hostname
            var (isInternalRequest, requestUrl) = Request.GetIsInternalRequestAndDomain();

            if (isInternalRequest) {
                var redirectUrl = $"/login-with-azure-ad{(string.IsNullOrEmpty(returnUrl) ? "" : "?returnUrl=" + WebUtility.UrlEncode(returnUrl))}";
                _logger.LogInformation(
                    "Request ({RequestURL}) has come from internal domain ({InternalDomain}) but is not authenticated; redirecting to {RedirectURL}",
                    requestUrl, AuthExtensions.InternalBaseUrl, redirectUrl);

                return redirectUrl;
            }

            return null;
        }

The code above allows anonymous requests to land in our app through the AllowAnonymous attribute. However, it checks the request when it comes in to see if:

  1. It's an internal request (i.e. the Request URL starts "https://strictly4mypeeps.io/")
  2. The current user is not authenticated.

When both are true, the user is redirected to the https://strictly4mypeeps.io/login-with-azure-ad route, which is decorated with the Authorize attribute. This triggers authentication for our unauthenticated internal users and drives them through the Azure AD login process.

The mystery of no documentation

I'm so surprised that this approach hasn't yet been better documented in the (generally superb) ASP.NET Core docs. It's such a potentially useful approach; and in our case, money saving too! I hope the official docs feature something on this in future. If they do and I've just missed it (possible!), then please hit me up in the comments.

Friday, 21 February 2020

Web Workers, comlink, TypeScript and React

JavaScript is famously single threaded. However, if you're developing for the web, you may well know that this is not quite accurate. There are Web Workers:

A worker is an object created using a constructor (e.g. Worker()) that runs a named JavaScript file — this file contains the code that will run in the worker thread; workers run in another global context that is different from the current window.

Given that there is a way to use other threads for background processing, why doesn't this happen all the time? Well there's a number of reasons; not the least of which is the ceremony involved in interacting with Web Workers. Consider the following example that illustrates moving a calculation into a worker:

// main.js
function add2NumbersUsingWebWorker() {
    const myWorker = new Worker("worker.js");

    myWorker.postMessage([42, 7]);
    console.log('Message posted to worker');

    myWorker.onmessage = function(e) {
        console.log('Message received from worker', e.data);
    }
}

add2NumbersUsingWebWorker();

// worker.js
onmessage = function(e) {
  console.log('Worker: Message received from main script');
  const result = e.data[0] + e.data[1];
  if (isNaN(result)) {
    postMessage('Please write two numbers');
  } else {
    const workerResult = 'Result: ' + result;
    console.log('Worker: Posting message back to main script');
    postMessage(workerResult);
  }
}

This is not simple. It's hard to understand what's happening. Also, this approach only supports a single method call. I'd much rather write something that looked more like this:

// main.js
function add2NumbersUsingWebWorker() {
    const myWorker = new Worker("worker.js");

    const total = myWorker.add2Numbers(42, 7);
    console.log('Message received from worker', total);
}

add2NumbersUsingWebWorker();

// worker.js
export function add2Numbers(firstNumber, secondNumber) {
  const result = firstNumber + secondNumber;
  return (isNaN(result))
    ? 'Please write two numbers'
    : 'Result: ' + result;
}

There's a way to do this using a library made by Google called comlink. This post will demonstrate how we can use this. We'll use TypeScript and webpack. We'll also examine how to integrate this approach into a React app.

A use case for a Web Worker

Let's make ourselves a TypeScript web app. We're going to use create-react-app for this:

npx create-react-app webworkers-comlink-typescript-react --template typescript

Create a takeALongTimeToDoSomething.ts file alongside index.tsx:

export function takeALongTimeToDoSomething() {
    console.log('Start our long running job...');
    const seconds = 5;
    const start = new Date().getTime();
    const delay = seconds * 1000;

    while (true) {
        if ((new Date().getTime() - start) > delay) {
            break;
        }
    }
    console.log('Finished our long running job');
}

To index.tsx add this code:

import { takeALongTimeToDoSomething } from './takeALongTimeToDoSomething';

// ...

console.log('Do something');
takeALongTimeToDoSomething();
console.log('Do another thing');

When our application runs we see this behaviour:

The app starts and logs Do something and Start our long running job... to the console. It then blocks the UI until the takeALongTimeToDoSomething function has completed running. During this time the screen is empty and unresponsive. This is a poor user experience.

Hello worker-plugin and comlink

To start using comlink we're going to need to eject our create-react-app application. The way create-react-app works is by giving you a setup that handles a high percentage of the needs for a typical web app. When you encounter an unsupported use case, you can run the yarn eject command to get direct access to the configuration of your setup.

Web Workers are not that commonly used in day to day development at present. Consequently there isn't yet a "plug'n'play" solution for workers supported by create-react-app. There's a number of potential ways to support this use case and you can track the various discussions happening against create-react-app that cover this. For now, let's eject with:

yarn eject

Then let's install the packages we're going to be using:

  • worker-plugin - this webpack plugin automatically compiles modules loaded in Web Workers
  • comlink - this library provides the RPC-like experience that we want from our workers

yarn add comlink worker-plugin

We now need to tweak our webpack.config.js to use the worker-plugin:

const WorkerPlugin = require('worker-plugin');

// ....

    plugins: [
      new WorkerPlugin(),

// ....

Do note that there's a number of plugins statements in the webpack.config.js. You want the top level one; look out for the new HtmlWebpackPlugin statement and place your new WorkerPlugin() before that.

Workerize our slow process

Now we're ready to take our long running process and move it into a worker. Inside the src folder, create a new folder called my-first-worker. Our worker is going to live in here. Into this folder we're going to add a tsconfig.json file:

{
  "compilerOptions": {
    "strict": true,
    "target": "esnext",
    "module": "esnext",
    "lib": [
      "webworker",
      "esnext"
    ],
    "moduleResolution": "node",
    "noUnusedLocals": true,
    "sourceMap": true,
    "allowJs": false,
    "baseUrl": "."
  }
}

This file exists to tell TypeScript that this is a Web Worker. Do note the "webworker" entry in the "lib" array, which does exactly that.

Alongside the tsconfig.json file, let's create an index.ts file. This will be our worker:

import { expose } from 'comlink';
import { takeALongTimeToDoSomething } from '../takeALongTimeToDoSomething';

const exports = {
    takeALongTimeToDoSomething
};
export type MyFirstWorker = typeof exports;

expose(exports);

There's a number of things happening in our small worker file. Let's go through this statement by statement:

import { expose } from 'comlink';

Here we're importing the expose method from comlink. Comlink's goal is to make exposed values from one thread available in the other. The expose method can be viewed as the comlink equivalent of export. It is used to export the RPC style signature of our worker. We'll see its use later.

import { takeALongTimeToDoSomething } from '../takeALongTimeToDoSomething';

Here we're going to import our takeALongTimeToDoSomething function that we wrote previously, so we can use it in our worker.

const exports = {
    takeALongTimeToDoSomething
};

Here we're creating the public facing API that we're going to expose.

export type MyFirstWorker = typeof exports;

We're going to want our worker to be strongly typed. This line creates a type called MyFirstWorker which is derived from our exports object literal.

expose(exports);

Finally we expose the exports using comlink. We're done; that's our worker finished. Now let's consume it. Let's change our index.tsx file to use it. Replace our import of takeALongTimeToDoSomething:

import { takeALongTimeToDoSomething } from './takeALongTimeToDoSomething';

With an import of wrap from comlink that creates a local takeALongTimeToDoSomething function that wraps interacting with our worker:

import { wrap } from 'comlink';

function takeALongTimeToDoSomething() {
    const worker = new Worker('./my-first-worker', { name: 'my-first-worker', type: 'module' });
    const workerApi = wrap<import('./my-first-worker').MyFirstWorker>(worker);
    workerApi.takeALongTimeToDoSomething();    
}
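
One thing that's easy to miss: comlink makes every call through the proxy asynchronous. Even though the underlying function is synchronous, the wrapped version returns a Promise that resolves when the worker has finished. So if we wanted to wait for completion, a sketch might look like this (runInWorkerAndWait is a made-up name, not part of our app):

async function runInWorkerAndWait() {
    const worker = new Worker('./my-first-worker', { name: 'my-first-worker', type: 'module' });
    const workerApi = wrap<import('./my-first-worker').MyFirstWorker>(worker);
    // every call through a comlink proxy returns a Promise
    await workerApi.takeALongTimeToDoSomething();
    console.log('the worker has finished');
}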

Now we're ready to demo our application using our function offloaded into a Web Worker. It now behaves like this:

There are a number of exciting things to note here:

  1. The application is now non-blocking. Our long running function no longer prevents the UI from updating.
  2. The functionality is lazily loaded via a my-first-worker.chunk.worker.js file that has been created by worker-plugin and comlink.

Using Web Workers in React

The example we've shown so far demonstrates how you could use Web Workers and why you might want to. However, it's a far cry from a real world use case. Let's take the next step and plug our Web Worker usage into our React application. What would that look like? Let's find out.

We'll return index.tsx back to its initial state. Then we'll make a simple adder function that takes some values and returns their total. To our takeALongTimeToDoSomething.ts module let's add:

export function takeALongTimeToAddTwoNumbers(number1: number, number2: number) {
    console.log('Start to add...');
    const seconds = 5;
    const start = new Date().getTime();
    const delay = seconds * 1000;
    while (true) {
        if ((new Date().getTime() - start) > delay) {
            break;
        }
    }
    const total = number1 + number2;
    console.log('Finished adding');
    return total;
}

Let's start using our long running calculator in a React component. We'll update our App.tsx to use this function and create a simple adder component:

import React, { useState } from "react";
import "./App.css";
import { takeALongTimeToAddTwoNumbers } from "./takeALongTimeToDoSomething";

const App: React.FC = () => {
  const [number1, setNumber1] = useState(1);
  const [number2, setNumber2] = useState(2);

  const total = takeALongTimeToAddTwoNumbers(number1, number2);

  return (
    <div className="App">
      <h1>Web Workers in action!</h1>

      <div>
        <label>Number to add: </label>
        <input
          type="number"
          onChange={e => setNumber1(parseInt(e.target.value))}
          value={number1}
        />
      </div>
      <div>
        <label>Number to add: </label>
        <input
          type="number"
          onChange={e => setNumber2(parseInt(e.target.value))}
          value={number2}
        />
      </div>
      <h2>Total: {total}</h2>
    </div>
  );
};

export default App;

When you try it out you'll notice that entering a single digit locks the UI for 5 seconds whilst it adds the numbers. From the moment the cursor stops blinking to the moment the screen updates, the UI is non-responsive:

So far, so classic. Let's Web Workerify this!

We'll update our my-first-worker/index.ts to import this new function:

import { expose } from "comlink";
import {
  takeALongTimeToDoSomething,
  takeALongTimeToAddTwoNumbers
} from "../takeALongTimeToDoSomething";

const exports = {
  takeALongTimeToDoSomething,
  takeALongTimeToAddTwoNumbers
};
export type MyFirstWorker = typeof exports;

expose(exports);

Alongside our App.tsx file, let's create an App.hooks.ts file:

import { wrap, releaseProxy } from "comlink";
import { useEffect, useState, useMemo } from "react";

/**
 * Our hook that performs the calculation on the worker
 */
export function useTakeALongTimeToAddTwoNumbers(
  number1: number,
  number2: number
) {
  // We'll want to expose a wrapping object so we know when a calculation is in progress
  const [data, setData] = useState({
    isCalculating: false,
    total: undefined as number | undefined
  });

  // acquire our worker
  const { workerApi } = useWorker();

  useEffect(() => {
    // We're starting the calculation here
    setData({ isCalculating: true, total: undefined });

    workerApi
      .takeALongTimeToAddTwoNumbers(number1, number2)
      .then(total => setData({ isCalculating: false, total })); // We receive the result here
  }, [workerApi, setData, number1, number2]);

  return data;
}

function useWorker() {
  // memoise the worker so it can be reused; create one worker up front
  // and reuse it subsequently, rather than creating a new worker each time
  const workerApiAndCleanup = useMemo(() => makeWorkerApiAndCleanup(), []);

  useEffect(() => {
    const { cleanup } = workerApiAndCleanup;

    // cleanup our worker when we're done with it
    return () => {
      cleanup();
    };
  }, [workerApiAndCleanup]);

  return workerApiAndCleanup;
}

/**
 * Creates a worker and a cleanup function, and returns them both
 */
function makeWorkerApiAndCleanup() {
  // Here we create our worker and wrap it with comlink so we can interact with it
  const worker = new Worker("./my-first-worker", {
    name: "my-first-worker",
    type: "module"
  });
  const workerApi = wrap<import("./my-first-worker").MyFirstWorker>(worker);

  // A cleanup function that releases the comlink proxy and terminates the worker
  const cleanup = () => {
    workerApi[releaseProxy]();
    worker.terminate();
  };

  const workerApiAndCleanup = { workerApi, cleanup };

  return workerApiAndCleanup;
}

The useWorker and makeWorkerApiAndCleanup functions make up the basis of a shareable worker hooks approach. It would take very little work to parameterise them so they could be used elsewhere. That's outside the scope of this post, but a sketch follows to show how straightforward it would be.
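
By way of illustration, a parameterised version might look something like the sketch below. The names here are invented and it's untested; it assumes any worker module whose API has been exposed with comlink, and that callers supply a stable factory function:

// A sketch of a generic worker hook (hypothetical, not part of this post's code)
function useWorkerApi<T>(createWorker: () => Worker) {
  // create the worker and its comlink proxy once, up front
  const workerApiAndCleanup = useMemo(() => {
    const worker = createWorker();
    const workerApi = wrap<T>(worker);
    const cleanup = () => {
      workerApi[releaseProxy]();
      worker.terminate();
    };
    return { workerApi, cleanup };
  }, [createWorker]);

  // release the proxy and terminate the worker when we're done
  useEffect(() => {
    const { cleanup } = workerApiAndCleanup;
    return () => cleanup();
  }, [workerApiAndCleanup]);

  return workerApiAndCleanup;
}

// Usage might then look like:
// const { workerApi } = useWorkerApi<MyFirstWorker>(
//   () => new Worker("./my-first-worker", { name: "my-first-worker", type: "module" })
// );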

Time to test! We'll change our App.tsx to use the new useTakeALongTimeToAddTwoNumbers hook:

import React, { useState } from "react";
import "./App.css";
import { useTakeALongTimeToAddTwoNumbers } from "./App.hooks";

const App: React.FC = () => {
  const [number1, setNumber1] = useState(1);
  const [number2, setNumber2] = useState(2);

  const total = useTakeALongTimeToAddTwoNumbers(number1, number2);

  return (
    <div className="App">
      <h1>Web Workers in action!</h1>

      <div>
        <label>Number to add: </label>
        <input
          type="number"
          onChange={e => setNumber1(parseInt(e.target.value))}
          value={number1}
        />
      </div>
      <div>
        <label>Number to add: </label>
        <input
          type="number"
          onChange={e => setNumber2(parseInt(e.target.value))}
          value={number2}
        />
      </div>
      <h2>
        Total:{" "}
        {total.isCalculating ? (
          <em>Calculating...</em>
        ) : (
          <strong>{total.total}</strong>
        )}
      </h2>
    </div>
  );
};

export default App;

Now our calculation takes place off the main thread and the UI is no longer blocked!

This post was originally published on LogRocket.

The source code for this project can be found here.

Friday, 31 January 2020

From create-react-app to PWA

Progressive Web Apps are a (terribly named) wonderful idea. You can build an app once using web technologies which serves all devices and form factors. It can be accessible over the web, but also surface on the home screen of your Android / iOS device. That app can work offline, have a splash screen when it launches and have notifications too.

PWAs can be a money saver for your business. The alternative, should you want an app experience for your users, is building the same application using three different technologies (one for web, one for Android and one for iOS). When you take this path it's hard to avoid a multiplication of cost and complexity; it often leads to dividing up the team as each works on a different stack, and it's common to lose a certain amount of focus as a consequence. PWAs can help here; they are a compelling alternative, not just from a developer's standpoint, but from a resourcing one too.

However, the downside of PWAs is that they are more complicated than normal web apps; writing one from scratch is just less straightforward than a classic web app. But there are easy onramps to building a PWA that help you fall into the pit of success. This post will highlight one of them: how you can travel from zero to a PWA of your very own using React and TypeScript.

This post presumes knowledge of:

  • React
  • TypeScript
  • Node

From console to web app

To create our PWA we're going to use create-react-app. This excellent project has long had inbuilt support for making PWAs, and in recent months that support has matured to a very satisfactory level. To create ourselves a TypeScript React app using create-react-app, enter this npx command at the console:

npx create-react-app pwa-react-typescript --template typescript

This builds you a React web app written in TypeScript; it can be tested locally with:

cd pwa-react-typescript
yarn start

From web app to PWA

From web app to PWA is incredibly simple; it’s just a question of opting in to offline behaviour. If you open up the index.tsx file in your newly created project you'll find this code:

// If you want your app to work offline and load faster, you can change
// unregister() to register() below. Note this comes with some pitfalls.
// Learn more about service workers: https://bit.ly/CRA-PWA
serviceWorker.unregister();

As the hint suggests, swap serviceWorker.unregister() for serviceWorker.register() and you now have a PWA. Amazing! What does this mean? Well, to quote the docs:

  • All static site assets are cached so that your page loads fast on subsequent visits, regardless of network connectivity (such as 2G or 3G). Updates are downloaded in the background.
  • Your app will work regardless of network state, even if offline. This means your users will be able to use your app at 10,000 feet and on the subway.

... it will take care of generating a service worker file that will automatically precache all of your local assets and keep them up to date as you deploy updates. The service worker will use a cache-first strategy for handling all requests for local assets, including navigation requests for your HTML, ensuring that your web app is consistently fast, even on a slow or unreliable network.

Under the bonnet, create-react-app is achieving this through the use of a technology called "Workbox". Workbox describes itself as:

a set of libraries and Node modules that make it easy to cache assets and take full advantage of features used to build Progressive Web Apps.

The good folks of Google are aware that writing your own PWA can be tricky. There's much new behaviour to configure and be aware of; it's easy to make mistakes. Workbox is there to help ease the way forward by implementing default strategies for caching / offline behaviour which can be controlled through configuration.
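
To give a flavour of the sort of configuration Workbox supports, here's a sketch of standalone workbox-build usage (not something create-react-app exposes; the file patterns and URL pattern are made up for illustration):

const { generateSW } = require('workbox-build');

generateSW({
  globDirectory: 'build/',
  globPatterns: ['**/*.{js,css,html,png}'], // precache our static assets
  swDest: 'build/service-worker.js',
  runtimeCaching: [{
    urlPattern: /\/api\//, // hypothetical API routes
    handler: 'StaleWhileRevalidate' // serve from cache, refresh in the background
  }]
}).then(({ count, size }) => {
  console.log(`Precached ${count} files, totalling ${size} bytes`);
});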

A downside of the usage of Workbox in create-react-app is that (as with most things create-react-app) there's little scope for configuration of your own if the defaults don't serve your purpose. This may change in the future; indeed, there's an open PR that adds this support.

Icons and splash screens and A2HS, oh my!

But it's not just an offline experience that makes this a PWA. Other important factors are:

  • That the app can be added to your home screen (A2HS AKA "installed").
  • That the app has a name and an icon which can be customised.
  • That there's a splash screen displayed to the user as the app starts up.

All of the above is "in the box" with create-react-app. Let's start customizing these.

First of all, we'll give our app a name. Fire up index.html and replace <title>React App</title> with <title>My PWA</title>. (Feel free to concoct a more imaginative name than the one I've suggested.) Next open up manifest.json and replace:

  "short_name": "React App",
  "name": "Create React App Sample",

with:

  "short_name": "My PWA",
  "name": "My PWA",

Your app now has a name. The question you might be asking is: what is this manifest.json file? Well, to quote the good folks of Google:

The web app manifest is a simple JSON file that tells the browser about your web application and how it should behave when 'installed' on the user's mobile device or desktop. Having a manifest is required by Chrome to show the Add to Home Screen prompt.

A typical manifest file includes information about the app name, icons it should use, the start_url it should start at when launched, and more.

So the manifest.json is essentially metadata about your app. Here's what it should look like right now:

{
  "short_name": "My PWA",
  "name": "My PWA",
  "icons": [
    {
      "src": "favicon.ico",
      "sizes": "64x64 32x32 24x24 16x16",
      "type": "image/x-icon"
    },
    {
      "src": "logo192.png",
      "type": "image/png",
      "sizes": "192x192"
    },
    {
      "src": "logo512.png",
      "type": "image/png",
      "sizes": "512x512"
    }
  ],
  "start_url": ".",
  "display": "standalone",
  "theme_color": "#000000",
  "background_color": "#ffffff"
}

You can use the above properties (and others not yet configured) to control how your app behaves. For instance, if you want to replace the icons your app uses then it's a simple matter of the following (an example follows the list):

  • placing new logo files in the public folder
  • updating references to them in the manifest.json
  • finally, for older Apple devices, updating the <link rel="apple-touch-icon" ... /> in the index.html.
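
For instance (with made-up file names), after dropping my-logo-192.png and my-logo-512.png into the public folder, the icons section of the manifest.json might become:

  "icons": [
    {
      "src": "my-logo-192.png",
      "type": "image/png",
      "sizes": "192x192"
    },
    {
      "src": "my-logo-512.png",
      "type": "image/png",
      "sizes": "512x512"
    }
  ],

And the corresponding line in the index.html would be something like:

<link rel="apple-touch-icon" href="my-logo-192.png" />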

Where are we?

So far, we have a basic PWA in place. It's installable. You can run it locally and develop it with yarn start. You can build it for deployment with yarn build.

What this isn't, is recognisably a web app, in the sense that it doesn't have support for different pages / URLs. We're typically going to want to break up our application this way. Let's do that now. We're going to use react-router, the de facto routing solution for React. To add it to our project (and the required type definitions for TypeScript) we use:

yarn add react-router-dom @types/react-router-dom

Now let's split up our app into a couple of pages. We'll replace the existing App.tsx with this:

import React from "react";
import { BrowserRouter as Router, Switch, Route, Link } from "react-router-dom";
import About from "./About";
import Home from "./Home";

const App: React.FC = () => (
  <Router>
    <nav>
      <ul>
        <li>
          <Link to="/">Home</Link>
        </li>
        <li>
          <Link to="/about">About</Link>
        </li>
      </ul>
    </nav>
    <Switch>
      <Route path="/about">
        <About />
      </Route>
      <Route path="/">
        <Home />
      </Route>
    </Switch>
  </Router>
);

export default App;

This will be our root page. It has the responsibility of using react-router to render the pages we want to serve, and also to provide the links that allow users to navigate to those pages. In making our changes we'll have broken our test (which checked for a link we've now deleted), so we'll fix it like so:

Replace the App.test.tsx with this:

import React from 'react';
import { render } from '@testing-library/react';
import App from './App';

test('renders about link', () => {
  const { getByText } = render(<App />);
  const linkElement = getByText(/about/i);
  expect(linkElement).toBeInTheDocument();
});

You'll have noticed that in our new App.tsx we import two new components (or pages): About and Home. Let's create those. First About.tsx:

import React from "react";

const About: React.FC = () => (
  <h1>This is a PWA</h1>
);

export default About;

Then Home.tsx:

import React from "react";

const Home: React.FC = () => (
  <h1>Welcome to your PWA!</h1>
);

export default Home;

Code splitting

Now we've split up our app into multiple sections, we're going to split the code too. A good way to improve loading times for PWAs is to ensure that the code is not built into big files. At the moment our app builds into a single main JavaScript file. If you run yarn build you'll see what this looks like:

  47.88 KB  build/static/js/2.89bc6648.chunk.js
  784 B     build/static/js/runtime-main.9c116153.js
  555 B     build/static/js/main.bc740179.chunk.js
  269 B     build/static/css/main.5ecd60fb.chunk.css

Notice the build/static/js/main.bc740179.chunk.js file. This is that single main JavaScript file. It represents the compiled output of building the TypeScript files that make up our app, and it will grow and grow as our app grows, eventually becoming problematic from a user loading speed perspective.

create-react-app is built upon webpack. There is excellent support for code splitting in webpack and hence create-react-app supports it by default. Let's apply it to our app. Again we're going to change App.tsx.

Where we previously had:

import About from "./About";
import Home from "./Home";

Let's replace with:

const About = lazy(() => import('./About'));
const Home = lazy(() => import('./Home'));

This is the syntax to lazily load components in React. You'll note that it internally uses the dynamic import() syntax which webpack uses as a "split point".
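
To unpack that a little: import() returns a Promise of the module, and React.lazy expects that module to have a default export (which our About and Home components do, courtesy of their export default statements). A rough sketch of what lazy relies upon:

// The dynamic import resolves to the module; its default export is the component
import("./About").then(aboutModule => {
  const About = aboutModule.default; // our About component
});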

Let's also give React something to render whilst it waits for the dynamic imports to be resolved. Just inside our <Router> component we'll add a <Suspense> component too:

  <Router>
    <Suspense fallback={<div>Loading...</div>}>
    {/*...*/}
    </Suspense>
  </Router>

The <Suspense> component will render the <div>Loading...</div> whilst it waits for a route's code to be dynamically loaded. So our final App.tsx component ends up looking like this:

import React, { lazy, Suspense } from "react";
import { BrowserRouter as Router, Switch, Route, Link } from "react-router-dom";
const About = lazy(() => import("./About"));
const Home = lazy(() => import("./Home"));

const App: React.FC = () => (
  <Router>
    <Suspense fallback={<div>Loading...</div>}>
      <nav>
        <ul>
          <li>
            <Link to="/">Home</Link>
          </li>
          <li>
            <Link to="/about">About</Link>
          </li>
        </ul>
      </nav>
      <Switch>
        <Route path="/about">
          <About />
        </Route>
        <Route path="/">
          <Home />
        </Route>
      </Switch>
    </Suspense>
  </Router>
);

export default App;

This is now a code-split application. How can we tell? If we run yarn build again we'll see something like this:

  47.88 KB          build/static/js/2.89bc6648.chunk.js
  1.18 KB (+428 B)  build/static/js/runtime-main.415ab5ea.js
  596 B (+41 B)     build/static/js/main.e60948bb.chunk.js
  269 B             build/static/css/main.5ecd60fb.chunk.css
  233 B             build/static/js/4.0c85e1cb.chunk.js
  228 B             build/static/js/3.eed49094.chunk.js

Note that we now have multiple *.chunk.js files. Our initial main.*.chunk.js and then 3.*.chunk.js representing Home.tsx and 4.*.chunk.js representing About.tsx.

As we continue to build out our app from this point we'll have a great approach in place: users load files only as they need them, and those files should not be too large. Great performance which will scale.

Deploy your PWA

Now that we have our basic PWA in place, let's deploy it so the outside world can appreciate it. We're going to use Netlify for this.

The source code of our PWA lives on GitHub here: https://github.com/johnnyreilly/pwa-react-typescript

We're going to log into Netlify, click on the "Create a new site" option and select GitHub as the provider. We'll need to authorize Netlify to access our GitHub.

You may need to click the "Configure Netlify on GitHub" button to grant permissions for Netlify to access your repo like so:

Then you can select your repo from within Netlify. All of the default settings that Netlify provides should work for our use case:

Let's hit the magic "Deploy site" button! In a matter of minutes you'll find that Netlify has deployed your PWA.

If we browse to the URL provided by Netlify we'll be able to see the deployed PWA in action. (You can also set up a custom domain name, which you'd typically want for anything beyond a simple demo such as this.) Importantly, the app will be served over HTTPS, which allows our service worker to operate.

Now that we know it's there, let's see how what we've built holds up according to the professionals. We're going to run the Google Chrome Developer Tools Audit against our PWA:

That is a good start for our PWA!

This post was originally published on LogRocket.

The source code for this project can be found here.