Friday, 4 September 2020

Why your team needs a newsfeed

I'm part of a team that builds an online platform. I'm often preoccupied by how to narrow the gap between our users and "us" - the people that build the platform. It's important we understand how people use and interact with what we've built. If we don't then we're liable to waste our time and energy building the wrong things. Or the wrong amount of the right things.

On a recent holiday I spent a certain amount of time pondering how to narrow the gap between our users and us. We have lots of things that help us: we use various analytics tools like Mixpanel, we've got a mini analytics platform of our own, we have Teams notifications that pop up client feedback and so on. They're all great, but they're somewhat disparate; they don't give us a clear insight into who uses our platform and how they do so. The information is there, but it's tough to grok. It doesn't make for a joined-up story.

Casting around for how to solve this I had an idea: what if our platform had a newsfeed? The kind of thing that social media platforms like Twitter and Facebook have used to great effect; a stream of mini-activities which show how the community interacts with the product. People logging in and browsing around, using features on the platform. If we could see this in near real time we'd be brought closer to our users; we'd have something that would help us have real empathy and understanding. We'd see our product as the stories of users interacting with it.

How do you build a newsfeed?

This was an experiment that seemed worth pursuing. So I decided to build a proof of concept and see what happened. Now I intended to put the "M" into MVP with this; I went in with a number of intentional constraints:

  1. The news feed wouldn't auto update (users have the F5 key for that)
  2. We'd host the newsfeed in our own mini analytics platform (which is already used by the team to understand how people use the platform)
  3. News stories wouldn't be stored anywhere; we'd generate them on the fly by querying various databases / APIs. The cost of this would be that our news stories wouldn't be "persistent"; you wouldn't be able to address them with a URL; there'd be no way to build "like" or "share" functionality.

All of the above constraints are, importantly, reversible decisions. If we want auto-update, it could be built later. If we want the newsfeed to live somewhere else, we could move it. If we want news stories to be persisted, we could do that.

Implementation

With these constraints in mind, I turned my attention to the implementation. I built a NewsFeedService that would be queried for news stories. The interface I decided to build looked like this:

NewsFeedService.getNewsFeed(from: Date, to: Date): NewsFeed

type NewsFeed = {
    startedAt: Date;
    endedAt: Date;
    stories: NewsStory[];
};

type NewsStory = {
    /** When the story happened */
    happenedAt: Date;
    /** A code that represents the type of story this is; eg USER_SESSION */
    storyCode: string;
    /** The story details in markdown format */
    story: string;
};

Each query to NewsFeedService.getNewsFeed would query various databases / APIs related to our product, looking for interesting events; whether that be users logging in, users performing some kind of action, whatever. For each interesting event a news story like this would be produced:

Jane Smith logged in at 10:03am for 25 minutes. They placed an order worth £3,000.
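
To make that a little more concrete, here's a minimal sketch (not our actual service) of an async take on the getNewsFeed signature above; in practice, querying databases / APIs means returning a Promise. Each "source" function stands in for one of the databases / APIs we'd query:

async function getNewsFeed(
    from: Date,
    to: Date,
    // Each source queries one database / API and returns any stories it finds in the window
    sources: Array<(from: Date, to: Date) => Promise<NewsStory[]>>
): Promise<NewsFeed> {
    // Query every source in parallel and flatten the results into a single feed
    const storiesPerSource = await Promise.all(sources.map(source => source(from, to)));
    const stories = storiesPerSource
        .flat()
        // Newest stories first, just like a social media feed
        .sort((a, b) => b.happenedAt.getTime() - a.happenedAt.getTime());

    return { startedAt: from, endedAt: to, stories };
}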

Now the killer feature here is Markdown. Our stories are written in Markdown. Why is Markdown cool? Well to quote the creators of Markdown:

Markdown allows you to write using an easy-to-read, easy-to-write plain text format, then convert it to structurally valid XHTML (or HTML).

This crucially includes the ability to include links. That was significant because I wanted us to be able to click on pieces of information in the stories and be taken to the relevant place in the platform to see more details; just as status updates on, for example, Twitter lead you on to more details.

Again consider this example news story:

Jane Smith logged in at 10:03am for 25 minutes. They placed an order worth £3,000.

Consider that story but without a link. It's not the same, is it? A newsfeed without links would be missing a trick. Markdown gives us links. And happily, due to my extensive work down the open source mines, I speak it like a native.
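
For illustration, the story above might be authored as a Markdown string along these lines (the URLs here are made up):

// The example story as Markdown; the links point back into the platform
const story =
    "[Jane Smith](https://platform.example.com/users/123) logged in at 10:03am for 25 minutes. " +
    "They placed an [order worth £3,000](https://platform.example.com/orders/456).";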

The first consumer of the newsfeed was to be our own mini analytics platform, which is a React app. Converting the Markdown stories to React is a solved problem thanks to the wonderful react-markdown. You can simply sling Markdown at it and out comes HTML. Et voilà: a news feed!
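
Rendering a story might look something like the sketch below; depending on the version of react-markdown you're using, the Markdown is supplied either as children or via a source prop:

import React from "react";
import ReactMarkdown from "react-markdown";

type NewsStoryItemProps = {
    /** The story in Markdown format */
    story: string;
};

// Renders a single story; any links in the Markdown come out as real <a> tags
const NewsStoryItem: React.FC<NewsStoryItemProps> = ({ story }) => (
    <li>
        <ReactMarkdown>{story}</ReactMarkdown>
    </li>
);

export default NewsStoryItem;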

What's next?

So that's it! We've built a (primitive) news feed. We can now see in real time how our users are getting on. We're closer to them; we understand them better as a consequence. If we want to take it further, there are a number of things we could do:

  1. We could make the feed auto-update
  2. We could push news stories to other destinations. Markdown is a gloriously portable format which can be used in a variety of environments. For instance, the likes of Slack and Teams accept it, and apps like these are generally open on people's desktops and phones all the time anyway. Another way to narrow the gap between us and our users - see the sketch below.
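
To give a flavour of what that might look like, here's a rough sketch of pushing a story to a chat tool via an incoming webhook. The webhook URL is a placeholder, and bear in mind that Slack's "mrkdwn" dialect differs slightly from standard Markdown:

// A sketch only: posts a story's Markdown to a (hypothetical) incoming webhook
async function pushStoryToChat(story: NewsStory): Promise<void> {
    const webhookUrl = "https://hooks.example.com/services/REPLACE_ME"; // placeholder

    await fetch(webhookUrl, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text: story.story })
    });
}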

It's very exciting!

Sunday, 9 August 2020

Devcontainers AKA performance in a secure sandbox

Many corporate machines arrive in engineers' hands with a preponderance of pre-installed background tools: virus checkers, backup utilities, port blockers; the list is long.

The reason that these tools are installed is generally noble. However, the implementation can often be problematic. The tools may be set up in such a way that they impact and interfere with one another. Really powerful machines with 8 CPUs and hardy SSDs can be slowed to a crawl. Put simply: the good people responsible for ensuring security are rarely encouraged to incentivise performance alongside it. And so they don't.

The unfortunate consequence of considering the role of security without regard to performance is this: sluggish computers. The further consequence (and this is the one I want you to think long and hard about) is low developer productivity. And that sucks. It impacts what an organisation is able to do, how fast an organisation is able to move. Put simply: it can be the difference between success and failure.

The most secure computer is off. But you won't ship much with it. Encouraging your organisation to consider tackling security with performance in mind is worthwhile. It's a long game though. In the meantime what can we do?

"Hide from the virus checkers* in a devcontainer"

Devcontainers, the infrastructure as code equivalent for developing software, have an underappreciated quality: unlocking your machine's performance.

Devcontainers are isolated secure sandboxes in which you can build software. To quote the docs:

A devcontainer.json file in your project tells VS Code how to access (or create) a development container with a well-defined tool and runtime stack. This container can be used to run an application or to sandbox tools, libraries, or runtimes needed for working with a codebase.

Workspace files are mounted from the local file system or copied or cloned into the container.

We're going to set up a devcontainer to code an ASP.NET Core application with a JavaScript (well, TypeScript) front end. If there's one thing that's sure to catch a virus checker's beady eye, it's node_modules. node_modules contains more files than a black hole has mass. Consider a project with 5,000 source files. One trusty yarn install later and the folder now has a tidy 250,000 files. The virus checker is now really sitting up and taking notice.

Our project has a git commit hook set up with Husky that formats our TypeScript files with Prettier. Every commit the files are formatted to align with the project standard. With all the virus checkers in place a git commit takes around 45 seconds. Inside a devcontainer we can drop this to 5 seconds. That's nine times faster. I'll repeat that: that's nine times faster!

The "cloned into a container" above is key to what we're going to do. We're not going to mount our local file system into the devcontainer. Oh no. We're going to build a devcontainer with ASP.NET CORE and JavaScript in. Then, inside there, we're going to clone our repo. Then we can develop, build and debug all inside the container. It will feel like we're working on our own machine because VS Code does such a damn fine job. In reality, we're connecting to another computer (a Linux computer to boot) that is running in isolation to our own. In our case that machine is sharing our hardware; but that's just an implementation detail. It could be anywhere (and in the future may well be).

Make me a devcontainer...

Enough talk... We're going to need a .devcontainer/devcontainer.json:

{
  "name": "my devcontainer",
  "dockerComposeFile": "../docker-compose.devcontainer.yml",
  "service": "my-devcontainer",
  "workspaceFolder": "/workspace",

  // Set *default* container specific settings.json values on container create.
  "settings": {
    "terminal.integrated.shell.linux": "/bin/zsh"
  },

  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "ms-dotnettools.csharp",
    "dbaeumer.vscode-eslint",
    "esbenp.prettier-vscode",
    "ms-mssql.mssql",
    "eamodio.gitlens",
    "ms-azuretools.vscode-docker",
    "k--kato.docomment",
    "Leopotam.csharpfixformat"
  ],

  // Use 'postCreateCommand' to clone the repo into the workspace folder when the devcontainer starts
  // and copy in the .env file
  "postCreateCommand": "git clone [email protected]:my-org/my-repo.git . && cp /.env /workspace/.env"

  // "remoteUser": "vscode"
}

Now the docker-compose.devcontainer.yml which lives in the root of the project. It provisions a SQL Server container (using the official image) and our devcontainer:

version: "3.7"
services:
  my-devcontainer:
    image: my-devcontainer
    build: 
      context: .
      dockerfile: Dockerfile.devcontainer
    command: /bin/zsh -c "while sleep 1000; do :; done"
    volumes:
      # mount .zshrc from home - make sure it doesn't contain Windows line endings
      - ~/.zshrc:/root/.zshrc

    # user: vscode
    ports:
      - "5000:5000"
      - "8080:8080"
    environment:
      - CONNECTIONSTRINGS__MYDATABASECONNECTION
    depends_on:
      - db
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    privileged: true
    ports:
      - 1433:1433
    environment:
      SA_PASSWORD: "Your_password123"
      ACCEPT_EULA: "Y"         

The devcontainer will be built with the Dockerfile.devcontainer in the root of our repo. It relies upon your SSH keys and a .env file being available to be copied in:


#-----------------------------------------------------------------------------------------------------------
# Based upon: https://github.com/microsoft/vscode-dev-containers/tree/master/containers/dotnetcore
#-----------------------------------------------------------------------------------------------------------
ARG VARIANT="3.1-bionic"
FROM mcr.microsoft.com/dotnet/core/sdk:${VARIANT}

# Because MITM certificates
COPY ./docker/certs/. /usr/local/share/ca-certificates/
ENV NODE_EXTRA_CA_CERTS=/usr/local/share/ca-certificates/mitm.pem
RUN update-ca-certificates 

# This Dockerfile adds a non-root user with sudo access. Use the "remoteUser"
# property in devcontainer.json to use it. On Linux, the container user's GID/UIDs
# will be updated to match your local UID/GID (when using the dockerFile property).
# See https://aka.ms/vscode-remote/containers/non-root-user for details.
ARG USERNAME=vscode
ARG USER_UID=1000
ARG USER_GID=$USER_UID

# Options for common package install script
ARG INSTALL_ZSH="true"
ARG UPGRADE_PACKAGES="true"
ARG COMMON_SCRIPT_SOURCE="https://raw.githubusercontent.com/microsoft/vscode-dev-containers/master/script-library/common-debian.sh"
ARG COMMON_SCRIPT_SHA="dev-mode"

# Settings for installing Node.js.
ARG INSTALL_NODE="true"
ARG NODE_SCRIPT_SOURCE="https://raw.githubusercontent.com/microsoft/vscode-dev-containers/master/script-library/node-debian.sh"
ARG NODE_SCRIPT_SHA="dev-mode"

# ARG NODE_VERSION="lts/*"
ARG NODE_VERSION="14"
ENV NVM_DIR=/usr/local/share/nvm

# Have nvm create a "current" symlink and add to path to work around https://github.com/microsoft/vscode-remote-release/issues/3224
ENV NVM_SYMLINK_CURRENT=true
ENV PATH=${NVM_DIR}/current/bin:${PATH}

# Configure apt and install packages
RUN apt-get update \
    && export DEBIAN_FRONTEND=noninteractive \
    #
    # Verify git, common tools / libs installed, add/modify non-root user, optionally install zsh
    && apt-get -y install --no-install-recommends curl ca-certificates 2>&1 \
    && curl -sSL ${COMMON_SCRIPT_SOURCE} -o /tmp/common-setup.sh \
    && ([ "${COMMON_SCRIPT_SHA}" = "dev-mode" ] || (echo "${COMMON_SCRIPT_SHA} */tmp/common-setup.sh" | sha256sum -c -)) \
    && /bin/bash /tmp/common-setup.sh "${INSTALL_ZSH}" "${USERNAME}" "${USER_UID}" "${USER_GID}" "${UPGRADE_PACKAGES}" \
    #
    # Install Node.js
    && curl -sSL ${NODE_SCRIPT_SOURCE} -o /tmp/node-setup.sh \
    && ([ "${NODE_SCRIPT_SHA}" = "dev-mode" ] || (echo "${COMMON_SCRIPT_SHA} */tmp/node-setup.sh" | sha256sum -c -)) \
    && /bin/bash /tmp/node-setup.sh "${NVM_DIR}" "${NODE_VERSION}" "${USERNAME}" \
    #
    # Clean up
    && apt-get autoremove -y \
    && apt-get clean -y \
    && rm -f /tmp/common-setup.sh /tmp/node-setup.sh \
    && rm -rf /var/lib/apt/lists/* \
    #
    # Workspace
    && mkdir workspace \
    && chown -R ${USERNAME}:root workspace


# Install Vim
RUN apt-get update && apt-get install -y \
    vim \
    && rm -rf /var/lib/apt/lists/*

# Set up a timezone in the devcontainer - necessary for anything timezone dependent
ENV TZ=Europe/London
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone \
 && apt-get update \
 && apt-get install --no-install-recommends -y \
    apt-utils \
    tzdata  \
 && apt-get autoremove -y \
 && apt-get clean -y \
 && rm -rf /var/lib/apt/lists/* 

ENV DOTNET_RUNNING_IN_CONTAINER=true 

# Copy across SSH keys so you can git clone
RUN mkdir /root/.ssh
RUN chmod 700 /root/.ssh

COPY .ssh/id_rsa /root/.ssh
RUN chmod 600 /root/.ssh/id_rsa

COPY .ssh/id_rsa.pub /root/.ssh
RUN chmod 644 /root/.ssh/id_rsa.pub

COPY .ssh/known_hosts /root/.ssh
RUN chmod 644 /root/.ssh/known_hosts  

# Disable initial git clone prompt
RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config

# Copy across .env file so you can customise environment variables
# This will be copied into the root of the repo post git clone
COPY .env /.env
RUN chmod 644 /.env  

# Install dotnet entity framework tools
RUN dotnet tool install dotnet-ef --tool-path /usr/local/bin --version 3.1.2 

With this devcontainer you're good to go for an ASP.NET Core / JavaScript developer setup that is blazing fast! Remember to fire up Docker and give it goodly access to the resources of your host machine: all the CPUs, lots of memory and all the performance that there ought to be.

* "virus checkers" is a euphemism here for all the background tools that may be running. It was that or calling them "we are legion"

Saturday, 11 July 2020

Devcontainers and SSL interception

Devcontainers are cool. They are the infrastructure as code equivalent for developing software.

Imagine your new starter joins the team; you'd like them to be contributing code on day 1. But if the first thing that happens is you hand them a sheaf of paper upon which are the instructions for how to get their machine set up for development, well, maybe it's going to be a while. But if your project has a devcontainer then you're off to the races. One trusty git clone, fire up VS Code and they can get going.

That's the dream right?

I've recently been doing some work getting a project I work on set up with a devcontainer. As I've worked on that I've become aware of some of the hurdles that might hamper your adoption of devcontainers in a corporate environment.

Certificates: I'm starting with the man in the middle

It is a common practice in company networks to perform SSL interception. Not SSL inception; that'd be more fun.

SSL interception is the practice of installing a "man-in-the-middle" (MITM) CA certificate on users' machines. When SSL traffic takes place from a user's machine, it goes through a proxy. That proxy performs the SSL on behalf of that user and, if it's happy, supplies another certificate back to the user's machine which satisfies the MITM CA certificate. So rather than seeing, for example, Google's certificate from https://google.com, you'd see the one resulting from the SSL interception. You can read more here.

Now this is a little known and less understood practice. I barely understand it myself. Certificates are hard. Even having read the above you may be none the wiser about why this is relevant. Let's get to the broken stuff.

"Devcontainers don't work at work!"

So, you're ready to get going with your first devcontainer. You fire up the vscode-dev-containers repo and find the container that's going to work for you. Copy pasta the .devcontainer into your repo, install the Remote Development extension into VS Code and run the Remote-Containers: Reopen Folder in Container command. Here comes the future!

But when it comes to performing SSL inside the devcontainer, trouble awaits. Here's what a yarn install results in:

yarn install v1.22.4
[1/4] Resolving packages...
[2/4] Fetching packages...
error An unexpected error occurred: "https://registry.yarnpkg.com/@octokit/core/-/core-2.5.0.tgz: self signed certificate in certificate chain".

Oh no!

Gosh but it's okay - you're just bumping on the SSL interception. Why though? Well it's like this: when you fire up your devcontainer it builds a new Docker container. It's as well to imagine the container as a virtual operating system. So what's the difference between this operating system and the one our machine is running? Well a number of things, but crucially our host operating system has the MITM CA certificate installed. So when we SSL, we have the certificate that will match up with what the proxy sends back to us certificate-wise. And inside our trusty devcontainer we don't have that. Hence the sadness.

Devcontainer + MITM cert = working

We need to do two things to get this working:

  1. Acquire the requisite CA certificate(s) from your friendly neighbourhood networking team. Place them in a certs folder inside your repo, in the .devcontainer folder.
  2. Add the following lines to your .devcontainer/Dockerfile, just after the initial FROM statement:
# Because MITM certificates
COPY certs/. /usr/local/share/ca-certificates/
ENV NODE_EXTRA_CA_CERTS=/usr/local/share/ca-certificates/mitm.pem
RUN update-ca-certificates

Which does the following:

  • Copies the certs into the devcontainer
  • Sets an environment variable called NODE_EXTRA_CA_CERTS (this is a Node example) which points to the path of your MITM CA certificate file inside your devcontainer
  • Runs update-ca-certificates, which updates the /etc/ssl/certs directory to hold SSL certificates and generates ca-certificates.crt

With these in place then you should be able to build your devcontainer with no SSL trauma. Enjoy!

Sunday, 21 June 2020

Task.WhenAll / Select is a footgun 👟🔫

This post differs from my typical fare. Most often I write "here's how to do a thing". This is not that. It's more "don't do this thing I did". And maybe also, "how can we avoid a situation like this happening again in future?". On this topic I very much don't have all the answers - but by putting my thoughts down maybe I'll learn and maybe others will educate me. I would love that!

Doing things that don't scale

The platform that I work on once had zero users. We used to beg people to log in and see what we had built. Those days are (happily) but a memory. We're getting popular.

As our platform has grown in popularity it has revealed some bad choices we made. Approaches that look fine on the surface (and that work just dandy when you have no users) may start to cause problems as your number of users grows.

I wanted to draw attention to one approach in particular that impacted us severely. In this case "impacted us severely" is a euphemism for "brought the site down and caused a critical incident".

You don't want this to happen to you. Trust me. So, what follows is a cautionary tale. The purpose of which is simply this: reader, do you have code of this ilk in your codebase? If you do: out, damn'd spot! out, I say!

So cool, so terrible

I love LINQ. I love a declarative / functional style of coding. It appeals to me on some gut level. I find it tremendously readable. Read any C# of mine and the odds are pretty good that you'll find some LINQ in the mix.

Imagine this scenario: you have a collection of user ids. You want to load the details of each user represented by their id from an API. You want to bag up all of those users into some kind of collection and send it back to the calling code.

Reading that, if you're like me, you're imagining some kind of map operation which loads the user details for each user id. Something like this:

var users = userIds.Select(userId => GetUserDetails(userId)).ToArray(); // users is User[]

Lovely. But you'll note that I'm loading users from an API. Oftentimes, APIs are asynchronous. Certainly, in my case they were. So rather than calling a GetUserDetails function I found myself calling a GetUserDetailsAsync function, behind which an HTTP request is being sent and, later, a response is being returned.

So how do we deal with this? Task.WhenAll my friends!

var userTasks = userIds.Select(userId => GetUserDetailsAsync(userId));
var users = await Task.WhenAll(userTasks); // users is User[]

It worked great! Right up until the point where it didn't. These sorts of shenanigans were fine when we had a minimal number of users... But there came a point where problems arose. It got to the point where that simple looking mapping operation became the cause of many, many, many HTTP requests being fired concurrently. Then bad things started to happen. Not only did we realise we were launching a denial of service attack on the API we were consuming, we were also bringing our own application to the point of collapse.

Not a proud day.

What is the problem?

Through log analysis, code reading and speculation, (with the help of the invaluable Robski) we came to realise that the cause of our woes was the Task.WhenAll / Select combination. Exercising that codepath was a surefire way to bring the application to its knees.

As I read around on the topic I happened upon Mark Heath's excellent list of Async antipatterns. Number 6 on the list is "Excessive parallelization". It describes a nearly identical scenario to my own:

Now, this does "work", but what if there were 10,000 orders? We've flooded the thread pool with thousands of tasks, potentially preventing other useful work from completing. If ProcessOrderAsync makes downstream calls to another service like a database or a microservice, we'll potentially overload that with too high a volume of calls.

We're definitely overloading the API we're consuming with too high a volume of calls. I have to admit that I'm less clear on the direct reason that a Task.WhenAll / Select combination could prove fatal to our application. Mark suggests this approach will flood the thread pool with tasks. As I read around on async and await it's repeated again and again that a Task is not the same thing as a Thread. I have to hold my hands up here and say that I don't understand the implementation of async / await in C# well enough. These docs are helpful but I still don't think the penny has fully dropped for me yet. I will continue to read.

One thing we learned as we debugged the production k8s pod was that, prior to its collapse, our app appeared to be opening up 1 million connections to the API we were consuming. Which seemed a bit much. Worthy of investigation. It's worth saying that we're not certain this is exactly what is happening; we have less instrumentation in place than we'd like. But some fancy wc grepping on Robski's part suggested this was the case.

What will we change in future?

A learning that came out of this for us was this: we need more metrics exposed. We don't understand our application's behaviour under load as well as we'd like. So we're planning to do some work with App Metrics and Grafana so we've a better idea of how our application performs. If you want to improve something, first measure it.

Another fly in the ointment was that we were unable to reproduce the issue when running locally. It's worth saying here that I develop on a Windows machine and, when deployed, our application runs in a (Linux) Docker container. So there's a difference and a distance between our development experience and our running one.

I'm planning to migrate to developing in a devcontainer where that's possible. That should narrow the gap between our production experience and our development one. Reducing the difference between the two is always useful as it means you're less likely to get different behaviour (ie "problems") in production as compared to development. I'm curious as to whether I'll be able to replicate that behaviour in a devcontainer.

What did we do right now?

To solve the immediate issue we were able to pivot to a completely different approach. We moved aggregation from our ASP.NET Core web application to our TypeScript / React client with a (pretty sweet) custom hook; the topic for a subsequent blog post.

Moving to a different approach solved my immediate issue. But it left me puzzling. What was actually going wrong? Is it thread pool exhaustion? Is it something else? So many possibilities!

If anyone has any insights they'd like to share that would be incredible! I've also asked a question on Stack Overflow which has kindly had answers from generous souls. James Skimming's answer led me to Steve Gordon's excellent post on connection pooling, which I'm still absorbing and which seems like it could be relevant.

Thursday, 21 May 2020

Autofac, WebApplicationFactory and integration tests

This is one of those occasions where I'm not writing up my own work so much as my discovery after in depth googling.

Integration tests with ASP.NET Core are the best. They spin up an in-memory version of your application and let you fire requests at it. They've gone through a number of iterations in the time ASP.NET Core has been around. You may be familiar with the TestServer approach of earlier versions. For some time now, the advised approach has been using WebApplicationFactory.

What makes this approach particularly useful / powerful is that you can swap out dependencies of your running app with fakes / stubs etc. Just like unit tests! But potentially more useful because they run your whole app and hence give you a greater degree of confidence. What does this mean? Well, imagine you changed a piece of middleware in your application; this could potentially break functionality. Unit tests would probably not reveal this. Integration tests would.

There is a fly in the ointment. A hair in the gazpacho. ASP.NET Core ships with dependency injection in the box. It has its own Inversion of Control container which is perfectly fine. However, many people are accustomed to using other IOC containers such as Autofac.

What's the problem? Well, swapping out dependencies registered using ASP.NET Core's IOC requires using a hook called ConfigureTestServices. There's an equivalent hook for swapping out services registered using a custom IOC container: ConfigureTestContainer. Unfortunately, there is a bug in ASP.NET Core as of version 3.0: "When using GenericHost, in tests ConfigureTestContainer is not executed".

This means you cannot swap out dependencies that have been registered with Autofac and the like. According to the tremendous David Fowler of the ASP.NET team, this will hopefully be resolved.

In the meantime, there's a workaround thanks to various commenters on the thread. Instead of using WebApplicationFactory directly, subclass it and create a custom AutofacWebApplicationFactory (the name is not important). This custom class overrides the behaviour of ConfigureWebHost and CreateHost, plugging in a CustomServiceProviderFactory:

namespace My.Web.Tests.Helpers {
    /// <summary>
    /// Based upon https://github.com/dotnet/AspNetCore.Docs/tree/master/aspnetcore/test/integration-tests/samples/3.x/IntegrationTestsSample
    /// </summary>
    /// <typeparam name="TStartup"></typeparam>
    public class AutofacWebApplicationFactory<TStartup> : WebApplicationFactory<TStartup> where TStartup : class {
        protected override void ConfigureWebHost(IWebHostBuilder builder) {
            builder.ConfigureServices(services => {
                    services.AddSingleton<IAuthorizationHandler>(new PassThroughPermissionedRolesHandler());
                })
                .ConfigureTestServices(services => {
                }).ConfigureTestContainer<Autofac.ContainerBuilder>(builder => {
                    // called after Startup.ConfigureContainer
                });
        }

        protected override IHost CreateHost(IHostBuilder builder) {
            builder.UseServiceProviderFactory(new CustomServiceProviderFactory());
            return base.CreateHost(builder);
        }
    }

    /// <summary>
    /// Based upon https://github.com/dotnet/aspnetcore/issues/14907#issuecomment-620750841 - only necessary because of an issue in ASP.NET Core
    /// </summary>
    public class CustomServiceProviderFactory : IServiceProviderFactory<CustomContainerBuilder> {
        public CustomContainerBuilder CreateBuilder(IServiceCollection services) => new CustomContainerBuilder(services);

        public IServiceProvider CreateServiceProvider(CustomContainerBuilder containerBuilder) =>
        new AutofacServiceProvider(containerBuilder.CustomBuild());
    }

    public class CustomContainerBuilder : Autofac.ContainerBuilder {
        private readonly IServiceCollection services;

        public CustomContainerBuilder(IServiceCollection services) {
            this.services = services;
            this.Populate(services);
        }

        public Autofac.IContainer CustomBuild() {
            var sp = this.services.BuildServiceProvider();
#pragma warning disable CS0612 // Type or member is obsolete
            var filters = sp.GetRequiredService<IEnumerable<IStartupConfigureContainerFilter<Autofac.ContainerBuilder>>>();
#pragma warning restore CS0612 // Type or member is obsolete

            foreach (var filter in filters) {
                // Each filter wraps Startup.ConfigureContainer (and any ConfigureTestContainer
                // registrations); invoking the returned action applies them to this builder
                filter.ConfigureContainer(b => { })(this);
            }

            return this.Build();
        }
    }
}

I'm going to level with you; I don't understand all of this code. I'm not au fait with the inner workings of ASP.NET Core or Autofac but I can tell you what this allows. With this custom WebApplicationFactory in play you get ConfigureTestContainer back in the mix! You get to write code like this:

using System;
using System.Net;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using FakeItEasy;
using FluentAssertions;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.DependencyInjection;
using Xunit;
using Microsoft.Extensions.Options;
using Autofac;
using System.Net.Http;
using Newtonsoft.Json;

namespace My.Web.Tests.Controllers
{
    public class MyControllerTests : IClassFixture<AutofacWebApplicationFactory<My.Web.Startup>> {
        private readonly AutofacWebApplicationFactory<My.Web.Startup> _factory;

        public MyControllerTests(
            AutofacWebApplicationFactory<My.Web.Startup> factory
        ) {
            _factory = factory;
        }

        [Fact]
        public async Task My() {
            var fakeSomethingService = A.Fake<IMySomethingService>();
            var fakeConfig = Options.Create(new MyConfiguration {
                SomeConfig = "Important thing",
                OtherConfigMaybeAnEmailAddress = "[email protected]"
            });

            A.CallTo(() => fakeSomethingService.DoSomething(A<string>.Ignored))
                .Returns(Task.FromResult(true));

            void ConfigureTestServices(IServiceCollection services) {
                services.AddSingleton(fakeConfig);
            }

            void ConfigureTestContainer(ContainerBuilder builder) {
                builder.RegisterInstance(fakeSomethingService);
            }

            var client = _factory
                .WithWebHostBuilder(builder => {
                    builder.ConfigureTestServices(ConfigureTestServices);
                    builder.ConfigureTestContainer<Autofac.ContainerBuilder>(ConfigureTestContainer);
                })
                .CreateClient();

            // Act
            var request = new StringContent("{\"sommat\":\"to see\"}");
            request.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json");
            var response = await client.PostAsync("/something/submit", request);

            // Assert
            response.StatusCode.Should().Be(HttpStatusCode.OK);

            A.CallTo(() => fakeSomethingService.DoSomething(A<string>.Ignored))
                .MustHaveHappened();
        }

    }
}

Sunday, 10 May 2020

From react-window to react-virtual

The tremendous Tanner Linsley recently released react-virtual. react-virtual provides "hooks for virtualizing scrollable elements in React".

I was already using the (also excellent) react-window for this purpose. react-window does the virtualising job and does it very well indeed. However, I was intrigued by the lure of the new shiny thing, and I've never been the biggest fan of react-window's API. So I tried switching over from react-window to react-virtual as an experiment. To my delight, the experiment went so well I didn't look back!

What did I get out of the switch?

  • Simpler code / nicer developer ergonomics. The API for react-virtual allowed me to simplify my code and lose a layer of components.
  • TypeScript support in the box
  • Improved perceived performance. I didn't run any specific tests to quantify this, but I can say that the same functionality now feels snappier.

I tweeted my delight at this and Tanner asked if there was a commit diff I could share. I couldn't as it's a private codebase, but I thought it could form the basis of a blogpost.

In case you hadn't guessed, this is that blog post...

Make that change

So what does the change look like? Well first remove react-window from your project:

yarn remove react-window @types/react-window

Add the dependency to react-virtual:

yarn add react-virtual

Change your imports from:

import { FixedSizeList, ListChildComponentProps } from 'react-window';

to:

import { useVirtual } from 'react-virtual';

Change your component code from:

type ImportantDataListProps = {
    classes: ReturnType<typeof useStyles>;
    importants: ImportantData[];
};

const ImportantDataList: React.FC<ImportantDataListProps> = React.memo(props => (
    <FixedSizeList
        height={400}
        width={'100%'}
        itemSize={80}
        itemCount={props.importants.length}
        itemData={props}
    >
        {RenderRow}
    </FixedSizeList>
));

type ListItemProps = {
    classes: ReturnType<typeof useStyles>;
    importants: ImportantData[];
};

function RenderRow(props: ListChildComponentProps) {
    const { index, style } = props;
    const { importants, classes } = props.data as ListItemProps;
    const important = importants[index];

    return (
        <ListItem button style={style} key={index}>
            <ImportantThing classes={classes} important={important} />
        </ListItem>
    );
}

Of the above you can delete the ListItemProps type and the associated RenderRow function. You won't need them again! There's no longer a need to pass data down to the child element and then extract it for usage; it all comes down to a single, simpler component.

Replace the ImportantDataList component with this:

const ImportantDataList: React.FC<ImportantDataListProps> = React.memo(props => {
    const parentRef = React.useRef<HTMLDivElement>(null);

    const rowVirtualizer = useVirtual({
        size: props.importants.length,
        parentRef,
        estimateSize: React.useCallback(() => 80, []), // This is just a best guess
        overscan: 5
    });

    return (
            <div
                ref={parentRef}
                style={{
                    width: `100%`,
                    height: `500px`,
                    overflow: 'auto'
                }}
            >
                <div
                    style={{
                        height: `${rowVirtualizer.totalSize}px`,
                        width: '100%',
                        position: 'relative'
                    }}
                >
                    {rowVirtualizer.virtualItems.map(virtualRow => (
                        <div
                            key={virtualRow.index}
                            ref={virtualRow.measureRef}
                            className={props.classes.hoverRow}
                            style={{
                                position: 'absolute',
                                top: 0,
                                left: 0,
                                width: '100%',
                                height: `${virtualRow.size}px`,
                                transform: `translateY(${virtualRow.start}px)`
                            }}
                        >
                            <ImportantThing
                                classes={props.classes}
                                important={props.importants[virtualRow.index]}
                            />
                        </div>
                    ))}
                </div>
            </div>
    );
});

And you are done! Thanks Tanner for this tremendous library!

Saturday, 4 April 2020

Up to the clouds!

These last four months have been quite the departure for me. Most typically I find myself building applications; for this last period of time I've been taking the platform that I work on and migrating it from running on our on premise servers to running in the cloud.

This turned out to be much more difficult than I'd expected and for reasons that often surprised me. We knew where we wanted to get to, but not all of what we'd need to do to get there. So many things you can only learn by doing. Whilst these experiences are still fresh in my mind I wanted to document some of the challenges we faced.

The mission

At the start of January, the team decided to make a concerted effort to take our humble ASP.NET Core application and migrate it to the cloud. We sat down with some friends from the DevOps team who are part of our organisation. We're fortunate in that these marvellous people are very talented engineers indeed. It was going to be a collaboration between our two teams of budding cloudmongers that would make this happen.

Now our application is young. It is not much more than a year old. However it is growing fast. And as we did the migration from on premise to the cloud, that wasn't going to stop. Development of the application was to continue as is, shipping new versions daily. Without impeding that, we were to try and get the application migrated to the cloud.

I would liken it to boarding a speeding train, fighting your way to the front, taking the driver hostage and then diverting the train onto a different track. It was challenging. Really, really challenging.

So many things had to change for us to get from on premise servers to the cloud, all the while keeping our application a going (and shipping) concern. Let's go through them one by one.

Kubernetes and Docker

Our application was built using ASP.NET Core. A technology that is entirely cloud friendly (that's one of the reasons we picked it). We were running on a collection of hand installed, hand configured Windows servers. That had to change. We wanted to move our application to run on Kubernetes, so we didn't have to manually configure servers; rather, k8s would manage the provisioning and deployment of containers running our application. Worth saying now: I knew nothing about Kubernetes. Or nearly nothing. I learned a bunch along the way but, as I've said, this was a collaboration between our team and the mighty site reliability engineers of the DevOps team. They knew a lot about this k8s stuff and, more often than not, our team stood back and let them work their magic.

In order that we could migrate to running in k8s, we first needed to containerise our application. We needed a Dockerfile. There followed a good amount of experimentation as we worked out how to build ourselves images. There's an art to building an optimal Docker image.

So that we can cover a lot of ground, this post will remain relatively high level. So here's a number of things that we encountered along the way that are worth considering:

  • Multi-stage builds were an absolute necessity for us. We'd build the front end of our app (React / TypeScript) using one stage with a Node base image. Then we'd build our app using a .NET Core SDK base image. Finally, we'd use an ASP.NET image to run the app, copying in the output of the previous stages.
  • Our application accesses various SQL Server databases. We struggled to get our application to connect to them. The issue related to the SSL configuration of our runner image. The fix was simple but frustrating; use a -bionic image as it has the configuration you need. We found that gem here.
  • Tests. Automated tests. We want to run them in our build; but how? Once more multi-stage builds to the rescue. We'd build our application, then in a separate stage we'd run the tests; copying in the app from the build stage. If the tests failed, the build failed. If they passed then the intermediate stage containing the tests would be discarded by Docker. No unnecessary bloat of the image; all that testing goodness still; now in containerised form!

Jenkins

Our on premise world used TeamCity for our continuous integration needs and Octopus for deployment. We liked these tools well enough; particularly Octopus. However, the DevOps team were very much of the mind that we should use Jenkins instead. And Pipeline. It was here that we initially struggled. To quote the docs:

Jenkins Pipeline (or simply "Pipeline" with a capital "P") is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.

Whilst continuous delivery is super cool, and is something our team was interested in, we weren't ready for it yet. We didn't yet have the kind of automated testing in place that gave us the confidence that we'd need to move to it. One day, but not today. For now there was still some manual testing done on each release, prior to shipping. Octopus suited us very well here as it allowed us to deploy, on demand, a build of our choice to a given environment. So the question was: what to do? Fortunately the immensely talented Aby Egea came up with a mechanism that supported that very notion. A pipeline that would, optionally, deploy our build to a specified environment. So we were good!

One thing we got to really appreciate about Jenkins was that the build is scripted with a Jenkinsfile. This was in contrast to our TeamCity world where it was all manually configured. Configuration as code is truly a wonderful thing as your build pipeline becomes part of your codebase; open for everyone to see and understand. If anyone wants to change the build pipeline it has to get code reviewed like everything else. It was as code in our Jenkinsfile that the deployment mechanism lived.

Vault

Another thing that we used Octopus for was secrets. Applications run on configuration; these are settings that drive the behaviour of your application. A subset of configuration is "secrets". Secrets are configuration that can't be stored in source code; they would represent a risk if they did. For instance a database connection string. We'd been merrily using Octopus for this; as Octopus deploys an application to a server it enriches the appsettings.json file with any required secrets.

Without Octopus in the mix, how were we to handle our secrets? The answer is with Hashicorp Vault. We'd store our secrets in there and, thanks to clever work by Robski of the DevOps team, when our container was brought up by Kubernetes, it would mount into the filesystem an appsettings.Vault.json file which we read thanks to our trusty friend .AddJsonFile with optional: true. (As the file didn't exist in our development environment.)

Hey presto! Safe secrets in k8s.

Networking

Our on premise servers sat on the company network. They could see everything that there was to see. All the other servers around them on the network, bleeping and blooping. The opposite was true in AWS. There was nothing to see. Nothing to access. As it should be. It's safer that way should a machine become compromised. For each database and each API our application depended upon, we needed to specifically whitelist access.

Kerberos

There's always a fly in the ointment. A nasty surprise on a dark night. Ours was realising that our application depended upon an API that was secured using Windows Authentication. Our application had been accessing it by running under a service account which had been permissioned to access it. However, in AWS, our application wasn't running under a service account on the company network. Disappointingly, in the short term the API was not going to support an alternative authentication mechanism.

What to do? Honestly it wasn't looking good. We were considering proxying through one of our Windows servers just to get access to that API. I was tremendously disappointed. At this point our hero arrived; one JMac hacked together a Kerberos sidecar approach one weekend. You can see a similar approach here. This got us to a point that allowed us to access the API we needed to.

I'm kind of amazed that there isn't better documentation out there around having a Kerberos sidecar in a k8s setup. Tragically, Windows Authentication is a widely used authentication mechanism. That being the case, having good docs to show how you can get a Kerberos sidecar in place would likely greatly advance the ability of enterprises to migrate to the cloud. The best docs I've found are here. It is super hard though. So hard!

Hangfire

We were using Hosted Services to perform background task running in our app. The nature of our background tasks meant that it was important to only run a single instance of a background task at a time; or bad things would happen. This was going to become a problem since we had ambitions to horizontally scale our application; to add new pods running our app as demand determined.

So we started to use Hangfire to perform task running in our app. With Hangfire, when a job is picked up it gets locked so other servers can't pick it up. That's what we need.

Hangfire is pretty awesome. However it turns out that there's quirks when you move to a containerised environment. We have a number of recurring jobs that are scheduled to run at certain dates and times. In order that Hangfire can ascertain what time it is, it needs a timezone. It turns out that timezones on Windows != timezones in Docker / Linux.

This was a problem because, as we limbered up for the great migration, we were trying to run our cloud implementation side by side with our on premise one. And Windows picked a fight with Linux over timezones. You can see others bumping into this condition here. We learned this the hard way: jobs mysteriously stopping due to timezone related errors; Windows Hangfire not able to recognise Linux Hangfire timezones and vice versa.

The TL;DR is that we had to do a hard switch with Hangfire; it couldn't run side by side. Not the end of the world, but surprising.

Azure Active Directory Single Sign-On

Historically our application had used two modes of authentication; Windows Authentication and cookies. Windows Authentication doesn't generally play nicely with Docker. It's doable, but it's not the hill you want to die on. So we didn't; we swapped out Windows Authentication for Azure AD SSO and didn't look back.

We also made some changes so our app would support cookies auth alongside Azure AD auth; I've written about this previously.

Do the right thing and tell people about it

We're there now; we've made the move. It was a difficult journey but one worth making; it sets up our platform for where we want to take it in the future. Having infrastructure as code makes all kinds of approaches possible that weren't before. Here's some things we're hoping to get out of the move:

  • blue green deployments - shipping without taking down our platform
  • provision environments on demand - currently we have a highly contended situation when it comes to test environments. With k8s and AWS we can look at spinning up environments as we need them and throwing them away also
  • autoscaling for need - we can start to look at spinning up new containers in times of high load and removing excessive containers in times of low load

We've also become more efficient as a team. We are no longer maintaining servers, renewing certificates, installing software, RDPing onto boxes. All that time and effort we can plough back into making awesome experiences for our users.

There's a long list of other benefits and it's very exciting indeed! It's not enough for us to have done this though. It's important that we tell the story of what we've done and how and why we've done it. That way people have empathy for the work. Also they can start to think about how they could start to reap similar benefits themselves. By talking to others about the road we've travelled, we can save them time and help them to travel a similar road. This is good for them and it's good for us; it helps our relationships and it helps us all to move forwards together.

A rising tide lifts all boats. By telling others about our journey, we raise the water level. Up to the clouds!

Sunday, 29 March 2020

Offline storage in a PWA

When you are building any kind of application it's typical to want to store information which persists beyond a single user session. Sometimes that will be information that you'll want to live in some kind of centralised database, but not always.

Also, you may want that data to still be available if your user is offline. Even if they can't connect to the network, the user may still be able to use the app to do meaningful tasks; but the app will likely require a certain amount of data to drive that.

How can we achieve this in the context of a PWA?

The problem with localStorage

If you were building a classic web app you'd probably be reaching for Window.localStorage at this point. Window.localStorage is a long existing API that stores data beyond a single session. It has a simple API and is very easy to use. However, it has a couple of problems:

  1. Window.localStorage is synchronous. Not a tremendous problem for every app, but if you're building something that has significant performance needs then this could become an issue.
  2. Window.localStorage cannot be used in the context of a Worker or a ServiceWorker. The APIs are not available there.
  3. Window.localStorage stores only strings. Given JSON.stringify and JSON.parse that's not a big problem. But it's an inconvenience.

The second point here is the significant one. If we've a need to access our offline data in the context of a ServiceWorker (and if you're offline you'll be using a ServiceWorker) then what do you do?

IndexedDB to the rescue?

Fortunately, localStorage is not the only game in town. There's an alternative offline storage mechanism available in browsers with the curious name of IndexedDB. To quote the docs:

IndexedDB is a transactional database system, like an SQL-based RDBMS. However, unlike SQL-based RDBMSes, which use fixed-column tables, IndexedDB is a JavaScript-based object-oriented database. IndexedDB lets you store and retrieve objects that are indexed with a key; any objects supported by the structured clone algorithm can be stored. You need to specify the database schema, open a connection to your database, and then retrieve and update data within a series of transactions.

It's clear that IndexedDB is very powerful. But it doesn't sound very simple. A further look at the MDN example of how to interact with IndexedDB does little to remove that thought.

We'd like to be able to access data offline, but in a simple fashion; like we could with localStorage, which has a wonderfully straightforward API. If only someone would build an abstraction on top of IndexedDB to make our lives easier...

Someone did.

IDB-Keyval to the rescue!

The excellent Jake Archibald of Google has written IDB-Keyval which is:

A super-simple-small promise-based keyval store implemented with IndexedDB

The API is essentially equivalent to localStorage with a few lovely differences:

  1. The API is promise based; all functions return a Promise; this makes it a non-blocking API.
  2. The API is not restricted to strings as localStorage is. To quote the docs: this is IDB-backed, you can store anything structured-clonable (numbers, arrays, objects, dates, blobs etc)
  3. Because this is abstraction built on top of IndexedDB, it can be used both in the context of a typical web app and also in a Worker or a ServiceWorker if required.

Simple usage

Let's take a look at what usage of IDB-Keyval might be like. For that we're going to need an application. It would be good to be able to demonstrate both simple usage and also how usage in the context of an application might look.

Let's spin up a TypeScript React app with Create React App:

npx create-react-app offline-storage-in-a-pwa --template typescript

This creates us a simple app. Now let's add IDB-Keyval to it:

yarn add idb-keyval

Then, let's update the index.tsx file to add a function that tests using IDB-Keyval:

import React from 'react';
import ReactDOM from 'react-dom';
import { set, get } from 'idb-keyval';
import './index.css';
import App from './App';
import * as serviceWorker from './serviceWorker';

ReactDOM.render(<App />, document.getElementById('root'));

serviceWorker.register();

async function testIDBKeyval() {
    await set('hello', 'world');
    const whatDoWeHave = await get('hello');
    console.log(`When we queried idb-keyval for 'hello', we found: ${whatDoWeHave}`);
}

testIDBKeyval();

As you can see, we've added a testIDBKeyval function which does the following:

  1. Adds a value of 'world' to IndexedDB using IDB-Keyval for the key of 'hello'
  2. Queries IndexedDB using IDB-Keyval for the key of 'hello' and stores it in the variable whatDoWeHave
  3. Logs out what we found.

You'll also note that testIDBKeyval is an async function. This is so that we can use await when we're interacting with IDB-Keyval. Given that its API is Promise based, it is await friendly. Where you're performing more than a single asynchronous operation at a time, it's often valuable to use async / await to increase the readability of your codebase.

What happens when we run our application with yarn start? Let's do that and take a look at the devtools:

We successfully wrote something into IndexedDB, read it back and printed that value to the console. Amazing!

Usage in React

What we've done so far is slightly abstract. It would be good to implement a real-world use case. Let's create an application which gives users the choice between using a "Dark mode" version of the app or not. To do that we'll replace our App.tsx with this:

import React, { useState } from "react";
import "./App.css";

const sharedStyles = {
  height: "30rem",
  fontSize: "5rem",
  textAlign: "center"
} as const;

function App() {
  const [darkModeOn, setDarkModeOn] = useState(true)
  const handleOnChange = ({ target }: React.ChangeEvent<HTMLInputElement>) => setDarkModeOn(target.checked);

  const styles = {
    ...sharedStyles,
    ...(darkModeOn
      ? {
          backgroundColor: "black",
          color: "white"
        }
      : {
          backgroundColor: "white",
          color: "black"
        })
  };

  return (
    <div style={styles}>
      <input
        type="checkbox"
        value="darkMode"
        checked={darkModeOn}
        id="darkModeOn"
        name="darkModeOn"
        style={{ width: "3rem", height: "3rem" }}
        onChange={handleOnChange}
      />
      <label htmlFor="darkModeOn">Use dark mode?</label>
    </div>
  );
}

export default App;

When you run the app you can see how it works:

Looking at the code you'll be able to see that this is implemented using React's useState hook. So any user preference selected will be lost on a page refresh. Let's see if we can take this state and move it into IndexedDB using IDB-Keyval.

We'll change the code like so:

import React, { useState, useEffect } from "react";
import { set, get } from "idb-keyval";
import "./App.css";

const sharedStyles = {
  height: "30rem",
  fontSize: "5rem",
  textAlign: "center"
} as const;

function App() {
  const [darkModeOn, setDarkModeOn] = useState<boolean | undefined>(undefined);

  useEffect(() => {
    get<boolean>("darkModeOn").then(value =>
      // If a value is retrieved then use it; otherwise default to true
      setDarkModeOn(value ?? true)
    );
  }, [setDarkModeOn]);

  const handleOnChange = ({ target }: React.ChangeEvent<HTMLInputElement>) => {
    setDarkModeOn(target.checked);

    set("darkModeOn", target.checked);
  };

  const styles = {
    ...sharedStyles,
    ...(darkModeOn
      ? {
          backgroundColor: "black",
          color: "white"
        }
      : {
          backgroundColor: "white",
          color: "black"
        })
  };

  return (
    <div style={styles}>
      {darkModeOn === undefined ? (
        <>Loading preferences...</>
      ) : (
        <>
          <input
            type="checkbox"
            value="darkMode"
            checked={darkModeOn}
            id="darkModeOn"
            name="darkModeOn"
            style={{ width: "3rem", height: "3rem" }}
            onChange={handleOnChange}
          />
          <label htmlFor="darkModeOn">Use dark mode?</label>
        </>
      )}
    </div>
  );
}

export default App;

The changes here are:

  1. darkModeOn is now initialised to undefined and the app displays a loading message until darkModeOn has a value.
  2. The app attempts to load a value from IDB-Keyval with the key 'darkModeOn' and set darkModeOn with the retrieved value. If no value is retrieved then it sets darkModeOn to true.
  3. When the checkbox is changed, the corresponding value is both applied to darkModeOn and saved to IDB-Keyval with the key 'darkModeOn'.

As you can see, this means that we are persisting preferences beyond page refresh in a fashion that will work both online and offline!

Usage as a React hook

Finally it's time for bonus points. Wouldn't it be nice if we could move this functionality into a reusable React hook? Let's do it!

Let's create a new usePersistedState.ts file:

import { useState, useEffect, useCallback } from "react";
import { set, get } from "idb-keyval";

export function usePersistedState<TState>(keyToPersistWith: string, defaultState: TState) {
    const [state, setState] = useState<TState | undefined>(undefined);

    useEffect(() => {
        get<TState>(keyToPersistWith).then(retrievedState =>
            // If a value is retrieved then use it; otherwise default to defaultValue
            setState(retrievedState ?? defaultState));
    }, [keyToPersistWith, setState, defaultState]);
    
    const setPersistedValue = useCallback((newValue: TState) => {
        setState(newValue);
        set(keyToPersistWith, newValue);
    }, [keyToPersistWith, setState]);
    
    return [state, setPersistedValue] as const;
}

This new hook is modelled after the API of useState and is named usePersistedState. It requires that a key be supplied; this is the key against which the data will be saved. It also requires a default value to use in the case that nothing is found during the lookup.

It returns (just like useState) a stateful value, and a function to update it. Finally, let's switch over our App.tsx to use our shiny new hook:

import React from "react";
import "./App.css";
import { usePersistedState } from "./usePersistedState";

const sharedStyles = {
  height: "30rem",
  fontSize: "5rem",
  textAlign: "center"
} as const;

function App() {
  const [darkModeOn, setDarkModeOn] = usePersistedState<boolean>("darkModeOn", true);

  const handleOnChange = ({ target }: React.ChangeEvent<HTMLInputElement>) =>
    setDarkModeOn(target.checked);

  const styles = {
    ...sharedStyles,
    ...(darkModeOn
      ? {
        backgroundColor: "black",
        color: "white"
      }
      : {
        backgroundColor: "white",
        color: "black"
      })
  };

  return (
    <div style={styles}>
      {darkModeOn === undefined ? (
        <>Loading preferences...</>
      ) : (
          <>
            <input
              type="checkbox"
              value="darkMode"
              checked={darkModeOn}
              id="darkModeOn"
              name="darkModeOn"
              style={{ width: "3rem", height: "3rem" }}
              onChange={handleOnChange}
            />
            <label htmlFor="darkModeOn">Use dark mode?</label>
          </>
        )}
    </div>
  );
}

export default App;

Conclusion

This post has demonstrated how easily a web application or a PWA can safely store data that persists between sessions using native browser capabilities. IndexedDB powered the solution we built. We used IDB-Keyval for the delightful and familiar abstraction it offers over IndexedDB; it allowed us to come up with a solution with a similarly lovely API. It's worth knowing that there are alternatives to IDB-Keyval available, such as localForage. If you are building for older browsers which may lack good IndexedDB support then localForage would be a good choice. But be aware that with greater backwards compatibility comes greater download size. Do consider this and make the tradeoffs that make sense for you.
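
For comparison, a rough sketch of the same sort of usage with localForage might look like this; it's based on localForage's promise based setItem / getItem API and reuses our darkModeOn key:

import localforage from 'localforage';

async function testLocalForage() {
    // setItem / getItem mirror what we did with idb-keyval's set / get
    await localforage.setItem('darkModeOn', true);
    const darkModeOn = await localforage.getItem<boolean>('darkModeOn');
    console.log(`localForage gave us back: ${darkModeOn}`);
}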

Finally, I've finished this post by illustrating what usage looks like in a React context. Do be aware that there's nothing React specific about our offline storage mechanism; so if you're rolling with Vue, Angular or something else entirely, this is for you too! Offline storage is a feature that can provide a much better user experience. Please do consider making use of it in your applications.

This post was originally published on LogRocket.

The source code for this project can be found here.

Sunday, 22 March 2020

Dual boot authentication with ASP.Net Core

This is a post about having two kinds of authentication working at the same time in ASP.Net Core, but choosing which authentication method to use dynamically at runtime, based upon criteria of your choice.

Already this sounds complicated; let's fix that. Perhaps I should describe my situation to you. I've an app which has two classes of user. One class, let's call them "customers" (because... uh... they're customers). The customers access our application via a public facing website. Traffic rolls through Cloudflare and into our application. The public facing URL is something fancy like https://mega-app.com. That's one class of user.

The other class of user we'll call "our peeps"; because they are us. We use the app that we build. Traffic from "us" comes from a different hostname; only addressable on our network. So URLs from requests that we make are more along the lines of https://strictly4mypeeps.io.

So far, so uncontroversial. Now it starts to get interesting. Our customers log into our application using their super secret credentials. It's cookie based authentication. But for our peeps we do something different. Having to enter your credentials each time you use the app is friction. It gets in the way. So for us we have Azure AD in the mix. Azure AD is how we authenticate ourselves; and that means we don't spend 5% of each working day entering credentials.

Let us speak of the past

Now our delightful little application grew up in a simpler time. A time where you went to the marketplace, picked out some healthy looking servers, installed software upon them, got them attached to the internet, deployed an app onto them and said "hey presto, we're live!".

Way back when, we had some servers on the internet, that's how our customers got to our app. Our peeps, us, we went to other servers that lived on our network. So we had multiple instances of our app, deployed to different machines. The ones on the internet were configured to use cookie based auth, the ones on our internal network were Azure AD.

As I said, a simpler time.

A new hope

We've been going through the process of cloudifying our app. Bye, bye servers, hello Docker and Kubernetes. So exciting! As we change the way our app is built and deployed, we've been thinking about whether the choices we made still make sense.

When it came to authentication, my initial thoughts were to continue the same road we're travelling; just in containers and pods. So where we had "internal" servers, we'd have "internal" pods, and where we'd have "external" servers we'd have external pods. I had the good fortune to be working with the amazingly talented Robski. Robski knows far more about K8s and networking than I'm ever likely to. He'd regularly say things like "ingress" and "MTLS" whilst I stared blankly at him. He definitely knows stuff.

Robski challenged my plans. "We don't need it. Have one pod that does both sorts of auth. If you do that, your implementation is simpler and scaling is more straightforward. You'll only need half the pods because you won't need internal and external ones; one pod can handle both sets of traffic. You'll save money."

I loved the idea but I didn't think that ASP.Net Core supported it. "It's just not a thing Robski; ASP.Net Core doesn't support it." Robski didn't believe me. That turned out to be a very good thing. There followed a period of much googling and experimentation. One day of hunting in, I was still convinced there was no way to do it that would allow me to look in the mirror without self-loathing. Then Robski sent me this:

It was a link to the amazing David Fowler talking about some API I'd never heard of called SchemeSelector. It turned out that this was the starting point for exactly what we needed; a way to dynamically select an authentication scheme at runtime.

Show me the code

This API did end up landing in ASP.Net Core, but with the name ForwardDefaultSelector. Not the most descriptive of names and I've struggled to find any documentation on it at all. What I did discover was an answer on StackOverflow by the marvellous Barbara Post. I was able to take the approach Barbara laid out and use it to my own ends. I ended up with this snippet of code added to my Startup.ConfigureServices:

services
    .AddAuthentication(sharedOptions => {
        sharedOptions.DefaultScheme = "WhichAuthDoWeUse";
        sharedOptions.DefaultAuthenticateScheme = "WhichAuthDoWeUse";
        sharedOptions.DefaultChallengeScheme = "WhichAuthDoWeUse";
    })
    .AddPolicyScheme("WhichAuthDoWeUse", "Azure AD or Cookies", options => {
        options.ForwardDefaultSelector = context => {
            var (isExternalRequest, requestUrl) = context.Request.GetIsExternalRequestAndDomain();
            if (isExternalRequest) {
                _logger.LogInformation(
                    "Request ({RequestURL}) has come from external domain ({Domain}) so using Cookie Authentication",
                    requestUrl, ExternalBaseUrl);

                return CookieAuthenticationDefaults.AuthenticationScheme;
            }

            _logger.LogInformation(
                "Request ({RequestURL}) has not come from external domain ({Domain}) so using Azure AD Authentication",
                requestUrl, ExternalBaseUrl);

            return AzureADDefaults.AuthenticationScheme;
        };
    })
    .AddAzureAD(options => {
        Configuration.Bind("AzureAd", options);
    })
    .AddCookie(options => {
        options.Cookie.SecurePolicy = CookieSecurePolicy.Always;
        options.Cookie.SameSite = SameSiteMode.Strict;
        options.Cookie.HttpOnly = true;
        options.Events.OnRedirectToAccessDenied = (context) => {
            context.Response.StatusCode = Microsoft.AspNetCore.Http.StatusCodes.Status401Unauthorized;
            return Task.CompletedTask;
        };

        options.Events.OnRedirectToLogin = (context) => {
            context.Response.StatusCode = Microsoft.AspNetCore.Http.StatusCodes.Status401Unauthorized;
            return Task.CompletedTask;
        };
    });

If you look at this code it's doing these things:

  1. Registers three types of authentication: Cookie, Azure AD and "WhichAuthDoWeUse".
  2. Registers the default scheme to be "WhichAuthDoWeUse".

"WhichAuthDoWeUse" is effectively an if statement that says, "if this is an external Request use Cookies authentication, otherwise use Azure AD". Given that "WhichAuthDoWeUse" is the default scheme, this code runs for each request, to determine which authentication method to use.

Alongside this mechanism I added these extension methods:

using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Extensions;

namespace My.App.Auth {
    public static class AuthExtensions {
        public const string ExternalBaseUrl = "https://mega-app.com";
        public const string InternalBaseUrl = "https://strictly4mypeeps.io";

        /// <summary>
        /// Determines if a request is an "external" URL (eg begins "https://mega-app.com")
        /// or an "internal" URL (eg begins "https://strictly4mypeeps.io")
        /// </summary>
        public static (bool, string) GetIsExternalRequestAndDomain(this HttpRequest request) {
            var (requestUrl, domain) = GetRequestUrlAndDomain(request);

            var isExternalUrl = domain == ExternalBaseUrl;

            var isUnknownPath = domain == null; // This scenario is extremely unlikely but has been observed once during testing so we will cater for it

            var isExternalRequest = isExternalUrl || isUnknownPath; // If unknown we'll treat as "external" for a safe fallback

            return (isExternalRequest, requestUrl);
        }

        /// <summary>
        /// Determines if a request is an "external" URL (eg begins "https://mega-app.com")
        /// or an "internal" URL (eg begins "https://strictly4mypeeps.io")
        /// </summary>
        public static (bool, string) GetIsInternalRequestAndDomain(this HttpRequest request) {
            var (requestUrl, domain) = GetRequestUrlAndDomain(request);

            var isInternalRequest = domain == InternalBaseUrl;

            return (isInternalRequest, requestUrl);
        }

        private static (string, string) GetRequestUrlAndDomain(HttpRequest request) {
            string requestUrl = null;
            string domain = null;
            if (request.Host.HasValue) {
                requestUrl = request.GetEncodedUrl();
                domain = new Uri(requestUrl).GetLeftPart(UriPartial.Authority);
            }

            return (requestUrl, domain);
        }
    }
}

Finally, I updated the SpaController.cs (which serves initial requests to our Single Page Application) to cater for having two types of Auth in play:

        /// <summary>
        /// ASP.NET will try and load the index.html using the FileServer if we don't have a route
        /// here to match `/`. These attributes can't be on Index or the spa fallback doesn't work
        /// Note: this is almost perfect except that if someone actually calls /index.html they'll get
        /// the FileServer one, not the one from this file.
        /// </summary>
        [HttpGet("/")]
        [AllowAnonymous]
        public async Task<IActionResult> SpaFallback([FromQuery] string returnUrl) {
            var redirectUrlIfUserIsInternalAndNotAuthenticated = GetRedirectUrlIfUserIsInternalAndNotAuthenticated(returnUrl);

            if (redirectUrlIfUserIsInternalAndNotAuthenticated != null)
                return LocalRedirect(redirectUrlIfUserIsInternalAndNotAuthenticated);

            return await Index(); // Index just serves up our SPA index.html
        }

        /// <summary>
        /// SPA landing with authorisation - this endpoint will typically not be directly navigated to by a user; 
        /// rather it will be redirected to from the IndexWithoutAuthorisation and SpaFallback actions above
        /// in the case where a user is *not* authenticated but has come from an internal URL eg https://strictly4mypeeps.io
        /// </summary>
        [HttpGet("/login-with-azure-ad")]
        [Authorize]
        public async Task<IActionResult> IndexWithAuthorisation()
        {
            return await Index(); // Index just serves up our SPA index.html
        }

        /// <summary>
        /// This method returns a RedirectURL if a request is coming from an internal URL
        /// eg https://strictly4mypeeps.io and is not authenticated.  In this case
        /// we likely want to trigger authentication by redirecting to an authorized endpoint
        /// </summary>
        string GetRedirectUrlIfUserIsInternalAndNotAuthenticated(string returnUrl)
        {
            // If a user is authenticated then we don't need to trigger authentication
            var isAuthenticated = User?.Identity?.Name != null;
            if (isAuthenticated)
                return null;

            // Determine whether this request has come from our internal domain
            var (isInternalRequest, requestUrl) = Request.GetIsInternalRequestAndDomain();

            if (isInternalRequest) {
                var redirectUrl = $"/login-with-azure-ad{(string.IsNullOrEmpty(returnUrl) ? "" : "?returnUrl=" + WebUtility.UrlEncode(returnUrl))}";
                _logger.LogInformation(
                    "Request ({RequestURL}) has come from internal domain ({InternalDomain}) but is not authenticated; redirecting to {RedirectURL}",
                    requestUrl, AuthExtensions.InternalBaseUrl, redirectUrl);

                return redirectUrl;
            }

            return null;
        }

The code above allows anonymous requests to land in our app through the AllowAnonymous attribute. However, it checks the request when it comes in to see if:

  1. It's an internal request (i.e. the Request URL starts "https://strictly4mypeeps.io/")
  2. The current user is not authenticated.

In this case the user is redirected to the https://strictly4mypeeps.io/login-with-azure-ad route which is decorated with the Authorize attribute. This will trigger authentication for our unauthenticated internal users and drive them through the Azure AD login process.

The mystery of no documentation

I'm so surprised that this approach hasn't yet been better documented on the (generally superb) ASP.Net Core docs. It's such a potentially useful approach; and in our case, money saving too! I hope the official docs feature something on this in future. If they do, and I've just missed it (possible!) then please hit me up in the comments.

Friday, 21 February 2020

Web Workers, comlink, TypeScript and React

JavaScript is famously single threaded. However, if you're developing for the web, you may well know that this is not quite accurate. There are Web Workers:

A worker is an object created using a constructor (e.g. Worker()) that runs a named JavaScript file — this file contains the code that will run in the worker thread; workers run in another global context that is different from the current window.

Given that there is a way to use other threads for background processing, why doesn't this happen all the time? Well there's a number of reasons; not the least of which is the ceremony involved in interacting with Web Workers. Consider the following example that illustrates moving a calculation into a worker:

// main.js
function add2NumbersUsingWebWorker() {
    const myWorker = new Worker("worker.js");

    myWorker.postMessage([42, 7]);
    console.log('Message posted to worker');

    myWorker.onmessage = function(e) {
        console.log('Message received from worker', e.data);
    }
}

add2NumbersUsingWebWorker();

// worker.js
onmessage = function(e) {
  console.log('Worker: Message received from main script');
  const result = e.data[0] + e.data[1];
  if (isNaN(result)) {
    postMessage('Please write two numbers');
  } else {
    const workerResult = 'Result: ' + result;
    console.log('Worker: Posting message back to main script');
    postMessage(workerResult);
  }
}

This is not simple. It's hard to understand what's happening. Also, this approach only supports a single method call. I'd much rather write something that looked more like this:

// main.js
function add2NumbersUsingWebWorker() {
    const myWorker = new Worker("worker.js");

    const total = myWorker.add2Numbers(42, 7);
    console.log('Message received from worker', total);
}

add2NumbersUsingWebWorker();

// worker.js
export function add2Numbers(firstNumber, secondNumber) {
  const result = firstNumber + secondNumber;
  return (isNaN(result))
    ? 'Please write two numbers'
    : 'Result: ' + result;
}

There's a way to do this using a library made by Google called comlink. This post will demonstrate how we can use this. We'll use TypeScript and webpack. We'll also examine how to integrate this approach into a React app.

A use case for a Web Worker

Let's make ourselves a TypeScript web app. We're going to use create-react-app for this:

npx create-react-app webworkers-comlink-typescript-react --template typescript

Create a takeALongTimeToDoSomething.ts file alongside index.tsx:

export function takeALongTimeToDoSomething() {
    console.log('Start our long running job...');
    const seconds = 5;
    const start = new Date().getTime();
    const delay = seconds * 1000;

    while (true) {
        if ((new Date().getTime() - start) > delay) {
            break;
        }
    }
    console.log('Finished our long running job');
}

To index.tsx add this code:

import { takeALongTimeToDoSomething } from './takeALongTimeToDoSomething';

// ...

console.log('Do something');
takeALongTimeToDoSomething();
console.log('Do another thing');

When our application runs we see this behaviour:

The app starts and logs Do something and Start our long running job... to the console. It then blocks the UI until the takeALongTimeToDoSomething function has completed running. During this time the screen is empty and unresponsive. This is a poor user experience.

Hello worker-plugin and comlink

To start using comlink we're going to need to eject our create-react-app application. The way create-react-app works is by giving you a setup that handles a high percentage of the needs for a typical web app. When you encounter an unsupported use case, you can run the yarn eject command to get direct access to the configuration of your setup.

Web Workers are not that commonly used in day to day development at present. Consequently there isn't yet a "plug'n'play" solution for workers supported by create-react-app. There's a number of potential ways to support this use case and you can track the various discussions happening against create-react-app that cover this. For now, let's eject with:

yarn eject

Then let's install the packages we're going to be using:

  • worker-plugin - this webpack plugin automatically compiles modules loaded in Web Workers
  • comlink - this library provides the RPC-like experience that we want from our workers

yarn add comlink worker-plugin

We now need to tweak our webpack.config.js to use the worker-plugin:

const WorkerPlugin = require('worker-plugin');

// ....

    plugins: [
      new WorkerPlugin(),

// ....

Do note that there's a number of plugins statements in the webpack.config.js. You want the top level one; look out for the new HtmlWebpackPlugin statement and place your new WorkerPlugin(), just before it.
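
To make the placement concrete, the top level plugins array ends up looking roughly like this. This is a sketch only; the other entries are whatever create-react-app already generated and their options are elided:

    plugins: [
      // our addition: compiles modules loaded via new Worker()
      new WorkerPlugin(),

      // create-react-app's existing entry; leave its options as they are
      new HtmlWebpackPlugin(
        // ...existing options...
      ),

      // ...the rest of the generated plugins...
    ],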

Workerize our slow process

Now we're ready to take our long running process and move it into a worker. Inside the src folder, create a new folder called my-first-worker. Our worker is going to live in here. Into this folder we're going to add a tsconfig.json file:

{
  "compilerOptions": {
    "strict": true,
    "target": "esnext",
    "module": "esnext",
    "lib": [
      "webworker",
      "esnext"
    ],
    "moduleResolution": "node",
    "noUnusedLocals": true,
    "sourceMap": true,
    "allowJs": false,
    "baseUrl": "."
  }
}

This file exists to tell TypeScript that this is a Web Worker. Do note the "lib": [ "webworker" usage which does exactly that.

Alongside the tsconfig.json file, let's create an index.ts file. This will be our worker:

import { expose } from 'comlink';
import { takeALongTimeToDoSomething } from '../takeALongTimeToDoSomething';

const exports = {
    takeALongTimeToDoSomething
};
export type MyFirstWorker = typeof exports;

expose(exports);

There's a number of things happening in our small worker file. Let's go through this statement by statement:

import { expose } from 'comlink';

Here we're importing the expose method from comlink. Comlink’s goal is to make exposed values from one thread available in the other. The expose method can be viewed as the comlink equivalent of export. It is used to export the RPC style signature of our worker. We'll see its use later.

import { takeALongTimeToDoSomething } from '../takeALongTimeToDoSomething';

Here we're going to import our takeALongTimeToDoSomething function that we wrote previously, so we can use it in our worker.

const exports = {
    takeALongTimeToDoSomething
};

Here we're creating the public facing API that we're going to expose.

export type MyFirstWorker = typeof exports;

We're going to want our worker to be strongly typed. This line creates a type called MyFirstWorker which is derived from our exports object literal.
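
For illustration, the type that TypeScript derives here is roughly equivalent to writing the following by hand (a sketch only; in practice we let typeof do the work):

type MyFirstWorker = {
    takeALongTimeToDoSomething: () => void;
};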

expose(exports);

Finally we expose the exports using comlink. We're done; that's our worker finished. Now let's consume it. Let's change our index.tsx file to use it. Replace our import of takeALongTimeToDoSomething:

import { takeALongTimeToDoSomething } from './takeALongTimeToDoSomething';

With an import of wrap from comlink that creates a local takeALongTimeToDoSomething function that wraps interacting with our worker:

import { wrap } from 'comlink';

function takeALongTimeToDoSomething() {
    const worker = new Worker('./my-first-worker', { name: 'my-first-worker', type: 'module' });
    const workerApi = wrap<import('./my-first-worker').MyFirstWorker>(worker);
    workerApi.takeALongTimeToDoSomething();    
}

Now we're ready to demo our application using our function offloaded into a Web Worker. It now behaves like this:

There's a number of exciting things to note here:

  1. The application is now non-blocking. Our long running function is now not preventing the UI from updating.
  2. The functionality is lazily loaded via a my-first-worker.chunk.worker.js that has been created by the worker-plugin.

Using Web Workers in React

The example we've shown so far demonstrates how you could use Web Workers and why you might want to. However, it's a far cry from a real world use case. Let's take the next step and plug our Web Worker usage into our React application. What would that look like? Let's find out.

We'll return index.tsx back to its initial state. Then we'll make a simple adder function that takes some values and returns their total. To our takeALongTimeToDoSomething.ts module let's add:

export function takeALongTimeToAddTwoNumbers(number1: number, number2: number) {
    console.log('Start to add...');
    const seconds = 5;
    const start = new Date().getTime();
    const delay = seconds * 1000;
    while (true) {
        if ((new Date().getTime() - start) > delay) {
            break;
        }
    }
    const total = number1 + number2;
    console.log('Finished adding');
    return total;
}

Let's start using our long running calculator in a React component. We'll update our App.tsx to use this function and create a simple adder component:

import React, { useState } from "react";
import "./App.css";
import { takeALongTimeToAddTwoNumbers } from "./takeALongTimeToDoSomething";

const App: React.FC = () => {
  const [number1, setNumber1] = useState(1);
  const [number2, setNumber2] = useState(2);

  const total = takeALongTimeToAddTwoNumbers(number1, number2);

  return (
    <div className="App">
      <h1>Web Workers in action!</h1>

      <div>
        <label>Number to add: </label>
        <input
          type="number"
          onChange={e => setNumber1(parseInt(e.target.value))}
          value={number1}
        />
      </div>
      <div>
        <label>Number to add: </label>
        <input
          type="number"
          onChange={e => setNumber2(parseInt(e.target.value))}
          value={number2}
        />
      </div>
      <h2>Total: {total}</h2>
    </div>
  );
};

export default App;

When you try it out you'll notice that entering a single digit locks the UI for 5 seconds whilst it adds the numbers. From the moment the cursor stops blinking to the moment the screen updates, the UI is non-responsive:

So far, so classic. Let's Web Workerify this!

We'll update our my-first-worker/index.ts to import this new function:

import { expose } from "comlink";
import {
  takeALongTimeToDoSomething,
  takeALongTimeToAddTwoNumbers
} from "../takeALongTimeToDoSomething";

const exports = {
  takeALongTimeToDoSomething,
  takeALongTimeToAddTwoNumbers
};
export type MyFirstWorker = typeof exports;

expose(exports);

Alongside our App.tsx file, let's create an App.hooks.ts file:

import { wrap, releaseProxy } from "comlink";
import { useEffect, useState, useMemo } from "react";

/**
 * Our hook that performs the calculation on the worker
 */
export function useTakeALongTimeToAddTwoNumbers(
  number1: number,
  number2: number
) {
  // We'll want to expose a wrapping object so we know when a calculation is in progress
  const [data, setData] = useState({
    isCalculating: false,
    total: undefined as number | undefined
  });

  // acquire our worker
  const { workerApi } = useWorker();

  useEffect(() => {
    // We're starting the calculation here
    setData({ isCalculating: true, total: undefined });

    workerApi
      .takeALongTimeToAddTwoNumbers(number1, number2)
      .then(total => setData({ isCalculating: false, total })); // We receive the result here
  }, [workerApi, setData, number1, number2]);

  return data;
}

function useWorker() {
  // memoise a worker so it can be reused; create one worker up front
  // and then reuse it subsequently; no creating new workers each time
  const workerApiAndCleanup = useMemo(() => makeWorkerApiAndCleanup(), []);

  useEffect(() => {
    const { cleanup } = workerApiAndCleanup;

    // cleanup our worker when we're done with it
    return () => {
      cleanup();
    };
  }, [workerApiAndCleanup]);

  return workerApiAndCleanup;
}

/**
 * Creates a worker, a cleanup function and returns it
 */
function makeWorkerApiAndCleanup() {
  // Here we create our worker and wrap it with comlink so we can interact with it
  const worker = new Worker("./my-first-worker", {
    name: "my-first-worker",
    type: "module"
  });
  const workerApi = wrap<import("./my-first-worker").MyFirstWorker>(worker);

  // A cleanup function that releases the comlink proxy and terminates the worker
  const cleanup = () => {
    workerApi[releaseProxy]();
    worker.terminate();
  };

  const workerApiAndCleanup = { workerApi, cleanup };

  return workerApiAndCleanup;
}

The useWorker and makeWorkerApiAndCleanup functions make up the basis of a shareable worker hooks approach. It would take very little work to parameterise them so this could be used elsewhere. That's outside the scope of this post, but it would be extremely straightforward to accomplish; a rough sketch of what it might look like follows below.
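
To give a flavour, a parameterised version might look something like this. It's a sketch under assumptions: useGenericWorker and createWorker are hypothetical names, not part of the demo app, and the worker's API type is supplied as a type parameter:

import { wrap, releaseProxy } from "comlink";
import { useEffect, useMemo } from "react";

/**
 * A sketch of a reusable hook: give it a function that creates a worker and
 * the type of the API that worker exposes; get back the wrapped API plus
 * automatic cleanup when the consuming component unmounts.
 */
export function useGenericWorker<TWorkerApi>(createWorker: () => Worker) {
  const workerApiAndCleanup = useMemo(() => {
    // create the worker once and wrap it with comlink
    const worker = createWorker();
    const workerApi = wrap<TWorkerApi>(worker);

    // a cleanup function that releases the comlink proxy and terminates the worker
    const cleanup = () => {
      workerApi[releaseProxy]();
      worker.terminate();
    };

    return { workerApi, cleanup };
    // we deliberately create the worker once; createWorker is assumed to be stable
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, []);

  useEffect(() => {
    const { cleanup } = workerApiAndCleanup;

    // clean up the worker when we're done with it
    return () => {
      cleanup();
    };
  }, [workerApiAndCleanup]);

  return workerApiAndCleanup;
}

Usage would then be along the lines of useGenericWorker<MyFirstWorker>(() => new Worker("./my-first-worker", { name: "my-first-worker", type: "module" })), with the rest of useTakeALongTimeToAddTwoNumbers staying exactly as it is.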

Time to test! We'll change our App.tsx to use the new useTakeALongTimeToAddTwoNumbers hook:

import React, { useState } from "react";
import "./App.css";
import { useTakeALongTimeToAddTwoNumbers } from "./App.hooks";

const App: React.FC = () => {
  const [number1, setNumber1] = useState(1);
  const [number2, setNumber2] = useState(2);

  const total = useTakeALongTimeToAddTwoNumbers(number1, number2);

  return (
    <div className="App">
      <h1>Web Workers in action!</h1>

      <div>
        <label>Number to add: </label>
        <input
          type="number"
          onChange={e => setNumber1(parseInt(e.target.value))}
          value={number1}
        />
      </div>
      <div>
        <label>Number to add: </label>
        <input
          type="number"
          onChange={e => setNumber2(parseInt(e.target.value))}
          value={number2}
        />
      </div>
      <h2>
        Total:{" "}
        {total.isCalculating ? (
          <em>Calculating...</em>
        ) : (
          <strong>{total.total}</strong>
        )}
      </h2>
    </div>
  );
};

export default App;

Now our calculation takes place off the main thread and the UI is no longer blocked!

This post was originally published on LogRocket.

The source code for this project can be found here.