Sunday, 24 June 2018

VSTS and EF Core Migrations

Let me start by telling you a dirty secret. I have an ASP.Net Core project that I build with VSTS. It is deployed to Azure through a CI / CD setup in VSTS. That part I'm happy with. Proud of, even. Now to the sordid hiddenness: try as I might, I've never found a nice way to deploy Entity Framework database migrations as part of the deployment flow. So I have [blushes with embarrassment] been using the Startup of my ASP.Net Core app to run the migrations on my database. There. I said it. You all know. Absolutely filthy. Don't judge me.

If you care to google, you'll find various discussions around this and various ways to tackle it, most of which felt like too much hard work, and so I never attempted them.

It's also worth saying that being on VSTS made me less likely to give these approaches a go. Why? Well, the feedback loop for debugging a CI / CD setup is truly sucky. Make a change. Wait for it to trickle through the CI / CD flow (10 mins at least). Spot a problem, try a fix. Start waiting again. Repeat until you succeed. Or, if you're using the free tier of VSTS, repeat until you run out of build minutes; you only get a limited number of them per month. Last time I fiddled with the build I bled my way through a full month's minutes in 2 days. I have now adopted the approach of only playing with the setup in the last week of the month. That way, if I end up running out of minutes, at least I'll roll over to the new allowance in a matter of days.

Digression over. Able to bear the guilt of my EF migrations secret no longer, I decided to tackle the problem another way, using the approach suggested by Andre Broers here:

I worked around by adding a dotnetcore consoleapp project where I run the migration via the Context. In the Build I build this consoleapp in the release I execute it.

Console Yourself

First things first, we need a console app added to our solution. Fire up PowerShell in the root of your project and:

md MyAwesomeProject.MigrateDatabase
cd .\MyAwesomeProject.MigrateDatabase\
dotnet new console

Next we need that project to know about Entity Framework and also our DbContext (which I store in a dedicated project):

dotnet add package Microsoft.EntityFrameworkCore.Design
dotnet add package Microsoft.EntityFrameworkCore.SqlServer
dotnet add reference ..\MyAwesomeProject.Database\MyAwesomeProject.Database.csproj

Add our new project to our solution: (I always forget to do this)

cd ../
dotnet sln add .\MyAwesomeProject.MigrateDatabase\MyAwesomeProject.MigrateDatabase.csproj

You should now be the proud possessor of a .csproj file that looks like this:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="2.1.1" />
    <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="2.1.1" />
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\MyAwesomeProject.Database\MyAwesomeProject.Database.csproj" />
  </ItemGroup>

</Project>

Replace the contents of the Program.cs file with this:

using System;
using System.IO;
using MyAwesomeProject.Database;
using Microsoft.EntityFrameworkCore;

namespace MyAwesomeProject.MigrateDatabase {
    class Program {
        // Example usage:
        // dotnet MyAwesomeProject.MigrateDatabase.dll "Server=(localdb)\\mssqllocaldb;Database=MyAwesomeProject;Trusted_Connection=True;"
        static void Main(string[] args) {
            if (args.Length == 0)
                throw new Exception("No connection string supplied!");

            var myAwesomeProjectConnectionString = args[0];

            // Totally optional debug information
            Console.WriteLine("About to migrate this database:");
            var connectionBits = myAwesomeProjectConnectionString.Split(";");
            foreach (var connectionBit in connectionBits) {
                if (!connectionBit.StartsWith("Password", StringComparison.CurrentCultureIgnoreCase))
                    Console.WriteLine(connectionBit);
            }

            try {
                var optionsBuilder = new DbContextOptionsBuilder<MyAwesomeProjectContext>();
                optionsBuilder.UseSqlServer(myAwesomeProjectConnectionString);

                using(var context = new MyAwesomeProjectContext(optionsBuilder.Options)) {
                    context.Database.Migrate();
                }
                Console.WriteLine("This database is migrated like it's the Serengeti!");
            } catch (Exception exc) {
                var failedToMigrateException = new Exception("Failed to apply migrations!", exc);
                Console.WriteLine($"Didn't succeed in applying migrations: {exc.Message}");
                throw failedToMigrateException;
            }
        }
    }
}

This code takes the database connection string passed as an argument, spins up a db context with that, and migrates like it's the Serengeti.
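
Before handing this over to VSTS, it's worth a quick local test. From the MyAwesomeProject.MigrateDatabase folder, something like this should exercise it (dotnet run passes everything after the -- straight to the app; swap in your own connection string):

dotnet run -- "Server=(localdb)\mssqllocaldb;Database=MyAwesomeProject;Trusted_Connection=True;"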

Build It!

The next thing we need is to ensure that this is included as part of the build process in VSTS. The following commands need to be run during the build to include the MigrateDatabase project in the build output in a MigrateDatabase folder:

cd MyAwesomeProject.MigrateDatabase
dotnet build
dotnet publish --configuration Release --output $(build.artifactstagingdirectory)/MigrateDatabase

There are various ways to accomplish this which I won't reiterate now; I recommend YAML.
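
For illustration, here's a rough sketch of how the publish step might look in YAML using the DotNetCoreCLI task (the project names and paths are mine; season to taste):

- task: DotNetCoreCLI@2
  displayName: publish MigrateDatabase project
  inputs:
    command: publish
    publishWebProjects: false
    projects: MyAwesomeProject.MigrateDatabase/MyAwesomeProject.MigrateDatabase.csproj
    arguments: '--configuration Release --output $(build.artifactstagingdirectory)/MigrateDatabase'
    zipAfterPublish: false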

Deploy It!

Now, to execute our console app as part of the deployment process, we need to add a CommandLine task to our VSTS release definition. It should execute the following command:

dotnet MyAwesomeProject.MigrateDatabase.dll "$(ConnectionStrings.MyAwesomeProjectDatabaseConnection)"

In the following folder:

$(System.DefaultWorkingDirectory)/my-awesome-project-YAML/drop/MigrateDatabase

Do note that the command uses the ConnectionStrings.MyAwesomeProjectDatabaseConnection variable which you need to create and add your connection string to.

Give It A Whirl

Let's find out what happens when the rubber hits the road. I'll add a new entity to my database project:

using System;

namespace MyAwesomeProject.Database.Entities {
    public class NewHotness {
        public Guid NewHotnessId { get; set; }
    }
}

And reference it in my DbContext:

using MyAwesomeProject.Database.Entities;
using Microsoft.EntityFrameworkCore;

namespace MyAwesomeProject.Database {
    public class MyAwesomeProjectContext : DbContext {
        public MyAwesomeProjectContext(DbContextOptions<MyAwesomeProjectContext> options) : base(options) { }

        // ...
  
        public DbSet<NewHotness> NewHotnesses { get; set; }

        // ...
    }
}

Let's let EF know by adding a migration (run this from the Database project folder):

dotnet ef migrations add TestOurMigrationsApproach

Commit my change, push it to VSTS, wait for the build to run and a deployment to take place.... Okay. It's done. Looks good.

Let's take a look in the database:

select * from NewHotnesses
go

It's there! We are migrating our database upon deployment, and not in our ASP.Net Core app itself. I feel a burden lifted.

Wrapping Up

The EF Core team are aware of the lack of guidance around deploying migrations and have recently announced plans to fix that in the docs. You can track the progress of this issue here. There's good odds that once they come out with this I'll find there's a better way than the approach I've outlined in this post. Until that glorious day!

Saturday, 16 June 2018

VSTS... YAML up!

For the longest time I've been using the likes of Travis and AppVeyor to build open source projects that I work on. They rock. I've also recently been dipping my toes back in the water of Visual Studio Team Services. VSTS offers a whole stack of stuff, but my own area of interest has been the Continuous Integration / Continuous Deployment offering.

Historically I have been underwhelmed by the CI proposition of Team Foundation Server / VSTS. It was difficult to debug, difficult to configure, difficult to understand. If it worked... Great! If it didn't (and it often didn't), you were toast. But things done changed! I don't know when it happened, but VSTS is now super configurable. You add tasks / configure them, build and you're done! It's really nice.

However, there's been something I've been missing from Travis, AppVeyor et al. Keeping my build script with my code. Travis has .travis.yml, AppVeyor has appveyor.yml. VSTS, what's up?

The New Dawn

Up until now, really not much. It just wasn't possible. Until it was:

When I started testing it out I found things to like and some things I didn't understand. Crucially, my CI now builds based upon .vsts-ci.yml. YAML baby!

It Begins!

You can get to "Hello World" by looking at the docs here and the examples here. But what you really want is your existing build, configured in the UI, exported to YAML. That doesn't seem to quite exist, but there's something that gets you part way. Take a look:

If you notice, in the top right of the screen, each task now allows you to click on a new "View YAML" button. It's kinda Ronseal:

Using this hotness you can build yourself a .vsts-ci.yml file task by task.

A Bump in the Road

If you look closely at the YAML generated you'll see there's a message about an undefined variable:

#Your build definition references an undefined variable named ‘Parameters.RestoreBuildProjects’. Create or edit the build definition for this YAML file, define the variable on the Variables tab. See https://go.microsoft.com/fwlink/?linkid=865972
steps:
- task: DotNetCoreCLI@2
  displayName: Restore
  inputs:
    command: restore
    projects: '$(Parameters.RestoreBuildProjects)'

Try as I might, I couldn't locate Parameters.RestoreBuildProjects. So no working CI build for me. Then I remembered Zoltan Erdos. He's hard to forget. Or rather, I remembered an idea of his which I will summarise thusly: "Have a package.json in the root of your repo, use the scripts for individual tasks and you have a cross platform task runner".

This is a powerful idea and one I decided to put to work. My project is React and TypeScript on the front end, and ASP.Net Core on the back. I wanted a package.json in the root of the repo with which I could install dependencies, build, test and publish my whole app. I could call into that from my .vsts-ci.yml file. Something like this:

{
  "name": "my-amazing-project",
  "version": "1.0.0",
  "author": "John Reilly ",
  "license": "MIT",
  "private": true,
  "scripts": {
    "preinstall": "yarn run install:clientapp && yarn run install:web",
    "install:clientapp": "cd MyAmazingProject.ClientApp && yarn install",
    "install:web": "dotnet restore",
    "prebuild": "yarn install",
    "build": "yarn run build:clientapp && yarn run build:web",
    "build:clientapp": "cd MyAmazingProject.ClientApp && yarn run build",
    "build:web": "dotnet build --configuration Release",
    "postbuild": "yarn test",
    "test": "yarn run test:clientapp && yarn run test:web",
    "test:clientapp": "cd MyAmazingProject.ClientApp && yarn test",
    "test:web": "cd MyAmazingProject.Web.Tests && dotnet test",
    "publish:web": "cd MyAmazingProject.Web && dotnet publish MyAmazingProject.Web.csproj --configuration Release"
  }
}

It doesn't matter if I have "an undefined variable named ‘Parameters.RestoreBuildProjects’". I now have no need to use all the individual tasks in a build. I can convert them into a couple of scripts in my package.json. So here's where I've ended up for now. I've a .vsts-ci.yml file which looks like this:

queue: Hosted VS2017

steps:
- task: geeklearningio.gl-vsts-tasks-yarn.yarn-installer-task.YarnInstaller@2
  displayName: install yarn itself
  inputs:
    checkLatest: true
- task: geeklearningio.gl-vsts-tasks-yarn.yarn-task.Yarn@2
  displayName: yarn build and test
  inputs:
    Arguments: build
- task: geeklearningio.gl-vsts-tasks-yarn.yarn-task.Yarn@2
  displayName: yarn publish:web
  inputs:
    Arguments: 'run publish:web --output $(build.artifactstagingdirectory)/MyAmazingProject'
- task: PublishBuildArtifacts@1
  displayName: publish build artifact
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'

This file does the following:

  1. Installs yarn. (By the way VSTS, what's with not having yarn installed by default? I'll say this for the avoidance of doubt: in the npm CLI space, yarn has won.)
  2. Install our dependencies, build the front end and back end, run all the tests. Effectively yarn build.
  3. Publish our web app to a directory. Effectively yarn run publish:web. This is only separate because we want to pass in the output directory and so it's just easier for it to be a separate step.
  4. Publish the build artefact to TFS. (This will go on to be picked up by the continuous deployment mechanism and published out to Azure.)

I much prefer this to what I had before. I feel there's much more that can be done here as well. I'm looking forward to the continuous deployment piece becoming scriptable too.

Thanks to Zoltan and props to the VSTS team!

Sunday, 13 May 2018

Compromising: A Guide for Developers

It is a truth universally acknowledged, that a single developer, will not be short of an opinion. Opinions on tabs vs spaces. Upon OOP vs FP. Upon classes vs functions. Just opinions, opinions, opinions. Opinions that are felt with all the sincerity of a Witchfinder General. And, alas, not always the same level of empathy.

Given the wealth of strongly felt desires, it's kind of amazing that developers ever manage to work together. It's rare to find a fellow dev that agrees entirely with your predilections. So how do people ever get past the "you don't use semi-colons; what's wrong with you"? Well, not easily to be honest. It involves compromise.

On Compromise

We've all been in the position where we realise that there's something we don't like in a codebase. The ordering of members in a class, naming conventions, a lack of tests... Something.

Then comes the moment of trepidation. You suggest a change. You suggest difference. It's time to find out if you're working with psychopaths. It's not untypical to find that you just have to go with the flow.

  • "You've been using 3 spaces?"
  • "Yes we use 3 spaces."
  • "Okay... So we'll be using 3 spaces..." [backs away carefully]

I've been in this position so many times I've learned to adapt. It helps that I'm a malleable sort anyway. But what if there were another way?

Weighting Opinion

Sometimes your opinion is... Well.... Just an opinion. Other opinions are legitimate. At least in theory. If you can acknowledge that, you already have a level of self-knowledge not gifted to all in the dev community. If you're able to get that far, I feel there's something you might want to consider.

Let me frame this up: there's a choice to be made around an approach that could be used in a codebase. There are 2 camps in the team: 1 camp advocating for 1 approach, the other for a different approach. Either is functionally legitimate. They both work. It's just a matter of preference. How do you choose now? Let's look at a technique for splitting the difference.

Voting helps. But let's say 50% of the team wants 1 approach and 50% wants the other. What then? Or, to take a more interesting case, what if 25% want 1 approach and 75% want the other? If it's just 1 person, 1 vote then the 75% wins and that's it.

But before we all move on, let's consider another factor. How much do people care? What if the 25% are really, really invested in the choice they're advocating for and the 75% just have a mild preference? From that point forwards the 25% are likely going to be less happy. Maybe they'll even burn inside. They're certainly going to be less productive.

It's because of situations like this that weighting votes becomes useful. Out of 5, how much do you care? If one person cares "5 out of 5" and the other three are "1 out of 5"... well, that's 5 versus 3 on a weighted basis; go with the 25%. It matters to them, and that it matters to them should matter to you.

I'll contend that rolling like this makes for more content, happier and more productive teams. Making strength of feeling a factor in choices reduces friction and increases the peace.
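
If you fancy making that concrete, the tallying is trivial. Here's a toy sketch (TypeScript; the names are mine, invented purely for illustration):

interface WeightedVote {
  option: string;
  weight: 1 | 2 | 3 | 4 | 5; // out of 5: how much do you care?
}

// Sum the weights per option; the option with the greatest
// total strength of feeling wins.
function decide(votes: WeightedVote[]): string {
  const totals = new Map<string, number>();
  for (const { option, weight } of votes) {
    totals.set(option, (totals.get(option) || 0) + weight);
  }
  return [...totals.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

// 3 people mildly prefer tabs; 1 person desperately wants spaces:
decide([
  { option: 'tabs', weight: 1 },
  { option: 'tabs', weight: 1 },
  { option: 'tabs', weight: 1 },
  { option: 'spaces', weight: 5 },
]); // => 'spaces' (5 beats 3)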

I've only recently discovered this technique and I can't claim credit for it. I learned it from the awesome Jamie McCrindle. I commend it to you! Be happier!

Saturday, 28 April 2018

Using Reflection to Identify Unwanted Dependencies

I have a web app which is fairly complex. It's made up of services, controllers and all sorts of things. So far, so unremarkable. However, I needed to ensure that the controllers did not attempt to access the database via any of their dependencies. Or their dependencies' dependencies. Or their dependencies' dependencies' dependencies. You get my point.

The why is not important here. What's significant is the idea of walking a dependency tree and identifying, via a reflection based test, when such unwelcome dependencies occur, and where.

When they do occur the test should fail, like this:

[xUnit.net 00:00:01.6766691]     My.Web.Tests.HousekeepingTests.My_Api_Controllers_do_not_depend_upon_the_database [FAIL]
[xUnit.net 00:00:01.6782295]       Expected dependsUponTheDatabase.Any() to be False because My.Api.Controllers.ThingyController depends upon the database through My.Data.Services.OohItsAService, but found True.

What follows is an example of how you can accomplish this. It is exceedingly far from the most beautiful code I've ever written. But it works. One reservation I have about it is that it doesn't use the Dependency Injection mechanism used at runtime (AutoFac). If I had more time I would amend the code to use that instead; it would become an easier test to read if I did. Also it would better get round the limitations of the code below. Essentially the approach relies on the assumption of there being 1 interface and 1 implementation. That's often not true in complex systems. But this is good enough to roll with for now.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using FluentAssertions;
using My.Data;
using My.Web.Controllers;
using Xunit;

namespace My.Web.Tests {
    public class OiYouThereGetOutTests {
        [Fact]
        public void My_Controllers_do_not_depend_upon_the_database() {
            var myConcreteTypes = GetMyAssemblies()
                .SelectMany(assembly => assembly.GetTypes())
                .ToArray();

            var controllerTypes = typeof(My.Web.Startup).Assembly.GetTypes()
                .Where(myWebType =>
                    myWebType != typeof(Microsoft.AspNetCore.Mvc.Controller) &&
                    typeof(Microsoft.AspNetCore.Mvc.Controller).IsAssignableFrom(myWebType));

            foreach (var controllerType in controllerTypes) {
                var allTheTypes = GetDependentTypes(controllerType, myConcreteTypes);
                allTheTypes.Count.Should().BeGreaterThan(0);
                var dependsUponTheDatabase = allTheTypes.Where(keyValue => keyValue.Key == typeof(MyDbContext));
                dependsUponTheDatabase.Any().Should().Be(false, because: $"{controllerType} depends upon the database through {string.Join(", ", dependsUponTheDatabase.Select(dod => dod.Value))}");
            }
        }

        private static Dictionary<Type, Type> GetDependentTypes(Type type, Type[] typesToCheck, Dictionary<Type, Type> typesSoFar = null) {
            var types = typesSoFar ?? new Dictionary<Type, Type>();
            foreach (var constructor in type.GetConstructors().Where(ctor => ctor.IsPublic)) {
                foreach (var parameter in constructor.GetParameters()) {
                    if (parameter.ParameterType.IsInterface) {
                        if (parameter.ParameterType.IsGenericType) {
                            foreach (var genericType in parameter.ParameterType.GenericTypeArguments) {
                                AddIfMissing(types, genericType, type);
                            }
                        } else {
                            var typesImplementingInterface = TypesImplementingInterface(parameter.ParameterType, typesToCheck);
                            foreach (var typeImplementingInterface in typesImplementingInterface) {
                                AddIfMissing(types, typeImplementingInterface, type);
                                AddIfMissing(types, GetDependentTypes(typeImplementingInterface, typesToCheck, types).Keys.ToList(), type);
                            }
                        }
                    } else {
                        AddIfMissing(types, parameter.ParameterType, type);
                        AddIfMissing(types, GetDependentTypes(parameter.ParameterType, typesToCheck, types).Keys.ToList(), type);
                    }
                }
            }
            return types;
        }

        private static void AddIfMissing(Dictionary<Type, Type> types, Type typeToAdd, Type parentType) {
            if (!types.Keys.Contains(typeToAdd))
                types.Add(typeToAdd, parentType);
        }

        private static void AddIfMissing(Dictionary<Type, Type> types, IList<Type> typesToAdd, Type parentType) {
            foreach (var typeToAdd in typesToAdd) {
                AddIfMissing(types, typeToAdd, parentType);
            }
        }

        private static Type[] TypesImplementingInterface(Type interfaceType, Type[] typesToCheck) =>
            typesToCheck.Where(type => !type.IsInterface && interfaceType.IsAssignableFrom(type)).ToArray();

        private static bool IsRealClass(Type testType) =>
            testType.IsAbstract == false &&
            testType.IsGenericType == false &&
            testType.IsGenericTypeDefinition == false &&
            testType.IsInterface == false;

        private static Assembly[] GetMyAssemblies() =>
            AppDomain
            .CurrentDomain
            .GetAssemblies()
            // Not strictly necessary but it reduces the amount of types returned
            .Where(assembly => assembly.GetName().Name.StartsWith("My")) 
            .ToArray();
    }
}

Monday, 26 March 2018

It's Not Dead 2: mobx-react-devtools and the undead

I spent today digging through our webpack 4 config trying to work out why a production bundle contained code like this:

if("production"!==e.env.NODE_ENV){//...

My expectation was that with webpack 4 and 'mode': 'production', behind the scenes all process.env.NODE_ENV statements would be converted to 'production'. Subsequently Uglify would automatically get its groove on with the resulting if("production"!=="production")... and et voilà! Strip the dead code.

It seemed that was not the case. I was seeing (regrettably) undead code. And who here actually likes the undead?

Who Betrayed Me?

My beef was with webpack. It done did me wrong. Or... So I thought. webpack did nothing wrong. It is pure and good and unjustly complained about. It was my other love: mobx. Or to be more specific: mobx-react-devtools.

It turns out that the way you reference mobx-react-devtools makes all the difference. It's the cause of the stray ("production"!==e.env.NODE_ENV) statements in our bundle output. After a long time I happened upon this issue, which contained a gem by one Giles Butler. His suggested way to reference mobx-react-devtools is (as far as I can tell) the solution!

On a dummy project I had the mobx-react-devtools advised code in place:

import * as React from 'react';
import { Layout } from './components/layout';
import DevTools from 'mobx-react-devtools';

export const App: React.SFC<{}> = _props => (
    <div className="ui container">
        <Layout />
        {process.env.NODE_ENV !== 'production' ? <DevTools position={{ bottom: 20, right: 20 }} /> : null}
    </div>
);

With this I had a build size of 311kb. Closer examination of my bundle revealed that my bundle.js was riddled with ("production"!==e.env.NODE_ENV) statements. Sucks, right?

Then I tried this instead:

import * as React from 'react';
import { Layout } from './components/layout';
const { Fragment } = React;

const DevTools = process.env.NODE_ENV !== 'production' ? require('mobx-react-devtools').default : Fragment;

export const App: React.SFC<{}> = _props => (
    <div className="ui container">
        <Layout />
        <DevTools position={{ bottom: 20, right: 20 }} />
    </div>
);

With this approach I got a build size of 191kb. This was thanks to the dead code being actually stripped. That's a saving of 120kb!

Perhaps We Change the Advice?

There's a suggestion that the README should be changed to reflect this advice - until that happens, I wanted to share this solution. Also, I've a nagging feeling that I've missed something pertinent here; if someone knows something that I should... Tell me please!

Sunday, 25 March 2018

Uploading Images to Cloudinary with the Fetch API

I was recently checking out a very good post which explained how to upload images to Cloudinary using React Dropzone and SuperAgent.

It's a brilliant post; you should totally read it. Even if you hate images, uploads and JavaScript. However, there was one thing in there that I didn't want; SuperAgent. It's lovely but I'm a Fetch guy. That's just how I roll. The question is, how do I do the below using Fetch?

  handleImageUpload(file) {
    let upload = request.post(CLOUDINARY_UPLOAD_URL)
                     .field('upload_preset', CLOUDINARY_UPLOAD_PRESET)
                     .field('file', file);

    upload.end((err, response) => {
      if (err) {
        console.error(err);
      }

      if (response.body.secure_url !== '') {
        this.setState({
          uploadedFileCloudinaryUrl: response.body.secure_url
        });
      }
    });
  }

Well it actually took me longer to work out than I'd like to admit. But now I have, let me save you the bother. To do the above using Fetch you just need this:

  handleImageUpload(file) {
    const formData = new FormData();
    formData.append("file", file);
    formData.append("upload_preset", CLOUDINARY_UPLOAD_PRESET); // Replace the preset name with your own

    fetch(CLOUDINARY_UPLOAD_URL, {
      method: 'POST',
      body: formData
    })
      .then(response => response.json())
      .then(data => {
        if (data.secure_url !== '') {
          this.setState({
            uploadedFileCloudinaryUrl: data.secure_url
          });
        }
      })
      .catch(err => console.error(err))
  }
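
One caveat worth mentioning: unlike SuperAgent, fetch won't reject on HTTP error statuses; it only rejects on network failure. If you want upload errors surfaced, you might check response.ok yourself, along these lines:

fetch(CLOUDINARY_UPLOAD_URL, { method: 'POST', body: formData })
  .then(response => {
    if (!response.ok) {
      throw new Error(`Upload failed with status: ${response.status}`);
    }
    return response.json();
  })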

To get a pre-canned project to try this with take a look at Damon's repo.

Wednesday, 7 March 2018

It's Not Dead: webpack and dead code elimination limitations

Every now and then you can be surprised. Your assumptions turn out to be wrong.

Webpack has long supported the notion of dead code elimination. webpack facilitates this through use of the DefinePlugin. The compile time value of process.env.NODE_ENV is set either to 'production' or something else. If it's set to 'production' then some dead code hackery can happen. Libraries like React make use of this to serve up different, and crucially smaller, production builds.

A (pre-webpack 4) production config file will typically contain this code:

new webpack.DefinePlugin({
    'process.env.NODE_ENV': JSON.stringify('production')
}),
new UglifyJSPlugin(),

The result of the above config is that webpack will inject the value 'production' everywhere in the codebase where a process.env.NODE_ENV can be found. (In fact, as of webpack 4 setting this magic value is out-of-the-box behaviour for Production mode; yay the #0CJS!)
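
(To spell out quite how little config that now is, the webpack 4 equivalent of the above amounts to this:)

module.exports = {
  // 'production' mode wires up DefinePlugin and minification for you
  mode: 'production'
};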

What this means is, if you've written:

if (process.env.NODE_ENV !== 'production') {
  // Do a development mode only thing
}

webpack can and will turn this into

if ('production' !== 'production') {
  // Do a development mode only thing
}

The UglifyJSPlugin is there to minify the JavaScript in your bundles. As an added benefit, this plugin is smart enough to know that 'production' !== 'production' is always false. And because it's smart, it chops the code. Dead code eliminated.

You can read more about this in the webpack docs.

Limitations

Given what I've said, consider the following code:

export class Config {
    // Other properties

    get isDevelopment() {
        return process.env.NODE_ENV !== 'production';
    }
}

This is a config class that exposes the expression process.env.NODE_ENV !== 'production' with the friendly name isDevelopment. You'd think that dead code elimination would be your friend here. It's not.

My personal expectation was that dead code elimination would treat Config.isDevelopment and the expression process.env.NODE_ENV !== 'production' identically. Because they're identical.

However, this turns out not to be the case. Dead code elimination works just as you would hope when the expression process.env.NODE_ENV !== 'production' is used directly in code; webpack only performs dead code elimination for direct usage of that expression. I'll say that again: if you want dead code elimination then use the injected values, not an encapsulated version of them. You cannot rely on webpack flowing values through and performing dead code elimination on that basis.

The TL;DR: if you want to eliminate dead code then *always* use process.env.NODE_ENV !== 'production'; don't abstract it. It doesn't work.

UglifyJS is smart. But not that smart.

Sunday, 25 February 2018

ts-loader 4 / fork-ts-checker-webpack-plugin 0.4

webpack 4 has shipped!

ts-loader

ts-loader 4 is available too. For details see our release here. To start using ts-loader 4:
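
The package is just ts-loader on npm, so something like this should do it:

yarn add ts-loader --dev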

Remember to use this in concert with webpack 4. To see a working example take a look at the "vanilla" example.

fork-ts-checker-webpack-plugin

There's more! You may like to use the fork-ts-checker-webpack-plugin (aka the ts-loader turbo-booster). The webpack 4 compatible version has been released to npm as 0.4.1:
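
Again, something along these lines should get you going:

yarn add fork-ts-checker-webpack-plugin --dev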

To see a working example take a look at the "fork-ts-checker" example.

Monday, 29 January 2018

finding webpack 4 (use a Map)

Update: 03/02/2018

Tobias Koppers has written a migration guide for plugins / loaders as well - take a read here. It's very useful.

webpack 4

webpack 4 is on the horizon. The beta dropped last Friday. So what do you, as a plugin / loader author, need to do? What needs to change to make your loader / plugin webpack 4 friendly?

This is a guide that should inform you about the changes you might need to make. It's based on my own experiences migrating ts-loader and the fork-ts-checker-webpack-plugin. If you'd like to see this in action then take a look at the PRs related to these. The ts-loader PR can be found here. The fork-ts-checker-webpack-plugin PR can be found here.

Plugins

One of the notable changes to webpack with v4 is the change to the plugin architecture. In terms of implications it's worth reading the comments made by Tobias Koppers here and here.

Previously, if your plugin was tapping into a compiler hook you'd write code that looked something like this:

this.compiler.plugin('watch-close', () => {
   // do your thing here
});

With webpack 4 things done changed. You'd now write something like this:

this.compiler.hooks.watchClose.tap('name-to-identify-your-plugin-goes-here', () => {
   // do your thing here
});

Hopefully that's fairly clear; we're using the new hooks property and tapping into our event of choice by camelCasing what was previously kebab-cased. So in this case plugin('watch-close', ...) becomes hooks.watchClose.tap(...).

In the example above we were attaching to a sync hook. Now let's look at an async hook:

this.compiler.plugin('watch-run', (watching, callback) => {
   // do your thing here
   callback();
});

This would change to be:

this.compiler.hooks.watchRun.tapAsync('name-to-identify-your-plugin-goes-here', (compiler, callback) => {
   // do your thing here
   callback();
});

Note that rather than using tap here, we're using tapAsync. If you're more into promises there's a tapPromise you could use instead.
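
For instance, a promise-flavoured take on the watch-run tap might look like this (a sketch; the name string is just whatever identifies your plugin):

this.compiler.hooks.watchRun.tapPromise('name-to-identify-your-plugin-goes-here', async compiler => {
   // do your thing here; the returned promise signals completion
});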

Custom Hooks

Prior to webpack 4, you could use your own custom hooks within your plugin. Usage was as simple as this:

this.compiler.applyPluginsAsync('fork-ts-checker-service-before-start', () => {
   // do your thing here
});

You can still use custom hooks with webpack 4, but there's a little more ceremony involved. Essentially, you need to tell webpack up front what you're planning. Not hard, I promise you.

First of all, you'll need to add the package tapable as a dependency. Then, inside your plugin you'll need to import the type of hook that you want to use; in the case of the fork-ts-checker-webpack-plugin we used both a sync and an async hook:

const AsyncSeriesHook = require("tapable").AsyncSeriesHook;
const SyncHook = require("tapable").SyncHook;

Then, inside your apply method you need to register your hooks:

    if (this.compiler.hooks.forkTsCheckerServiceBeforeStart
      || this.compiler.hooks.forkTsCheckerCancel
      // other hooks...
      || this.compiler.hooks.forkTsCheckerEmit) {
      throw new Error('fork-ts-checker-webpack-plugin hooks are already in use');
    }
    this.compiler.hooks.forkTsCheckerServiceBeforeStart = new AsyncSeriesHook([]);

    this.compiler.hooks.forkTsCheckerCancel = new SyncHook([]);
    // other sync hooks...
    this.compiler.hooks.forkTsCheckerDone = new SyncHook([]);

If you're interested in backwards compatibility then you should use the _pluginCompat to wire that in:

    this.compiler._pluginCompat.tap('fork-ts-checker-webpack-plugin', options => {
      switch (options.name) {
        case 'fork-ts-checker-service-before-start':
          options.async = true;
          break;
        case 'fork-ts-checker-cancel':
        // other sync hooks...
        case 'fork-ts-checker-done':
          return true;
      }
      return undefined;
    });

With your registration in place, you just need to replace your calls to compiler.applyPlugins('sync-hook-name', and compiler.applyPluginsAsync('async-hook-name', with calls to compiler.hooks.syncHookName.call( and compiler.hooks.asyncHookName.callAsync(. So to migrate our fork-ts-checker-service-before-start hook we'd write:

this.compiler.hooks.forkTsCheckerServiceBeforeStart.callAsync(() => {
   // do your thing here
});
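
The sync hooks are the same idea, minus the callback; for instance:

this.compiler.hooks.forkTsCheckerCancel.call();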

Loaders

Loaders are impacted by the changes to the plugin architecture. Mostly this means applying the same plugin changes as discussed above. ts-loader hooks into 2 plugin events:

    loader._compiler.plugin("after-compile", /* callback goes here */);
    loader._compiler.plugin("watch-run", /* callback goes here */);

With webpack 4 these become:

    loader._compiler.hooks.afterCompile.tapAsync("ts-loader", /* callback goes here */);
    loader._compiler.hooks.watchRun.tapAsync("ts-loader", /* callback goes here */);

Note again, we're using the string "ts-loader" to identify our loader.

I need a Map

When I initially ported to webpack 4, ts-loader simply wasn't working. In the end I tracked this down to problems in our watch-run callback. There are 2 things of note here.

Firstly, as per the changelog, the watch-run hook now has the Compiler as the first parameter. Previously this was a subproperty on the supplied watching parameter, so swapping over to use the compiler directly was necessary. Incidentally, ts-loader previously made use of the watching.startTime property that was supplied in webpack 1, 2 and 3. It seems to be coping without it; so hopefully that's fine.

Secondly, with webpack 4 it's "ES2015 all the things!" That is to say, with webpack now requiring a minimum of node 6, the codebase is free to start using ES2015. So if you're a consumer of compiler.fileTimestamps (and ts-loader is) then it's time to make a change to cater for the different API that a Map offers instead of indexing into an object literal with a string key.

What this means is, code that would once have looked like this:

Object.keys(watching.compiler.fileTimestamps)
  .filter(filePath =>
    watching.compiler.fileTimestamps[filePath] > lastTimes[filePath]
  )
  .forEach(filePath => {
    lastTimes[filePath] = watching.compiler.fileTimestamps[filePath];
    // ...
  });

Now looks more like this:

for (const [filePath, date] of compiler.fileTimestamps) {
  // skip anything that hasn't changed since last time
  if (date <= (lastTimes.get(filePath) || 0)) {
    continue;
  }

  lastTimes.set(filePath, date);
  // ...
}

Happy Porting!

I hope your own port to webpack 4 goes well. Do let me know if there's anything I've missed out / any inaccuracies etc and I'll update this guide.

Sunday, 28 January 2018

webpack 4 - ts-loader / fork-ts-checker-webpack-plugin betas

The first webpack 4 beta dropped on Friday. Very exciting! Following hot on the heels of those announcements, I've some news to share too. Can you guess what it is?

ts-loader

Yes! A ts-loader beta that works with webpack 4 is available. To get hold of the beta:

Remember to use this in concert with the webpack 4 beta. To see a working example take a look at the "vanilla" example.

fork-ts-checker-webpack-plugin

There's more! You may like to use the fork-ts-checker-webpack-plugin, (which goes lovely with ts-loader and a biscuit). There is a beta available for that too:

  • When using yarn: yarn add johnnyreilly/fork-ts-checker-webpack-plugin#4.0.0-beta.1 -D
  • When using npm: npm install johnnyreilly/fork-ts-checker-webpack-plugin#4.0.0-beta.1 -D

To see a working example take a look at the "fork-ts-checker" example.

PRs

If you would like to track the progress of these betas then I encourage you to take a look at the PRs they were built from. The ts-loader PR can be found here. The fork-ts-checker-webpack-plugin PR can be found here.

These are betas so things may change further; though hopefully not significantly.

Sunday, 14 January 2018

Auth0, TypeScript and ASP.NET Core

Most applications I write have some need for authentication and perhaps authorisation too. In fact, most apps most people write fall into that bracket. Here's the thing: Auth done well is a *big* chunk of work. And the minute you start thinking about that you almost invariably lose focus on the thing you actually want to build and ship.

So this Christmas I decided it was time to take a look into offloading that particular problem onto someone else. I knew there were third parties who provided Auth-As-A-Service - time to give them a whirl. On the recommendation of a friend, I made Auth0 my first port of call. Lest you be expecting a full breakdown of the various players in this space, let me stop you now; I liked Auth0 so much I strayed no further. Auth0 kicks AAAS. (I'm so sorry)

What I wanted to build

My criteria for "auth success" was this:

  • I want to build a SPA, specifically a React SPA. Ideally, I shouldn't need a back end of my own at all
  • I want to use TypeScript on my client.

But, for when I do implement a back end:

  • I want that to be able to use the client side's Auth tokens to allow access to Auth routes on my server.
  • I want to be able to identify the user, given the token, to provide targeted data
  • Oh, and I want to use .NET Core 2 for my server.

And in achieving all of the above, I want to add minimal code to my app. Not War and Peace. My code should remain focused on doing what it does.

Boil a Plate

I ended up with unqualified ticks for all my criteria, but it took some work to find out. I will say that Auth0 do travel the extra mile in terms of getting you up and running. When you create a new Client in Auth0 you're given the option to download a quick start using the technology of your choice.

This was a massive plus for me. I took the quickstart provided and ran with it to get me to the point of meeting my own criteria. You can use this boilerplate for your own ends. Herewith, a walkthrough:

The Walkthrough

Fork and clone the repo at this location: https://github.com/johnnyreilly/auth0-react-typescript-asp-net-core.

What have we got? 2 folders: ClientApp contains the React app; Web contains the ASP.NET Core app. Now we need to get set up with Auth0 and customise our config.

Setup Auth0

Here's how to get the app set up with Auth0; you're going to need to sign up for a (free) Auth0 account. Then login into Auth0 and go to the management portal.

Client
  • Create a Client with the name of your choice and use the Single Page Web Applications template.
  • From the new Client Settings page take the Domain and Client ID and update the similarly named properties in the appsettings.Development.json and appsettings.Production.json files with these settings.
  • To the Allowed Callback URLs setting add the URLs: http://localhost:3000/callback,http://localhost:5000/callback - the first of these facilitates running in Debug mode, the second in Production mode. If you were to deploy this you'd need to add other callback URLs in here too.
API
  • Create an API with the name of your choice (I recommend the same as the Client to avoid confusion), an identifier which can be anything you like; I like to use the URL of my app but it's your call.
  • From the new API Settings page take the Identifier and update the Audience property in the appsettings.Development.json and appsettings.Production.json files with that value.

Running the App

Production build

Build the client app with yarn build in the ClientApp folder. (Don't forget to yarn install first.) Then, in the Web folder, dotnet restore and dotnet run, and open your browser to http://localhost:5000

Debugging

Run the client app using webpack-dev-server using yarn start in the ClientApp folder. Fire up VS Code in the root of the repo and hit F5 to debug the server. Then open your browser to http://localhost:3000

The Tour

When you fire up the app you're presented with a "you are not logged in!" message and the option to login. Do it; it'll take you to the Auth0 "lock" screen where you can sign up / login. Once you do that you'll be asked to confirm access:

All this is powered by Auth0's auth0-js npm package. (Excellent type definition files are available from Definitely Typed; I'm using the @types/auth0-js package DT publishes.) Usage of which is super simple; it exposes an authorize method that when called triggers the Auth0 lock screen. Once you've "okayed" you'll be taken back to the app which will use the parseHash method to extract the access token that Auth0 has provided. Take a look at how our authStore makes use of auth0-js: (don't be scared; it uses mobx - but you could use anything)

authStore.ts
import { Auth0UserProfile, WebAuth } from 'auth0-js';
import { action, computed, observable, runInAction } from 'mobx';
import { IAuth0Config } from '../../config';
import { StorageFacade } from '../storageFacade';

interface IStorageToken {
  accessToken: string;
  idToken: string;
  expiresAt: number;
}

const STORAGE_TOKEN = 'storage_token';

export class AuthStore {
  @observable.ref auth0: WebAuth;
  @observable.ref userProfile: Auth0UserProfile;
  @observable.ref token: IStorageToken;

  constructor(config: IAuth0Config, private storage: StorageFacade) {
    this.auth0 = new WebAuth({
      domain: config.domain,
      clientID: config.clientId,
      redirectUri: config.redirectUri,
      audience: config.audience,
      responseType: 'token id_token',
      scope: 'openid email profile do:admin:thing' // the do:admin:thing scope is custom and defined in the scopes section of our API in the Auth0 dashboard
    });
  }

  initialise() {
    const token = this.parseToken(this.storage.getItem(STORAGE_TOKEN));
    if (token) {
      this.setSession(token);
    }
    this.storage.addEventListener(this.onStorageChanged);
  }

  parseToken(tokenString: string) {
    const token = JSON.parse(tokenString || '{}');
    return token;
  }

  onStorageChanged = (event: StorageEvent) => {
    if (event.key === STORAGE_TOKEN) {
      this.setSession(this.parseToken(event.newValue));
    }
  }

  @computed get isAuthenticated() {
    // Check whether the current time is past the 
    // access token's expiry time
    return this.token && new Date().getTime() < this.token.expiresAt;
  }

  login = () => {
    this.auth0.authorize();
  }

  handleAuthentication = () => {
    this.auth0.parseHash((err, authResult) => {
      if (authResult && authResult.accessToken && authResult.idToken) {
        const token = {
          accessToken: authResult.accessToken,
          idToken: authResult.idToken,
          // Set the time that the access token will expire at
          expiresAt: authResult.expiresIn * 1000 + new Date().getTime()
        };

        this.setSession(token);
      } else if (err) {
        // tslint:disable-next-line:no-console
        console.log(err);
        alert(`Error: ${err.error}. Check the console for further details.`);
      }
    });
  }

  @action
  setSession(token: IStorageToken) {
    this.token = token;
    this.storage.setItem(STORAGE_TOKEN, JSON.stringify(token));
  }

  getAccessToken = () => {
    const accessToken = this.token.accessToken;
    if (!accessToken) {
      throw new Error('No access token found');
    }
    return accessToken;
  }

  @action
  loadProfile = async () => {
    const accessToken = this.token.accessToken;
    if (!accessToken) {
      return;
    }

    this.auth0.client.userInfo(accessToken, (err, profile) => {
      if (err) { throw err; }

      if (profile) {
        runInAction(() => this.userProfile = profile);
        return profile;
      }

      return undefined;
    });
  }

  @action
  logout = () => {
    // Clear access token and ID token from local storage
    this.storage.removeItem(STORAGE_TOKEN);
    
    this.token = null;
    this.userProfile = null;
  }
}

Once you're logged in the app offers you more in the way of navigation options. A "Profile" screen shows you the details your React app has retrieved from Auth0 about you. This is backed by the client.userInfo method on auth0-js. There's also a "Ping" screen which is where your React app talks to your ASP.NET Core server. The screenshot below illustrates the result of hitting the "Get Private Data" button:

The "Get Server to Retrieve Profile Data" button is interesting as it illustrates that the server can get access to your profile data as well. There's nothing insecure here; it gets the details using the access token retrieved from Auth0 by the ClientApp and passed to the server. It's the API we set up in Auth0 that is in play here. The app uses the Domain and the access token to talk to Auth0 like so:

UserController.cs
    // Retrieve the access_token claim which we saved in the OnTokenValidated event
    var accessToken = User.Claims.FirstOrDefault(c => c.Type == "access_token").Value;
            
    // If we have an access_token, then retrieve the user's information
    if (!string.IsNullOrEmpty(accessToken))
    {
        var domain = _config["Auth0:Domain"];
        var apiClient = new AuthenticationApiClient(domain);
        var userInfo = await apiClient.GetUserInfoAsync(accessToken);

        return Ok(userInfo);
    }

We can also access the sub claim, which uniquely identifies the user:

UserController.cs
    // We're not doing anything with this, but hey! It's useful to know where the user id lives
    var userId = User.Claims.FirstOrDefault(c => c.Type == System.Security.Claims.ClaimTypes.NameIdentifier).Value; // our userId is the sub value

The reason our ASP.NET Core app works with Auth0 and that we have access to the access token here in the first place is because of our startup code:

Startup.cs
    public void ConfigureServices(IServiceCollection services)
    {
        var domain = $"https://{Configuration["Auth0:Domain"]}/";
        services.AddAuthentication(options =>
        {
            options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
            options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
        }).AddJwtBearer(options =>
        {
            options.Authority = domain;
            options.Audience = Configuration["Auth0:Audience"];
            options.Events = new JwtBearerEvents
            {
                OnTokenValidated = context =>
                {
                    if (context.SecurityToken is JwtSecurityToken token)
                    {
                        if (context.Principal.Identity is ClaimsIdentity identity)
                        {
                            identity.AddClaim(new Claim("access_token", token.RawData));
                        }
                    }

                    return Task.FromResult(0);
                }
            };
        });

        // ....

Authorization

We're pretty much done now; just one magic button to investigate: "Get Admin Data". If you presently try and access the admin data you'll get a 403 Forbidden. It's forbidden because that endpoint relies on the "do:admin:thing" scope in our claims:

UserController.cs
    [Authorize(Scopes.DoAdminThing)]
    [HttpGet("api/userDoAdminThing")]
    public IActionResult GetUserDoAdminThing()
    {
        return Ok("Admin endpoint");
    }
Scopes.cs
    public static class Scopes
    {
         // the do:admin:thing scope is custom and defined in the scopes section of our API in the Auth0 dashboard
        public const string DoAdminThing = "do:admin:thing";
    }

This is wired up in our ASP.NET Core app like so:

Startup.cs
    services.AddAuthorization(options =>
    {
        options.AddPolicy(Scopes.DoAdminThing, policy => policy.Requirements.Add(new HasScopeRequirement(Scopes.DoAdminThing, domain)));
    });

    // register the scope authorization handler
    services.AddSingleton<IAuthorizationHandler, HasScopeHandler>();
HasScopeHandler.cs
    public class HasScopeHandler : AuthorizationHandler<HasScopeRequirement>
    {
        protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, HasScopeRequirement requirement)
        {
            // If user does not have the scope claim, get out of here
            if (!context.User.HasClaim(c => c.Type == "scope" && c.Issuer == requirement.Issuer))
                return Task.CompletedTask;

            // Split the scopes string into an array
            var scopes = context.User.FindFirst(c => c.Type == "scope" && c.Issuer == requirement.Issuer).Value.Split(' ');

            // Succeed if the scope array contains the required scope
            if (scopes.Any(s => s == requirement.Scope))
                context.Succeed(requirement);

            return Task.CompletedTask;
        }
    }

The reason we're 403ing at present is because when our HasScopeHandler executes, requirement.Scope has the value of "do:admin:thing" and our scopes do not contain that value. To add it, go to your API in the Auth0 management console and add it:

Note that you can control how this scope is acquired using "Rules" in the Auth0 management portal.

You won't be able to access the admin endpoint yet because you're still rocking with the old access token; pre-newly-added scope. But when you next login to Auth0 you'll see a prompt like this:

Which demonstrates that you're being granted an extra scope. With your new shiny access token you can now access the oh-so-secret Admin endpoint.

I had some more questions about Auth0 as I'm still new to it myself. To see my question (and the very helpful answer!) go here:
https://community.auth0.com/questions/13786/get-user-data-server-side-what-is-a-good-approach