
8 posts tagged with "asp.net core"


Autofac, WebApplicationFactory and integration tests

Updated 2nd Oct 2020: for an approach that works with Autofac 6 see this post.


This is one of those occasions where I'm not writing up my own work so much as documenting what I discovered after some in-depth googling.

Integration tests with ASP.NET Core are the best. They spin up an in-memory version of your application and let you fire requests at it. They've gone through a number of iterations since ASP.NET Core has been around. You may also be familiar with the TestServer approach of earlier versions. For some time, the advised approach has been using WebApplicationFactory (https://docs.microsoft.com/en-us/aspnet/core/test/integration-tests?view=aspnetcore-3.1#basic-tests-with-the-default-webapplicationfactory).

What makes this approach particularly useful / powerful is that you can swap out dependencies of your running app with fakes / stubs etc. Just like unit tests! But potentially more useful because they run your whole app and hence give you a greater degree of confidence. What does this mean? Well, imagine you changed a piece of middleware in your application; this could potentially break functionality. Unit tests would probably not reveal this. Integration tests would.
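To ground that, here's a minimal sketch of the sort of test WebApplicationFactory enables, with a dependency swapped via ConfigureTestServices. The IEmailSender service, FakeEmailSender and /health route are made-up placeholders; Startup is your app's startup class:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

public interface IEmailSender { Task SendAsync(string to, string body); }
public class FakeEmailSender : IEmailSender { public Task SendAsync(string to, string body) => Task.CompletedTask; }

public class HealthCheckTests : IClassFixture<WebApplicationFactory<Startup>> {
    private readonly WebApplicationFactory<Startup> _factory;

    public HealthCheckTests(WebApplicationFactory<Startup> factory) => _factory = factory;

    [Fact]
    public async Task Health_endpoint_returns_success() {
        // Swap a real registration for a fake before the in-memory host is built
        var client = _factory
            .WithWebHostBuilder(builder =>
                builder.ConfigureTestServices(services =>
                    services.AddSingleton<IEmailSender>(new FakeEmailSender())))
            .CreateClient();

        var response = await client.GetAsync("/health");

        response.EnsureSuccessStatusCode();
    }
}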

There is a fly in the ointment. A hair in the gazpacho. ASP.NET Core ships with dependency injection in the box. It has its own Inversion of Control container which is perfectly fine. However, many people are accustomed to using other IOC containers such as Autofac.

What's the problem? Well, swapping out dependencies registered using ASP.NET Core's IOC requires using a hook called ConfigureTestServices. There's an equivalent hook for swapping out services registered using a custom IOC container: ConfigureTestContainer. Unfortunately, there is a bug in ASP.NET Core as of version 3.0: "When using GenericHost, in tests ConfigureTestContainer is not executed".

This means you cannot swap out dependencies that have been registered with Autofac and the like. According to the tremendous David Fowler of the ASP.NET team, this will hopefully be resolved.

In the meantime, there's a workaround thanks to various commenters on the thread. Instead of using WebApplicationFactory directly, subclass it and create a custom AutofacWebApplicationFactory (the name is not important). This custom class overrides ConfigureWebHost and CreateHost, plugging a CustomServiceProviderFactory into the host builder:

namespace My.Web.Tests.Helpers {
/// <summary>
/// Based upon https://github.com/dotnet/AspNetCore.Docs/tree/master/aspnetcore/test/integration-tests/samples/3.x/IntegrationTestsSample
/// </summary>
/// <typeparam name="TStartup"></typeparam>
public class AutofacWebApplicationFactory<TStartup> : WebApplicationFactory<TStartup> where TStartup : class {
protected override void ConfigureWebHost(IWebHostBuilder builder) {
builder.ConfigureServices(services => {
services.AddSingleton<IAuthorizationHandler>(new PassThroughPermissionedRolesHandler());
})
.ConfigureTestServices(services => {
}).ConfigureTestContainer<Autofac.ContainerBuilder>(builder => {
// called after Startup.ConfigureContainer
});
}
protected override IHost CreateHost(IHostBuilder builder) {
builder.UseServiceProviderFactory(new CustomServiceProviderFactory());
return base.CreateHost(builder);
}
}
/// <summary>
/// Based upon https://github.com/dotnet/aspnetcore/issues/14907#issuecomment-620750841 - only necessary because of an issue in ASP.NET Core
/// </summary>
public class CustomServiceProviderFactory : IServiceProviderFactory<CustomContainerBuilder> {
public CustomContainerBuilder CreateBuilder(IServiceCollection services) => new CustomContainerBuilder(services);
public IServiceProvider CreateServiceProvider(CustomContainerBuilder containerBuilder) =>
new AutofacServiceProvider(containerBuilder.CustomBuild());
}
public class CustomContainerBuilder : Autofac.ContainerBuilder {
private readonly IServiceCollection services;
public CustomContainerBuilder(IServiceCollection services) {
this.services = services;
this.Populate(services);
}
public Autofac.IContainer CustomBuild() {
var sp = this.services.BuildServiceProvider();
#pragma warning disable CS0612 // Type or member is obsolete
var filters = sp.GetRequiredService<IEnumerable<IStartupConfigureContainerFilter<Autofac.ContainerBuilder>>>();
#pragma warning restore CS0612 // Type or member is obsolete
foreach (var filter in filters) {
filter.ConfigureContainer(b => { })(this);
}
return this.Build();
}
}
}

I'm going to level with you; I don't understand all of this code. I'm not au fait with the inner workings of ASP.NET Core or Autofac but I can tell you what this allows. With this custom WebApplicationFactory in play you get ConfigureTestContainer back in the mix! You get to write code like this:

using System;
using System.Net;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using FakeItEasy;
using FluentAssertions;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.DependencyInjection;
using Xunit;
using Microsoft.Extensions.Options;
using Autofac;
using System.Net.Http;
using Newtonsoft.Json;
namespace My.Web.Tests.Controllers
{
public class MyControllerTests : IClassFixture<AutofacWebApplicationFactory<My.Web.Startup>> {
private readonly AutofacWebApplicationFactory<My.Web.Startup> _factory;
public MyControllerTests(
AutofacWebApplicationFactory<My.Web.Startup> factory
) {
_factory = factory;
}
[Fact]
public async Task My() {
var fakeSomethingService = A.Fake<IMySomethingService>();
var fakeConfig = Options.Create(new MyConfiguration {
SomeConfig = "Important thing",
OtherConfigMaybeAnEmailAddress = "[email protected]"
});
A.CallTo(() => fakeSomethingService.DoSomething(A<string>.Ignored))
.Returns(Task.FromResult(true));
void ConfigureTestServices(IServiceCollection services) {
services.AddSingleton(fakeConfig);
}
void ConfigureTestContainer(ContainerBuilder builder) {
builder.RegisterInstance(fakeSomethingService);
}
var client = _factory
.WithWebHostBuilder(builder => {
builder.ConfigureTestServices(ConfigureTestServices);
builder.ConfigureTestContainer<Autofac.ContainerBuilder>(ConfigureTestContainer);
})
.CreateClient();
// Act
var request = new StringContent("{\"sommat\":\"to see\"}");
request.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json");
var response = await client.PostAsync("/something/submit", request);
// Assert
response.StatusCode.Should().Be(HttpStatusCode.OK);
A.CallTo(() => fakeSomethingService.DoSomething(A<string>.Ignored))
.MustHaveHappened();
}
}
}

Up to the clouds!

These last four months have been quite the departure for me. Most typically I find myself building applications; for this last period of time I've been taking the platform that I work on and migrating it from running on our on premise servers to running in the cloud.

This turned out to be much more difficult than I'd expected and for reasons that often surprised me. We knew where we wanted to get to, but not all of what we'd need to do to get there. So many things you can only learn by doing. Whilst these experiences are still fresh in my mind I wanted to document some of the challenges we faced.

The mission#

At the start of January, the team decided to make a concerted effort to take our humble ASP.NET Core application and migrate it to the cloud. We sat down with some friends from the DevOps team who are part of our organisation. We're fortunate in that these marvellous people are very talented engineers indeed. It was going to be a collaboration between our two teams of budding cloudmongers that would make this happen.

Now our application is young. It is not much more than a year old. However it is growing fast. And as we did the migration from on premise to the cloud, that wasn't going to stop. Development of the application was to continue as is, shipping new versions daily. Without impeding that, we were to try and get the application migrated to the cloud.

I would liken it to boarding a speeding train, fighting your way to the front, taking the driver hostage and then diverting the train onto a different track. It was challenging. Really, really challenging.

So many things had to change for us to get from on premise servers to the cloud, all the while keeping our application a going (and shipping) concern. Let's go through them one by one.

Kubernetes and Docker#

Our application was built using ASP.NET Core. A technology that is entirely cloud friendly (that's one of the reasons we picked it). We were running on a collection of hand installed, hand configured Windows servers. That had to change. We wanted to move our application to run on Kubernetes, so we didn't have to manually configure servers. Rather, k8s would manage the provisioning and deployment of containers running our application. Worth saying now: I knew nothing about Kubernetes. Or nearly nothing. I learned a bunch along the way, but, as I've said, this was a collaboration between our team and the mighty site reliability engineers of the DevOps team. They knew a lot about this k8s stuff and, more often than not, our team stood back and let them work their magic.

In order that we could migrate to running in k8s, we first needed to containerise our application. We needed a Dockerfile. There followed a good amount of experimentation as we worked out how to build ourselves images. There's an art to building an optimal Docker image.

So that we can cover a lot of ground, this post will remain relatively high level. Here are a number of things that we encountered along the way that are worth considering:

  • Multi-stage builds were an absolute necessity for us. We'd build the front end of our app (React / TypeScript) using one stage with a Node base image. Then we'd build our app using a .NET Core SDK base image. Finally, we'd use an ASP.NET image to run the app, copying in the output of previous stages. (There's a sketch of such a Dockerfile after this list.)
  • Our application accesses various SQL Server databases. We struggled to get our application to connect to them. The issue related to the SSL configuration of our runner image. The fix was simple but frustrating; use a -bionic image as it has the configuration you need. We found that gem here.
  • Tests. Automated tests. We want to run them in our build; but how? Once more multi-stage builds to the rescue. We'd build our application, then in a separate stage we'd run the tests; copying in the app from the build stage. If the tests failed, the build failed. If they passed then the intermediate stage containing the tests would be discarded by Docker. No unnecessary bloat of the image; all that testing goodness still; now in containerised form!
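To make those points concrete, here's an illustrative, simplified multi-stage Dockerfile along those lines. The image tags, folder and project names are hypothetical and the exact layout will differ per app; note also that with BuildKit you may need to target the test stage explicitly, since unreferenced stages can be skipped:

# Stage 1: build the React / TypeScript front end with a Node base image
FROM node:12 AS client-build
WORKDIR /src/ClientApp
COPY ClientApp/ .
RUN yarn install && yarn build

# Stage 2: build the ASP.NET Core app with the .NET Core SDK base image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet publish ./MyApp.Web/MyApp.Web.csproj -c Release -o /app/publish

# Stage 3: run the tests; if these fail, the image build fails.
# This stage is not copied into the final image, so it adds no bloat.
FROM build AS test
RUN dotnet test ./MyApp.Tests/MyApp.Tests.csproj

# Stage 4: the runtime image; a -bionic variant for the SSL configuration mentioned above
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-bionic
WORKDIR /app
COPY --from=build /app/publish .
COPY --from=client-build /src/ClientApp/build ./wwwroot
ENTRYPOINT ["dotnet", "MyApp.Web.dll"]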

Jenkins#

Our on premise world used TeamCity for our continuous integration needs and Octopus for deployment. We liked these tools well enough; particularly Octopus. However, the DevOps team were very much of the mind that we should use Jenkins instead. And Pipeline. It was here that we initially struggled. To quote the docs:

Jenkins Pipeline (or simply "Pipeline" with a capital "P") is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.

Whilst continuous delivery is super cool, and is something our team was interested in, we weren't ready for it yet. We didn't yet have the kind of automated testing in place that gave us the confidence that we'd need to move to it. One day, but not today. For now there was still some manual testing done on each release, prior to shipping. Octopus suited us very well here as it allowed us to deploy, on demand, a build of our choice to a given environment. So the question was: what to do? Fortunately the immensely talented Aby Egea came up with a mechanism that supported that very notion. A pipeline that would, optionally, deploy our build to a specified environment. So we were good!

One thing we got to really appreciate about Jenkins was that the build is scripted with a Jenkinsfile. This was in contrast to our TeamCity world where it was all manually configured. Configuration as code is truly a wonderful thing as your build pipeline becomes part of your codebase; open for everyone to see and understand. If anyone wants to change the build pipeline it has to get code reviewed like everything else. It was as code in our Jenkinsfile that the deployment mechanism lived.

Vault#

Another thing that we used Octopus for was secrets. Applications run on configuration; settings that drive the behaviour of your application. A subset of configuration is "secrets": configuration that can't be stored in source code because it would represent a risk if it were. For instance, a database connection string. We'd been merrily using Octopus for this; as Octopus deploys an application to a server it enriches the appsettings.json file with any required secrets.

Without Octopus in the mix, how were we to handle our secrets? The answer is with Hashicorp Vault. We'd store our secrets in there and, thanks to clever work by Robski of the DevOps team, when our container was brought up by Kubernetes, it would mount into the filesystem an appsettings.Vault.json file which we read thanks to our trusty friend .AddJsonFile with optional: true. (As the file didn't exist in our development environment.)
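In configuration terms that amounts to something like the following sketch; the file name matches the one above, while Program / Startup are just the standard ASP.NET Core generic host wiring:

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

public class Program {
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((context, config) => {
                // The Vault-mounted file only exists once Kubernetes has mounted it,
                // hence optional: true so local development still works without it
                config.AddJsonFile("appsettings.Vault.json", optional: true, reloadOnChange: true);
            })
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
}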

Hey presto! Safe secrets in k8s.

Networking#

Our on premise servers sat on the company network. They could see everything that there was to see. All the other servers around them on the network, bleeping and blooping. The opposite was true in AWS. There was nothing to see. Nothing to access. As it should be. It's safer that way should a machine become compromised. For each database and each API our application depended upon, we needed to specifically whitelist access.

Kerberos#

There's always a fly in the ointment. A nasty surprise on a dark night. Ours was realising that our application depended upon an API that was secured using Windows Authentication. Our application was accessing it by running under a service account which had been permissioned to access it. However, in AWS, our application wasn't running under a service account on the company network. Disappointingly, in the short term the API was not going to support an alternative authentication mechanism.

What to do? Honestly it wasn't looking good. We were considering proxying through one of our Windows servers just to get access to that API. I was tremendously disappointed. At this point our hero arrived; one JMac hacked together a Kerberos sidecar approach one weekend. You can see a similar approach here. This got us to a point that allowed us to access the API we needed to.

I'm kind of amazed that there isn't better documentation out there around having a Kerberos sidecar in a k8s setup. Tragically, Windows Authentication is a widely used authentication mechanism. That being the case, having good docs to show how you can get a Kerberos sidecar in place would likely greatly advance the ability of enterprises to migrate to the cloud. The best docs I've found are here. It is super hard though. So hard!

Hangfire#

We were using Hosted Services to run background tasks in our app. The nature of our background tasks meant that it was important to only run a single instance of a background task at a time. Or bad things would happen. This was going to become a problem since we had ambitions to horizontally scale our application; to add new pods running our app as demand determined.

So we started to use Hangfire to handle task running in our app. With Hangfire, when a job is picked up it gets locked so that other servers can't pick it up. That's what we need.

Hangfire is pretty awesome. However, it turns out that there are quirks when you move to a containerised environment. We have a number of recurring jobs that are scheduled to run at certain dates and times. In order that Hangfire can ascertain what time it is, it needs a timezone. It turns out that timezones on Windows != timezones in Docker / Linux.
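To illustrate where the timezone creeps in, here's roughly what registering a recurring job looks like, assuming Hangfire's RecurringJob API (the job id and body are made up). The TimeZoneInfo resolved here is backed by Windows timezone ids on Windows and IANA ids on Linux, which is exactly where the mismatch bites:

using System;
using Hangfire;

public static class RecurringJobs {
    public static void Register() {
        // Resolved against the OS timezone database: IANA ids on Linux, Windows ids on Windows
        var londonTime = TimeZoneInfo.FindSystemTimeZoneById("Europe/London");

        RecurringJob.AddOrUpdate(
            "nightly-reconciliation",                   // hypothetical job id
            () => Console.WriteLine("Reconciling..."),  // hypothetical job body
            Cron.Daily(hour: 2),
            londonTime);
    }
}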

This was a problem because, as we limbered up for the great migration, we were trying to run our cloud implementation side by side with our on premise one. And Windows picked a fight with Linux over timezones. You can see others bumping into this condition here. We learned this the hard way; jobs mysteriously stopping due to timezone-related errors. Windows Hangfire was not able to recognise Linux Hangfire timezones and vice versa.

The TL;DR is that we had to do a hard switch with Hangfire; it couldn't run side by side. Not the end of the world, but surprising.

Azure Active Directory Single Sign-On#

Historically our application had used two modes of authentication; Windows Authentication and cookies. Windows Authentication doesn't generally play nicely with Docker. It's doable, but it's not the hill you want to die on. So we didn't; we swapped out Windows Authentication for Azure AD SSO and didn't look back.

We also made some changes so our app would support cookies auth alongside Azure AD auth; I've written about this previously.
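I won't rehash that post here, but the shape of the registration is roughly the following sketch. It's one common way of combining cookie auth with Azure AD via OpenID Connect rather than necessarily the exact code we shipped, and the configuration keys are illustrative:

using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Extensions.DependencyInjection;

// Inside Startup.ConfigureServices, where Configuration is the usual IConfiguration property
services
    .AddAuthentication(options => {
        // Cookies keep the user signed in; Azure AD handles the actual challenge
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie()
    .AddOpenIdConnect(options => {
        options.Authority = $"https://login.microsoftonline.com/{Configuration["AzureAd:TenantId"]}/v2.0";
        options.ClientId = Configuration["AzureAd:ClientId"];
        options.ResponseType = "code";
    });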

Do the right thing and tell people about it#

We're there now; we've made the move. It was a difficult journey but one worth making; it sets up our platform for where we want to take it in the future. Having infrastructure as code makes all kinds of approaches possible that weren't before. Here's some things we're hoping to get out of the move:

  • blue green deployments - shipping without taking down our platform
  • provision environments on demand - currently we have a highly contended situation when it comes to test environments. With k8s and AWS we can look at spinning up environments as we need them and throwing them away also
  • autoscaling for need - we can start to look at spinning up new containers in times of high load and removing excessive containers in times of low load

We've also become more efficient as a team. We are no longer maintaining servers, renewing certificates, installing software, RDPing onto boxes. All that time and effort we can plough back into making awesome experiences for our users.

There's a long list of other benefits and it's very exciting indeed! It's not enough for us to have done this though. It's important that we tell the story of what we've done and how and why we've done it. That way people have empathy for the work. Also they can start to think about how they could start to reap similar benefits themselves. By talking to others about the road we've travelled, we can save them time and help them to travel a similar road. This is good for them and it's good for us; it helps our relationships and it helps us all to move forwards together.

A rising tide lifts all boats. By telling others about our journey, we raise the water level. Up to the clouds!

Hard-coding a Claim in Development Mode in ASP.Net Core

I was recently part of a hackathon team that put together an API in just 30 hours. We came second. (Not bitter, not bitter...)

We were moving pretty quickly during the hackathon and, when we came to the end of it, we had a working API which we were able to demo. The good news is that the API is going to graduate to be a product! We're going to ship this. Before we can do that though, there's a little tidy up to do.

The first thing I remembered / realised when I picked up the codebase again was the shortcuts we'd taken with the developer experience. We'd put the API together using ASP.Net Core. We're handling authentication using JWTs which is nicely supported. When we're deployed, an external facing proxy calls our application with the appropriate JWT and everything works as you'd hope.

The question is, what's it like to develop against this on your laptop? Getting a JWT for when I'm debugging locally is too much friction. I want to be able to work on the problem at hand, going away to get a JWT each time is a timesuck. So what to do? Well, during the hackathon, we just commented out [Authorize] attributes and hardcoded user ids in our controllers. This works, but it's a messy developer experience; it's easy to forget to uncomment things you've commented and break things. There must be a better way.

The solution I landed on was this: in development mode (which we only use whilst debugging) we hardcode an authenticated user. The way our authentication works is that we have a claim on our principal called something like "our-user-id", the value of which is our authenticated user id. So in the ConfigureServices method of our Startup.cs we have a conditional authentication registration like this:

// Whilst developing, we don't want to authenticate; we hardcode to a particular user's id
if (Env.IsDevelopment()) {
services.AddAuthentication(nameof(DevelopmentModeAuthenticationHandler))
.AddScheme<DevelopmentModeAuthenticationOptions, DevelopmentModeAuthenticationHandler>(
nameof(DevelopmentModeAuthenticationHandler),
options => {
options.UserIdToSetInClaims = "this-is-a-user-id";
}
);
}
else {
// The application typically uses this
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
.AddJwtBearer(options => {
// ...
});
}

As you can see, we're using a special DevelopmentModeAuthenticationHandler authentication scheme in development mode, instead of JWT. As we register that, we declare the user id that we want to use. Whenever the app runs using the DevelopmentModeAuthenticationHandler auth, all requests will arrive using a principal with an "our-user-id" claim with a value of "this-is-a-user-id" (or whatever you've set it to.)

The DevelopmentModeAuthenticationHandler looks like this:

using System.Collections.Generic;
using System.Security.Claims;
using System.Text.Encodings.Web;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
namespace OurApp
{
public class DevelopmentModeAuthenticationOptions : AuthenticationSchemeOptions
{
public string UserIdToSetInClaims { get; set; }
}
public class DevelopmentModeAuthenticationHandler : AuthenticationHandler<DevelopmentModeAuthenticationOptions> {
public DevelopmentModeAuthenticationHandler(
IOptionsMonitor<DevelopmentModeAuthenticationOptions> options,
ILoggerFactory logger,
UrlEncoder encoder,
ISystemClock clock
) : base(options, logger, encoder, clock) {
}
protected override Task<AuthenticateResult> HandleAuthenticateAsync() {
var claims = new List<Claim> { new Claim("our-user-id", Options.UserIdToSetInClaims) };
var identity = new ClaimsIdentity(claims, nameof(DevelopmentModeAuthenticationHandler));
var ticket = new AuthenticationTicket(new ClaimsPrincipal(identity), Scheme.Name);
return Task.FromResult(AuthenticateResult.Success(ticket));
}
}
}

Now, developing locally is frictionless! We don't comment out [Authorize] attributes, we don't hard code user ids in controllers.
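For completeness, here's the sort of thing a controller can now do, regardless of which scheme produced the principal. The controller and route are made up; the claim type is the "our-user-id" one from above:

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace OurApp.Controllers {
    [Authorize]
    [ApiController]
    public class WhoAmIController : ControllerBase {
        [HttpGet("api/who-am-i")]
        public IActionResult Get() {
            // In development this is "this-is-a-user-id"; in production it comes from the JWT
            var userId = User.FindFirst("our-user-id")?.Value;
            return Ok(new { userId });
        }
    }
}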

Google Analytics API and ASP.Net Core

Some of my posts are meaningful treatises on the nature of software development. Some are detailed explanations of approaches you can use. Some are effectively code dumps. This is one of those.

I recently had need to be able to access the API for Google Analytics from ASP.Net Core. Getting this up and running turned out to be surprisingly tough because of an absence of good examples. So here it is; an example of how you can access a simple page access stat using the API:

async Task<SomeKindOfDataStructure[]> GetUsageFromGoogleAnalytics(DateTime startAtThisDate, DateTime endAtThisDate)
{
// Create the DateRange object. Here we want data from last week.
var dateRange = new DateRange
{
StartDate = startAtThisDate.ToString("yyyy-MM-dd"),
EndDate = endAtThisDate.ToString("yyyy-MM-dd")
};
// Create the Metrics and dimensions object.
// var metrics = new List<Metric> { new Metric { Expression = "ga:sessions", Alias = "Sessions" } };
// var dimensions = new List<Dimension> { new Dimension { Name = "ga:pageTitle" } };
var metrics = new List<Metric> { new Metric { Expression = "ga:uniquePageviews" } };
var dimensions = new List<Dimension> {
new Dimension { Name = "ga:date" },
new Dimension { Name = "ga:dimension1" }
};
// Get required View Id from configuration
var viewId = $"ga:{"[VIEWID]"}";
// Create the Request object.
var reportRequest = new ReportRequest
{
DateRanges = new List<DateRange> { dateRange },
Metrics = metrics,
Dimensions = dimensions,
FiltersExpression = "ga:pagePath==/index.html",
ViewId = viewId
};
var getReportsRequest = new GetReportsRequest {
ReportRequests = new List<ReportRequest> { reportRequest }
};
//Invoke Google Analytics API call and get report
var analyticsService = GetAnalyticsReportingServiceInstance();
var response = await (analyticsService.Reports.BatchGet(getReportsRequest)).ExecuteAsync();
var logins = response.Reports[0].Data.Rows.Select(row => new SomeKindOfDataStructure {
Date = new DateTime(
year: Convert.ToInt32(row.Dimensions[0].Substring(0, 4)),
month: Convert.ToInt32(row.Dimensions[0].Substring(4, 2)),
day: Convert.ToInt32(row.Dimensions[0].Substring(6, 2))),
NumberOfLogins = Convert.ToInt32(row.Metrics[0].Values[0])
})
.OrderByDescending(login => login.Date)
.ToArray();
return logins;
}
/// <summary>
/// Intializes and returns Analytics Reporting Service Instance
/// </summary>
AnalyticsReportingService GetAnalyticsReportingServiceInstance() {
var googleAuthFlow = new GoogleAuthorizationCodeFlow(new GoogleAuthorizationCodeFlow.Initializer {
ClientSecrets = new ClientSecrets {
ClientId = "[CLIENTID]",
ClientSecret = "[CLIENTSECRET]"
}
});
var responseToken = new TokenResponse {
AccessToken = "[ANALYTICSTOKEN]",
RefreshToken = "[REFRESHTOKEN]",
Scope = AnalyticsReportingService.Scope.AnalyticsReadonly, //Read-only access to Google Analytics,
TokenType = "Bearer",
};
var credential = new UserCredential(googleAuthFlow, "", responseToken);
// Create the Analytics service.
return new AnalyticsReportingService(new BaseClientService.Initializer {
HttpClientInitializer = credential,
ApplicationName = "my-super-applicatio",
});
}

You can see above that you need various credentials to be able to use the API. You can acquire these by logging into GA. Enjoy!

WhiteList Proxying with ASP.Net Core

Once upon a time there lived a young team who were building a product. They were ready to go live with their beta and so they set off on a journey to a mystical land they had heard tales of. This magical kingdom was called "Production". However, Production was a land with walls and but one gate. That gate was jealously guarded by a defender named "InfoSec". InfoSec was there to make sure that only the right people, noble of thought and pure of deed, were allowed into the promised land. InfoSec would ask questions like "are you serving over HTTPS?" and "what are you doing about cross site scripting?"

The team felt they had good answers to InfoSec's questions. However, just as they were about to step through the gate, InfoSec held up their hand and said "your application wants to access a database... database access needs to take place on our own internal network. Not over the publicly accessible internet. You shall not pass."

The team, with one foot in the air, paused. They swallowed and said "can you give us five minutes?"

The Proxy Regroup#

And so it came to pass that the team's product (which took the form of an ASP.Net Core web application) had to be changed. Where once there had been a single application, there would now be two; one that lived on the internet (the web app) and one that lived on the company's private network (the API app). The API app would do all the database access. In fact the product team opted to move all significant operations into the API as well. This left the web app with two purposes:

  1. the straightforward serving of HTML, CSS, JS and images
  2. the proxying of API calls through to the API app

Proxy Part 1#

In the early days of this proxying the team reached for AspNetCore.Proxy. It's a great open source project that allows you to proxy HTTP requests. It gives you complete control over the construction of proxy requests, so that you can have a request come into your API and end up proxying it to a URL with a completely different path on the proxy server.

Proxy Part 2#

The approach offered by AspNetCore.Proxy is fantastically powerful in terms of control. However, we didn't actually need that level of configurability. In fact, it resulted in us writing a great deal of boilerplate code. You see in our case we'd opted to proxy path for path, changing only the server name on each proxied request. So if a GET request came in going to https://web.app.com/api/version then we would want to proxy it to a GET request to https://api.app.com/api/version. You see? All we did was swap https://web.app.com for https://api.app.com. Nothing more. We did that as a rule. We knew we always wanted to do just this.

So we ended up spinning up our own solution which allowed just the specification of paths we wanted to proxy with their corresponding HTTP verbs. Let's talk through it. Usage of our approach ended up as a middleware within our web app's Startup.cs:

public void Configure(IApplicationBuilder app) {
// ...
app.UseProxyWhiteList(
// where ServerToProxyToBaseUrl is the server you want requests to be proxied to
proxyAddressTweaker: (requestPath) => $"{ServerToProxyToBaseUrl}{requestPath}",
// no per-request tweaking of the proxied request is needed here
preSendProxyRequestAction: null,
whiteListProxyRoutes: new [] {
// An anonymous request
WhiteListProxy.AnonymousRoute("api/version", HttpMethod.Get),
// An authenticated request; to send this we must know who the user is
WhiteListProxy.Route("api/account/{accountId:int}/all-the-secret-info", HttpMethod.Get, HttpMethod.Post),
});
app.UseMvc();
// ...
}

If you look at the code above you can see that we are proxying all our requests to a single server: ServerToProxyToBaseUrl. We're proxying 2 different requests:

  1. GET requests to api/version are proxied through as anonymous GET requests.
  2. GET and POST requests to api/account/{accountId:int}/all-the-secret-info are proxied through as GET and POST requests. These requests require that a user be authenticated first.

The WhiteListProxy proxy class we've been using looks like this:

using System;
using System.Collections.Generic;
using System.Net.Http;
namespace My.Web.Proxy {
public class WhiteListProxy {
public string Path { get; set; }
public IEnumerable<HttpMethod> Methods { get; set; }
public bool IsAnonymous { get; set; }
private WhiteListProxy(string path, bool isAnonymous, params HttpMethod[] methods) {
if (methods == null || methods.Length == 0)
throw new ArgumentException($"You need at least a single HttpMethod to be specified for {path}");
Path = path;
IsAnonymous = isAnonymous;
Methods = methods;
}
public static WhiteListProxy Route(string path, params HttpMethod[] methods) => new WhiteListProxy(path, isAnonymous : false, methods: methods);
public static WhiteListProxy AnonymousRoute(string path, params HttpMethod[] methods) => new WhiteListProxy(path, isAnonymous : true, methods: methods);
}
}

The middleware for proxying (our UseProxyWhiteList) looks like this:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Net.Http;
using System.Reflection;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.DependencyModel;
using Microsoft.Extensions.DependencyInjection;
using Serilog;
namespace My.Web.Proxy {
public static class ProxyRouteExtensions {
/// <summary>
/// Middleware which proxies the supplied whitelist routes
/// </summary>
public static void UseProxyWhiteList(
this IApplicationBuilder app,
Func<string, string> proxyAddressTweaker,
Action<HttpContext, HttpRequestMessage> preSendProxyRequestAction,
IEnumerable<WhiteListProxy> whiteListProxyRoutes
) {
app.UseRouter(builder => {
foreach (var whiteListProxy in whiteListProxyRoutes) {
foreach (var method in whiteListProxy.Methods) {
builder.MapMiddlewareVerb(method.ToString(), whiteListProxy.Path, proxyApp => {
proxyApp.UseProxy_Challenge(whiteListProxy.IsAnonymous);
proxyApp.UseProxy_Run(proxyAddressTweaker, preSendProxyRequestAction);
});
}
}
});
}
private static void UseProxy_Challenge(this IApplicationBuilder app, bool allowAnonymous) {
app.Use((context, next) =>
{
var routePath = context.Request.Path.Value;
var weAreAuthenticatedOrWeDontNeedToBe =
context.User.Identity.IsAuthenticated || allowAnonymous;
if (weAreAuthenticatedOrWeDontNeedToBe)
return next();
return context.ChallengeAsync();
});
}
private static void UseProxy_Run(
this IApplicationBuilder app,
Func<string, string> proxyAddressTweaker,
Action<HttpContext, HttpRequestMessage> preSendProxyRequestAction
)
{
app.Run(async context => {
var proxyAddress = "";
try {
proxyAddress = proxyAddressTweaker(context.Request.Path.Value);
var proxyRequest = context.Request.CreateProxyHttpRequest(proxyAddress);
if (preSendProxyRequestAction != null)
preSendProxyRequestAction(context, proxyRequest);
var httpClients = context.RequestServices.GetService<IHttpClients>(); // IHttpClients is just a wrapper for HttpClient - insert your own here
var proxyResponse = await httpClients.SendRequestAsync(proxyRequest,
HttpCompletionOption.ResponseHeadersRead, context.RequestAborted)
.ConfigureAwait(false);
await context.CopyProxyHttpResponse(proxyResponse).ConfigureAwait(false);
}
catch (OperationCanceledException ex) {
if (ex.CancellationToken.IsCancellationRequested)
return;
if (!context.Response.HasStarted)
{
context.Response.StatusCode = 408;
await context.Response
.WriteAsync("Request timed out.");
}
}
catch (Exception e) {
if (!context.Response.HasStarted)
{
context.Response.StatusCode = 500;
await context.Response
.WriteAsync(
$"Request could not be proxied.\n\n{e.Message}\n\n{e.StackTrace}.");
}
}
});
}
public static void AddOrReplaceHeader(this HttpRequestMessage request, string headerName, string headerValue) {
// It's possible for there to be multiple headers with the same name; we only want a single header to remain. Our one.
while (request.Headers.TryGetValues(headerName, out var existingAuthorizationHeader)) {
request.Headers.Remove(headerName);
}
request.Headers.TryAddWithoutValidation(headerName, headerValue);
}
public static HttpRequestMessage CreateProxyHttpRequest(this HttpRequest request, string uriString) {
var uri = new Uri(uriString + request.QueryString);
var requestMessage = new HttpRequestMessage();
var requestMethod = request.Method;
if (!HttpMethods.IsGet(requestMethod) &&
!HttpMethods.IsHead(requestMethod) &&
!HttpMethods.IsDelete(requestMethod) &&
!HttpMethods.IsTrace(requestMethod)) {
var streamContent = new StreamContent(request.Body);
requestMessage.Content = streamContent;
}
// Copy the request headers.
if (requestMessage.Content != null)
foreach (var header in request.Headers)
if (!requestMessage.Headers.TryAddWithoutValidation(header.Key, header.Value.ToArray()))
requestMessage.Content?.Headers.TryAddWithoutValidation(header.Key, header.Value.ToArray());
requestMessage.Headers.Host = uri.Authority;
requestMessage.RequestUri = uri;
requestMessage.Method = new HttpMethod(request.Method);
return requestMessage;
}
public static async Task CopyProxyHttpResponse(this HttpContext context, HttpResponseMessage responseMessage) {
var response = context.Response;
response.StatusCode = (int) responseMessage.StatusCode;
foreach (var header in responseMessage.Headers) {
response.Headers[header.Key] = header.Value.ToArray();
}
if (responseMessage.Content != null) {
foreach (var header in responseMessage.Content.Headers) {
response.Headers[header.Key] = header.Value.ToArray();
}
}
response.Headers.Remove("transfer-encoding");
using(var responseStream = await responseMessage.Content.ReadAsStreamAsync().ConfigureAwait(false)) {
await responseStream.CopyToAsync(response.Body, 81920, context.RequestAborted).ConfigureAwait(false);
}
}
}
}

Cache Rules Everything Around Me

One thing that ASP.Net Core really got right was caching. IMemoryCache (https://docs.microsoft.com/en-us/aspnet/core/performance/caching/memory) is a caching implementation that does just what I want. I love it. I take it everywhere. I've introduced it to my family.

TimeSpan, TimeSpan Expiration Y'all#

To make usage of IMemoryCache even more lovely I've written an extension method. I follow pretty much one cache strategy: SetAbsoluteExpiration, where I just vary the expiration by an amount of time. This extension method implements that in a simple way; I call it GetOrCreateForTimeSpanAsync - catchy right? It looks like this:

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;
namespace My.Helpers {
public static class CacheHelpers {
public static async Task<TItem> GetOrCreateForTimeSpanAsync<TItem>(
this IMemoryCache cache,
string key,
Func<Task<TItem>> itemGetterAsync,
TimeSpan timeToCache
) {
if (!cache.TryGetValue(key, out object result)) {
result = await itemGetterAsync();
if (result == null)
return default(TItem);
var cacheEntryOptions = new MemoryCacheEntryOptions()
.SetAbsoluteExpiration(timeToCache);
cache.Set(key, result, cacheEntryOptions);
}
return (TItem) result;
}
}
}

Usage looks like this:

private Task<SuperInterestingThing> GetSuperInterestingThingFromCache(Guid superInterestingThingId) =>
    _cache.GetOrCreateForTimeSpanAsync(
        key: $"{nameof(MyClass)}:GetSuperInterestingThing:{superInterestingThingId}",
        itemGetterAsync: () => GetSuperInterestingThing(superInterestingThingId),
        timeToCache: TimeSpan.FromMinutes(5)
    );

This helper allows the consumer to provide three things:

  • The key that the item will be cached with
  • An itemGetterAsync, which is the method used to retrieve a fresh value when an item cannot be found in the cache
  • A timeToCache, which is the period of time for which an item should be cached

If an item can't be looked up by the itemGetterAsync then nothing will be cached and the default value of the expected type will be returned. This is important because lookups can fail, and there's nothing worse than a lookup failing and you caching null as a result.

Go on, ask me how I know.

This is a simple, clear and helpful API which makes interacting with IMemoryCache even more lovely than it was. Peep it y'all.

Auth0, TypeScript and ASP.NET Core

Most applications I write have some need for authentication and perhaps authorisation too. In fact, most apps most people write fall into that bracket. Here's the thing: Auth done well is a *big* chunk of work. And the minute you start thinking about that you almost invariably lose focus on the thing you actually want to build and ship.

So this Christmas I decided it was time to take a look into offloading that particular problem onto someone else. I knew there were third parties who provided Auth-As-A-Service - time to give them a whirl. On the recommendation of a friend, I made Auth0 my first port of call. Lest you be expecting a full breakdown of the various players in this space, let me stop you now; I liked Auth0 so much I strayed no further. Auth0 kicks AAAS. (I'm so sorry)

What I wanted to build#

My criteria for "auth success" was this:

  • I want to build a SPA, specifically a React SPA. Ideally, I shouldn't need a back end of my own at all
  • I want to use TypeScript on my client.

But, for when I do implement a back end:

  • I want that to be able to use the client side's Auth tokens to allow access to Auth routes on my server.
  • I want to be able to identify the user, given the token, to provide targeted data
  • Oh, and I want to use .NET Core 2 for my server.

And in achieving all of this I want to add minimal code to my app. Not War and Peace. My code should remain focused on doing what it does.

Boil a Plate#

I ended up with unqualified ticks for all my criteria, but it took some work to find out. I will say that Auth0 do travel the extra mile in terms of getting you up and running. When you create a new Client in Auth0 you're given the option to download a quick start using the technology of your choice.

This was a massive plus for me. I took the quickstart provided and ran with it to get me to the point of meeting my own criteria. You can use this boilerplate for your own ends. Herewith, a walkthrough:

The Walkthrough#

Fork and clone the repo at this location: https://github.com/johnnyreilly/auth0-react-typescript-asp-net-core.

What have we got? Two folders: ClientApp contains the React app, Web contains the ASP.NET Core app. Now we need to get set up with Auth0 and customise our config.

Setup Auth0#

Here's how to get the app set up with Auth0; you're going to need to sign up for a (free) Auth0 account. Then log in to Auth0 and go to the management portal.

Client#

  • Create a Client with the name of your choice and use the Single Page Web Applications template.
  • From the new Client Settings page take the Domain and Client ID and update the similarly named properties in the appsettings.Development.json and appsettings.Production.json files with these settings.
  • To the Allowed Callback URLs setting add the URLs: http://localhost:3000/callback,http://localhost:5000/callback - the first of these facilitates running in Debug mode, the second in Production mode. If you were to deploy this you'd need to add other callback URLs in here too.

API#

  • Create an API with the name of your choice (I recommend the same as the Client to avoid confusion), an identifier which can be anything you like; I like to use the URL of my app but it's your call.
  • From the new API Settings page take the Identifier and update the Audience property in the appsettings.Development.json and appsettings.Production.json files with that value.

Running the App#

Production build#

Build the client app with yarn build in the ClientApp folder. (Don't forget to yarn install first.) Then, in the Web folder dotnet restore, dotnet run and open your browser to http://localhost:5000

Debugging#

Run the client app using webpack-dev-server using yarn start in the ClientApp folder. Fire up VS Code in the root of the repo and hit F5 to debug the server. Then open your browser to http://localhost:3000

The Tour#

When you fire up the app you're presented with a "you are not logged in!" message and the option to login. Do it; it'll take you to the Auth0 "lock" screen where you can sign up / login. Once you do that you'll be asked to confirm access:

All this is powered by Auth0's auth0-js npm package. (Excellent type definition files are available from Definitely Typed; I'm using the @types/auth0-js package DT publishes.) Usage of which is super simple; it exposes an authorize method that when called triggers the Auth0 lock screen. Once you've "okayed" you'll be taken back to the app which will use the parseHash method to extract the access token that Auth0 has provided. Take a look at how our authStore makes use of auth0-js: (don't be scared; it uses mobx - but you could use anything)

authStore.ts#

import { Auth0UserProfile, WebAuth } from 'auth0-js';
import { action, computed, observable, runInAction } from 'mobx';
import { IAuth0Config } from '../../config';
import { StorageFacade } from '../storageFacade';
interface IStorageToken {
accessToken: string;
idToken: string;
expiresAt: number;
}
const STORAGE_TOKEN = 'storage_token';
export class AuthStore {
@observable.ref auth0: WebAuth;
@observable.ref userProfile: Auth0UserProfile;
@observable.ref token: IStorageToken;
constructor(config: IAuth0Config, private storage: StorageFacade) {
this.auth0 = new WebAuth({
domain: config.domain,
clientID: config.clientId,
redirectUri: config.redirectUri,
audience: config.audience,
responseType: 'token id_token',
scope: 'openid email profile do:admin:thing' // the do:admin:thing scope is custom and defined in the scopes section of our API in the Auth0 dashboard
});
}
initialise() {
const token = this.parseToken(this.storage.getItem(STORAGE_TOKEN));
if (token) {
this.setSession(token);
}
this.storage.addEventListener(this.onStorageChanged);
}
parseToken(tokenString: string) {
const token = JSON.parse(tokenString || '{}');
return token;
}
onStorageChanged = (event: StorageEvent) => {
if (event.key === STORAGE_TOKEN) {
this.setSession(this.parseToken(event.newValue));
}
}
@computed get isAuthenticated() {
// Check whether the current time is past the
// access token's expiry time
return this.token && new Date().getTime() < this.token.expiresAt;
}
login = () => {
this.auth0.authorize();
}
handleAuthentication = () => {
this.auth0.parseHash((err, authResult) => {
if (authResult && authResult.accessToken && authResult.idToken) {
const token = {
accessToken: authResult.accessToken,
idToken: authResult.idToken,
// Set the time that the access token will expire at
expiresAt: authResult.expiresIn * 1000 + new Date().getTime()
};
this.setSession(token);
} else if (err) {
// tslint:disable-next-line:no-console
console.log(err);
alert(`Error: ${err.error}. Check the console for further details.`);
}
});
}
@action
setSession(token: IStorageToken) {
this.token = token;
this.storage.setItem(STORAGE_TOKEN, JSON.stringify(token));
}
getAccessToken = () => {
const accessToken = this.token.accessToken;
if (!accessToken) {
throw new Error('No access token found');
}
return accessToken;
}
@action
loadProfile = async () => {
const accessToken = this.token.accessToken;
if (!accessToken) {
return;
}
this.auth0.client.userInfo(accessToken, (err, profile) => {
if (err) { throw err; }
if (profile) {
runInAction(() => this.userProfile = profile);
return profile;
}
return undefined;
});
}
@action
logout = () => {
// Clear access token and ID token from local storage
this.storage.removeItem(STORAGE_TOKEN);
this.token = null;
this.userProfile = null;
}
}

Once you're logged in the app offers you more in the way of navigation options. A "Profile" screen shows you the details your React app has retrieved from Auth0 about you. This is backed by the client.userInfo method on auth0-js. There's also a "Ping" screen which is where your React app talks to your ASP.NET Core server. The screenshot below illustrates the result of hitting the "Get Private Data" button:

The "Get Server to Retrieve Profile Data" button is interesting as it illustrates that the server can get access to your profile data as well. There's nothing insecure here; it gets the details using the access token retrieved from Auth0 by the ClientApp and passed to the server. It's the API we set up in Auth0 that is in play here. The app uses the Domain and the access token to talk to Auth0 like so:

UserController.cs#

// Retrieve the access_token claim which we saved in the OnTokenValidated event
var accessToken = User.Claims.FirstOrDefault(c => c.Type == "access_token").Value;
// If we have an access_token, then retrieve the user's information
if (!string.IsNullOrEmpty(accessToken))
{
var domain = _config["Auth0:Domain"];
var apiClient = new AuthenticationApiClient(domain);
var userInfo = await apiClient.GetUserInfoAsync(accessToken);
return Ok(userInfo);
}

We can also access the sub claim, which uniquely identifies the user:

UserController.cs#

// We're not doing anything with this, but hey! It's useful to know where the user id lives
var userId = User.Claims.FirstOrDefault(c => c.Type == System.Security.Claims.ClaimTypes.NameIdentifier).Value; // our userId is the sub value

The reason our ASP.NET Core app works with Auth0 and that we have access to the access token here in the first place is because of our startup code:

Startup.cs#

public void ConfigureServices(IServiceCollection services)
{
var domain = $"https://{Configuration["Auth0:Domain"]}/";
services.AddAuthentication(options =>
{
options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
}).AddJwtBearer(options =>
{
options.Authority = domain;
options.Audience = Configuration["Auth0:Audience"];
options.Events = new JwtBearerEvents
{
OnTokenValidated = context =>
{
if (context.SecurityToken is JwtSecurityToken token)
{
if (context.Principal.Identity is ClaimsIdentity identity)
{
identity.AddClaim(new Claim("access_token", token.RawData));
}
}
return Task.FromResult(0);
}
};
});
// ....

Authorization#

We're pretty much done now; just one magic button to investigate: "Get Admin Data". If you presently try and access the admin data you'll get a 403 Forbidden. It's forbidden because that endpoint relies on the "do:admin:thing" scope in our claims:

UserController.cs#

[Authorize(Scopes.DoAdminThing)]
[HttpGet("api/userDoAdminThing")]
public IActionResult GetUserDoAdminThing()
{
return Ok("Admin endpoint");
}

Scopes.cs#

public static class Scopes
{
// the do:admin:thing scope is custom and defined in the scopes section of our API in the Auth0 dashboard
public const string DoAdminThing = "do:admin:thing";
}

This is wired up in our ASP.NET Core app like so:

Startup.cs#

services.AddAuthorization(options =>
{
options.AddPolicy(Scopes.DoAdminThing, policy => policy.Requirements.Add(new HasScopeRequirement(Scopes.DoAdminThing, domain)));
});
// register the scope authorization handler
services.AddSingleton<IAuthorizationHandler, HasScopeHandler>();

HasScopeHandler.cs#

public class HasScopeHandler : AuthorizationHandler<HasScopeRequirement>
{
protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, HasScopeRequirement requirement)
{
// If user does not have the scope claim, get out of here
if (!context.User.HasClaim(c => c.Type == "scope" && c.Issuer == requirement.Issuer))
return Task.CompletedTask;
// Split the scopes string into an array
var scopes = context.User.FindFirst(c => c.Type == "scope" && c.Issuer == requirement.Issuer).Value.Split(' ');
// Succeed if the scope array contains the required scope
if (scopes.Any(s => s == requirement.Scope))
context.Succeed(requirement);
return Task.CompletedTask;
}
}

The reason we're 403ing at present is because when our HasScopeHandler executes, requirement.Scope has the value of "do:admin:thing" and our scopes do not contain that value. To add it, go to your API in the Auth0 management console and add it:

Note that you can control how this scope is acquired using "Rules" in the Auth0 management portal.

You won't be able to access the admin endpoint yet because you're still rocking with the old access token; pre-newly-added scope. But when you next login to Auth0 you'll see a prompt like this:

Which demonstrates that you're being granted an extra scope. With your new shiny access token you can now access the oh-so-secret Admin endpoint.

I had some more questions about Auth0 as I'm still new to it myself. To see my question (and the very helpful answer!) go here: https://community.auth0.com/questions/13786/get-user-data-server-side-what-is-a-good-approach

Debugging ASP.Net Core in VS or Code

I've been using Visual Studio for a long time. Very good it is too. However, it is heavyweight; it does far more than I need. What I really want when I'm working is a fast snappy editor, with intellisense and debugging. What I've basically described is VS Code. It rocks and has long become my go-to editor for TypeScript.

Since I'm a big C# fan as well I was delighted that editing C# was also possible in Code. What I want now is to be able to debug ASP.Net Core in Visual Studio OR VS Code. Can it be done? Let's see....

I fire up Visual Studio and File -> New Project (yes it's a verb now). Select .NET Core and then ASP.Net Core Web Application. OK. We'll go for a Web Application. Let's not bother with authentication. OK. Wait a couple of seconds and Visual Studio serves up a new project. Hit F5 and we're debugging in Visual Studio.

So far, so straightforward. What will VS Code make of this?

I cd my way to the root of my new ASP.Net Core Web Application and type the magical phrase "code .". Up it fires. I feel lucky, let's hit "F5". Huh, a dropdown shows up saying "Select Environment" and offering me the options of Chrome and Node. Neither do I want. It's about this time I remember this is a clean install of VS Code and doesn't yet have the C# extension installed. In fact, if I open a C# file it tells me and recommends that I install it. Well that's nice. I take it up on the kind offer; install and reload.

When it comes back up I see the following entries in the "output" tab:

Updating C# dependencies...
Platform: win32, x86_64 (win7-x64)
Downloading package 'OmniSharp (.NET 4.6 / x64)' (20447 KB) .................... Done!
Downloading package '.NET Core Debugger (Windows / x64)' (39685 KB) .................... Done!
Installing package 'OmniSharp (.NET 4.6 / x64)'
Installing package '.NET Core Debugger (Windows / x64)'
Finished

Note that mention of "debugger" there? Sounds super-promising. There are also some prompts: "There are unresolved dependencies from 'WebApplication1/WebApplication1.csproj'. Please execute the restore command to continue"

So it wants me to dotnet restore. It's even offering to do that for me! Have at you; I let it.

Welcome to .NET Core!
---------------------
Learn more about .NET Core @ https://aka.ms/dotnet-docs. Use dotnet --help to see available commands or go to https://aka.ms/dotnet-cli-docs.
Telemetry
--------------
The .NET Core tools collect usage data in order to improve your experience. The data is anonymous and does not include command-line arguments. The data is collected by Microsoft and shared with the community.
You can opt out of telemetry by setting a DOTNET_CLI_TELEMETRY_OPTOUT environment variable to 1 using your favorite shell.
You can read more about .NET Core tools telemetry @ https://aka.ms/dotnet-cli-telemetry.
Configuring...
-------------------
A command is running to initially populate your local package cache, to improve restore speed and enable offline access. This command will take up to a minute to complete and will only happen once.
Decompressing Decompressing 100% 4026 ms
Expanding 100% 34814 ms
Restoring packages for c:\Source\Debugging\WebApplication1\WebApplication1\WebApplication1.csproj...
Restoring packages for c:\Source\Debugging\WebApplication1\WebApplication1\WebApplication1.csproj...
Restore completed in 734.05 ms for c:\Source\Debugging\WebApplication1\WebApplication1\WebApplication1.csproj.
Generating MSBuild file c:\Source\Debugging\WebApplication1\WebApplication1\obj\WebApplication1.csproj.nuget.g.props.
Writing lock file to disk. Path: c:\Source\Debugging\WebApplication1\WebApplication1\obj\project.assets.json
Restore completed in 1.26 sec for c:\Source\Debugging\WebApplication1\WebApplication1\WebApplication1.csproj.
NuGet Config files used:
C:\Users\johnr\AppData\Roaming\NuGet\NuGet.Config
C:\Program Files (x86)\NuGet\Config\Microsoft.VisualStudio.Offline.config
Feeds used:
https://api.nuget.org/v3/index.json
C:\Program Files (x86)\Microsoft SDKs\NuGetPackages\
Done: 0.

The other prompt says "Required assets to build and debug are missing from 'WebApplication1'. Add them?". This also sounds very promising and I give it the nod. This creates a .vscode directory and 2 enclosed files; launch.json and tasks.json.

So let's try that F5 thing again... http://localhost:5000/ is now serving the same app. That looks pretty good. So let's add a breakpoint to the HomeController and see if we can hit it:

Well I can certainly add a breakpoint but all those red squigglies are unnerving me. Let's clean the slate. If you want to simply do that in VS Code hold down CTRL+SHIFT+P and then type "reload". Pick "Reload window". A couple of seconds later we're back in and Code is looking much happier. Can we hit our breakpoint?

Yes we can! So you're free to develop in either Code or VS; the choice is yours. I think that's pretty awesome - and well done to all the people behind Code who've made this a pretty seamless experience!