
3 posts tagged with "easy auth"


Goodbye Client Affinity, Hello Data Protection with Azure

· 4 min read

I've written lately about zero downtime releases with Azure App Service. Zero downtime releases are only successful if your authentication mechanism survives a new deployment. In my last post we looked at how to achieve this with Azure's built-in authentication mechanism: Easy Auth.

We're now going to look at how the same goal can be achieved if your ASP.NET application is authenticating another way. We achieve this through use of the ASP.NET Data Protection system. Andrew Lock has written an excellent walkthrough on the topic and I encourage you to read it.

We're interested in the ASP.NET Data Protection system because it encrypts and decrypts sensitive data, including the authentication cookie. It's wonderful that Data Protection does this, but at the same time it presents a problem. We would like to route traffic to multiple instances of our application, so a given request could be served by instance 1, instance 2 and so on.

(Diagram: traffic routed to multiple instances of the App Service)

How can we ensure the different instances of our app can read the authentication cookies regardless of the instance that produced them? How can we ensure that instance 1 can read cookies produced by instance 2 and vice versa? And for that matter, we'd like all instances to be able to read cookies whether they were produced by an instance in a production or staging slot.

We're aiming to avoid the use of "sticky sessions" and ARRAffinity cookies, which ensure that a given user's traffic is always routed to the same instance. That is exactly what we don't want: it prevents us from draining traffic away from an old instance and directing it to a new one.

With the data protection activated and multiple instances of your app service you immediately face the issue that different instances of the app will be unable to read cookies they did not create. This is the default behaviour of data protection. To quote the docs:

Data Protection relies upon a set of cryptographic keys stored in a key ring. When the Data Protection system is initialized, it applies default settings that store the key ring locally. Under the default configuration, a unique key ring is stored on each node of the web farm. Consequently, each web farm node can't decrypt data that's encrypted by an app on any other node.

The problem here is that the data protection keys (the key ring) are stored locally on each instance. What are the implications of this? For example, instance 2 doesn't have access to the keys instance 1 is using, and so can't decrypt instance 1's cookies.
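To make the failure mode concrete, here's a minimal console sketch (an illustration, not our app's actual code) that uses Microsoft.AspNetCore.DataProtection directly, with two local key-ring directories standing in for two instances:

```csharp
using System;
using System.IO;
using Microsoft.AspNetCore.DataProtection;

class Program
{
    static void Main()
    {
        // Each "instance" gets its own local key ring, mirroring the default behaviour
        var instance1 = DataProtectionProvider.Create(new DirectoryInfo("keys-instance-1"))
            .CreateProtector("auth-cookie");
        var instance2 = DataProtectionProvider.Create(new DirectoryInfo("keys-instance-2"))
            .CreateProtector("auth-cookie");

        var cookie = instance1.Protect("user-session-payload");

        // Instance 1 can read its own cookie...
        Console.WriteLine(instance1.Unprotect(cookie));

        // ...but instance 2 cannot: Unprotect throws a CryptographicException,
        // because instance 2's key ring lacks the key that encrypted the payload
        try
        {
            instance2.Unprotect(cookie);
        }
        catch (System.Security.Cryptography.CryptographicException)
        {
            Console.WriteLine("instance 2 could not decrypt the cookie");
        }
    }
}
```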

Sharing is caring

What we need to do is move away from storing keys locally and store them in a shared place instead. We're going to store the data protection keys in Azure Blob Storage and protect them with Azure Key Vault:

(Diagram: data protection keys persisted to Azure Blob Storage, protected by Azure Key Vault)

All instances of the application can access the key ring, and consequently cookie sharing is enabled. As the documentation attests, enabling this is fairly simple. It amounts to adding the Azure Blob Storage and Key Vault Data Protection packages to your ASP.NET app:
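The extension methods used below come from the Azure integration packages for Data Protection. Assuming the current package names (version numbers are illustrative), the references amount to:

```xml
<ItemGroup>
  <!-- provides PersistKeysToAzureBlobStorage -->
  <PackageReference Include="Azure.Extensions.AspNetCore.DataProtection.Blobs" Version="1.2.1" />
  <!-- provides ProtectKeysWithAzureKeyVault -->
  <PackageReference Include="Azure.Extensions.AspNetCore.DataProtection.Keys" Version="1.1.0" />
  <!-- provides DefaultAzureCredential -->
  <PackageReference Include="Azure.Identity" Version="1.4.0" />
</ItemGroup>
```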

And adding the following to the ConfigureServices in your ASP.NET app:

```csharp
services.AddDataProtection()
    .SetApplicationName("OurWebApp")
    // the Azure credentials require Storage Blob Data Contributor role permissions;
    // the URI should point at a blob in your container, e.g. "keys.xml" in "dataprotectionkeys"
    .PersistKeysToAzureBlobStorage(
        new Uri($"https://{Configuration["StorageAccountName"]}.blob.core.windows.net/dataprotectionkeys/keys.xml"),
        new DefaultAzureCredential())
    // the Azure credentials require Key Vault crypto role permissions;
    // the URI should point at a key in your vault, e.g. one named "dataprotection"
    .ProtectKeysWithAzureKeyVault(
        new Uri($"https://{Configuration["KeyVaultName"]}.vault.azure.net/keys/dataprotection"),
        new DefaultAzureCredential());
```

In the above example you can see we're passing the names of our Storage account and Key Vault via configuration.
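For completeness, that configuration might come from an appsettings.json along these lines (the values here are hypothetical):

```json
{
  "StorageAccountName": "ourwebappkeys",
  "KeyVaultName": "ourwebapp-vault"
}
```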

There's one more crucial piece of the puzzle here: role assignments, better known as permissions. Your App Service needs to be able to read from and write to Azure Key Vault and Azure Blob Storage. The Storage Blob Data Contributor and Key Vault Crypto Officer roles are sufficient to enable this. (If you'd like to see what configuring that looks like via ARM templates, then check out this post.)
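If you're assigning those roles by hand rather than via ARM templates, the Az PowerShell equivalent looks roughly like this, assuming your App Service has a managed identity (which is what DefaultAzureCredential picks up when running in Azure); the resource names are hypothetical:

```powershell
# The App Service's managed identity needs both roles
$principalId = (Get-AzWebApp -ResourceGroupName "our-rg" -Name "our-web-app").Identity.PrincipalId

New-AzRoleAssignment -ObjectId $principalId `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope (Get-AzStorageAccount -ResourceGroupName "our-rg" -Name "ourstorageaccount").Id

New-AzRoleAssignment -ObjectId $principalId `
    -RoleDefinitionName "Key Vault Crypto Officer" `
    -Scope (Get-AzKeyVault -ResourceGroupName "our-rg" -VaultName "our-vault").ResourceId
```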

With this in place we're able to route traffic to any instance of our application, secure in the knowledge that it will be able to read the cookies. Furthermore, we've enabled zero downtime releases as a direct consequence.

Making Easy Auth tokens survive releases on Linux Azure App Service

· 4 min read

I wrote recently about zero downtime deployments on Azure App Service. Many applications require authentication, and ours is no exception. In our case we're using Azure Active Directory facilitated by "Easy Auth" which provides authentication to our App Service.

Our app uses a Linux App Service. It's worth knowing that Linux App Services run as Docker containers. As a consequence, Easy Auth works in a slightly different way: effectively as middleware. To quote the docs on Easy Auth:

This module handles several things for your app:

  • Authenticates users with the specified provider
  • Validates, stores, and refreshes tokens
  • Manages the authenticated session
  • Injects identity information into request headers

The module runs separately from your application code and is configured using app settings. No SDKs, specific languages, or changes to your application code are required.

The authentication and authorization module runs in a separate container, isolated from your application code. Using what's known as the Ambassador pattern, it interacts with the incoming traffic to perform similar functionality as on Windows.
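Because the module injects identity information into request headers, your application code can read the authenticated user without any SDK. For instance, a hypothetical controller action (not part of our app) could do this:

```csharp
using Microsoft.AspNetCore.Mvc;

public class WhoAmIController : Controller
{
    // Easy Auth injects headers such as X-MS-CLIENT-PRINCIPAL-NAME
    // and X-MS-CLIENT-PRINCIPAL-ID into each authenticated request
    [HttpGet("api/whoami")]
    public string Get() =>
        Request.Headers.TryGetValue("X-MS-CLIENT-PRINCIPAL-NAME", out var name)
            ? $"Hello, {name}"
            : "No Easy Auth principal header found";
}
```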

However, Microsoft have acknowledged there is a potential bug in Easy Auth support at present. When the app service is restarted, the stored tokens are removed, and authentication begins to fail. As you might well imagine, authentication similarly starts to fail when a new app service is introduced - as is the case during deployment.

This is really significant. You may well have "zero downtime deployment", but it doesn't amount to a hill of beans if, the moment you've deployed, your users find they're effectively logged out. The advice from Microsoft is to use Blob Storage for the token cache:

Chris Gillum said in a blog on the topic:

you can provision an Azure Blob Storage container and configure your web app with a SaS URL (with read/write/list access) pointing to that blob container. This SaS URL can then be saved to the WEBSITE_AUTH_TOKEN_CONTAINER_SASURL app setting. When this app setting is present, all tokens will be stored in and fetched from the specified blob container.

To turn that into something visual, what's suggested is this:

(Diagram: Easy Auth with blob storage)

SaS-sy ARM Templates

I have the good fortune to work with some very talented people. One of them, John McCormick, turned his hand to putting this proposed solution into azure-pipelines.yml and ARM template-land. First of all, let's look at our azure-pipelines.yml. We add the following, prior to our deployment job:

```yaml
- job: SASGen
  displayName: Generate SAS Token
  steps:
    - task: AzurePowerShell@5
      name: ObtainSasTokenTask
      inputs:
        azureSubscription: $(serviceConnection)
        ScriptType: inlineScript
        Inline: |
          $startTime = Get-Date
          $expiryTime = $startTime.AddDays(90)
          $storageAcc = Get-AzStorageAccount -ResourceGroupName $(azureResourceGroup) -Name $(storageAccountName)
          $ctx = $storageAcc.Context
          $sas = New-AzStorageContainerSASToken -Context $ctx -Name "tokens" -Permission "rwl" -Protocol HttpsOnly -StartTime $startTime -ExpiryTime $expiryTime -FullUri
          Write-Host "##vso[task.setvariable variable=sasToken;issecret=true;isOutput=true]$sas"
        azurePowerShellVersion: 'LatestVersion'

- job: DeployAppARMTemplates
  displayName: Deploy App ARM Templates
  dependsOn:
    - SASGen
  variables:
    sasToken: $[ dependencies.SASGen.outputs['ObtainSasTokenTask.sasToken'] ]
  steps:
    - task: AzureResourceManagerTemplateDeployment@3
      displayName: Deploy app-service ARM Template
      inputs:
        deploymentScope: Resource Group
        azureResourceManagerConnection: $(serviceConnection)
        subscriptionId: $(subscriptionId)
        action: Create Or Update Resource Group
        resourceGroupName: $(azureResourceGroup)
        location: $(location)
        templateLocation: Linked artifact
        csmFile: 'infra/app-service/azuredeploy.json'
        csmParametersFile: 'infra/azuredeploy.parameters.json'
        overrideParameters: >-
          -sasUrl $(sasToken)
        deploymentMode: Incremental
```

There are two notable things happening above:

  1. In the SASGen job, a PowerShell script runs that generates a SaS token URL with read, write and list permissions that will last for 90 days. (Incidentally, there is a way to do this via ARM templates, and without PowerShell - but alas it didn't seem to work when we experimented with it.)
  2. The generated (secret) token URL (sasUrl) is passed as a parameter to our App Service ARM template. The ARM template sets an app setting for the App Service:

```json
{
    "apiVersion": "2020-09-01",
    "name": "appsettings",
    "type": "config",
    "properties": {
        "WEBSITE_AUTH_TOKEN_CONTAINER_SASURL": "[parameters('sasUrl')]"
    }
}
```

If you google WEBSITE_AUTH_TOKEN_CONTAINER_SASURL you will not find a great deal; documentation is short. What you will find is Jeff Sanders' excellent blog on the topic. In terms of content it has some commonality with this post, except that in Jeff's example he's manually implementing the workaround in the Azure Portal.

What's actually happening?

With this in place, every time someone logs into your app a JSON token is written to the storage container, like so:

(Screenshot: token JSON stored in the storage account)

If you take the trouble to look inside you'll find something like this tucked away:

```json
{
    "encrypted": true,
    "tokens": {
        "aad": "herewith_a_very_very_long_encrypted_token"
    },
    "version": 1
}
```

With this in place, you can restart your app service and/or deploy a new one, safe in the knowledge that the tokens will live on in the storage account, and that consequently you will not be unauthenticating your users.

Azure Easy Auth and Roles with .NET and Microsoft.Identity.Web

· 3 min read

I wrote recently about how to get Azure Easy Auth to work with roles. This involved borrowing the approach used by MaximeRouiller.Azure.AppService.EasyAuth.

As a consequence of writing that post I came to learn that official support for Azure Easy Auth had landed in October 2020 in v1.2 of Microsoft.Identity.Web. This was great news; I was delighted.

However, it turns out that the same authorization issue that MaximeRouiller.Azure.AppService.EasyAuth suffers from, is visited upon Microsoft.Identity.Web as well.

Getting set up

We're using a .NET 5 project, running in an Azure App Service (Linux). In our .csproj we have:

```xml
<PackageReference Include="Microsoft.Identity.Web" Version="1.4.1" />
```

In our Startup.cs we're using:

```csharp
public void ConfigureServices(IServiceCollection services) {
    //...
    services.AddMicrosoftIdentityWebAppAuthentication(Configuration);
    //...
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env) {
    //...
    app.UseAuthentication();
    app.UseAuthorization();
    //...
}
```
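AddMicrosoftIdentityWebAppAuthentication reads its settings from an "AzureAd" configuration section by default, so appsettings.json carries something like this (the values here are placeholders):

```json
{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "TenantId": "00000000-0000-0000-0000-000000000000",
    "ClientId": "11111111-1111-1111-1111-111111111111",
    "CallbackPath": "/signin-oidc"
  }
}
```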

You gotta roles with it

Whilst the authentication works, the authorization does not. So whilst my app knows who I am, the authorization is not working with regard to roles.

When using Microsoft.Identity.Web directly, running locally, we see these claims:

```json
[
    // ...
    {
        "type": "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
        "value": "Administrator"
    },
    {
        "type": "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
        "value": "Reader"
    },
    // ...
]
```

However, we get different behaviour with Easy Auth; it provides roles-related claims with a different type:

```json
[
    // ...
    {
        "type": "roles",
        "value": "Administrator"
    },
    {
        "type": "roles",
        "value": "Reader"
    },
    // ...
]
```
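You can compare the two for yourself by exposing the current user's claims from an endpoint. This is a hypothetical diagnostic action, not something to ship to production:

```csharp
using System.Linq;
using Microsoft.AspNetCore.Mvc;

public class ClaimsController : Controller
{
    // Dumps the current user's claims so the claim types can be compared
    // between local Microsoft.Identity.Web and Easy Auth in Azure
    [HttpGet("api/claims")]
    public IActionResult Get() =>
        Json(User.Claims.Select(claim => new { claim.Type, claim.Value }));
}
```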

This means that roles related authorization does not work with Easy Auth:

```csharp
[Authorize(Roles = "Reader")]
[HttpGet("api/reader")]
public string GetWithReader() =>
    "this is a secure endpoint that users with the Reader role can access";
```

This is because .NET is looking for claims with a type of ClaimTypes.Role ("http://schemas.microsoft.com/ws/2008/06/identity/claims/role") and not finding them with Easy Auth.

Claims transformation FTW

There is a way to work around this issue in .NET using IClaimsTransformation. This is a poorly documented feature, but fortunately Gunnar Peipman's blog does a grand job of explaining it.

Inside our Startup.cs I've registered a claims transformer:

```csharp
services.AddScoped<IClaimsTransformation, AddRolesClaimsTransformation>();
```

And that claims transformer looks like this:

```csharp
public class AddRolesClaimsTransformation : IClaimsTransformation {
    private readonly ILogger<AddRolesClaimsTransformation> _logger;

    public AddRolesClaimsTransformation(ILogger<AddRolesClaimsTransformation> logger) {
        _logger = logger;
    }

    public Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal) {
        var mappedRolesClaims = principal.Claims
            .Where(claim => claim.Type == "roles")
            .Select(claim => new Claim(ClaimTypes.Role, claim.Value))
            .ToList();

        // Clone current identity
        var clone = principal.Clone();
        if (clone.Identity is not ClaimsIdentity newIdentity) return Task.FromResult(principal);

        // Add role claims to cloned identity
        foreach (var mappedRoleClaim in mappedRolesClaims)
            newIdentity.AddClaim(mappedRoleClaim);

        if (mappedRolesClaims.Count > 0)
            _logger.LogInformation("Added roles claims {mappedRolesClaims}", mappedRolesClaims);
        else
            _logger.LogInformation("No roles claims added");

        return Task.FromResult(clone);
    }
}
```

The class above creates a new principal with the "roles" claims mapped across to ClaimTypes.Role. This is enough to get .NET treating roles the way you'd hope.

I've raised an issue against the Microsoft.Identity.Web repo about this. Perhaps one day this workaround will no longer be necessary.