2 posts tagged with "Role Assignments"

· 9 min read

How can we deploy resources to Azure, and then run an integration test through them in the context of an Azure Pipeline? This post will show how to do this by permissioning our Azure Pipeline to access these resources using Azure RBAC role assignments. It will also demonstrate a dotnet test that runs in the context of the pipeline and makes use of those role assignments.

title image reading "Permissioning Azure Pipelines with Bicep and Role Assignments" and some Azure logos

We're following this approach as an alternative to exporting connection strings, as those can be viewed in the Azure Portal, which may be a security issue if many people are able to access the portal and view deployment outputs.

We're going to demonstrate this approach using Event Hubs. It's worth calling out that this is a generally useful approach which can be applied to any Azure resources that support Azure RBAC Role Assignments. So wherever in this post you read "Event Hubs", imagine substituting other Azure resources you're working with.

The post will do the following:

  • Add Event Hubs to our Azure subscription
  • Permission our service connection / service principal
  • Deploy to Azure with Bicep
  • Write an integration test
  • Write a pipeline to bring it all together

Add Event Hubs to your subscription

First of all, we may need to add Event Hubs to our Azure subscription.

Without this in place, we may encounter errors of the type:

##[error]MissingSubscriptionRegistration: The subscription is not registered to use namespace 'Microsoft.EventHub'. See https://aka.ms/rps-not-found for how to register subscriptions.

We do this by going to "Resource Providers" in the Azure Portal and registering the resources we need. Lots are registered by default, but not all.

Screenshot of the Azure Portal, subscriptions -> resource providers section, showing that Event Hubs have been registered
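If you prefer the command line, the same registration can be done with the Azure CLI; a sketch, assuming you're logged in with az login against the right subscription:

```shell
# register the Event Hubs resource provider (a no-op if already registered)
az provider register --namespace Microsoft.EventHub

# registration is asynchronous; poll until this reads "Registered"
az provider show --namespace Microsoft.EventHub --query registrationState -o tsv
```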

Permission our service connection / service principal

To run pipelines against Azure, we generally need an Azure Resource Manager service connection set up in Azure DevOps. Once that exists, we also need to give it a role assignment that allows it to create role assignments of its own when pipelines are running.

Without this in place, we may encounter errors of the type:

##[error]The template deployment failed with error: 'Authorization failed for template resource '{GUID-THE-FIRST}' of type 'Microsoft.Authorization/roleAssignments'. The client '{GUID-THE-SECOND}' with object id '{GUID-THE-SECOND}' does not have permission to perform action 'Microsoft.Authorization/roleAssignments/write' at scope '/subscriptions/***/resourceGroups/johnnyreilly/providers/Microsoft.EventHub/namespaces/evhns-demo/providers/Microsoft.Authorization/roleAssignments/{GUID-THE-FIRST}'.'.

Essentially, we want to be able to run pipelines that say "hey Azure, we want to give permissions to our service connection". We are doing this with the selfsame service connection, so (chicken and egg) we first need to give it permission to issue those commands in future. This is a little confusing; but let's role with it. (Pun most definitely intended. πŸ˜‰)

To grant that permission / add that role assignment, we go to the service connection in Azure DevOps:

Screenshot of the service connection in Azure DevOps

We can see there are two links here; first we'll click on "Manage Service Principal", which will take us to the service principal in the Azure Portal:

Screenshot of the service principal in the Azure Portal

Take note of the display name of the service principal; we'll need it when we click on the "Manage service connection roles" link, which will take us to the resource group's IAM page in the Azure Portal:

Screenshot of the resource groups IAM page in the Azure Portal

Here we can click on "Add role assignment", select "Owner":

Screenshot of the add role assignment IAM page in the Azure Portal

Then when selecting members we should be able to look up the service principal to assign it:

Screenshot of the add role assignment select member IAM page in the Azure Portal

We now have a service connection which we should be able to use for granting permissions / role assignments, which is what we need.

Event Hub and Role Assignment with Bicep

Next we want a Bicep file that will, when run, provision an Event Hub and a role assignment which will allow our Azure Pipeline (via its service connection) to interact with it.

@description('Name of the Event Hub namespace')
param eventHubNamespaceName string

@description('Name of the Event Hub')
param eventHubName string

@description('Object id of the service principal to permission')
param principalId string

// Create an Event Hub namespace
// (note: 2021-11-01 assumed here; any recent Microsoft.EventHub API version should work)
resource eventHubNamespace 'Microsoft.EventHub/namespaces@2021-11-01' = {
  name: eventHubNamespaceName
  location: resourceGroup().location
  sku: {
    name: 'Standard'
    tier: 'Standard'
    capacity: 1
  }
  properties: {
    zoneRedundant: true
  }
}

// Create an Event Hub inside the namespace
resource eventHub 'Microsoft.EventHub/namespaces/eventhubs@2021-11-01' = {
  parent: eventHubNamespace
  name: eventHubName
  properties: {
    messageRetentionInDays: 7
    partitionCount: 1
  }
}

// give the Azure Pipelines service principal permissions against the Event Hub

// Azure Event Hubs Data Owner built-in role
var roleDefinitionAzureEventHubsDataOwner = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'f526a384-b230-433a-b45c-95f59c4a2dec')

resource integrationTestEventHubReceiverNamespaceRoleAssignment 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
  name: guid(principalId, eventHub.id, roleDefinitionAzureEventHubsDataOwner)
  scope: eventHubNamespace
  properties: {
    roleDefinitionId: roleDefinitionAzureEventHubsDataOwner
    principalId: principalId
  }
}

Do note that our Bicep template takes the service principal id as a parameter. We're going to supply this later from our Azure Pipeline.
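For local experimentation outside the pipeline, the template can also be deployed directly with the Azure CLI; a sketch, assuming a resource group named johnnyreilly and substituting your own service principal's application id:

```shell
# resolve the object id of the service principal we want to permission
PRINCIPAL_ID=$(az ad sp show --id "<your-service-principal-app-id>" --query objectId -o tsv)

# deploy the Bicep template, supplying the principal id as a parameter
az deployment group create \
  --resource-group johnnyreilly \
  --template-file infra/main.bicep \
  --parameters \
      eventHubNamespaceName=evhns-demo \
      eventHubName=evh-demo \
      principalId="$PRINCIPAL_ID"
```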

Our test

We're now going to write a dotnet integration test which will make use of the infrastructure deployed by our Bicep template. Let's create a new test project:

mkdir src
cd src
dotnet new xunit -o IntegrationTests
cd IntegrationTests
dotnet add package Azure.Identity
dotnet add package Azure.Messaging.EventHubs
dotnet add package FluentAssertions
dotnet add package Microsoft.Extensions.Configuration.EnvironmentVariables
dotnet add package Newtonsoft.Json

We'll create a test file called EventHubTest.cs with these contents:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Consumer;
using Azure.Messaging.EventHubs.Producer;
using FluentAssertions;
using Microsoft.Extensions.Configuration;
using Newtonsoft.Json;
using Xunit;
using Xunit.Abstractions;

namespace IntegrationTests
{
    public record EchoMessage(string Id, string Message, DateTime Timestamp);

    public class EventHubTest
    {
        private readonly ITestOutputHelper _output;

        public EventHubTest(ITestOutputHelper output)
        {
            _output = output;
        }

        [Fact]
        public async Task Can_post_message_to_event_hub_and_read_it_back()
        {
            // ARRANGE
            var configuration = new ConfigurationBuilder()
                .AddEnvironmentVariables()
                .Build();

            // populated by variables specified in the Azure Pipeline
            var eventhubNamespaceName = configuration["EVENTHUBNAMESPACENAME"];
            eventhubNamespaceName.Should().NotBeNull();
            var eventhubName = configuration["EVENTHUBNAME"];
            eventhubName.Should().NotBeNull();
            var tenantId = configuration["TENANTID"];
            tenantId.Should().NotBeNull();

            // populated as a consequence of the addSpnToEnvironment in the azure-pipelines.yml
            var servicePrincipalId = configuration["SERVICEPRINCIPALID"];
            servicePrincipalId.Should().NotBeNull();
            var servicePrincipalKey = configuration["SERVICEPRINCIPALKEY"];
            servicePrincipalKey.Should().NotBeNull();

            var fullyQualifiedNamespace = $"{eventhubNamespaceName}.servicebus.windows.net";

            var clientCredential = new ClientSecretCredential(tenantId, servicePrincipalId, servicePrincipalKey);
            var eventHubClient = new EventHubProducerClient(
                fullyQualifiedNamespace: fullyQualifiedNamespace,
                eventHubName: eventhubName,
                credential: clientCredential
            );
            var ourGuid = Guid.NewGuid().ToString();
            var now = DateTime.UtcNow;
            var sentEchoMessage = new EchoMessage(Id: ourGuid, Message: "Test message", Timestamp: now);
            var sentEventData = new EventData(
                Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(sentEchoMessage))
            );

            // ACT
            await eventHubClient.SendAsync(new List<EventData> { sentEventData }, CancellationToken.None);

            var eventHubConsumerClient = new EventHubConsumerClient(
                consumerGroup: EventHubConsumerClient.DefaultConsumerGroupName,
                fullyQualifiedNamespace: fullyQualifiedNamespace,
                eventHubName: eventhubName,
                credential: clientCredential
            );

            List<PartitionEvent> partitionEvents = new();
            await foreach (var partitionEvent in eventHubConsumerClient.ReadEventsAsync(new ReadEventOptions
            {
                MaximumWaitTime = TimeSpan.FromSeconds(10)
            }))
            {
                if (partitionEvent.Data == null) break;
                _output.WriteLine(Encoding.UTF8.GetString(partitionEvent.Data.EventBody.ToArray()));
                partitionEvents.Add(partitionEvent);
            }

            // ASSERT
            partitionEvents.Count.Should().BeGreaterOrEqualTo(1);
            var firstOne = partitionEvents.FirstOrDefault(evnt =>
                ExtractTypeFromEventBody<EchoMessage>(evnt, _output)?.Id == ourGuid
            );
            var receivedEchoMessage = ExtractTypeFromEventBody<EchoMessage>(firstOne, _output);
            receivedEchoMessage.Should().BeEquivalentTo(sentEchoMessage, because: "the event body should be the same one posted to the message queue");
        }

        private static T ExtractTypeFromEventBody<T>(PartitionEvent evnt, ITestOutputHelper _output)
        {
            try
            {
                return JsonConvert.DeserializeObject<T>(Encoding.UTF8.GetString(evnt.Data.EventBody.ToArray()));
            }
            catch (JsonException)
            {
                _output.WriteLine("[" + Encoding.UTF8.GetString(evnt.Data.EventBody.ToArray()) + "] is probably not JSON");
                return default(T);
            }
        }
    }
}

Let's talk through what happens in the test above:

  1. We read in Event Hub connection configuration for the test from environment variables. (These will be supplied by an Azure Pipeline that we will create shortly.)
  2. We post a message to the Event Hub.
  3. We read a message back from the Event Hub.
  4. We confirm that the message we read back matches the one we posted.

Now that we have our test, we want to be able to execute it. For that we need an Azure Pipeline!

Azure Pipeline

We're going to add an azure-pipelines.yml file which Azure DevOps can use to power a pipeline:

variables:
  - name: eventHubNamespaceName
    value: evhns-demo
  - name: eventHubName
    value: evh-demo

pool:
  vmImage: ubuntu-latest

steps:
  - task: AzureCLI@2
    displayName: Get Service Principal Id
    inputs:
      azureSubscription: $(serviceConnection)
      scriptType: bash
      scriptLocation: inlineScript
      addSpnToEnvironment: true
      inlineScript: |
        PRINCIPAL_ID=$(az ad sp show --id $servicePrincipalId --query objectId -o tsv)
        echo "##vso[task.setvariable variable=PIPELINE_PRINCIPAL_ID;]$PRINCIPAL_ID"

  - bash: az bicep build --file infra/main.bicep
    displayName: 'Compile Bicep to ARM'

  - task: AzureResourceManagerTemplateDeployment@3
    name: DeployEventHubInfra
    displayName: Deploy Event Hub infra
    inputs:
      deploymentScope: Resource Group
      azureResourceManagerConnection: $(serviceConnection)
      subscriptionId: $(subscriptionId)
      action: Create Or Update Resource Group
      resourceGroupName: $(azureResourceGroup)
      location: $(location)
      templateLocation: Linked artifact
      csmFile: 'infra/main.json' # created by az bicep build
      overrideParameters: >-
        -eventHubNamespaceName $(eventHubNamespaceName)
        -eventHubName $(eventHubName)
        -principalId $(PIPELINE_PRINCIPAL_ID)
      deploymentMode: Incremental

  - task: UseDotNet@2
    displayName: 'Install .NET SDK 5.0.x'
    inputs:
      packageType: 'sdk'
      version: 5.0.x

  - task: AzureCLI@2
    displayName: dotnet integration test
    inputs:
      azureSubscription: $(serviceConnection)
      scriptType: pscore
      scriptLocation: inlineScript
      addSpnToEnvironment: true # allows access to service principal details in script
      inlineScript: |
        cd $(Build.SourcesDirectory)/src/IntegrationTests
        dotnet test

When the pipeline is run, it does the following:

  1. Gets the service principal id from the service connection.
  2. Compiles our Bicep into an ARM template.
  3. Deploys the compiled ARM template to Azure.
  4. Installs the dotnet SDK.
  5. Runs our dotnet test via the Azure CLI task, which allows the test to access the service principal details.

We'll create a pipeline in Azure DevOps pointing to this file, and we'll also create the variables that it depends upon:

  • azureResourceGroup - the name of your resource group in Azure where the app will be deployed
  • location - where your app is deployed, e.g. northeurope
  • serviceConnection - the name of your AzureRM service connection in Azure DevOps
  • subscriptionId - your Azure subscription id from the Azure Portal
  • tenantId - the Azure tenant id from the Azure Portal

Running the pipeline

Now we're ready to run our pipeline:

screenshot of pipeline running successfully

Here we can see that the pipeline runs and the test passes. That means we've successfully provisioned the Event Hub and permissioned our pipeline to access it using Azure RBAC role assignments. We then wrote a test which used the pipeline credentials to interact with the Event Hub. To see the repo that demonstrates this, look here.

Just to reiterate: we've demonstrated this approach using Event Hubs. This is a generally useful approach which can be applied to any Azure resources that support Azure RBAC Role Assignments.

Thanks to Jamie McCrindle for helping out with permissioning the service connection / service principal. His post on rotating AZURE_CREDENTIALS in GitHub with Terraform provides useful background for those who would like to do similar permissioning using Terraform.

· 7 min read

This post is about Azure's role assignments and ARM templates. Role assignments can be thought of as "permissions for Azure".

If you're deploying to Azure, there's a good chance you're using ARM templates to do so. Once you've got past "Hello World", you'll probably find yourself in a situation when you're deploying multiple types of resource to make your solution. For instance, you may be deploying an App Service alongside Key Vault and Storage.

One of the hardest things when it comes to deploying software and having it work is permissions. Without adequate permissions configured, the most beautiful code can do nothing. Incidentally, this is a good thing. We're deploying to the web; many people are there, not all good. As a different kind of web-head once said:

Spider-man saying with great power, comes great responsibility

Azure has great power and suggests you use it wisely.

Access management for cloud resources is critical for any organization that uses the cloud. Azure role-based access control (Azure RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.

Designating groups or individual roles responsible for specific functions in Azure helps avoid confusion that can lead to human and automation errors that create security risks. Restricting access based on the need to know and least privilege security principles is imperative for organizations that want to enforce security policies for data access.

This is good advice. With that in mind, how can we ensure that the different resources we're deploying to Azure can talk to one another?

Role (up for your) assignments

The answer is roles. There are a number of roles in Azure that can be assigned to users, groups, service principals and managed identities. In our own case we're using a managed identity for our resources. What we can do is use "role assignments" to give our managed identity access to given resources. Arturo Lucatero gives a great short explanation of this:

Whilst this explanation is delightfully simple, the actual implementation when it comes to ARM templates is a little more involved. Because now it's time to talk "magic" GUIDs. Consider the following truncated ARM template, which gives our managed identity (and hence our App Service which uses this identity) access to Key Vault and Storage:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  // ...
  "variables": {
    // ...
    "managedIdentity": "[concat('mi-', parameters('applicationName'), '-', parameters('environment'), '-', '001')]",
    "appInsightsName": "[concat('appi-', parameters('applicationName'), '-', parameters('environment'), '-', '001')]",
    "keyVaultName": "[concat('kv-', parameters('applicationName'), '-', parameters('environment'), '-', '001')]",
    "storageAccountName": "[concat('st', parameters('applicationName'), parameters('environment'), '001')]",
    "storageBlobDataContributor": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'ba92f5b4-2d11-453d-a403-e96b0029c9fe')]",
    "keyVaultSecretsOfficer": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'b86a8fe4-44ce-4948-aee5-eccb2c155cd7')]",
    "keyVaultCryptoOfficer": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '14b46e9e-c2b7-41b4-b07b-48a6ebf60603')]",
    "uniqueRoleGuidKeyVaultSecretsOfficer": "[guid(resourceId('Microsoft.KeyVault/vaults', variables('keyVaultName')), variables('keyVaultSecretsOfficer'), resourceId('Microsoft.KeyVault/vaults', variables('keyVaultName')))]",
    "uniqueRoleGuidKeyVaultCryptoOfficer": "[guid(resourceId('Microsoft.KeyVault/vaults', variables('keyVaultName')), variables('keyVaultCryptoOfficer'), resourceId('Microsoft.KeyVault/vaults', variables('keyVaultName')))]",
    "uniqueRoleGuidStorageAccount": "[guid(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), variables('storageBlobDataContributor'), resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')))]"
  },
  "resources": [
    {
      "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
      "name": "[variables('managedIdentity')]",
      "apiVersion": "2018-11-30",
      "location": "[parameters('location')]"
    },
    // ...
    {
      "type": "Microsoft.Storage/storageAccounts/providers/roleAssignments",
      "apiVersion": "2020-04-01-preview",
      "name": "[concat(variables('storageAccountName'), '/Microsoft.Authorization/', variables('uniqueRoleGuidStorageAccount'))]",
      "dependsOn": [
        "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('managedIdentity'))]"
      ],
      "properties": {
        "roleDefinitionId": "[variables('storageBlobDataContributor')]",
        "principalId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('managedIdentity')), '2018-11-30').principalId]",
        "scope": "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
        "principalType": "ServicePrincipal"
      }
    },
    {
      "type": "Microsoft.KeyVault/vaults/providers/roleAssignments",
      "apiVersion": "2018-01-01-preview",
      "name": "[concat(variables('keyVaultName'), '/Microsoft.Authorization/', variables('uniqueRoleGuidKeyVaultSecretsOfficer'))]",
      "dependsOn": [
        "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('managedIdentity'))]"
      ],
      "properties": {
        "roleDefinitionId": "[variables('keyVaultSecretsOfficer')]",
        "principalId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('managedIdentity')), '2018-11-30').principalId]",
        "scope": "[resourceId('Microsoft.KeyVault/vaults', variables('keyVaultName'))]",
        "principalType": "ServicePrincipal"
      }
    },
    {
      "type": "Microsoft.KeyVault/vaults/providers/roleAssignments",
      "apiVersion": "2018-01-01-preview",
      "name": "[concat(variables('keyVaultName'), '/Microsoft.Authorization/', variables('uniqueRoleGuidKeyVaultCryptoOfficer'))]",
      "dependsOn": [
        "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('managedIdentity'))]"
      ],
      "properties": {
        "roleDefinitionId": "[variables('keyVaultCryptoOfficer')]",
        "principalId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('managedIdentity')), '2018-11-30').principalId]",
        "scope": "[resourceId('Microsoft.KeyVault/vaults', variables('keyVaultName'))]",
        "principalType": "ServicePrincipal"
      }
    }
  ]
}

Let's take a look at these three variables:

"storageBlobDataContributor": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'ba92f5b4-2d11-453d-a403-e96b0029c9fe')]",
"keyVaultSecretsOfficer": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'b86a8fe4-44ce-4948-aee5-eccb2c155cd7')]",
"keyVaultCryptoOfficer": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '14b46e9e-c2b7-41b4-b07b-48a6ebf60603')]",

The three variables above contain the subscription resource ids for the roles Storage Blob Data Contributor, Key Vault Secrets Officer and Key Vault Crypto Officer. The first question on your mind is likely: "what is ba92f5b4-2d11-453d-a403-e96b0029c9fe and where does it come from?" Great question! Each of these GUIDs represents a built-in role in Azure RBAC; ba92f5b4-2d11-453d-a403-e96b0029c9fe is the Storage Blob Data Contributor role.

How can I look these up? Well, there are two ways: there's an article which documents them here, or you can crack open the Cloud Shell and look up a role by GUID like so:

Get-AzRoleDefinition | ? {$_.id -eq "ba92f5b4-2d11-453d-a403-e96b0029c9fe" }

Name : Storage Blob Data Contributor
Id : ba92f5b4-2d11-453d-a403-e96b0029c9fe
IsCustom : False
Description : Allows for read, write and delete access to Azure Storage blob containers and data
Actions : {Microsoft.Storage/storageAccounts/blobServices/containers/delete, Microsoft.Storage/storageAccounts/blobServices/containers/read,
Microsoft.Storage/storageAccounts/blobServices/containers/write, Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action}
NotActions : {}
DataActions : {Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete, Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read,
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write, Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action…}
NotDataActions : {}
AssignableScopes : {/}

Or by name like so:

Get-AzRoleDefinition | ? {$_.name -like "*Crypto Officer*" }

Name : Key Vault Crypto Officer
Id : 14b46e9e-c2b7-41b4-b07b-48a6ebf60603
IsCustom : False
Description : Perform any action on the keys of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model.
Actions : {Microsoft.Authorization/*/read, Microsoft.Insights/alertRules/*, Microsoft.Resources/deployments/*, Microsoft.Resources/subscriptions/resourceGroups/read…}
NotActions : {}
DataActions : {Microsoft.KeyVault/vaults/keys/*}
NotDataActions : {}
AssignableScopes : {/}

As you can see, the Actions section of the output above (and in even more detail on the linked article) provides information about what the different roles can do. So if you're looking to enable one Azure resource to talk to another, you should be able to refer to these to identify a role that you might want to use.
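The same lookups can be done with the Azure CLI if you'd rather not use PowerShell; the JMESPath queries below filter the built-in role definitions in the current subscription:

```shell
# find the display name for a role definition GUID
az role definition list --query "[?name=='ba92f5b4-2d11-453d-a403-e96b0029c9fe'].roleName" -o tsv

# find the GUID for a role display name
az role definition list --query "[?roleName=='Key Vault Crypto Officer'].name" -o tsv
```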

Creating a role assignment

So now we understand how to identify the roles in question, let's take the final leap and look at assigning those roles to our managed identity. For each role assignment, you'll need a roleAssignments resource defined like this:

{
  "type": "Microsoft.KeyVault/vaults/providers/roleAssignments",
  "apiVersion": "2018-01-01-preview",
  "name": "[concat(variables('keyVaultName'), '/Microsoft.Authorization/', variables('uniqueRoleGuidKeyVaultCryptoOfficer'))]",
  "dependsOn": [
    "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('managedIdentity'))]"
  ],
  "properties": {
    "roleDefinitionId": "[variables('keyVaultCryptoOfficer')]",
    "principalId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('managedIdentity')), '2018-11-30').principalId]",
    "scope": "[resourceId('Microsoft.KeyVault/vaults', variables('keyVaultName'))]",
    "principalType": "ServicePrincipal"
  }
}

Let's go through the above, significant property by significant property (it's also worth checking the official reference here):

  • type - the type of role assignment we want to create, for a key vault it's "Microsoft.KeyVault/vaults/providers/roleAssignments", for storage it's "Microsoft.Storage/storageAccounts/providers/roleAssignments". The pattern is that it's the resource type, followed by "/providers/roleAssignments".
  • dependsOn - before we can create a role assignment, we need the service principal we desire to permission (in our case a managed identity) to exist
  • properties.roleDefinitionId - the role that we're assigning, provided as an id. So for this example it's the keyVaultCryptoOfficer variable, which was earlier defined as [subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '14b46e9e-c2b7-41b4-b07b-48a6ebf60603')]. (Note the use of the GUID)
  • properties.principalId - the id of the principal we're adding permissions for. In our case this is a managed identity (a type of service principal).
  • properties.scope - we're modifying another resource; our key vault isn't defined in this ARM template and we want to specify the resource we're granting permissions to.
  • properties.principalType - the type of principal that we're creating an assignment for; in our case this is "ServicePrincipal" - our managed identity.
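If you want to sanity-check the principalId that the reference() expression resolves to, a deployed user-assigned managed identity can be inspected from the Azure CLI; a sketch with hypothetical names following the template's mi-applicationName-environment-001 convention:

```shell
# show the principal id of a user-assigned managed identity
az identity show \
  --name "mi-myapp-dev-001" \
  --resource-group "<your-resource-group>" \
  --query principalId -o tsv
```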

There is an alternate approach that you can use where the type is "Microsoft.Authorization/roleAssignments". Whilst this also works, it displayed errors in the Azure tooling for VS Code. As such, we've opted not to use that approach in our ARM templates.

Many thanks to the awesome John McCormick who wrangled permissions with me until we bent Azure RBAC to our will.