
8 posts tagged with "Azure Pipelines"


· 9 min read

How can we deploy resources to Azure, and then run an integration test through them in the context of an Azure Pipeline? This post will show how to do this by permissioning our Azure Pipeline to access these resources using Azure RBAC role assignments. It will also demonstrate a dotnet test that runs in the context of the pipeline and makes use of those role assignments.

title image reading "Permissioning Azure Pipelines with Bicep and Role Assignments" and some Azure logos

We're following this approach as an alternative to exporting connection strings, as those can be viewed in the Azure Portal, which may be a security issue if you have many people who are able to access the portal and view deployment outputs.

We're going to demonstrate this approach using Event Hubs. It's worth calling out that this is a generally useful approach which can be applied to any Azure resources that support Azure RBAC Role Assignments. So wherever in this post you read "Event Hubs", imagine substituting other Azure resources you're working with.

The post will do the following:

  • Add Event Hubs to our Azure subscription
  • Permission our service connection / service principal
  • Deploy to Azure with Bicep
  • Write an integration test
  • Write a pipeline to bring it all together

Add Event Hubs to your subscription

First of all, we may need to add Event Hubs to our Azure subscription.

Without this in place, we may encounter errors of the type:

##[error]MissingSubscriptionRegistration: The subscription is not registered to use namespace 'Microsoft.EventHub'. See https://aka.ms/rps-not-found for how to register subscriptions.

We do this by going to "Resource Providers" in the Azure Portal and registering the resources we need. Lots are registered by default, but not all.

Screenshot of the Azure Portal, subscriptions -> resource providers section, showing that Event Hubs have been registered
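If you prefer the command line, the Azure CLI can perform the same registration; a minimal sketch, assuming you're logged in with az login and pointed at the right subscription:

# register the Event Hubs resource provider with the current subscription
az provider register --namespace Microsoft.EventHub

# registration is asynchronous; poll until this reports "Registered"
az provider show --namespace Microsoft.EventHub --query registrationState -o tsv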

Permission our service connection / service principal

To run pipelines that interact with Azure, we generally need an Azure Resource Manager service connection set up in Azure DevOps. Once that exists, we also need to give it a role assignment that allows it to create role assignments of its own when pipelines are running.

Without this in place, we may encounter errors of the type:

##[error]The template deployment failed with error: 'Authorization failed for template resource '{GUID-THE-FIRST}' of type 'Microsoft.Authorization/roleAssignments'. The client '{GUID-THE-SECOND}' with object id '{GUID-THE-SECOND}' does not have permission to perform action 'Microsoft.Authorization/roleAssignments/write' at scope '/subscriptions/***/resourceGroups/johnnyreilly/providers/Microsoft.EventHub/namespaces/evhns-demo/providers/Microsoft.Authorization/roleAssignments/{GUID-THE-FIRST}'.'.

Essentially, we want to be able to run pipelines that say "hey Azure, we want to give permissions to our service connection". We are doing this with the selfsame service connection, so (chicken and egg) we first need to give it permission to issue those commands in future. This is a little confusing, but let's role with it. (Pun most definitely intended. 😉)

To grant that permission / add that role assignment, we go to the service connection in Azure DevOps:

Screenshot of the service connection in Azure DevOps

We can see there are two links here; first we'll click on "Manage Service Principal", which will take us to the service principal in the Azure Portal:

Screenshot of the service principal in the Azure Portal

Take note of the display name of the service principal; we'll need it as we click on the "Manage service connection roles" link, which will take us to the resource group's IAM page in the Azure Portal:

Screenshot of the resource groups IAM page in the Azure Portal

Here we can click on "Add role assignment", select "Owner":

Screenshot of the add role assignment IAM page in the Azure Portal

Then when selecting members we should be able to look up the service principal to assign it:

Screenshot of the add role assignment select member IAM page in the Azure Portal

We now have a service connection which we should be able to use for granting permissions / role assignments, which is what we need.
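Incidentally, if you'd rather script this step than click through the portal, something like the following Azure CLI sketch should achieve the same role assignment; the object id and scope values are placeholders you'd substitute:

# assign the Owner role to the service principal at resource group scope
az role assignment create \
  --assignee "<object-id-of-the-service-principal>" \
  --role Owner \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"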

Event Hub and Role Assignment with Bicep

Next we want a Bicep file that will, when run, provision an Event Hub and a role assignment which will allow our Azure Pipeline (via its service connection) to interact with it.

@description('Name of the Event Hub namespace')
param eventHubNamespaceName string

@description('Name of the Event Hub')
param eventHubName string

@description('The service principal')
param principalId string

// Create an Event Hub namespace
resource eventHubNamespace 'Microsoft.EventHub/namespaces@2021-01-01-preview' = {
  name: eventHubNamespaceName
  location: resourceGroup().location
  sku: {
    name: 'Standard'
    tier: 'Standard'
    capacity: 1
  }
  properties: {
    zoneRedundant: true
  }
}

// Create an Event Hub inside the namespace
resource eventHub 'Microsoft.EventHub/namespaces/eventhubs@2021-01-01-preview' = {
  parent: eventHubNamespace
  name: eventHubName
  properties: {
    messageRetentionInDays: 7
    partitionCount: 1
  }
}

// give Azure Pipelines Service Principal permissions against the Event Hub

var roleDefinitionAzureEventHubsDataOwner = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'f526a384-b230-433a-b45c-95f59c4a2dec')

resource integrationTestEventHubReceiverNamespaceRoleAssignment 'Microsoft.Authorization/roleAssignments@2021-04-01-preview' = {
  name: guid(principalId, eventHub.id, roleDefinitionAzureEventHubsDataOwner)
  scope: eventHubNamespace
  properties: {
    roleDefinitionId: roleDefinitionAzureEventHubsDataOwner
    principalId: principalId
  }
}

Do note that our Bicep template takes the service principal id as a parameter. We're going to supply this later from our Azure Pipeline.
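Before wiring this into a pipeline, it can be handy to exercise the template directly from your own machine. A hedged sketch, assuming the resource group from earlier and a principal object id you've looked up yourself:

# deploy the Bicep template by hand; parameter values here are illustrative
az deployment group create \
  --resource-group johnnyreilly \
  --template-file infra/main.bicep \
  --parameters eventHubNamespaceName=evhns-demo \
               eventHubName=evh-demo \
               principalId="<object-id-of-the-service-principal>"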

Our test

We're now going to write a dotnet integration test which will make use of the infrastructure deployed by our Bicep template. Let's create a new test project:

mkdir src
cd src
dotnet new xunit -o IntegrationTests
cd IntegrationTests
dotnet add package Azure.Identity
dotnet add package Azure.Messaging.EventHubs
dotnet add package FluentAssertions
dotnet add package Microsoft.Extensions.Configuration.EnvironmentVariables

We'll create a test file called EventHubTest.cs with these contents:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Consumer;
using Azure.Messaging.EventHubs.Producer;
using FluentAssertions;
using Microsoft.Extensions.Configuration;
using Newtonsoft.Json;
using Xunit;
using Xunit.Abstractions;

namespace IntegrationTests
{
    public record EchoMessage(string Id, string Message, DateTime Timestamp);

    public class EventHubTest
    {
        private readonly ITestOutputHelper _output;

        public EventHubTest(ITestOutputHelper output)
        {
            _output = output;
        }

        [Fact]
        public async Task Can_post_message_to_event_hub_and_read_it_back()
        {
            // ARRANGE
            var configuration = new ConfigurationBuilder()
                .AddEnvironmentVariables()
                .Build();

            // populated by variables specified in the Azure Pipeline
            var eventhubNamespaceName = configuration["EVENTHUBNAMESPACENAME"];
            eventhubNamespaceName.Should().NotBeNull();
            var eventhubName = configuration["EVENTHUBNAME"];
            eventhubName.Should().NotBeNull();
            var tenantId = configuration["TENANTID"];
            tenantId.Should().NotBeNull();

            // populated as a consequence of the addSpnToEnvironment in the azure-pipelines.yml
            var servicePrincipalId = configuration["SERVICEPRINCIPALID"];
            servicePrincipalId.Should().NotBeNull();
            var servicePrincipalKey = configuration["SERVICEPRINCIPALKEY"];
            servicePrincipalKey.Should().NotBeNull();

            var fullyQualifiedNamespace = $"{eventhubNamespaceName}.servicebus.windows.net";

            var clientCredential = new ClientSecretCredential(tenantId, servicePrincipalId, servicePrincipalKey);
            var eventHubClient = new EventHubProducerClient(
                fullyQualifiedNamespace: fullyQualifiedNamespace,
                eventHubName: eventhubName,
                credential: clientCredential
            );
            var ourGuid = Guid.NewGuid().ToString();
            var now = DateTime.UtcNow;
            var sentEchoMessage = new EchoMessage(Id: ourGuid, Message: "Test message", Timestamp: now);
            var sentEventData = new EventData(
                Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(sentEchoMessage))
            );

            // ACT
            await eventHubClient.SendAsync(new List<EventData> { sentEventData }, CancellationToken.None);

            var eventHubConsumerClient = new EventHubConsumerClient(
                consumerGroup: EventHubConsumerClient.DefaultConsumerGroupName,
                fullyQualifiedNamespace: fullyQualifiedNamespace,
                eventHubName: eventhubName,
                credential: clientCredential
            );

            List<PartitionEvent> partitionEvents = new();
            await foreach (var partitionEvent in eventHubConsumerClient.ReadEventsAsync(new ReadEventOptions
            {
                MaximumWaitTime = TimeSpan.FromSeconds(10)
            }))
            {
                if (partitionEvent.Data == null) break;
                _output.WriteLine(Encoding.UTF8.GetString(partitionEvent.Data.EventBody.ToArray()));
                partitionEvents.Add(partitionEvent);
            }

            // ASSERT
            partitionEvents.Count.Should().BeGreaterOrEqualTo(1);
            var firstOne = partitionEvents.FirstOrDefault(evnt =>
                ExtractTypeFromEventBody<EchoMessage>(evnt, _output)?.Id == ourGuid
            );
            var receivedEchoMessage = ExtractTypeFromEventBody<EchoMessage>(firstOne, _output);
            receivedEchoMessage.Should().BeEquivalentTo(sentEchoMessage, because: "the event body should be the same one posted to the message queue");
        }

        private static T ExtractTypeFromEventBody<T>(PartitionEvent evnt, ITestOutputHelper _output)
        {
            try
            {
                return JsonConvert.DeserializeObject<T>(Encoding.UTF8.GetString(evnt.Data.EventBody.ToArray()));
            }
            catch (JsonException)
            {
                _output.WriteLine("[" + Encoding.UTF8.GetString(evnt.Data.EventBody.ToArray()) + "] is probably not JSON");
                return default(T);
            }
        }
    }
}

Let's talk through what happens in the test above:

  1. We read in Event Hub connection configuration for the test from environment variables. (These will be supplied by an Azure Pipeline that we will create shortly.)
  2. We post a message to the Event Hub.
  3. We read a message back from the Event Hub.
  4. We confirm that the message we read back matches the one we posted.
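Incidentally, nothing stops you running the test outside the pipeline too; supply the same environment variables by hand and dotnet test behaves identically. A sketch, with placeholder values, assuming a service principal that already has the role assignment:

# the test reads its configuration from environment variables
export EVENTHUBNAMESPACENAME=evhns-demo
export EVENTHUBNAME=evh-demo
export TENANTID="<azure-tenant-id>"
export SERVICEPRINCIPALID="<app-id-of-a-principal-with-the-role-assignment>"
export SERVICEPRINCIPALKEY="<client-secret>"

cd src/IntegrationTests
dotnet test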

Now that we have our test, we want to be able to execute it. For that we need an Azure Pipeline!

Azure Pipeline

We're going to add an azure-pipelines.yml file which Azure DevOps can use to power a pipeline:

variables:
  - name: eventHubNamespaceName
    value: evhns-demo
  - name: eventHubName
    value: evh-demo

pool:
  vmImage: ubuntu-latest

steps:
  - task: AzureCLI@2
    displayName: Get Service Principal Id
    inputs:
      azureSubscription: $(serviceConnection)
      scriptType: bash
      scriptLocation: inlineScript
      addSpnToEnvironment: true
      inlineScript: |
        PRINCIPAL_ID=$(az ad sp show --id $servicePrincipalId --query objectId -o tsv)
        echo "##vso[task.setvariable variable=PIPELINE_PRINCIPAL_ID;]$PRINCIPAL_ID"

  - bash: az bicep build --file infra/main.bicep
    displayName: 'Compile Bicep to ARM'

  - task: AzureResourceManagerTemplateDeployment@3
    name: DeployEventHubInfra
    displayName: Deploy Event Hub infra
    inputs:
      deploymentScope: Resource Group
      azureResourceManagerConnection: $(serviceConnection)
      subscriptionId: $(subscriptionId)
      action: Create Or Update Resource Group
      resourceGroupName: $(azureResourceGroup)
      location: $(location)
      templateLocation: Linked artifact
      csmFile: 'infra/main.json' # created by bash script
      overrideParameters: >-
        -eventHubNamespaceName $(eventHubNamespaceName)
        -eventHubName $(eventHubName)
        -principalId $(PIPELINE_PRINCIPAL_ID)
      deploymentMode: Incremental

  - task: UseDotNet@2
    displayName: 'Install .NET SDK 5.0.x'
    inputs:
      packageType: 'sdk'
      version: 5.0.x

  - task: AzureCLI@2
    displayName: dotnet integration test
    inputs:
      azureSubscription: $(serviceConnection)
      scriptType: pscore
      scriptLocation: inlineScript
      addSpnToEnvironment: true # allows access to service principal details in script
      inlineScript: |
        cd $(Build.SourcesDirectory)/src/IntegrationTests
        dotnet test

When the pipeline is run, it does the following:

  1. Gets the service principal id from the service connection.
  2. Compiles our Bicep into an ARM template
  3. Deploys the compiled ARM template to Azure
  4. Installs the dotnet SDK
  5. Uses the Azure CLI task which allows us to access service principal details in the pipeline to run our dotnet test.

We'll create a pipeline in Azure DevOps pointing to this file, and we'll also create the variables that it depends upon:

  • azureResourceGroup - the name of your resource group in Azure where the app will be deployed
  • location - where your app is deployed, eg northeurope
  • serviceConnection - the name of your AzureRM service connection in Azure DevOps
  • subscriptionId - your Azure subscription id from the Azure Portal
  • tenantId - the Azure tenant id from the Azure Portal
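If you have the Azure DevOps CLI extension installed, these variables can be scripted rather than clicked into existence. A hedged sketch; organisation, project and pipeline names are placeholders:

# requires: az extension add --name azure-devops
az pipelines variable create \
  --organization https://dev.azure.com/organisation-name \
  --project project-name \
  --pipeline-name "<your-pipeline-name>" \
  --name serviceConnection \
  --value "<your-service-connection-name>"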

Running the pipeline

Now we're ready to run our pipeline:

screenshot of pipeline running successfully

Here we can see that the pipeline runs and the test passes. That means we've successfully provisioned the Event Hub and permissioned our pipeline to access it using Azure RBAC role assignments. We then wrote a test which used the pipeline credentials to interact with the Event Hub. To see the repo that demonstrates this, look here.

Just to reiterate: we've demonstrated this approach using Event Hubs. This is a generally useful approach which can be applied to any Azure resources that support Azure RBAC Role Assignments.

Thanks to Jamie McCrindle for helping out with permissioning the service connection / service principal. His post on rotating AZURE_CREDENTIALS in GitHub with Terraform provides useful background for those who would like to do similar permissioning using Terraform.

· 5 min read

This post demonstrates how to deploy Azure Static Web Apps using Bicep and Azure DevOps. It includes a few workarounds for the "Provider is invalid. Cannot change the Provider. Please detach your static site first if you wish to use to another deployment provider." issue.

title image reading "Publish Azure Static Web Apps with Bicep and Azure DevOps" and some Azure logos

Bicep template

The first thing we're going to do is create a folder where our Bicep file for deploying our Azure Static Web App will live:

mkdir infra/static-web-app -p

Then we'll create a main.bicep file:

param repositoryUrl string
param repositoryBranch string

param location string = 'westeurope'
param skuName string = 'Free'
param skuTier string = 'Free'

param appName string

resource staticWebApp 'Microsoft.Web/staticSites@2021-01-15' = {
  name: appName
  location: location
  sku: {
    name: skuName
    tier: skuTier
  }
  properties: {
    // The provider, repositoryUrl and branch fields are required for successive deployments to succeed
    // for more details see: https://github.com/Azure/static-web-apps/issues/516
    provider: 'DevOps'
    repositoryUrl: repositoryUrl
    branch: repositoryBranch
    buildProperties: {
      skipGithubActionWorkflowGeneration: true
    }
  }
}

output deployment_token string = listSecrets(staticWebApp.id, staticWebApp.apiVersion).properties.apiKey

There are some things to draw attention to in the code above:

  1. The provider, repositoryUrl and branch fields are required for successive deployments to succeed. In our case we're deploying via Azure DevOps and so our provider is 'DevOps'. For more details, look at this issue.
  2. We're creating a deployment_token which we'll need in order that we can deploy into the Azure Static Web App resource.
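As an aside, if you ever need that deployment token outside of the Bicep outputs (say, for local troubleshooting), the Azure CLI can fetch it; a sketch with a placeholder app name:

# retrieve the Static Web App deployment token directly
az staticwebapp secrets list \
  --name "<your-static-web-app-name>" \
  --query "properties.apiKey" -o tsv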

Static Web App

In order that we can test out Azure Static Web Apps, what we need is a static web app. You could use pretty much anything here; we're going to use Docusaurus. We'll execute this single command:

npx @docusaurus/init@latest init static-web-app classic

Which will scaffold a Docusaurus site in a folder named static-web-app. We don't need to change it any further; let's just see if we can deploy it.

Azure Pipeline

We're going to add an azure-pipelines.yml file which Azure DevOps can use to power a pipeline:

trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - checkout: self
    submodules: true

  - bash: az bicep build --file infra/static-web-app/main.bicep
    displayName: 'Compile Bicep to ARM'

  - task: AzureResourceManagerTemplateDeployment@3
    name: DeployStaticWebAppInfra
    displayName: Deploy Static Web App infra
    inputs:
      deploymentScope: Resource Group
      azureResourceManagerConnection: $(serviceConnection)
      subscriptionId: $(subscriptionId)
      action: Create Or Update Resource Group
      resourceGroupName: $(azureResourceGroup)
      location: $(location)
      templateLocation: Linked artifact
      csmFile: 'infra/static-web-app/main.json' # created by bash script
      overrideParameters: >-
        -repositoryUrl $(repo)
        -repositoryBranch $(Build.SourceBranchName)
        -appName $(staticWebAppName)
      deploymentMode: Incremental
      deploymentOutputs: deploymentOutputs

  - task: PowerShell@2
    name: 'SetDeploymentOutputVariables'
    displayName: 'Set Deployment Output Variables'
    inputs:
      targetType: inline
      script: |
        $armOutputObj = '$(deploymentOutputs)' | ConvertFrom-Json
        $armOutputObj.PSObject.Properties | ForEach-Object {
          $keyname = $_.Name
          $value = $_.Value.value

          # Creates a standard pipeline variable
          Write-Output "##vso[task.setvariable variable=$keyName;]$value"

          # Creates an output variable
          Write-Output "##vso[task.setvariable variable=$keyName;issecret=true;isOutput=true]$value"

          # Display keys in pipeline
          Write-Output "output variable: $keyName"
        }
      pwsh: true

  - task: AzureStaticWebApp@0
    name: DeployStaticWebApp
    displayName: Deploy Static Web App
    inputs:
      app_location: 'static-web-app'
      # api_location: 'api'
      output_location: 'build'
      azure_static_web_apps_api_token: $(deployment_token) # captured from deploymentOutputs

When the pipeline is run, it does the following:

  1. Compiles our Bicep into an ARM template
  2. Deploys the compiled ARM template to Azure
  3. Captures the deployment outputs (essentially the deployment_token) and converts them into variables to use in the pipeline
  4. Deploys our Static Web App using the deployment_token

The pipeline depends upon a number of variables:

  • azureResourceGroup - the name of your resource group in Azure where the app will be deployed
  • location - where your app is deployed, eg northeurope
  • repo - the URL of your repository in Azure DevOps, eg https://dev.azure.com/johnnyreilly/_git/azure-static-web-apps
  • serviceConnection - the name of your AzureRM service connection in Azure DevOps
  • staticWebAppName - the name of your static web app, eg azure-static-web-apps-johnnyreilly
  • subscriptionId - your Azure subscription id from the Azure Portal

A successful pipeline looks something like this:

Screenshot of successfully running Azure Pipeline

What you might notice is that the AzureStaticWebApp task is itself installing and building our application. This is handled by Microsoft Oryx. The upshot of this is that we don't need to manually run npm install and npm run build ourselves; the AzureStaticWebApp task will take care of it for us.

Finally, let's see if we've deployed something successfully...

Screenshot of deployed Azure Static Web App

We have! It's worth noting that you'll likely want to give your Azure Static Web App a lovelier URL, and perhaps even put it behind Azure Front Door as well.

Provider is invalid workaround 2

Shane Neff was attempting to follow the instructions in this post and encountered issues. He shared his struggles with me as he encountered the "Provider is invalid. Cannot change the Provider. Please detach your static site first if you wish to use to another deployment provider." issue.

He was good enough to share his solution as well, which is inserting this task at the start of the pipeline (before the az bicep build step):

- task: AzureCLI@2
  inputs:
    azureSubscription: '<name of your service connection>'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az staticwebapp disconnect -n <name of your app>'

I haven't had the problems that Shane has had myself, but I wanted to share his fix for the people out there who are almost certainly bumping into this.

· 2 min read

Creating an Azure Pipeline using the Azure DevOps REST API is possible, but badly documented. This post goes through how to do this.

curling a pipeline

The documentation for creating an Azure Pipeline using the Azure DevOps API is somewhat lacking. However, it isn't actually too hard; you just need the recipe.

Here's a curl to make you a pipeline:

curl --user '':'PERSONAL_ACCESS_TOKEN' --header "Content-Type: application/json" --header "Accept: application/json" https://dev.azure.com/organisation-name/sandbox/_apis/pipelines?api-version=6.1-preview.1 -d @makepipeline.json

Looking at the above, there are two things you need:

  1. A personal access token. You can make one of those here: https://dev.azure.com/organisation-name/_usersSettings/tokens (where organisation-name is the name of your organisation)
  2. A makepipeline.json file, which contains the details of the pipeline you want to create:
{
  "folder": null,
  "name": "pipeline-made-by-api",
  "configuration": {
    "type": "yaml",
    "path": "/azure-pipelines.yml",
    "repository": {
      "id": "guid-of-repo-id",
      "name": "my-repo",
      "type": "azureReposGit"
    }
  }
}

Let's talk through the significant properties above:

  • folder - can be null if you'd like the pipeline to be created in the root of Pipelines; otherwise provide the folder name. Incidentally a null will be translated into a value of \\ which appears to be the magic value which represents the root.
  • name - your pipeline needs a name
  • path - this is the path to the yaml pipelines file in the repo. Note we're creating the pipeline itself here; what's actually in the pipeline sits in that file.
  • repository.id - this is the guid that represents the repo you're creating the pipeline for. You can find this out by going to your equivalent https://dev.azure.com/organisation-name/project-name/_settings/repositories (substituting in appropriate values) and looking up your repository there.
  • repository.name - the name of your repo
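Alternatively, the repository id can be looked up with the Azure DevOps CLI extension rather than through the settings page; a hedged sketch using the example names from this post:

# requires: az extension add --name azure-devops
az repos show \
  --organization https://dev.azure.com/organisation-name \
  --project project-name \
  --repository my-repo \
  --query id -o tsv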

When you execute your curl you should be returned some JSON along these lines:

{
  "_links": {
    "self": {
      "href": "https://dev.azure.com/organisation-name/2184049d-8bc4-484a-91e6-00fca6b5b19f/_apis/pipelines/975?revision=1"
    },
    "web": {
      "href": "https://dev.azure.com/organisation-name/2184049d-8bc4-484a-91e6-00fca6b5b19f/_build/definition?definitionId=975"
    }
  },
  "configuration": {
    "path": "/azure-pipelines.yml",
    "repository": {
      "id": "9a72560d-1622-4016-93dd-32ac85b96d03",
      "type": "azureReposGit"
    },
    "type": "yaml"
  },
  "url": "https://dev.azure.com/organisation-name/2184049d-8bc4-484a-91e6-00fca6b5b19f/_apis/pipelines/975?revision=1",
  "id": 975,
  "revision": 1,
  "name": "pipeline-made-by-api",
  "folder": "\\"
}

And inside Azure DevOps you'll now have a shiny new pipeline:

The new pipeline

· 2 min read

Last time I wrote about how to use the Azure CLI to run Bicep within the context of an Azure Pipeline. The solution was relatively straightforward, and involved using az deployment group create in a task. There's an easier way.

Bicep meet Azure Pipelines

The easier way

The target reader of the previous post was someone who was already using AzureResourceManagerTemplateDeployment@3 in an Azure Pipeline to deploy an ARM template. Rather than replacing your existing AzureResourceManagerTemplateDeployment@3 tasks, all you need do is insert a prior bash step that compiles the Bicep to ARM, which your existing template deployment can then process. It looks like this:

- bash: az bicep build --file infra/app-service/azuredeploy.bicep
displayName: 'Compile Bicep to ARM'

This will take your Bicep template of azuredeploy.bicep and transpile it into an ARM template named azuredeploy.json, which a subsequent AzureResourceManagerTemplateDeployment@3 task can process. Since this is just exercising the Azure CLI, using bash is not required; PowerShell etc. would also be fine. All that's required is that the Azure CLI is available in the pipeline.

In fact this simple task could even be a one-liner if you didn't fancy using the displayName. (Though I say keep it; optimising for readability is generally a good shout.) A full pipeline could look like this:

- bash: az bicep build --file infra/app-service/azuredeploy.bicep
  displayName: 'Compile Bicep to ARM'

- task: AzureResourceManagerTemplateDeployment@3
  displayName: 'Deploy Hello Azure ARM'
  inputs:
    azureResourceManagerConnection: '$(azureSubscription)'
    action: Create Or Update Resource Group
    resourceGroupName: '$(resourceGroupName)'
    location: 'North Europe'
    templateLocation: Linked artifact
    csmFile: 'infra/app-service/azuredeploy.json' # created by bash script
    csmParametersFile: 'infra/app-service/azuredeploy.parameters.json'
    deploymentMode: Incremental
    deploymentOutputs: resourceGroupDeploymentOutputs
    overrideParameters: -applicationName $(Build.Repository.Name)

- pwsh: |
    $outputs = ConvertFrom-Json '$(resourceGroupDeploymentOutputs)'
    foreach ($output in $outputs.PSObject.Properties) {
      Write-Host "##vso[task.setvariable variable=RGDO_$($output.Name)]$($output.Value.value)"
    }
  displayName: 'Turn ARM outputs into variables'

And when it's run, it may result in something along these lines:

Bicep in an Azure Pipeline

So if you want to get using Bicep right now with minimal effort, this is an on-ramp that could work for you! Props to Jamie McCrindle for suggesting this.

· 5 min read

Bicep is a terser and more readable alternative language to ARM templates. Running ARM templates in Azure Pipelines is straightforward. However, there isn't yet a first class experience for running Bicep in Azure Pipelines. This post demonstrates an approach that can be used until a Bicep task is available.

Bicep meet Azure Pipelines

Bicep: mostly ARMless

If you've been working with Azure and infrastructure as code, you'll likely have encountered ARM templates. They're a domain-specific language that lives inside JSON, used to define the infrastructure that is deployed to Azure; App Services, Key Vaults and the like.

ARM templates are quite verbose and not the easiest thing to read. This is a consequence of being effectively a language nestled inside another language. Bicep is an alternative language which is far more readable. Bicep transpiles down to ARM templates, in the same way that TypeScript transpiles down to JavaScript.

Bicep is quite new, but already it enjoys feature parity with ARM templates (as of v0.3) and ships as part of the Azure CLI. However, as Bicep is new, it doesn't yet have a dedicated Azure Pipelines task for deployment. This should exist in future, perhaps as soon as the v0.4 release. In the meantime there's an alternative way to achieve this which we'll go through.

App Service with Bicep

Let's take a simple Bicep file, azuredeploy.bicep, which is designed to deploy an App Service resource to Azure. It looks like this:

@description('Tags that our resources need')
param tags object = {
  costCenter: 'todo: replace'
  environment: 'todo: replace'
  application: 'todo: replace with app name'
  description: 'todo: replace'
  managedBy: 'ARM'
}

@minLength(2)
@description('Base name of the resource such as web app name and app service plan')
param applicationName string

@description('Location for all resources.')
param location string = resourceGroup().location

@description('The SKU of App Service Plan')
param sku string

var appServicePlanName_var = 'plan-${applicationName}-${tags.environment}'
var linuxFxVersion = 'DOTNETCORE|5.0'
var fullApplicationName_var = 'app-${applicationName}-${uniqueString(applicationName)}'

resource appServicePlanName 'Microsoft.Web/serverfarms@2020-12-01' = {
  name: appServicePlanName_var
  location: location
  sku: {
    name: sku
  }
  kind: 'linux'
  tags: {
    CostCenter: tags.costCenter
    Environment: tags.environment
    Description: tags.description
    ManagedBy: tags.managedBy
  }
  properties: {
    reserved: true
  }
}

resource fullApplicationName 'Microsoft.Web/sites@2020-12-01' = {
  name: fullApplicationName_var
  location: location
  kind: 'app'
  tags: {
    CostCenter: tags.costCenter
    Environment: tags.environment
    Description: tags.description
    ManagedBy: tags.managedBy
  }
  properties: {
    serverFarmId: appServicePlanName.id
    clientAffinityEnabled: true
    siteConfig: {
      appSettings: []
      linuxFxVersion: linuxFxVersion
      alwaysOn: false
      ftpsState: 'Disabled'
      http20Enabled: true
      minTlsVersion: '1.2'
      remoteDebuggingEnabled: false
    }
    httpsOnly: true
  }
  identity: {
    type: 'SystemAssigned'
  }
}

output fullApplicationName string = fullApplicationName_var

When transpiled down to an ARM template, this Bicep file more than doubles in size:

  • azuredeploy.bicep - 1782 bytes
  • azuredeploy.json - 3863 bytes

This tells you something of the advantage of Bicep. The template comes with an associated azuredeploy.parameters.json file:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "tags": {
      "value": {
        "costCenter": "8888",
        "environment": "stg",
        "application": "hello-azure",
        "description": "App Service for hello-azure",
        "managedBy": "ARM"
      }
    },
    "sku": {
      "value": "B1"
    }
  }
}

It's worth remembering that you can use the same parameters files with Bicep that you can use with ARM templates. This is great for minimising friction when it comes to migrating.
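To see that reuse in action before touching a pipeline, you can point az deployment group validate at the Bicep file and the existing ARM parameters file together; a sketch, assuming an existing resource group:

# validate (without deploying) the Bicep template against the ARM parameters file
az deployment group validate \
  --resource-group "<your-resource-group>" \
  --template-file infra/app-service/azuredeploy.bicep \
  --parameters infra/app-service/azuredeploy.parameters.json \
  --parameters applicationName=hello-azure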

Bicep in azure-pipelines.yml

Now we have our Bicep file, we want to execute it from the context of an Azure Pipeline. If we were working directly with the ARM template we'd likely have something like this in place:

- task: AzureResourceManagerTemplateDeployment@3
  displayName: 'Deploy Hello Azure ARM'
  inputs:
    azureResourceManagerConnection: '$(azureSubscription)'
    action: Create Or Update Resource Group
    resourceGroupName: '$(resourceGroupName)'
    location: 'North Europe'
    templateLocation: Linked artifact
    csmFile: 'infra/app-service/azuredeploy.json'
    csmParametersFile: 'infra/app-service/azuredeploy.parameters.json'
    deploymentMode: Incremental
    deploymentOutputs: resourceGroupDeploymentOutputs
    overrideParameters: -applicationName $(Build.Repository.Name)

- pwsh: |
    $outputs = ConvertFrom-Json '$(resourceGroupDeploymentOutputs)'
    foreach ($output in $outputs.PSObject.Properties) {
      Write-Host "##vso[task.setvariable variable=RGDO_$($output.Name)]$($output.Value.value)"
    }
  displayName: 'Turn ARM outputs into variables'

There are two tasks above. The first is the native task for ARM deployments which takes our ARM template and our parameters and deploys them. The second task takes the output variables from the first task and converts them into Azure Pipeline variables such that they can be referenced later in the pipeline. In this case this variablifies our fullApplicationName output.

There is, as yet, no dedicated Bicep deployment task. Though it's coming. In the meantime, the marvellous Alex Frankel advised:

I'd recommend using the Azure CLI task to deploy. As long as that task is updated to Az CLI version 2.20 or later, it will automatically install the bicep CLI when calling az deployment group create -f main.bicep.

Let's give it a go!

- task: AzureCLI@2
  displayName: 'Deploy Hello Azure Bicep'
  inputs:
    azureSubscription: '$(azureSubscription)'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az --version

      echo "az deployment group create --resource-group '$(resourceGroupName)' --name appservicedeploy"
      az deployment group create --resource-group '$(resourceGroupName)' --name appservicedeploy \
        --template-file infra/app-service/azuredeploy.bicep \
        --parameters infra/app-service/azuredeploy.parameters.json \
        --parameters applicationName='$(Build.Repository.Name)'

      echo "az deployment group show --resource-group '$(resourceGroupName)' --name appservicedeploy"
      deploymentoutputs=$(az deployment group show --resource-group '$(resourceGroupName)' --name appservicedeploy \
        --query properties.outputs)

      echo 'convert outputs to variables'
      echo $deploymentoutputs | jq -c '. | to_entries[] | [.key, .value.value]' |
        while IFS=$"\n" read -r c; do
          outputname=$(echo "$c" | jq -r '.[0]')
          outputvalue=$(echo "$c" | jq -r '.[1]')
          echo "setting variable RGDO_$outputname=$outputvalue"
          echo "##vso[task.setvariable variable=RGDO_$outputname]$outputvalue"
        done

The above is just a single Azure CLI task (as advised). It invokes az deployment group create passing the relevant parameters. It then acquires the output properties using az deployment group show. Finally it once again converts these outputs to Azure Pipeline variables with some jq smarts.
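If the jq incantation looks opaque, here's the same transformation run standalone against a made-up outputs payload, so you can see what lands in the ##vso logging commands:

# illustrative outputs JSON of the shape az deployment group show returns
deploymentoutputs='{"fullApplicationName":{"type":"String","value":"app-hello-azure-abc123"}}'

echo $deploymentoutputs | jq -c '. | to_entries[] | [.key, .value.value]' |
  while IFS=$"\n" read -r c; do
    outputname=$(echo "$c" | jq -r '.[0]')
    outputvalue=$(echo "$c" | jq -r '.[1]')
    echo "##vso[task.setvariable variable=RGDO_$outputname]$outputvalue"
  done
# prints: ##vso[task.setvariable variable=RGDO_fullApplicationName]app-hello-azure-abc123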

This works right now, and running it results in something like the output below. So if you're excited about Bicep and don't want to wait for 0.4 to start moving on this, then this can get you going. To track the progress of the custom task, keep an eye on this issue.

Bicep in an Azure Pipeline

Update: an even simpler alternative

There is an even simpler way to do this, which I discovered after writing this post. Have a read.

· 7 min read

How do you answer the question: "what version of my application is running in Production right now?" This post demonstrates how to surface the build metadata that represents the version of your app, from your app using Azure Pipelines and ASP.NET.

Many is the time where I've been pondering over why something isn't working as expected and burned a disappointing amount of time before realising that I'm playing with an old version of an app. Wouldn't it be great to give our app a way to say: "Hey! I'm version 1.2.3.4 of your app; built from this commit hash, I was built on Wednesday, I was the ninth build that day and I was built from the main branch. And I'm an Aries." Or something like that.

This post was inspired by Scott Hanselman's similar post on the topic. Ultimately this ended up going in a fairly different direction and so seemed worthy of a post of its own.

A particular difference is that this is targeting SPAs. Famously, cache invalidation is hard. It's possible for the HTML/JS/CSS of your app to be stale due to aggressive caching. So we're going to make it possible to see build information both for when the SPA (or "client") was built and for when the .NET app (or "server") was built. We're using a specific type of SPA here: a React SPA built with TypeScript and Material UI. However, the principles here are general; you could surface this up any which way you choose.

Putting build info into azure-pipelines.yml

The first thing we're going to do is to inject our build details into two identical buildinfo.json files; one that sits in the server codebase and which will be used to drive the server build information, and one that sits in the client codebase to drive the client equivalent. They'll end up looking something like this:

{
  "buildNumber": "20210130.1",
  "buildId": "123456",
  "branchName": "main",
  "commitHash": "7089620222c30c1ad88e4b556c0a7908ddd34a8e"
}

We generate this by adding the following yml to the beginning of our azure-pipelines.yml (crucially, before the client or server builds take place):

- script: |
    echo -e -n "{\"buildNumber\":\"$(Build.BuildNumber)\",\"buildId\":\"$(Build.BuildId)\",\"branchName\":\"$(Build.SourceBranchName)\",\"commitHash\":\"$(Build.SourceVersion)\"}" > "$(Build.SourcesDirectory)/src/client-app/src/buildinfo.json"
    echo -e -n "{\"buildNumber\":\"$(Build.BuildNumber)\",\"buildId\":\"$(Build.BuildId)\",\"branchName\":\"$(Build.SourceBranchName)\",\"commitHash\":\"$(Build.SourceVersion)\"}" > "$(Build.SourcesDirectory)/src/server-app/Server/buildinfo.json"
  displayName: "emit build details as JSON"
  failOnStderr: true

As you can see, we're placing the following variables that are available at build time in Azure Pipelines, into the buildinfo.json:

  • BuildNumber - The name of the completed build; which usually takes the form of a date in the yyyyMMdd format, suffixed by .x where x is a number that increments representing the number of builds that have taken place on the given day.
  • BuildId - The ID of the record for the completed build.
  • SourceVersion - This is the commit hash of the source code in Git
  • SourceBranchName - The name of the branch in Git.

There are many variables available in Azure Pipelines that can be used; we've picked out the ones most interesting to us.
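You can simulate that pipeline step locally if you want to see the file it produces; a sketch with hard-coded stand-ins for the $(Build.*) variables:

# mimic the pipeline step with placeholder values; the real step uses $(Build.*) variables
echo -e -n "{\"buildNumber\":\"20210130.1\",\"buildId\":\"123456\",\"branchName\":\"main\",\"commitHash\":\"$(git rev-parse HEAD)\"}" \
  > src/client-app/src/buildinfo.json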

Surfacing the server build info

Our pipeline is dropping the buildinfo.json over pre-existing stub buildinfo.json files in both our client and server codebases. The stub files look like this:

{
  "buildNumber": "yyyyMMdd.x",
  "buildId": "xxxxxx",
  "branchName": "",
  "commitHash": "LOCAL_BUILD"
}

In our .NET app, the buildinfo.json file has been dropped in the root of the app. And as luck would have it, all JSON files are automatically included in a .NET build and so it will be available at runtime. We want to surface this file through an API, and we also want to use it to stamp details into our logs.

So we need to parse the file, and for that we'll use this:

using System;
using System.IO;
using System.Text.Json;

namespace Server {
    public record BuildInfo(string BranchName, string BuildNumber, string BuildId, string CommitHash);

    public static class AppVersionInfo {
        private const string _buildFileName = "buildinfo.json";
        private static BuildInfo _fileBuildInfo = new(
            BranchName: "",
            BuildNumber: DateTime.UtcNow.ToString("yyyyMMdd") + ".0",
            BuildId: "xxxxxx",
            CommitHash: $"Not yet initialised - call {nameof(InitialiseBuildInfoGivenPath)}"
        );

        public static void InitialiseBuildInfoGivenPath(string path) {
            var buildFilePath = Path.Combine(path, _buildFileName);
            if (File.Exists(buildFilePath)) {
                try {
                    var buildInfoJson = File.ReadAllText(buildFilePath);
                    var buildInfo = JsonSerializer.Deserialize<BuildInfo>(buildInfoJson, new JsonSerializerOptions {
                        PropertyNamingPolicy = JsonNamingPolicy.CamelCase
                    });
                    if (buildInfo == null) throw new Exception($"Failed to deserialise {_buildFileName}");

                    _fileBuildInfo = buildInfo;
                } catch (Exception) {
                    _fileBuildInfo = new BuildInfo(
                        BranchName: "",
                        BuildNumber: DateTime.UtcNow.ToString("yyyyMMdd") + ".0",
                        BuildId: "xxxxxx",
                        CommitHash: "Failed to load build info from buildinfo.json"
                    );
                }
            }
        }

        public static BuildInfo GetBuildInfo() => _fileBuildInfo;
    }
}

The above code reads the buildinfo.json file and deserialises it into a BuildInfo record which is then surfaced up by the GetBuildInfo method. We initialise this at the start of our Program.cs like so:

public static int Main(string[] args) {
    AppVersionInfo.InitialiseBuildInfoGivenPath(Directory.GetCurrentDirectory());
    // Now we're free to call AppVersionInfo.GetBuildInfo()
    // ....
}

Now we need a controller to surface this information up. We'll add ourselves a BuildInfoController.cs:

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace Server.Controllers {
    [ApiController]
    public class BuildInfoController : ControllerBase {
        [AllowAnonymous]
        [HttpGet("api/build")]
        public BuildInfo GetBuild() => AppVersionInfo.GetBuildInfo();
    }
}

This exposes an api/build endpoint in our .NET app that, when hit, will display the following JSON:

screenshot of api/build output

Surfacing the client build info

Our server now lets the world know which version it is running and this is tremendous. Now let's make our client do the same.

Very little is required to achieve this. Again we have a buildinfo.json sat in the root of our codebase. We're able to import it as a module in TypeScript because we've set the following property in our tsconfig.json:

"resolveJsonModule": true,

As a consequence, consumption is as simple as:

import clientBuildInfo from './buildinfo.json';

Which provides us with a clientBuildInfo which TypeScript automatically derives as this type:

type ClientBuildInfo = {
  buildNumber: string;
  buildId: string;
  branchName: string;
  commitHash: string;
};

How you choose to use that information is entirely your choice. We're going to add ourselves an "about" screen in our app, which displays both client info (loaded using the mechanism above) and server info (fetched from the /api/build endpoint).

import {
  Card,
  CardContent,
  CardHeader,
  createStyles,
  Grid,
  makeStyles,
  Theme,
  Typography,
  Zoom,
} from '@material-ui/core';
import React from 'react';
import clientBuildInfo from '../../buildinfo.json';
import { projectsPurple } from '../shared/colors';
import { Loading } from '../shared/Loading';
import { TransitionContainer } from '../shared/TransitionContainer';

const useStyles = (cardColor: string) =>
  makeStyles((theme: Theme) =>
    createStyles({
      card: {
        padding: theme.spacing(0),
        backgroundColor: cardColor,
        color: theme.palette.common.white,
        minHeight: theme.spacing(28),
      },
      avatar: {
        backgroundColor: theme.palette.getContrastText(cardColor),
        color: cardColor,
      },
      main: {
        padding: theme.spacing(2),
      },
    })
  )();

type Styles = ReturnType<typeof useStyles>;

const AboutPage: React.FC = () => {
  const [serverBuildInfo, setServerBuildInfo] =
    React.useState<typeof clientBuildInfo>();

  React.useEffect(() => {
    fetch('/api/build')
      .then((response) => response.json())
      .then(setServerBuildInfo);
  }, []);

  const classes = useStyles(projectsPurple);

  return (
    <TransitionContainer>
      <Grid container spacing={3}>
        <Grid item xs={12} sm={12} container alignItems="center">
          <Grid item>
            <Typography variant="h4" component="h1">
              About
            </Typography>
          </Grid>
        </Grid>
      </Grid>
      <Grid container spacing={1}>
        <BuildInfo
          classes={classes}
          title="Client Version"
          {...clientBuildInfo}
        />
      </Grid>
      <br />
      <Grid container spacing={1}>
        {serverBuildInfo ? (
          <BuildInfo
            classes={classes}
            title="Server Version"
            {...serverBuildInfo}
          />
        ) : (
          <Loading />
        )}
      </Grid>
    </TransitionContainer>
  );
};

interface Props {
  classes: Styles;
  title: string;
  branchName: string;
  buildNumber: string;
  buildId: string;
  commitHash: string;
}

const BuildInfo: React.FC<Props> = ({
  classes,
  title,
  branchName,
  buildNumber,
  buildId,
  commitHash,
}) => (
  <Zoom mountOnEnter unmountOnExit in={true}>
    <Card className={classes.card}>
      <CardHeader title={title} />
      <CardContent className={classes.main}>
        <Typography variant="body1" component="p">
          <b>Build Number</b> {buildNumber}
        </Typography>
        <Typography variant="body1" component="p">
          <b>Build Id</b> {buildId}
        </Typography>
        <Typography variant="body1" component="p">
          <b>Branch Name</b> {branchName}
        </Typography>
        <Typography variant="body1" component="p">
          <b>Commit Hash</b> {commitHash}
        </Typography>
      </CardContent>
    </Card>
  </Zoom>
);

export default AboutPage;

When the above page is viewed it looks like this:

screenshot of our web app surfacing up the build information

And that's it! Our app is clearly telling us what version is being run, both on the server and in the client. Thanks to Scott Hanselman for his work which inspired this.

· 4 min read

This post explains how to integrate the tremendous test runner Jest with the continuous integration platform Azure Pipelines. Perhaps we're setting up a new project and we've created a new React app with Create React App. This ships with Jest support out of the box. How do we get that plugged into Pipelines such that:

  1. Tests run as part of our pipeline
  2. A failing test fails the build
  3. Test results are reported in Azure Pipelines UI?

Tests run as part of our pipeline

First of all, let's get the tests running. Crack open your azure-pipelines.yml file and, in the appropriate place, add the following:

- task: Npm@1
  displayName: npm run test
  inputs:
    command: 'custom'
    workingDir: 'src/client-app'
    customCommand: 'run test'

The above will, when run, trigger an npm run test in the src/client-app folder of my project (it's here where my React app lives). You'd imagine this would just work™️ - but life is not that simple. This is because Jest, by default, runs in watch mode. This is blocking and so not appropriate for CI.

In our src/client-app/package.json let's create a new script that runs the tests but not in watch mode:

"test:ci": "npm run test -- --watchAll=false",

and switch our azure-pipelines.yml to use it:

- task: Npm@1
  displayName: npm run test
  inputs:
    command: 'custom'
    workingDir: 'src/client-app'
    customCommand: 'run test:ci'

Boom! We're now running tests as part of our pipeline. Failing tests will also fail the build, thanks to Jest's default behaviour of exiting with status code 1 when tests fail.
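You can verify that exit-code behaviour locally before leaning on it in CI:

# run the CI-mode tests, then inspect the exit code;
# a non-zero code is what fails the pipeline step
npm run test:ci
echo $?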

Test results are reported in Azure Pipelines UI

Pipelines has a really nice UI for reporting test results. If you're using something like .NET then you'll find that test results just magically show up there. We'd like that for our Jest tests as well. And we can have it.

The way we achieve this is by:

  1. Producing test results in a format that can be subsequently processed
  2. Using those test results to publish to Azure Pipelines

The way that you configure Jest test output is through usage of reporters. However, Create React App doesn't support these out of the box. That's not a problem though, as the marvellous Dan Abramov demonstrates here.

We need to install the jest-junit package to our client-app:

npm install jest-junit --save-dev

And we'll tweak our test:ci script to use the jest-junit reporter as well:

"test:ci": "npm run test -- --watchAll=false --reporters=default --reporters=jest-junit",

We also need to add some configuration to our package.json in the form of a jest-junit element:

"jest-junit": {
  "suiteNameTemplate": "{filepath}",
  "outputDirectory": ".",
  "outputName": "junit.xml"
}

The above configuration will use the name of the test file as the suite name in the results, which should speed up the tracking down of the failing test. The other values specify where the test results should be published to, in this case the root of our client-app with the filename junit.xml.

Now our CI is producing our test results, how do we get them into Pipelines? For that we need the Publish test results task and a new step in our azure-pipelines.yml after our npm run test step:

- task: Npm@1
  displayName: npm run test
  inputs:
    command: 'custom'
    workingDir: 'src/client-app'
    customCommand: 'run test:ci'

- task: PublishTestResults@2
  displayName: 'supply npm test results to pipelines'
  condition: succeededOrFailed() # because otherwise we won't know what tests failed
  inputs:
    testResultsFiles: 'src/client-app/junit.xml'

This will read the test results from our src/client-app/junit.xml file and pump them into Pipelines. Do note that we're always running this step; so if the previous step failed (as it would in the case of a failing test) we still pump out the details of what that failure was. Like so:

screenshot of test results being published to Azure Pipelines regardless of passing or failing tests

And that's it! Azure Pipelines and Jest integrated.

screenshot of test results published to Azure Pipelines

· 2 min read

Some blog posts are insightful treatises on the future of web development, some are "here's how I solved my problem". This is most assuredly the latter.

I'm writing an custom pipelines task extension for Azure Pipelines. It's written with TypeScript and the azure-pipelines-task-lib.

The pipeline needs to output a variable. Azure Pipelines does that using the setvariable logging command combined with isOutput=true. This looks something like this: ##vso[task.setvariable variable=myOutputVar;isOutput=true]this is the value.

The bad news is that the lib doesn't presently support isOutput=true. Gosh it makes me sad. Hopefully in future it will be resolved. But what now?

For now we can hack ourselves a workaround:

import * as tl from 'azure-pipelines-task-lib/task';
import * as tcm from 'azure-pipelines-task-lib/taskcommand';
import * as os from 'os';

/**
 * Sets a variable which will be output as well.
 *
 * @param name name of the variable to set
 * @param val value to set
 * @param secret whether variable is secret. Multi-line secrets are not allowed. Optional, defaults to false
 * @returns void
 */
export function setOutputVariable(
  name: string,
  val: string,
  secret = false
): void {
  // use the implementation of setVariable to set all the internals,
  // then subsequently set the output variable manually
  tl.setVariable(name, val, secret);

  const varValue = val || '';

  // write the command
  // see https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-multi-job-output-variable
  _command(
    'task.setvariable',
    {
      variable: name || '',
      isOutput: 'true',
      issecret: (secret || false).toString(),
    },
    varValue
  );
}

const _outStream = process.stdout;

function _writeLine(str: string): void {
  _outStream.write(str + os.EOL);
}

function _command(command: string, properties: any, message: string) {
  const taskCmd = new tcm.TaskCommand(command, properties, message);
  _writeLine(taskCmd.toString());
}

The above is effectively a wrapper for the existing setVariable. However, once it has called into the initial implementation, setOutputVariable then writes out the same variable once more, but this time bolting on isOutput=true.
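For the avoidance of doubt, the net effect of calling setOutputVariable('myOutputVar', 'this is the value') is one extra logging command on stdout, equivalent to:

# what the workaround ultimately emits (alongside setVariable's own output)
echo "##vso[task.setvariable variable=myOutputVar;isOutput=true;issecret=false]this is the value"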

Finally, I've raised a PR to see if isOutput can be added directly to the library. You can track progress on that here.