
8 posts tagged with "Bicep"


Β· 9 min read

How can we deploy resources to Azure, and then run an integration test through them in the context of an Azure Pipeline? This post will show how to do this by permissioning our Azure Pipeline to access these resources using Azure RBAC role assignments. It will also demonstrate a dotnet test that runs in the context of the pipeline and makes use of those role assignments.

title image reading "Permissioning Azure Pipelines with Bicep and Role Assignments" and some Azure logos

We're following this approach as an alternative to exporting connection strings, as those can be viewed in the Azure Portal, which may be a security issue if many people are able to access the portal and view deployment outputs.

We're going to demonstrate this approach using Event Hubs. It's worth calling out that this is a generally useful approach which can be applied to any Azure resources that support Azure RBAC Role Assignments. So wherever in this post you read "Event Hubs", imagine substituting other Azure resources you're working with.

The post will do the following:

  • Add Event Hubs to our Azure subscription
  • Permission our service connection / service principal
  • Deploy to Azure with Bicep
  • Write an integration test
  • Write a pipeline to bring it all together

Add Event Hubs to your subscription

First of all, we may need to add Event Hubs to our Azure subscription.

Without this in place, we may encounter errors of the type:

##[error]MissingSubscriptionRegistration: The subscription is not registered to use namespace 'Microsoft.EventHub'. See https://aka.ms/rps-not-found for how to register subscriptions.

We do this by going to "Resource Providers" in the Azure Portal and registering the resource providers we need. Lots are registered by default, but not all.

Screenshot of the Azure Portal, subscriptions -> resource providers section, showing that Event Hubs have been registered
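If you'd rather script this, the same registration can be performed with the Azure CLI; a quick sketch (substitute whichever resource provider you need):

az provider register --namespace Microsoft.EventHub

# registration is asynchronous; check on progress like so
az provider show --namespace Microsoft.EventHub --query registrationState -o tsv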

Permission our service connection / service principal

To run pipelines that interact with Azure, we generally need an Azure Resource Manager service connection set up in Azure DevOps. Once that exists, we also need to give it a role assignment that allows it to create role assignments of its own when pipelines are running.

Without this in place, we may encounter errors of the type:

##[error]The template deployment failed with error: 'Authorization failed for template resource '{GUID-THE-FIRST}' of type 'Microsoft.Authorization/roleAssignments'. The client '{GUID-THE-SECOND}' with object id '{GUID-THE-SECOND}' does not have permission to perform action 'Microsoft.Authorization/roleAssignments/write' at scope '/subscriptions/***/resourceGroups/johnnyreilly/providers/Microsoft.EventHub/namespaces/evhns-demo/providers/Microsoft.Authorization/roleAssignments/{GUID-THE-FIRST}'.'.

Essentially, we want to be able to run pipelines that say "hey Azure, we want to give permissions to our service connection". We are doing this with the selfsame service connection, so (chicken and egg) we first need to give it permission to give those commands in future. This is a little confusing, but let's role with it. (Pun most definitely intended. 😉)

To grant that permission / add that role assignment, we go to the service connection in Azure DevOps:

Screenshot of the service connection in Azure DevOps

We can see there are two links here; first we'll click on "Manage Service Principal", which will take us to the service principal in the Azure Portal:

Screenshot of the service principal in the Azure Portal

Take note of the display name of the service principal; we'll need it as we click on the "Manage service connection roles" link, which will take us to the resource group's IAM page in the Azure Portal:

Screenshot of the resource groups IAM page in the Azure Portal

Here we can click on "Add role assignment" and select "Owner":

Screenshot of the add role assignment IAM page in the Azure Portal

Then when selecting members we should be able to look up the service principal to assign it:

Screenshot of the add role assignment select member IAM page in the Azure Portal

We now have a service connection which we should be able to use for granting permissions / role assignments, which is what we need.
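As an aside, the same role assignment can be scripted with the Azure CLI if clicking through the portal doesn't appeal. This is a sketch; the display name and resource group are assumptions you'd replace with your own values:

# look up the object id of the service principal by its display name
PRINCIPAL_ID=$(az ad sp list --display-name "my-service-connection" --query "[0].objectId" -o tsv)

# grant it Owner over the resource group
az role assignment create \
  --assignee-object-id $PRINCIPAL_ID \
  --role Owner \
  --resource-group johnnyreilly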

Event Hub and Role Assignment with Bicep

Next we want a Bicep file that will, when run, provision an Event Hub and a role assignment which will allow our Azure Pipeline (via its service connection) to interact with it.

@description('Name of the Event Hub namespace')
param eventHubNamespaceName string

@description('Name of the Event Hub')
param eventHubName string

@description('The service principal')
param principalId string

// Create an Event Hub namespace
resource eventHubNamespace 'Microsoft.EventHub/namespaces@2021-01-01-preview' = {
  name: eventHubNamespaceName
  location: resourceGroup().location
  sku: {
    name: 'Standard'
    tier: 'Standard'
    capacity: 1
  }
  properties: {
    zoneRedundant: true
  }
}

// Create an Event Hub inside the namespace
resource eventHub 'Microsoft.EventHub/namespaces/eventhubs@2021-01-01-preview' = {
  parent: eventHubNamespace
  name: eventHubName
  properties: {
    messageRetentionInDays: 7
    partitionCount: 1
  }
}

// give Azure Pipelines Service Principal permissions against the Event Hub

var roleDefinitionAzureEventHubsDataOwner = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'f526a384-b230-433a-b45c-95f59c4a2dec')

resource integrationTestEventHubReceiverNamespaceRoleAssignment 'Microsoft.Authorization/roleAssignments@2021-04-01-preview' = {
  name: guid(principalId, eventHub.id, roleDefinitionAzureEventHubsDataOwner)
  scope: eventHubNamespace
  properties: {
    roleDefinitionId: roleDefinitionAzureEventHubsDataOwner
    principalId: principalId
  }
}

Do note that our Bicep template takes the service principal id as a parameter. We're going to supply this later from our Azure Pipeline.
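Incidentally, if you wanted to exercise the template outside of a pipeline, you could supply that parameter yourself from the Azure CLI. A sketch, assuming the resource group and service connection display name from earlier:

PRINCIPAL_ID=$(az ad sp list --display-name "my-service-connection" --query "[0].objectId" -o tsv)

az deployment group create \
  --resource-group johnnyreilly \
  --template-file main.bicep \
  --parameters eventHubNamespaceName='evhns-demo' eventHubName='evh-demo' principalId=$PRINCIPAL_ID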

Our test

We're now going to write a dotnet integration test which will make use of the infrastructure deployed by our Bicep template. Let's create a new test project:

mkdir src
cd src
dotnet new xunit -o IntegrationTests
cd IntegrationTests
dotnet add package Azure.Identity
dotnet add package Azure.Messaging.EventHubs
dotnet add package FluentAssertions
dotnet add package Microsoft.Extensions.Configuration.EnvironmentVariables
dotnet add package Newtonsoft.Json

We'll create a test file called EventHubTest.cs with these contents:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Consumer;
using Azure.Messaging.EventHubs.Producer;
using FluentAssertions;
using Microsoft.Extensions.Configuration;
using Newtonsoft.Json;
using Xunit;
using Xunit.Abstractions;

namespace IntegrationTests
{
    public record EchoMessage(string Id, string Message, DateTime Timestamp);

    public class EventHubTest
    {
        private readonly ITestOutputHelper _output;

        public EventHubTest(ITestOutputHelper output)
        {
            _output = output;
        }

        [Fact]
        public async Task Can_post_message_to_event_hub_and_read_it_back()
        {
            // ARRANGE
            var configuration = new ConfigurationBuilder()
                .AddEnvironmentVariables()
                .Build();

            // populated by variables specified in the Azure Pipeline
            var eventhubNamespaceName = configuration["EVENTHUBNAMESPACENAME"];
            eventhubNamespaceName.Should().NotBeNull();
            var eventhubName = configuration["EVENTHUBNAME"];
            eventhubName.Should().NotBeNull();
            var tenantId = configuration["TENANTID"];
            tenantId.Should().NotBeNull();

            // populated as a consequence of the addSpnToEnvironment in the azure-pipelines.yml
            var servicePrincipalId = configuration["SERVICEPRINCIPALID"];
            servicePrincipalId.Should().NotBeNull();
            var servicePrincipalKey = configuration["SERVICEPRINCIPALKEY"];
            servicePrincipalKey.Should().NotBeNull();

            var fullyQualifiedNamespace = $"{eventhubNamespaceName}.servicebus.windows.net";

            var clientCredential = new ClientSecretCredential(tenantId, servicePrincipalId, servicePrincipalKey);
            var eventHubClient = new EventHubProducerClient(
                fullyQualifiedNamespace: fullyQualifiedNamespace,
                eventHubName: eventhubName,
                credential: clientCredential
            );
            var ourGuid = Guid.NewGuid().ToString();
            var now = DateTime.UtcNow;
            var sentEchoMessage = new EchoMessage(Id: ourGuid, Message: $"Test message", Timestamp: now);
            var sentEventData = new EventData(
                Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(sentEchoMessage))
            );

            // ACT
            await eventHubClient.SendAsync(new List<EventData> { sentEventData }, CancellationToken.None);

            var eventHubConsumerClient = new EventHubConsumerClient(
                consumerGroup: EventHubConsumerClient.DefaultConsumerGroupName,
                fullyQualifiedNamespace: fullyQualifiedNamespace,
                eventHubName: eventhubName,
                credential: clientCredential
            );

            List<PartitionEvent> partitionEvents = new();
            await foreach (var partitionEvent in eventHubConsumerClient.ReadEventsAsync(new ReadEventOptions
            {
                MaximumWaitTime = TimeSpan.FromSeconds(10)
            }))
            {
                if (partitionEvent.Data == null) break;
                _output.WriteLine(Encoding.UTF8.GetString(partitionEvent.Data.EventBody.ToArray()));
                partitionEvents.Add(partitionEvent);
            }

            // ASSERT
            partitionEvents.Count.Should().BeGreaterOrEqualTo(1);
            var firstOne = partitionEvents.FirstOrDefault(evnt =>
                ExtractTypeFromEventBody<EchoMessage>(evnt, _output)?.Id == ourGuid
            );
            var receivedEchoMessage = ExtractTypeFromEventBody<EchoMessage>(firstOne, _output);
            receivedEchoMessage.Should().BeEquivalentTo(sentEchoMessage, because: "the event body should be the same one posted to the message queue");
        }

        private static T ExtractTypeFromEventBody<T>(PartitionEvent evnt, ITestOutputHelper _output)
        {
            try
            {
                return JsonConvert.DeserializeObject<T>(Encoding.UTF8.GetString(evnt.Data.EventBody.ToArray()));
            }
            catch (JsonException)
            {
                _output.WriteLine("[" + Encoding.UTF8.GetString(evnt.Data.EventBody.ToArray()) + "] is probably not JSON");
                return default(T);
            }
        }
    }
}

Let's talk through what happens in the test above:

  1. We read in Event Hub connection configuration for the test from environment variables. (These will be supplied by an Azure Pipeline that we will create shortly.)
  2. We post a message to the Event Hub.
  3. We read a message back from the Event Hub.
  4. We confirm that the message we read back matches the one we posted.

Now that we have our test, we want to be able to execute it. For that we need an Azure Pipeline!
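Before we get to the pipeline, it's worth noting that the test can also be run locally against already-deployed infrastructure, so long as the environment variables it expects are populated first. A sketch with placeholder values:

export EVENTHUBNAMESPACENAME='evhns-demo'
export EVENTHUBNAME='evh-demo'
export TENANTID='your-tenant-id'
export SERVICEPRINCIPALID='your-service-principal-app-id'
export SERVICEPRINCIPALKEY='your-service-principal-secret'

cd src/IntegrationTests
dotnet test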

Azure Pipeline

We're going to add an azure-pipelines.yml file which Azure DevOps can use to power a pipeline:

variables:
  - name: eventHubNamespaceName
    value: evhns-demo
  - name: eventHubName
    value: evh-demo

pool:
  vmImage: ubuntu-latest

steps:
  - task: AzureCLI@2
    displayName: Get Service Principal Id
    inputs:
      azureSubscription: $(serviceConnection)
      scriptType: bash
      scriptLocation: inlineScript
      addSpnToEnvironment: true
      inlineScript: |
        PRINCIPAL_ID=$(az ad sp show --id $servicePrincipalId --query objectId -o tsv)
        echo "##vso[task.setvariable variable=PIPELINE_PRINCIPAL_ID;]$PRINCIPAL_ID"

  - bash: az bicep build --file infra/main.bicep
    displayName: 'Compile Bicep to ARM'

  - task: AzureResourceManagerTemplateDeployment@3
    name: DeployEventHubInfra
    displayName: Deploy Event Hub infra
    inputs:
      deploymentScope: Resource Group
      azureResourceManagerConnection: $(serviceConnection)
      subscriptionId: $(subscriptionId)
      action: Create Or Update Resource Group
      resourceGroupName: $(azureResourceGroup)
      location: $(location)
      templateLocation: Linked artifact
      csmFile: 'infra/main.json' # created by bash script
      overrideParameters: >-
        -eventHubNamespaceName $(eventHubNamespaceName)
        -eventHubName $(eventHubName)
        -principalId $(PIPELINE_PRINCIPAL_ID)
      deploymentMode: Incremental

  - task: UseDotNet@2
    displayName: 'Install .NET SDK 5.0.x'
    inputs:
      packageType: 'sdk'
      version: 5.0.x

  - task: AzureCLI@2
    displayName: dotnet integration test
    inputs:
      azureSubscription: $(serviceConnection)
      scriptType: pscore
      scriptLocation: inlineScript
      addSpnToEnvironment: true # allows access to service principal details in script
      inlineScript: |
        cd $(Build.SourcesDirectory)/src/IntegrationTests
        dotnet test

When the pipeline is run, it does the following:

  1. Gets the service principal id from the service connection
  2. Compiles our Bicep into an ARM template
  3. Deploys the compiled ARM template to Azure
  4. Installs the .NET SDK
  5. Runs our dotnet test from an Azure CLI task, which gives the test access to the service principal details

We'll create a pipeline in Azure DevOps pointing to this file, and we'll also create the variables that it depends upon:

  • azureResourceGroup - the name of your resource group in Azure where the app will be deployed
  • location - where your app is deployed, eg northeurope
  • serviceConnection - the name of your AzureRM service connection in Azure DevOps
  • subscriptionId - your Azure subscription id from the Azure Portal
  • tenantId - the Azure tenant id from the Azure Portal

Running the pipeline

Now we're ready to run our pipeline:

screenshot of pipeline running successfully

Here we can see that the pipeline runs and the test passes. That means we've successfully provisioned the Event Hub and permissioned our pipeline to access it using Azure RBAC role assignments. We then wrote a test which used the pipeline credentials to interact with the Event Hub. To see the repo that demonstrates this, look here.

Just to reiterate: we've demonstrated this approach using Event Hubs. This is a generally useful approach which can be applied to any Azure resources that support Azure RBAC Role Assignments.

Thanks to Jamie McCrindle for helping out with permissioning the service connection / service principal. His post on rotating AZURE_CREDENTIALS in GitHub with Terraform provides useful background for those who would like to do similar permissioning using Terraform.

Β· 3 min read

Bicep is an amazing language, but it's also very new. If you want to write attractive code snippets about Bicep, you can, using PrismJS (and Docusaurus). This post shows you how.

title image reading "Publish Azure Static Web Apps with Bicep and Azure DevOps" and some Azure logos

Syntax highlighting

I've been writing blog posts about Bicep for a little while. I was frustrated that the code snippets were entirely unhighlighted. I'm keen my posts are as readable as possible, and so I looked into adding support to PrismJS which is what Docusaurus uses to power syntax highlighting.

Whilst my regex fu is amateur at best, happily Michael Schmidt of the PrismJS family is considerably better. He took the support I added and made it much better.

Docusaurus meet Bicep

If you have any code snippets that start with three backticks and the word bicep...

```bicep
// code goes here...
```

... then ideally you'd like to see some syntax highlighting in your post. Since Bicep isn't "in the box" for Docusaurus you need to explicitly opt into support like so:

prism: {
  additionalLanguages: ["powershell", "csharp", "docker", "bicep"],
},

Above you can see a snippet from my own docusaurus.config.js which adds Bicep, alongside the other additional languages I use.

With this in place, you would typically get all the syntax highlighting support you need.

Early adoption workaround

I'm writing this post before the latest version of PrismJS has shipped. As such, Bicep support isn't available by default yet. But if you're an early adopter, you can get support right now. The secret is adding a resolutions section to your package.json which points to the GitHub Repo where Prism lives:

  "resolutions": {
"prismjs": "PrismJS/prism"
},

This will mean that Yarn (if you're using Docusaurus you're probably using Yarn) pulls prismjs directly from GitHub, as demonstrated by the yarn.lock file:

"prismjs@PrismJS/prism", prismjs@^1.23.0:
  version "1.24.1"
  resolved "https://codeload.github.com/PrismJS/prism/tar.gz/59f449d33dc9fd19302f21aad95fc0b5028ac830"

What does it look like?

Finally, let's see if it works. Here's a Bicep code snippet that I borrowed from an earlier post:

param repositoryUrl string
param repositoryBranch string

param location string = 'westeurope'
param skuName string = 'Free'
param skuTier string = 'Free'

param appName string

resource staticWebApp 'Microsoft.Web/staticSites@2020-12-01' = {
  name: appName
  location: location
  sku: {
    name: skuName
    tier: skuTier
  }
  properties: {
    // The provider, repositoryUrl and branch fields are required for successive deployments to succeed
    // for more details see: https://github.com/Azure/static-web-apps/issues/516
    provider: 'DevOps'
    repositoryUrl: repositoryUrl
    branch: repositoryBranch
    buildProperties: {
      skipGithubActionWorkflowGeneration: true
    }
  }
}

output deployment_token string = listSecrets(staticWebApp.id, staticWebApp.apiVersion).properties.apiKey

As you can see, it's delightfully highlighted by PrismJS. Enjoy!

Β· 5 min read

This post demonstrates how to deploy Azure Static Web Apps using Bicep and Azure DevOps. It includes a few workarounds for the "Provider is invalid. Cannot change the Provider. Please detach your static site first if you wish to use to another deployment provider." issue.

title image reading "Publish Azure Static Web Apps with Bicep and Azure DevOps" and some Azure logos

Bicep template

The first thing we're going to do is create a folder where our Bicep file for deploying our Azure Static Web App will live:

mkdir infra/static-web-app -p

Then we'll create a main.bicep file:

param repositoryUrl string
param repositoryBranch string

param location string = 'westeurope'
param skuName string = 'Free'
param skuTier string = 'Free'

param appName string

resource staticWebApp 'Microsoft.Web/staticSites@2020-12-01' = {
  name: appName
  location: location
  sku: {
    name: skuName
    tier: skuTier
  }
  properties: {
    // The provider, repositoryUrl and branch fields are required for successive deployments to succeed
    // for more details see: https://github.com/Azure/static-web-apps/issues/516
    provider: 'DevOps'
    repositoryUrl: repositoryUrl
    branch: repositoryBranch
    buildProperties: {
      skipGithubActionWorkflowGeneration: true
    }
  }
}

output deployment_token string = listSecrets(staticWebApp.id, staticWebApp.apiVersion).properties.apiKey

There's some things to draw attention to in the code above:

  1. The provider, repositoryUrl and branch fields are required for successive deployments to succeed. In our case we're deploying via Azure DevOps and so our provider is 'DevOps'. For more details, look at this issue.
  2. We're creating a deployment_token which we'll need in order that we can deploy into the Azure Static Web App resource.

Static Web App

In order that we can test out Azure Static Web Apps, what we need is a static web app. You could use pretty much anything here; we're going to use Docusaurus. We'll execute this single command:

npx @docusaurus/init@latest init static-web-app classic

Which will scaffold a Docusaurus site in a folder named static-web-app. We don't need to change it any further; let's just see if we can deploy it.

Azure Pipeline

We're going to add an azure-pipelines.yml file which Azure DevOps can use to power a pipeline:

trigger:
- main

pool:
vmImage: ubuntu-latest

steps:
- checkout: self
submodules: true

- bash: az bicep build --file infra/static-web-app/main.bicep
displayName: 'Compile Bicep to ARM'

- task: [email protected]
name: DeployStaticWebAppInfra
displayName: Deploy Static Web App infra
inputs:
deploymentScope: Resource Group
azureResourceManagerConnection: $(serviceConnection)
subscriptionId: $(subscriptionId)
action: Create Or Update Resource Group
resourceGroupName: $(azureResourceGroup)
location: $(location)
templateLocation: Linked artifact
csmFile: 'infra/static-web-app/main.json' # created by bash script
overrideParameters: >-
-repositoryUrl $(repo)
-repositoryBranch $(Build.SourceBranchName)
-appName $(staticWebAppName)
deploymentMode: Incremental
deploymentOutputs: deploymentOutputs

- task: [email protected]
name: 'SetDeploymentOutputVariables'
displayName: 'Set Deployment Output Variables'
inputs:
targetType: inline
script: |
$armOutputObj = '$(deploymentOutputs)' | ConvertFrom-Json
$armOutputObj.PSObject.Properties | ForEach-Object {
$keyname = $_.Name
$value = $_.Value.value

# Creates a standard pipeline variable
Write-Output "##vso[task.setvariable variable=$keyName;]$value"

# Creates an output variable
Write-Output "##vso[task.setvariable variable=$keyName;issecret=true;isOutput=true]$value"

# Display keys in pipeline
Write-Output "output variable: $keyName"
}
pwsh: true

- task: [email protected]
name: DeployStaticWebApp
displayName: Deploy Static Web App
inputs:
app_location: 'static-web-app'
# api_location: 'api'
output_location: 'build'
azure_static_web_apps_api_token: $(deployment_token) # captured from deploymentOutputs

When the pipeline is run, it does the following:

  1. Compiles our Bicep into an ARM template
  2. Deploys the compiled ARM template to Azure
  3. Captures the deployment outputs (essentially the deployment_token) and converts them into variables to use in the pipeline
  4. Deploys our Static Web App using the deployment_token

The pipeline depends upon a number of variables:

  • azureResourceGroup - the name of your resource group in Azure where the app will be deployed
  • location - where your app is deployed, eg northeurope
  • repo - the URL of your repository in Azure DevOps, eg https://dev.azure.com/johnnyreilly/_git/azure-static-web-apps
  • serviceConnection - the name of your AzureRM service connection in Azure DevOps
  • staticWebAppName - the name of your static web app, eg azure-static-web-apps-johnnyreilly
  • subscriptionId - your Azure subscription id from the Azure Portal

A successful pipeline looks something like this:

Screenshot of successfully running Azure Pipeline

What you might notice is that the AzureStaticWebApp task is itself installing and building our application. This is handled by Microsoft Oryx. The upshot of this is that we don't need to manually run npm install and npm run build ourselves; the AzureStaticWebApp task will take care of it for us.

Finally, let's see if we've deployed something successfully...

Screenshot of deployed Azure Static Web App

We have! It's worth noting that you'll likely want to give your Azure Static Web App a lovelier URL, and perhaps even put it behind Azure Front Door as well.

Provider is invalid workaround 2

Shane Neff was attempting to follow the instructions in this post and encountered issues. He shared his struggles with me as he encountered the "Provider is invalid. Cannot change the Provider. Please detach your static site first if you wish to use to another deployment provider." issue.

He was good enough to share his solution as well, which is inserting this task at the start of the pipeline (before the az bicep build step):

- task: AzureCLI@2
  inputs:
    azureSubscription: '<name of your service connection>'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az staticwebapp disconnect -n <name of your app>'

I haven't had the problems that Shane has had myself, but I wanted to share his fix for the people out there who almost certainly are bumping into this.

Β· 6 min read

If we're provisioning resources in Azure with Bicep, we may have a need to acquire the connection strings and keys of our newly deployed infrastructure. For example, the connection strings of an event hub or the access keys of a storage account. Perhaps we'd like to use them to run an end-to-end test, perhaps we'd like to store these secrets somewhere for later consumption. This post shows how to do that using Bicep and the listKeys helper. Optionally it shows how we could consume this in Azure Pipelines.

An alternative approach would be permissioning our pipeline to access the resources directly. You can read about that approach here.

image which contains the blog title

Event Hub connection string

First of all, let's provision an Azure Event Hub. This involves deploying an event hub namespace, an event hub in that namespace and an authorization rule. The following Bicep will do this for us:

// Create an event hub namespace

var eventHubNamespaceName = 'evhns-demo'

resource eventHubNamespace 'Microsoft.EventHub/[email protected]' = {
name: eventHubNamespaceName
location: resourceGroup().location
sku: {
name: 'Standard'
tier: 'Standard'
capacity: 1
}
properties: {
zoneRedundant: true
}
}

// Create an event hub inside the namespace

var eventHubName = 'evh-demo'

resource eventHubNamespaceName_eventHubName 'Microsoft.EventHub/namespaces/[email protected]' = {
parent: eventHubNamespace
name: eventHubName
properties: {
messageRetentionInDays: 7
partitionCount: 1
}
}

// Grant Listen and Send on our event hub

resource eventHubNamespaceName_eventHubName_ListenSend 'Microsoft.EventHub/namespaces/eventhubs/[email protected]' = {
parent: eventHubNamespaceName_eventHubName
name: 'ListenSend'
properties: {
rights: [
'Listen'
'Send'
]
}
dependsOn: [
eventHubNamespace
]
}

When this is deployed to Azure, it will result in creating something like this:

screenshot of event hub connection strings in the Azure Portal

As we can see, there are connection strings available which can be used to access the event hub. How do we get a connection string that we can play with? It's easily achieved by appending the following to our Bicep:

// Determine our connection string

var eventHubNamespaceConnectionString = listKeys(eventHubNamespaceName_eventHubName_ListenSend.id, eventHubNamespaceName_eventHubName_ListenSend.apiVersion).primaryConnectionString

// Output our variables

output eventHubNamespaceConnectionString string = eventHubNamespaceConnectionString
output eventHubName string = eventHubName

What we're doing here is using the listKeys helper on our authorization rule and retrieving the handy primaryConnectionString, which is then exposed as an output variable.
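Once a deployment has run, the output can be read back with the Azure CLI. A sketch, assuming a deployment named main in a resource group named rg-demo (both names are my own invention):

az deployment group show \
  --resource-group rg-demo \
  --name main \
  --query properties.outputs.eventHubNamespaceConnectionString.value \
  -o tsv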

Storage Account connection string

We'd like to obtain a connection string for a storage account also. Let's put together a Bicep file that creates a storage account and a container therein. (Incidentally, it's fairly common to have a storage account provisioned alongside an event hub to facilitate reading from an event hub.)

// Create a storage account

var storageAccountName = 'stdemo'

resource eventHubNamespaceName_storageAccount 'Microsoft.Storage/storageAccounts@2021-02-01' = {
  name: storageAccountName
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
    tier: 'Standard'
  }
  kind: 'StorageV2'
  properties: {
    networkAcls: {
      bypass: 'AzureServices'
      defaultAction: 'Allow'
    }
    accessTier: 'Hot'
    allowBlobPublicAccess: false
    minimumTlsVersion: 'TLS1_2'
    allowSharedKeyAccess: true
  }
}

// create a container inside that storage account

var blobContainerName = 'test-container'

resource storageAccountName_default_containerName 'Microsoft.Storage/storageAccounts/blobServices/containers@2021-02-01' = {
  name: '${storageAccountName}/default/${blobContainerName}'
  dependsOn: [
    eventHubNamespaceName_storageAccount
  ]
}

When this is deployed to Azure, it will result in creating something like this:

screenshot of storage account access keys in the Azure Portal

Again, we can see there are connection strings available in the Azure Portal which can be used to access the storage account. However, things aren't quite as simple as previously, in that there doesn't seem to be a way to directly acquire a connection string. What we can do is acquire a key, and construct ourselves a connection string with that. Here's how:

// Determine our connection string

var blobStorageConnectionString = 'DefaultEndpointsProtocol=https;AccountName=${eventHubNamespaceName_storageAccount.name};EndpointSuffix=${environment().suffixes.storage};AccountKey=${listKeys(eventHubNamespaceName_storageAccount.id, eventHubNamespaceName_storageAccount.apiVersion).keys[0].value}'

// Output our variable

output blobStorageConnectionString string = blobStorageConnectionString
output blobContainerName string = blobContainerName

If you just wanted to know how to acquire connection strings from Bicep then you can stop now; we're done! But if you're curious about how the Bicep might connect up to Azure Pipelines... read on.

From Bicep to Azure Pipelines

If we put together our snippets above into a single Bicep file it would look like this:

// Create an event hub namespace

var eventHubNamespaceName = 'evhns-demo'

resource eventHubNamespace 'Microsoft.EventHub/namespaces@2021-01-01-preview' = {
  name: eventHubNamespaceName
  location: resourceGroup().location
  sku: {
    name: 'Standard'
    tier: 'Standard'
    capacity: 1
  }
  properties: {
    zoneRedundant: true
  }
}

// Create an event hub inside the namespace

var eventHubName = 'evh-demo'

resource eventHubNamespaceName_eventHubName 'Microsoft.EventHub/namespaces/eventhubs@2021-01-01-preview' = {
  parent: eventHubNamespace
  name: eventHubName
  properties: {
    messageRetentionInDays: 7
    partitionCount: 1
  }
}

// Grant Listen and Send on our event hub

resource eventHubNamespaceName_eventHubName_ListenSend 'Microsoft.EventHub/namespaces/eventhubs/authorizationRules@2021-01-01-preview' = {
  parent: eventHubNamespaceName_eventHubName
  name: 'ListenSend'
  properties: {
    rights: [
      'Listen'
      'Send'
    ]
  }
  dependsOn: [
    eventHubNamespace
  ]
}

// Create a storage account

var storageAccountName = 'stdemo'

resource eventHubNamespaceName_storageAccount 'Microsoft.Storage/storageAccounts@2021-02-01' = {
  name: storageAccountName
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
    tier: 'Standard'
  }
  kind: 'StorageV2'
  properties: {
    networkAcls: {
      bypass: 'AzureServices'
      defaultAction: 'Allow'
    }
    accessTier: 'Hot'
    allowBlobPublicAccess: false
    minimumTlsVersion: 'TLS1_2'
    allowSharedKeyAccess: true
  }
}

// create a container inside that storage account

var blobContainerName = 'test-container'

resource storageAccountName_default_containerName 'Microsoft.Storage/storageAccounts/blobServices/containers@2021-02-01' = {
  name: '${storageAccountName}/default/${blobContainerName}'
  dependsOn: [
    eventHubNamespaceName_storageAccount
  ]
}

// Determine our connection strings

var blobStorageConnectionString = 'DefaultEndpointsProtocol=https;AccountName=${eventHubNamespaceName_storageAccount.name};EndpointSuffix=${environment().suffixes.storage};AccountKey=${listKeys(eventHubNamespaceName_storageAccount.id, eventHubNamespaceName_storageAccount.apiVersion).keys[0].value}'
var eventHubNamespaceConnectionString = listKeys(eventHubNamespaceName_eventHubName_ListenSend.id, eventHubNamespaceName_eventHubName_ListenSend.apiVersion).primaryConnectionString

// Output our variables

output blobStorageConnectionString string = blobStorageConnectionString
output blobContainerName string = blobContainerName
output eventHubNamespaceConnectionString string = eventHubNamespaceConnectionString
output eventHubName string = eventHubName

This might be consumed in an Azure Pipeline that looks like this:

- bash: az bicep build --file infra/our-test-app/main.bicep
  displayName: 'Compile Bicep to ARM'

- task: AzureResourceManagerTemplateDeployment@3
  name: DeploySharedInfra
  displayName: Deploy Shared ARM Template
  inputs:
    deploymentScope: Resource Group
    azureResourceManagerConnection: ${{ parameters.serviceConnection }}
    subscriptionId: $(subscriptionId)
    action: Create Or Update Resource Group
    resourceGroupName: $(azureResourceGroup)
    location: $(location)
    templateLocation: Linked artifact
    csmFile: 'infra/our-test-app/main.json' # created by bash script
    deploymentMode: Incremental
    deploymentOutputs: deployOutputs

- task: PowerShell@2
  name: 'SetOutputVariables'
  displayName: 'Set Output Variables'
  inputs:
    targetType: inline
    script: |
      $armOutputObj = '$(deployOutputs)' | ConvertFrom-Json
      $armOutputObj.PSObject.Properties | ForEach-Object {
        $keyName = $_.Name
        $value = $_.Value.value

        # Creates a standard pipeline variable
        Write-Output "##vso[task.setvariable variable=$keyName;]$value"

        # Creates an output variable
        Write-Output "##vso[task.setvariable variable=$keyName;issecret=true;isOutput=true]$value"

        # Display keys in pipeline
        Write-Output "output variable: $keyName"
      }
    pwsh: true

Above we can see:

  • the Bicep gets compiled to ARM
  • the ARM is deployed to Azure, with deploymentOutputs being passed out at the end
  • the outputs are turned into secret output variables inside the pipeline (the names of which are printed to the console)

With the above in place, we now have all of our variables in place; blobStorageConnectionString, blobContainerName, eventHubNamespaceConnectionString and eventHubName. These could now be consumed in whatever way is useful. Consider the following:

- task: UseDotNet@2
  displayName: 'Install .NET Core SDK 3.1.x'
  inputs:
    packageType: 'sdk'
    version: 3.1.x

- task: DotNetCoreCLI@2
  displayName: 'dotnet run eventhub test'
  inputs:
    command: 'run'
    arguments: 'eventhub test --eventHubNamespaceConnectionString "$(eventHubNamespaceConnectionString)" --eventHubName "$(eventHubName)" --blobStorageConnectionString "$(blobStorageConnectionString)" --blobContainerName "$(blobContainerName)"'
    workingDirectory: '$(Build.SourcesDirectory)/OurTestApp'

Here we run a .NET application and pass it our connection strings. Please note, there's nothing .NET specific about what we're doing above - it could be any kind of application, bash script or similar that consumes our connection strings. The significant thing is that we can acquire connection strings in an automated fashion, for use in whichever manner pleases us.

Β· 4 min read

The upgrade of Azure Functions from .NET Core 3.1 to .NET 5 is significant. There's an excellent guide for the general steps required to perform the upgrade. However, there are a number of (unrelated) items which are not covered by that post:

  • Query params
  • Dependency Injection
  • Bicep
  • Build

This post will show how to tackle these.

title image showing name of post and the Azure Functions logo

Query params

As part of the move to .NET 5 functions, we say goodbye to HttpRequest and hello to HttpRequestData. Now HttpRequest had a useful Query property which allowed for the simple extraction of query parameters like so:

var from = req.Query["from"];

HttpRequestData has no such property. However, it's straightforward to make our own. It's simply a matter of using System.Web.HttpUtility.ParseQueryString on req.Url.Query and using that:

var query = System.Web.HttpUtility.ParseQueryString(req.Url.Query);
var from = query["from"];
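If you're doing this in more than one function, it might be worth tucking it away in an extension method. A minimal sketch; the class and method names here are my own invention:

using System.Web;
using Microsoft.Azure.Functions.Worker.Http;

public static class HttpRequestDataExtensions
{
    // mimics the old req.Query["from"] experience for HttpRequestData
    public static string GetQueryParam(this HttpRequestData req, string key) =>
        HttpUtility.ParseQueryString(req.Url.Query)[key];
}

Usage then becomes var from = req.GetQueryParam("from");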

Dependency Injection, local development and Azure Application Settings

Dependency Injection has a much more familiar shape in .NET 5 if you've worked with .NET Core web apps. Once again we have a Program.cs file. To build configuration in a way that supports both local development and deployment to Azure, there are a few things to do. When deployed to Azure you'll likely want to read from Azure Application Settings:

screenshot of Azure Application Settings

To tackle both of these, you'll want to use AddJsonFile and AddEnvironmentVariables in ConfigureAppConfiguration. A final Program.cs might look something like this:

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace MyApp
{
    public class Program
    {
        public static Task Main(string[] args)
        {
            var host = new HostBuilder()
                .ConfigureAppConfiguration(configurationBuilder =>
                    configurationBuilder
                        .AddCommandLine(args)
                        // below is for local development
                        .AddJsonFile("local.settings.json", optional: true, reloadOnChange: true)
                        // below is what you need to read Application Settings in Azure
                        .AddEnvironmentVariables()
                )
                .ConfigureFunctionsWorkerDefaults()
                .ConfigureServices(services =>
                {
                    services.AddLogging();
                    services.AddHttpClient();
                })
                .Build();

            return host.RunAsync();
        }
    }
}

With this approach in place, when the application runs, it should construct a configuration driven by all the providers required to run our application.
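For reference, a minimal local.settings.json that the AddJsonFile call above would pick up might look like the following; the MySetting entry is just a placeholder. (Note that settings read via the JSON provider end up keyed under the Values: prefix, e.g. configuration["Values:MySetting"].)

{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "MySetting": "a-local-value"
  }
}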

Bicep

When it comes to deploying to Azure via Bicep, there's some small tweaks required:

  • appSettings.FUNCTIONS_WORKER_RUNTIME becomes dotnet-isolated
  • linuxFxVersion becomes DOTNET-ISOLATED|5.0

Applied to the resource itself the diff looks like this:

resource functionAppName_resource 'Microsoft.Web/sites@2020-12-01' = {
  name: functionAppName
  location: location
  tags: tags_var
  kind: 'functionapp,linux'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    serverFarmId: appServicePlanName_resource.id
    siteConfig: {
      http20Enabled: true
      remoteDebuggingEnabled: false
      minTlsVersion: '1.2'
      appSettings: [
        {
          name: 'FUNCTIONS_EXTENSION_VERSION'
          value: '~3'
        }
        {
          name: 'FUNCTIONS_WORKER_RUNTIME'
-         value: 'dotnet'
+         value: 'dotnet-isolated'
        }
        {
          name: 'AzureWebJobsStorage'
          value: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${listKeys(resourceId('Microsoft.Storage/storageAccounts', storageAccountName), '2019-06-01').keys[0].value};EndpointSuffix=${environment().suffixes.storage}'
        }
      ]
      connectionStrings: [
        {
          name: 'TableStorageConnectionString'
          connectionString: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${listKeys(resourceId('Microsoft.Storage/storageAccounts', storageAccountName), '2019-06-01').keys[0].value};EndpointSuffix=${environment().suffixes.storage}'
        }
      ]
-     linuxFxVersion: 'DOTNETCORE|LTS'
+     linuxFxVersion: 'DOTNET-ISOLATED|5.0'
      ftpsState: 'Disabled'
      managedServiceIdentityId: 1
    }
    clientAffinityEnabled: false
    httpsOnly: true
  }
}

Building .NET 5 functions

Before signing off, there's one more thing to slip in. When attempting to build .NET 5 Azure Functions with the .NET SDK alone, you'll encounter this error:

The framework 'Microsoft.NETCore.App', version '3.1.0' was not found.

Docs on this seem to be pretty short. The closest I came to docs was this comment on Stack Overflow:

To build .NET 5 functions, the .NET Core 3 SDK is required. So this must be installed alongside the 5.0.x sdk.

So with Azure Pipelines you might have something that looks like this:

stages:
  - stage: build
    displayName: build
    pool:
      vmImage: 'ubuntu-latest'
    jobs:
      - job: BuildAndTest
        displayName: 'Build and Test'
        steps:
          # we need .NET Core SDK 3.1 too!
          - task: UseDotNet@2
            displayName: 'Install .NET Core SDK 3.1'
            inputs:
              packageType: 'sdk'
              version: 3.1.x

          - task: UseDotNet@2
            displayName: 'Install .NET SDK 5.0'
            inputs:
              packageType: 'sdk'
              version: 5.0.x

          - task: DotNetCoreCLI@2
            displayName: 'function app test'
            inputs:
              command: test

          - task: DotNetCoreCLI@2
            displayName: 'function app build'
            inputs:
              command: build
              arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)/MyApp'

          - task: DotNetCoreCLI@2
            displayName: 'function app publish'
            inputs:
              command: publish
              arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)/MyApp /p:SourceRevisionId=$(Build.SourceVersion)'
              publishWebProjects: false
              modifyOutputPath: false
              zipAfterPublish: true

          - publish: $(Build.ArtifactStagingDirectory)/MyApp
            artifact: functionapp

Have fun building .NET 5 functions!

Β· 3 min read

Bicep makes Azure Resource Management a great deal simpler than ARM templates. The selling point here is grokkability. This post takes a look at the "Hello World" example recently added to the Bicep repo to appreciate quite what a difference it makes.

hello world bicep

More than configuration

The "Hello World" added to the Bicep repo by Chris Lewis illustrates the simplest usage of Bicep:

This bicep file takes a yourName parameter and adds that to a hello variable and returns the concatenated string as an ARM output.

This is, when you consider it, the very essence of a computer program: taking an input, doing some computation and providing an output. When I think about ARM templates (and because Bicep is transpiled into ARM templates, I mentally bracket the two together) I tend to think about resources being deployed. I focus on configuration, not computation.

This is an imperfect mental model. ARM templates can do so much more than deploy by slinging strings and numbers. Thanks to the wealth of template functions that exist they have much more power. They can do computation.

The Hello World example focuses just on computation.

From terse to verbose

The Hello World example is made up of two significant files:

  1. main.bicep - the bicep code
  2. main.json - the ARM template compiled from the Bicep file

The main.bicep file amounts to 3 lines of code (I have omitted the comment line):

param yourName string
var hello = 'Hello World! - Hi'

output helloWorld string = '${hello} ${yourName}'

  • the first line takes the input of yourName
  • the second line declares a hello variable
  • the third line computes the new value of helloWorld based upon hello and yourName, then passes it as output

Gosh is it ever simple. It's easy to read and it's simple to understand. Even if you don't know Bicep, if you've experience in another language you can likely guess what's happening.
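If you'd like to see the output for yourself, deploying it with the Azure CLI will echo it back; a sketch, where the resource group name is my own invention:

az group create --name rg-hello-bicep --location northeurope

az deployment group create \
  --resource-group rg-hello-bicep \
  --template-file main.bicep \
  --parameters yourName='Reader' \
  --query properties.outputs.helloWorld.value \
  -o tsv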

Let's compare this with the main.json that main.bicep is transpiled into:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "metadata": {
    "_generator": {
      "name": "bicep",
      "version": "dev",
      "templateHash": "6989941473549654446"
    }
  },
  "parameters": {
    "yourName": {
      "type": "string"
    }
  },
  "functions": [],
  "variables": {
    "hello": "Hello World! - Hi"
  },
  "resources": [],
  "outputs": {
    "helloWorld": {
      "type": "string",
      "value": "[format('{0} {1}', variables('hello'), parameters('yourName'))]"
    }
  }
}

The above ARM template expresses exactly the same thing as the Bicep alternative. But that 3 lines of logic has become 27 lines of JSON. We've lost something in the transition. Intent is no longer clear. We've gone from something easy to reason about, to something that is hard to reason about. You need to think a lot less to write the Bicep alternative and that's a good thing.

I was chatting to someone recently who expressed it well by saying:

ARM is the format that the resource providers understand, so really it’s the Azure equivalent of Assembler – and I don’t know anyone who enjoys coding in Assembler.

This is a great example of the value that Bicep provides. If you'd like to play with the Hello World a little, why not take it for a spin in the Bicep playground?

Β· 2 min read

Last time I wrote about how to use the Azure CLI to run Bicep within the context of an Azure Pipeline. The solution was relatively straightforward, and involved using az deployment group create in a task. There's an easier way.

Bicep meet Azure Pipelines

The easier way

The target reader of the previous post was someone who was already using AzureResourceManagerTemplateDeployment@3 in an Azure Pipeline to deploy an ARM template. Rather than replacing your existing AzureResourceManagerTemplateDeployment@3 tasks, all you need do is insert a prior bash step that compiles the Bicep to ARM, which your existing task can then process. It looks like this:

- bash: az bicep build --file infra/app-service/azuredeploy.bicep
  displayName: 'Compile Bicep to ARM'

This will take your Bicep template of azuredeploy.bicep and transpile it into an ARM template named azuredeploy.json, which a subsequent AzureResourceManagerTemplateDeployment@3 task can process. Since this is just exercising the Azure CLI, using bash is not required; PowerShell etc. would also be fine; it's just required that the Azure CLI is available in the pipeline.
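For instance, a PowerShell flavoured equivalent of the step above might look like this:

- pwsh: az bicep build --file infra/app-service/azuredeploy.bicep
  displayName: 'Compile Bicep to ARM'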

In fact this simple task could even be a one-liner if you didn't fancy using the displayName. (Though I say keep it; optimising for readability is generally a good shout.) A full pipeline could look like this:

- bash: az bicep build --file infra/app-service/azuredeploy.bicep
  displayName: 'Compile Bicep to ARM'

- task: AzureResourceManagerTemplateDeployment@3
  displayName: 'Deploy Hello Azure ARM'
  inputs:
    azureResourceManagerConnection: '$(azureSubscription)'
    action: Create Or Update Resource Group
    resourceGroupName: '$(resourceGroupName)'
    location: 'North Europe'
    templateLocation: Linked artifact
    csmFile: 'infra/app-service/azuredeploy.json' # created by bash script
    csmParametersFile: 'infra/app-service/azuredeploy.parameters.json'
    deploymentMode: Incremental
    deploymentOutputs: resourceGroupDeploymentOutputs
    overrideParameters: -applicationName $(Build.Repository.Name)

- pwsh: |
    $outputs = ConvertFrom-Json '$(resourceGroupDeploymentOutputs)'
    foreach ($output in $outputs.PSObject.Properties) {
      Write-Host "##vso[task.setvariable variable=RGDO_$($output.Name)]$($output.Value.value)"
    }
  displayName: 'Turn ARM outputs into variables'

And when it's run, it may result in something along these lines:

Bicep in an Azure Pipeline

So if you want to get using Bicep right now with minimal effort, this is an on-ramp that could work for you! Props to Jamie McCrindle for suggesting this.

Β· 5 min read

Bicep is a terser and more readable alternative language to ARM templates. Running ARM templates in Azure Pipelines is straightforward. However, there isn't yet a first class experience for running Bicep in Azure Pipelines. This post demonstrates an approach that can be used until a Bicep task is available.

Bicep meet Azure Pipelines

Bicep: mostly ARMless

If you've been working with Azure and infrastructure as code, you'll likely have encountered ARM templates. They're a domain specific language that lives inside JSON, used to define the infrastructure that is deployed to Azure; App Services, Key Vaults and the like.

ARM templates are quite verbose and not the easiest thing to read. This is a consequence of being effectively a language nestled inside another language. Bicep is an alternative language which is far more readable. Bicep transpiles down to ARM templates, in the same way that TypeScript transpiles down to JavaScript.

Bicep is quite new, but already it enjoys feature parity with ARM templates (as of v0.3) and ships as part of the Azure CLI. However, as Bicep is new, it doesn't yet have a dedicated Azure Pipelines task for deployment. This should exist in future, perhaps as soon as the v0.4 release. In the meantime there's an alternative way to achieve this which we'll go through.
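Since Bicep ships as part of the Azure CLI, you can check what you have available locally like so:

az bicep install   # installs the Bicep CLI managed by az
az bicep version   # prints the version of the installed Bicep CLI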

App Service with Bicep

Let's take a simple Bicep file, azuredeploy.bicep, which is designed to deploy an App Service resource to Azure. It looks like this:

@description('Tags that our resources need')
param tags object = {
  costCenter: 'todo: replace'
  environment: 'todo: replace'
  application: 'todo: replace with app name'
  description: 'todo: replace'
  managedBy: 'ARM'
}

@minLength(2)
@description('Base name of the resource such as web app name and app service plan')
param applicationName string

@description('Location for all resources.')
param location string = resourceGroup().location

@description('The SKU of App Service Plan')
param sku string

var appServicePlanName_var = 'plan-${applicationName}-${tags.environment}'
var linuxFxVersion = 'DOTNETCORE|5.0'
var fullApplicationName_var = 'app-${applicationName}-${uniqueString(applicationName)}'

resource appServicePlanName 'Microsoft.Web/serverfarms@2020-12-01' = {
  name: appServicePlanName_var
  location: location
  sku: {
    name: sku
  }
  kind: 'linux'
  tags: {
    CostCenter: tags.costCenter
    Environment: tags.environment
    Description: tags.description
    ManagedBy: tags.managedBy
  }
  properties: {
    reserved: true
  }
}

resource fullApplicationName 'Microsoft.Web/sites@2020-12-01' = {
  name: fullApplicationName_var
  location: location
  kind: 'app'
  tags: {
    CostCenter: tags.costCenter
    Environment: tags.environment
    Description: tags.description
    ManagedBy: tags.managedBy
  }
  properties: {
    serverFarmId: appServicePlanName.id
    clientAffinityEnabled: true
    siteConfig: {
      appSettings: []
      linuxFxVersion: linuxFxVersion
      alwaysOn: false
      ftpsState: 'Disabled'
      http20Enabled: true
      minTlsVersion: '1.2'
      remoteDebuggingEnabled: false
    }
    httpsOnly: true
  }
  identity: {
    type: 'SystemAssigned'
  }
}

output fullApplicationName string = fullApplicationName_var

When transpiled down to an ARM template, this Bicep file more than doubles in size:

  • azuredeploy.bicep - 1782 bytes
  • azuredeploy.json - 3863 bytes

This tells you something of the advantage of Bicep. The template comes with an associated azuredeploy.parameters.json file:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "tags": {
      "value": {
        "costCenter": "8888",
        "environment": "stg",
        "application": "hello-azure",
        "description": "App Service for hello-azure",
        "managedBy": "ARM"
      }
    },
    "sku": {
      "value": "B1"
    }
  }
}

It's worth remembering that you can use the same parameters files with Bicep that you can use with ARM templates. This is great for minimising friction when it comes to migrating.

Bicep in azure-pipelines.yml

Now we have our Bicep file, we want to execute it from the context of an Azure Pipeline. If we were working directly with the ARM template we'd likely have something like this in place:

- task: AzureResourceManagerTemplateDeployment@3
  displayName: 'Deploy Hello Azure ARM'
  inputs:
    azureResourceManagerConnection: '$(azureSubscription)'
    action: Create Or Update Resource Group
    resourceGroupName: '$(resourceGroupName)'
    location: 'North Europe'
    templateLocation: Linked artifact
    csmFile: 'infra/app-service/azuredeploy.json'
    csmParametersFile: 'infra/app-service/azuredeploy.parameters.json'
    deploymentMode: Incremental
    deploymentOutputs: resourceGroupDeploymentOutputs
    overrideParameters: -applicationName $(Build.Repository.Name)

- pwsh: |
    $outputs = ConvertFrom-Json '$(resourceGroupDeploymentOutputs)'
    foreach ($output in $outputs.PSObject.Properties) {
      Write-Host "##vso[task.setvariable variable=RGDO_$($output.Name)]$($output.Value.value)"
    }
  displayName: 'Turn ARM outputs into variables'

There are two tasks above. The first is the native task for ARM deployments, which takes our ARM template and our parameters and deploys them. The second task takes the output variables from the first task and converts them into Azure Pipeline variables that can be referenced later in the pipeline. In this case this variablifies our fullApplicationName output.

There is, as yet, no Bicep equivalent of that task. Though it's coming. In the meantime, the marvellous Alex Frankel advised:

I'd recommend using the Azure CLI task to deploy. As long as that task is updated to Az CLI version 2.20 or later, it will automatically install the bicep CLI when calling az deployment group create -f main.bicep.

Let's give it a go!

- task: AzureCLI@2
  displayName: 'Deploy Hello Azure Bicep'
  inputs:
    azureSubscription: '$(azureSubscription)'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az --version

      echo "az deployment group create --resource-group '$(resourceGroupName)' --name appservicedeploy"
      az deployment group create --resource-group '$(resourceGroupName)' --name appservicedeploy \
        --template-file infra/app-service/azuredeploy.bicep \
        --parameters infra/app-service/azuredeploy.parameters.json \
        --parameters applicationName='$(Build.Repository.Name)'

      echo "az deployment group show --resource-group '$(resourceGroupName)' --name appservicedeploy"
      deploymentoutputs=$(az deployment group show --resource-group '$(resourceGroupName)' --name appservicedeploy \
        --query properties.outputs)

      echo 'convert outputs to variables'
      echo $deploymentoutputs | jq -c '. | to_entries[] | [.key, .value.value]' |
        while IFS=$"\n" read -r c; do
          outputname=$(echo "$c" | jq -r '.[0]')
          outputvalue=$(echo "$c" | jq -r '.[1]')
          echo "setting variable RGDO_$outputname=$outputvalue"
          echo "##vso[task.setvariable variable=RGDO_$outputname]$outputvalue"
        done

The above is just a single Azure CLI task (as advised). It invokes az deployment group create passing the relevant parameters. It then acquires the output properties using az deployment group show. Finally it once again converts these outputs to Azure Pipeline variables with some jq smarts.

This works right now, and running it results in something like the output below. So if you're excited about Bicep and don't want to wait for 0.4 to start moving on this, then this can get you going. To track the progress of the custom task, keep an eye on this issue.

Bicep in an Azure Pipeline

Update: an even simpler alternative

There is even a simpler way to do this which I discovered subsequent to writing this. Have a read.