

· 22 min read

This post shows how to build and deploy two Azure Container Apps using Bicep and GitHub Actions. The apps will communicate using dapr and be built in VS Code using a devcontainer. We'll be able to debug them in VS Code and run them with docker-compose.

This follows on from the previous post which built and deployed a simple web application to Azure Container Apps using Bicep and GitHub Actions using the GitHub container registry.

title image reading "Azure Container Apps dapr, devcontainer, debug and deploy"  with the dapr, Bicep, Azure Container Apps and GitHub Actions logos

What we're going to build

As an engineer, I'm productive when:

  • Integrating different services together is a turnkey experience and
  • I'm able to easily debug my code

I've found that using dapr and VS Code I'm able to achieve both of these goals. I can build an application made up of multiple services, compose them together using dapr and deploy them to Azure Container Apps with relative ease.

In this post we're going to build an example of that from scratch, with a koa/node.js (built with TypeScript) front end that will communicate with a dotnet service via dapr.

All the work done in this post can be found in the dapr-devcontainer-debug-and-deploy repo. As a note, if you're interested in this topic it's also worth looking at the Azure-Samples/container-apps-store-api-microservice repo.

Setting up our devcontainer

The first thing we'll do is set up our devcontainer. We're going to use a tweaked version of the docker-in-docker image from the vscode-dev-containers repo.

In the root of our project we'll create a .devcontainer folder, and within that a library-scripts folder. There are a number of common scripts in the vscode-dev-containers repo which we're going to lift and shift into our library-scripts folder.

In the .devcontainer folder we want to create a Dockerfile:

# [Choice] .NET version: 6.0, 5.0, 3.1, 2.1
ARG VARIANT=3.1
FROM mcr.microsoft.com/vscode/devcontainers/dotnet:0-${VARIANT}
RUN su vscode -c "umask 0002 && dotnet tool install -g Microsoft.Tye --version \"0.10.0-alpha.21420.1\" 2>&1"

# [Choice] Node.js version: none, lts/*, 16, 14, 12, 10
ARG NODE_VERSION="14"
RUN if [ "${NODE_VERSION}" != "none" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi

# [Option] Install Azure CLI
ARG INSTALL_AZURE_CLI="false"
COPY library-scripts/azcli-debian.sh /tmp/library-scripts/
RUN if [ "$INSTALL_AZURE_CLI" = "true" ]; then bash /tmp/library-scripts/azcli-debian.sh; fi \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/* /tmp/library-scripts \
&& az bicep install

# [Option] Enable non-root Docker access in container
ARG ENABLE_NONROOT_DOCKER="true"
# [Option] Use the OSS Moby CLI instead of the licensed Docker CLI
ARG USE_MOBY="true"
# [Option] Engine/CLI Version
ARG DOCKER_VERSION="latest"

# Enable new "BUILDKIT" mode for Docker CLI
ENV DOCKER_BUILDKIT=1

ARG USERNAME=vscode

# Install needed packages and setup non-root user. Use a separate RUN statement to add your
# own dependencies. A user of "automatic" attempts to reuse an user ID if one already exists.
COPY library-scripts/docker-in-docker-debian.sh /tmp/library-scripts/
RUN apt-get update \
&& apt-get install python3-pip -y \
# Use Docker script from script library to set things up
&& /bin/bash /tmp/library-scripts/docker-in-docker-debian.sh "${ENABLE_NONROOT_DOCKER}" "${USERNAME}" "${USE_MOBY}" "${DOCKER_VERSION}"

# Install Dapr
RUN wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash \
# Clean up
&& apt-get autoremove -y && apt-get clean -y && rm -rf /var/lib/apt/lists/* /tmp/library-scripts/

# Add daprd to the path for the VS Code Dapr extension.
ENV PATH="${PATH}:/home/${USERNAME}/.dapr/bin"

# Install Tye
ENV PATH=/home/${USERNAME}/.dotnet/tools:$PATH

VOLUME [ "/var/lib/docker" ]

# Setting the ENTRYPOINT to docker-init.sh will configure non-root access
# to the Docker socket. The script will also execute CMD as needed.
ENTRYPOINT [ "/usr/local/share/docker-init.sh" ]
CMD [ "sleep", "infinity" ]

# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>

The above is a loose riff on the docker-in-docker Dockerfile, lovingly mixed with the Azure-Samples container-apps Dockerfile.

It installs the following:

  • .NET
  • Node.js
  • the Azure CLI
  • Docker
  • Bicep
  • Dapr

Now we have our Dockerfile, we need a devcontainer.json to go with it:

// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.205.0/containers/dapr-dotnet
{
"name": "dapr",
"build": {
"dockerfile": "Dockerfile",
"args": {
// Update 'VARIANT' to pick a .NET Core version: 3.1, 5.0, 6.0
"VARIANT": "6.0",
// Options
"NODE_VERSION": "lts/*",
"INSTALL_AZURE_CLI": "true"
}
},
"runArgs": ["--init", "--privileged"],
"mounts": ["source=dind-var-lib-docker,target=/var/lib/docker,type=volume"],
"overrideCommand": false,

// Use this environment variable if you need to bind mount your local source code into a new container.
"remoteEnv": {
"LOCAL_WORKSPACE_FOLDER": "${localWorkspaceFolder}",
"PATH": "/home/vscode/.dapr/bin/:/home/vscode/.dotnet/tools:$PATH${containerEnv:PATH}"
},

// Set *default* container specific settings.json values on container create.
"settings": {},

// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"ms-azuretools.vscode-dapr",
"ms-azuretools.vscode-docker",
"ms-dotnettools.csharp",
"ms-vscode.azurecli",
"ms-azuretools.vscode-bicep"
],

// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],

// Ensure Dapr is running on opening the container
"postCreateCommand": "dapr uninstall --all && dapr init",

// Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
"remoteUser": "vscode",
"features": {
"azure-cli": "latest"
}
}

The above will:

  • install the Node.js LTS release (16 at the time of writing) / dotnet 6 and the latest Azure CLI
  • install a number of VS Code extensions related to dapr / Docker / Bicep / Azure / C#
  • install dapr when the container starts

We're ready! Reopen your repo in a container (it will take a while first time out) and you'll be ready to go.

Create a dotnet service

Now we're going to create a dotnet service. The aim of this post is not to build a specific application, but rather to demonstrate how simple service to service communication is with dapr. So we'll use the web api template that ships with dotnet 6. That arrives with a fake weather API included, so we'll name our service accordingly:

dotnet new webapi -o WeatherService

Inside the created Program.cs, find the following line and delete it:

app.UseHttpsRedirection();

HTTPS is important; however, Azure Container Apps is going to tackle that for us.

Create a Node.js service (with Koa)

Creating our dotnet service was very simple. We're now going to create a web app with Node.js and Koa that calls our dotnet service. This will be a little more complicated - but still surprisingly simple thanks to the great API choices of dapr.

Let's make that service:

mkdir WebService
cd WebService
npm init -y
npm install koa axios --save
npm install @types/koa @types/node @types/axios typescript --save-dev

We're installing the following:

  • koa - the web framework we're going to use
  • axios - to make calls to our dotnet service via HTTP / dapr
  • TypeScript and associated type definitions, so we can take advantage of static typing. Admittedly since we're building a minimal example this is not super beneficial; but TS makes me happy and I'd certainly want static typing in place if going beyond a simple example. Start as you mean to go on.

We'll create a tsconfig.json:

{
"compilerOptions": {
"esModuleInterop": true,
"module": "commonjs",
"target": "es2017",
"noImplicitAny": true,
"outDir": "./dist",
"strict": true,
"sourceMap": true
}
}

We'll update the scripts section of our package.json like so:

  "scripts": {
"build": "tsc",
"start": "node dist/index.js"
},

So we can build and start our web app. Now let's write it!

We're going to create an index.ts file:

import Koa from 'koa';
import axios from 'axios';

// How we connect to the dotnet service with dapr
const daprSidecarBaseUrl = `http://localhost:${
process.env.DAPR_HTTP_PORT || 3501
}`;
// app id header for service discovery
const weatherServiceAppIdHeaders = {
'dapr-app-id': process.env.WEATHER_SERVICE_NAME || 'dotnet-app',
};

const app = new Koa();

app.use(async (ctx) => {
try {
const data = await axios.get<WeatherForecast[]>(
`${daprSidecarBaseUrl}/weatherForecast`,
{
headers: weatherServiceAppIdHeaders,
}
);

ctx.body = `And the weather today will be ${data.data[0].summary}`;
} catch (exc) {
console.error('Problem calling weather service', exc);
ctx.body = 'Something went wrong!';
}
});

const portNumber = 3000;
app.listen(portNumber);
console.log(`listening on port ${portNumber}`);

interface WeatherForecast {
date: string;
temperatureC: number;
temperatureF: number;
summary: string;
}

The above code is fairly simple but is achieving quite a lot. It:

  • uses various environment variables to construct the URLs / headers which allow connecting to the dapr sidecar running alongside the app, and consequently to the weather service through the dapr sidecar running alongside the weather service. We're going to set up the environment variables which this code relies upon later.
  • spins up a web server with koa on port 3000
  • that web server, when sent an HTTP request, will call the weatherForecast endpoint of the dotnet app. It will grab what comes back, take the first entry in there and surface that up as the weather forecast.
  • We're also defining a WeatherForecast interface to represent the type of the data that comes back from the dotnet service

It's worth dwelling for a moment on the simplicity that dapr is affording us here. We're able to make HTTP requests to our dotnet service just as if it were any other service running locally. What's actually happening is illustrated by the diagram below:

a diagram showing traffic going from the web service to the weather service and back again via dapr

We're making HTTP requests from the web service, which look like they're going directly to the weather service. But in actual fact, they're being routed through dapr sidecars until they reach their destination. Why is this fantastic? Well, there are two things we aren't having to think about here:

  • certificates
  • inter-service authentication

Both of these can be complex and burn a large amount of engineering time. Because we're using dapr it's not a problem we have to solve. Isn't that great?
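As an aside, the `dapr-app-id` header used in index.ts is one of two equivalent ways to address a dapr service over HTTP; the other is dapr's explicit invoke API path. A small sketch of both forms (assuming the sidecar port 3501 and `dotnet-app` appId used throughout this post):

```typescript
// Sketch: two equivalent ways to invoke a dapr service over HTTP.
// Assumes a local sidecar on port 3501 and a target appId of 'dotnet-app',
// matching the configuration used elsewhere in this post.
const daprPort = process.env.DAPR_HTTP_PORT || '3501';
const appId = 'dotnet-app';

// 1. "URL rewriting" style: call the sidecar as if it were the service
//    itself and name the target via the dapr-app-id header
const headerStyleUrl = `http://localhost:${daprPort}/weatherForecast`;
const headerStyleHeaders = { 'dapr-app-id': appId };

// 2. Explicit invoke API style: encode the target appId in the path
const invokeStyleUrl = `http://localhost:${daprPort}/v1.0/invoke/${appId}/method/weatherForecast`;
```

Either URL could be passed to axios; the header style keeps application code looking like a plain HTTP call, which is why the post uses it.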

Debugging dapr in VS Code

We want to be able to debug this code. We can achieve that in VS Code by setting a launch.json and a tasks.json file.

First of all we'll create a launch.json file in the .vscode folder of our repo:

{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"compounds": [
{
"name": "All Container Apps",
"configurations": ["WeatherService", "WebService"],
"presentation": {
"hidden": false,
"group": "Containers",
"order": 1
}
}
],
"configurations": [
{
"name": "WeatherService",
"type": "coreclr",
"request": "launch",
"preLaunchTask": "daprd-debug-dotnet",
"postDebugTask": "daprd-down-dotnet",
"program": "${workspaceFolder}/WeatherService/bin/Debug/net6.0/WeatherService.dll",
"args": [],
"cwd": "${workspaceFolder}",
"stopAtEntry": false,
"env": {
"DOTNET_ENVIRONMENT": "Development",
"DOTNET_URLS": "http://localhost:5000",
"DAPR_HTTP_PORT": "3500",
"DAPR_GRPC_PORT": "50000",
"DAPR_METRICS_PORT": "9090"
}
},

{
"name": "WebService",
"type": "node",
"request": "launch",
"preLaunchTask": "daprd-debug-node",
"postDebugTask": "daprd-down-node",
"program": "${workspaceFolder}/WebService/index.ts",
"cwd": "${workspaceFolder}",
"env": {
"NODE_ENV": "development",
"PORT": "3000",
"DAPR_HTTP_PORT": "3501",
"DAPR_GRPC_PORT": "50001",
"DAPR_METRICS_PORT": "9091",
"WEATHER_SERVICE_NAME": "dotnet-app"
},
"protocol": "inspector",
"outFiles": ["${workspaceFolder}/WebService/dist/**/*.js"],
"serverReadyAction": {
"action": "openExternally"
}
}
]
}

The things to note about this are:

  • we create a Node.js ("WebService") and a dotnet ("WeatherService") configuration. These are referenced by the All Container Apps compound. Kicking off that will start both the Node.js and the dotnet apps.
  • The Node.js app runs a daprd-debug-node task prior to launch and a daprd-down-node task when debugging completes. Comparable tasks are run by the dotnet container - we'll look at these in a moment.
  • Various environment variables are configured, most of which control the behaviour of dapr. When we're debugging locally we'll be using some non-typical ports to accommodate multiple dapr sidecars being in play at the same time. Note also the "WEATHER_SERVICE_NAME": "dotnet-app" - it's this that allows the WebService to communicate with the WeatherService - dotnet-app is the appId used to identify a service with dapr. We'll see that as we configure our tasks.json.
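The port allocation above can be summarised as follows. This is a hypothetical sketch (none of these names appear in the repo); the point is that each service gets its own sidecar, so every app/HTTP/gRPC/metrics port must be unique across the pair:

```typescript
// Sketch: the local port convention shared by launch.json and tasks.json.
// Each service runs alongside its own dapr sidecar, so no port may be
// reused or the second sidecar will fail to bind.
interface DaprPorts {
  appPort: number;     // the service itself
  httpPort: number;    // sidecar HTTP API (what DAPR_HTTP_PORT points at)
  grpcPort: number;    // sidecar gRPC API
  metricsPort: number; // sidecar metrics endpoint
}

const sidecarPorts: Record<string, DaprPorts> = {
  'dotnet-app': { appPort: 5000, httpPort: 3500, grpcPort: 50000, metricsPort: 9090 },
  'node-app':   { appPort: 3000, httpPort: 3501, grpcPort: 50001, metricsPort: 9091 },
};

const allPorts = Object.values(sidecarPorts).flatMap((p) => [
  p.appPort, p.httpPort, p.grpcPort, p.metricsPort,
]);
// true when every sidecar has its own distinct set of ports
const portsAreUnique = new Set(allPorts).size === allPorts.length;
```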

Here's the tasks.json we must make:

{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"label": "dotnet-build",
"command": "dotnet",
"type": "process",
"args": [
"build",
"${workspaceFolder}/WeatherService/WeatherService.csproj",
"/property:GenerateFullPaths=true",
"/consoleloggerparameters:NoSummary"
],
"problemMatcher": "$msCompile"
},
{
"label": "daprd-debug-dotnet",
"appId": "dotnet-app",
"appPort": 5000,
"httpPort": 3500,
"grpcPort": 50000,
"metricsPort": 9090,
"type": "daprd",
"dependsOn": ["dotnet-build"]
},
{
"label": "daprd-down-dotnet",
"appId": "dotnet-app",
"type": "daprd-down"
},

{
"label": "npm-install",
"type": "shell",
"command": "npm install",
"options": {
"cwd": "${workspaceFolder}/WebService"
}
},
{
"label": "webservice-build",
"type": "typescript",
"tsconfig": "WebService/tsconfig.json",
"problemMatcher": ["$tsc"],
"group": {
"kind": "build",
"isDefault": true
},
"dependsOn": ["npm-install"]
},
{
"label": "daprd-debug-node",
"appId": "node-app",
"appPort": 3000,
"httpPort": 3501,
"grpcPort": 50001,
"metricsPort": 9091,
"type": "daprd",
"dependsOn": ["webservice-build"]
},
{
"label": "daprd-down-node",
"appId": "node-app",
"type": "daprd-down"
}
]
}

There are two sets of tasks here: one for the WeatherService and one for the WebService. You'll see some commonalities between them. For each service there's a daprd task that runs just before debugging kicks off; it depends upon the relevant service being built and passes the various ports for the dapr sidecar to run on. To go with that, there's a daprd-down task for each service that runs when debugging finishes and shuts down dapr.

We're now ready to debug our app. Let's hit F5.

screenshot of debugging the index.ts file in VS Code

And if we look at our browser:

screenshot of browsing Firefox at http://localhost:3000 and seeing "And the weather today will be Freezing" in the output

It works! We're running a Node.js WebService which, when called, is communicating with our dotnet WeatherService and surfacing up the results. Brilliant!

Containerising our services with Docker

Before we can deploy each of our services, they need to be containerised.

First let's add a Dockerfile to the WeatherService folder:

FROM mcr.microsoft.com/dotnet/sdk:6.0 as build
WORKDIR /app
COPY . .
RUN dotnet restore
RUN dotnet publish -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0 as runtime
WORKDIR /app
COPY --from=build /app/publish /app

ENV DOTNET_ENVIRONMENT=Production
ENV ASPNETCORE_URLS='http://+:5000'
EXPOSE 5000
ENTRYPOINT [ "dotnet", "/app/WeatherService.dll" ]

Then we'll add a Dockerfile to the WebService folder:

FROM node:16 AS build
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
RUN npm install

COPY . .
RUN npm run build

FROM node:16 AS runtime
WORKDIR /app
COPY --from=build /app/dist /app
COPY --from=build /app/package.json /app
COPY --from=build /app/package-lock.json /app
RUN npm install

ENV NODE_ENV production
EXPOSE 3000
ENTRYPOINT [ "node", "/app/index.js" ]

Likely these Dockerfiles could be optimised further, but we're not focused on that just now. What we have are two simple Dockerfiles that will give us images we can run. Given that one service depends on the other, it makes sense to bring them together with a docker-compose.yml file which we'll place in the root of the repo:

version: '3.4'

services:
weatherservice:
image: ${REGISTRY:-weatherservice}:${TAG:-latest}
build:
context: ./WeatherService
dockerfile: Dockerfile
ports:
- '50000:50000' # Dapr instances communicate over gRPC so we need to expose the gRPC port
environment:
DOTNET_ENVIRONMENT: 'Development'
ASPNETCORE_URLS: 'http://+:5000'
DAPR_HTTP_PORT: 3500
DAPR_GRPC_PORT: 50000
DAPR_METRICS_PORT: 9090

weatherservice-dapr:
image: 'daprio/daprd:latest'
command:
[
'./daprd',
'-app-id',
'dotnet-app',
'-app-port',
'5000',
'-dapr-http-port',
'3500',
'-placement-host-address',
'placement:50006',
]
network_mode: 'service:weatherservice'
depends_on:
- weatherservice

webservice:
image: ${REGISTRY:-webservice}:${TAG:-latest}
ports:
- '3000:3000' # The web front end port
- '50001:50001' # Dapr instances communicate over gRPC so we need to expose the gRPC port
build:
context: ./WebService
dockerfile: Dockerfile
environment:
NODE_ENV: 'development'
PORT: '3000'
DAPR_HTTP_PORT: 3501
DAPR_GRPC_PORT: 50001
DAPR_METRICS_PORT: 9091
WEATHER_SERVICE_NAME: 'dotnet-app'

webservice-dapr:
image: 'daprio/daprd:latest'
command: [
'./daprd',
'-app-id',
'node-app',
'-app-port',
'3000',
'-dapr-http-port',
'3501',
'-placement-host-address',
'placement:50006', # Dapr's placement service can be reached via the docker DNS entry
]
network_mode: 'service:webservice'
depends_on:
- webservice

dapr-placement:
image: 'daprio/dapr:latest'
command: ['./placement', '-port', '50006']
ports:
- '50006:50006'

With this in place we can run docker-compose up and bring up our application locally.

And now we have docker images built, we can look at deploying them.

Deploying to Azure

At this point we have pretty much everything we need in terms of application code and the ability to build and debug it. Now we'd like to deploy it to Azure.

Let's begin with the Bicep required to deploy our Azure Container Apps.

In our repository we'll create an infra directory, into which we'll place a main.bicep file which will contain our Bicep template:

param branchName string

param webServiceImage string
param webServicePort int
param webServiceIsExternalIngress bool

param weatherServiceImage string
param weatherServicePort int
param weatherServiceIsExternalIngress bool

param containerRegistry string
param containerRegistryUsername string
@secure()
param containerRegistryPassword string

param tags object

var location = resourceGroup().location
var minReplicas = 0
var maxReplicas = 1

var branch = toLower(last(split(branchName, '/')))

var environmentName = '${branch}-env'
var workspaceName = '${branch}-log-analytics'
var appInsightsName = '${branch}-app-insights'
var webServiceContainerAppName = '${branch}-web'
var weatherServiceContainerAppName = '${branch}-weather'

var containerRegistryPasswordRef = 'container-registry-password'

resource workspace 'Microsoft.OperationalInsights/[email protected]' = {
name: workspaceName
location: location
tags: tags
properties: {
sku: {
name: 'PerGB2018'
}
retentionInDays: 30
workspaceCapping: {}
}
}

resource appInsights 'Microsoft.Insights/[email protected]' = {
name: appInsightsName
location: location
tags: tags
kind: 'web'
properties: {
Application_Type: 'web'
Flow_Type: 'Bluefield'
}
}

resource environment 'Microsoft.Web/[email protected]' = {
name: environmentName
kind: 'containerenvironment'
location: location
tags: tags
properties: {
type: 'managed'
internalLoadBalancerEnabled: false
appLogsConfiguration: {
destination: 'log-analytics'
logAnalyticsConfiguration: {
customerId: workspace.properties.customerId
sharedKey: listKeys(workspace.id, workspace.apiVersion).primarySharedKey
}
}
containerAppsConfiguration: {
daprAIInstrumentationKey: appInsights.properties.InstrumentationKey
}
}
}

resource weatherServiceContainerApp 'Microsoft.Web/[email protected]' = {
name: weatherServiceContainerAppName
kind: 'containerapps'
tags: tags
location: location
properties: {
kubeEnvironmentId: environment.id
configuration: {
secrets: [
{
name: containerRegistryPasswordRef
value: containerRegistryPassword
}
]
registries: [
{
server: containerRegistry
username: containerRegistryUsername
passwordSecretRef: containerRegistryPasswordRef
}
]
ingress: {
external: weatherServiceIsExternalIngress
targetPort: weatherServicePort
}
}
template: {
containers: [
{
image: weatherServiceImage
name: weatherServiceContainerAppName
transport: 'auto'
}
]
scale: {
minReplicas: minReplicas
maxReplicas: maxReplicas
}
dapr: {
enabled: true
appPort: weatherServicePort
appId: weatherServiceContainerAppName
}
}
}
}

resource webServiceContainerApp 'Microsoft.Web/[email protected]' = {
name: webServiceContainerAppName
kind: 'containerapps'
tags: tags
location: location
properties: {
kubeEnvironmentId: environment.id
configuration: {
secrets: [
{
name: containerRegistryPasswordRef
value: containerRegistryPassword
}
]
registries: [
{
server: containerRegistry
username: containerRegistryUsername
passwordSecretRef: containerRegistryPasswordRef
}
]
ingress: {
external: webServiceIsExternalIngress
targetPort: webServicePort
}
}
template: {
containers: [
{
image: webServiceImage
name: webServiceContainerAppName
transport: 'auto'
env: [
{
name: 'WEATHER_SERVICE_NAME'
value: weatherServiceContainerAppName
}
]
}
]
scale: {
minReplicas: minReplicas
maxReplicas: maxReplicas
}
dapr: {
enabled: true
appPort: webServicePort
appId: webServiceContainerAppName
}
}
}
}

output webServiceUrl string = webServiceContainerApp.properties.latestRevisionFqdn

This will deploy two Container Apps: one for our WebService and one for our WeatherService. Alongside those we have resources for logging (a Log Analytics workspace and Application Insights) and the Container Apps environment itself.

Setting up a resource group

With our Bicep in place, we're going to need a resource group to send it to. Right now, Azure Container Apps aren't available everywhere. So we're going to create ourselves a resource group in North Europe which does support ACAs:

az group create -g rg-aca -l northeurope

Secrets for GitHub Actions

We're aiming to set up a GitHub Action to handle our deployment. This will depend upon a number of secrets:

Screenshot of the secrets in the GitHub website that we need to create

We'll need to create each of these secrets.

AZURE_CREDENTIALS - GitHub logging into Azure

So that GitHub Actions can interact with Azure on our behalf, we need to provide it with some credentials. We'll use the Azure CLI to create these:

az ad sp create-for-rbac --name "myApp" --role contributor \
--scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} \
--sdk-auth

Remember to replace the {subscription-id} with your subscription id and {resource-group} with the name of your resource group (rg-aca if you're following along). This command will pump out a lump of JSON that looks something like this:

{
"clientId": "a-client-id",
"clientSecret": "a-client-secret",
"subscriptionId": "a-subscription-id",
"tenantId": "a-tenant-id",
"activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
"resourceManagerEndpointUrl": "https://management.azure.com/",
"activeDirectoryGraphResourceId": "https://graph.windows.net/",
"sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
"galleryEndpointUrl": "https://gallery.azure.com/",
"managementEndpointUrl": "https://management.core.windows.net/"
}

Take this and save it as the AZURE_CREDENTIALS secret in your GitHub repository.

PACKAGES_TOKEN - Azure accessing the GitHub container registry

We also need a secret for accessing packages from Azure. We're going to be publishing packages to the GitHub container registry, and Azure will need to access it when we deploy. ACA deployment works by telling Azure where to look for an image and providing any credentials necessary to pull it. To facilitate this we'll set up a PACKAGES_TOKEN secret: a GitHub personal access token with the read:packages scope. Follow the instructions here to create the token.

Deploying with GitHub Actions

With our secrets configured, we're now well placed to write our GitHub Action. We'll create a .github/workflows/build-and-deploy.yaml file in our repository and populate it thusly:

# yaml-language-server: $schema=./build.yaml
name: Build and Deploy
on:
# Trigger the workflow on push or pull request,
# but only for the main branch
push:
branches:
- main
pull_request:
branches:
- main
# Publish semver tags as releases.
tags: ['v*.*.*']
workflow_dispatch:

env:
RESOURCE_GROUP: rg-aca
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}

jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
services:
[
{ 'imageName': 'node-service', 'directory': './WebService' },
{ 'imageName': 'dotnet-service', 'directory': './WeatherService' },
]
permissions:
contents: read
packages: write
outputs:
image-node: ${{ steps.image-tag.outputs.image-node-service }}
image-dotnet: ${{ steps.image-tag.outputs.image-dotnet-service }}
steps:
- name: Checkout repository
uses: actions/[email protected]

# Login against a Docker registry except on PR
# https://github.com/docker/login-action
- name: Log into registry ${{ env.REGISTRY }}
if: github.event_name != 'pull_request'
uses: docker/login-[email protected]
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}

# Extract metadata (tags, labels) for Docker
# https://github.com/docker/metadata-action
- name: Extract Docker metadata
id: meta
uses: docker/metadata-[email protected]
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}/${{ matrix.services.imageName }}
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=ref,event=branch
type=sha

# Build and push Docker image with Buildx (don't push on PR)
# https://github.com/docker/build-push-action
- name: Build and push Docker image
uses: docker/build-push-[email protected]
with:
context: ${{ matrix.services.directory }}
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

- name: Output image tag
id: image-tag
run: echo "::set-output name=image-${{ matrix.services.imageName }}::${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}/${{ matrix.services.imageName }}:sha-$(git rev-parse --short HEAD)" | tr '[:upper:]' '[:lower:]'

deploy:
runs-on: ubuntu-latest
needs: [build]
steps:
- name: Checkout repository
uses: actions/[email protected]

- name: Azure Login
uses: azure/[email protected]
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}

- name: Deploy bicep
uses: azure/[email protected]
if: github.event_name != 'pull_request'
with:
inlineScript: |
REF_SHA='${{ github.ref }}.${{ github.sha }}'
DEPLOYMENT_NAME="${REF_SHA////-}"
echo "DEPLOYMENT_NAME=$DEPLOYMENT_NAME"

TAGS='{"owner":"johnnyreilly", "email":"[email protected]"}'
az deployment group create \
--resource-group ${{ env.RESOURCE_GROUP }} \
--name "$DEPLOYMENT_NAME" \
--template-file ./infra/main.bicep \
--parameters \
branchName='${{ github.event.number == 0 && 'main' || format('pr-{0}', github.event.number) }}' \
webServiceImage='${{ needs.build.outputs.image-node }}' \
webServicePort=3000 \
webServiceIsExternalIngress=true \
weatherServiceImage='${{ needs.build.outputs.image-dotnet }}' \
weatherServicePort=5000 \
weatherServiceIsExternalIngress=false \
containerRegistry=${{ env.REGISTRY }} \
containerRegistryUsername=${{ github.actor }} \
containerRegistryPassword=${{ secrets.PACKAGES_TOKEN }} \
tags="$TAGS"

There's a lot in this workflow. Let's dig into the build and deploy jobs to see what's happening.

build - building our image

The build job is all about building our container images and pushing them to the GitHub container registry. It's heavily inspired by Jeff Hollan's Azure sample app GHA. When we look at the strategy we can see a matrix of services consisting of two services: our node app and our dotnet app:

strategy:
matrix:
services:
[
{ 'imageName': 'node-service', 'directory': './WebService' },
{ 'imageName': 'dotnet-service', 'directory': './WeatherService' },
]

This is a matrix because a typical Azure Container Apps use case will involve multiple containers - just as this one does. The outputs section pumps out the details of our image-node and image-dotnet images for use later:

outputs:
image-node: ${{ steps.image-tag.outputs.image-node-service }}
image-dotnet: ${{ steps.image-tag.outputs.image-dotnet-service }}

With that understanding in place, let's examine what each of the steps in the build job does:

  • Log into registry - logs into the GitHub container registry
  • Extract Docker metadata - acquire tags which will be used for versioning
  • Build and push Docker image - build the docker image and if this is not a PR: tag, label and push it to the registry
  • Output image tag - write out the image tag for usage in deployment
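The "Output image tag" step above is doing a little string manipulation worth unpacking: it assembles the fully qualified image reference and pipes it through `tr` to lowercase it (Docker rejects image references with uppercase characters, which matters when a GitHub username like the repository owner's has capitals). A sketch of the equivalent logic, with `shortSha` standing in for `$(git rev-parse --short HEAD)` and the repository name purely illustrative:

```typescript
// Sketch of what the "Output image tag" step computes: the fully qualified,
// lowercased image reference for one matrix entry.
function imageTag(
  registry: string,   // e.g. the REGISTRY env var, ghcr.io
  repository: string, // the github.repository value, owner/repo
  imageName: string,  // the matrix entry's imageName
  shortSha: string,   // stand-in for $(git rev-parse --short HEAD)
): string {
  // tr '[:upper:]' '[:lower:]' in the workflow ≙ toLowerCase() here
  return `${registry}/${repository}/${imageName}:sha-${shortSha}`.toLowerCase();
}

// Mixed-case repository names must be lowercased before Docker accepts them
const tag = imageTag('ghcr.io', 'JohnnyReilly/dapr-devcontainer-debug-and-deploy', 'node-service', 'abc1234');
// → 'ghcr.io/johnnyreilly/dapr-devcontainer-debug-and-deploy/node-service:sha-abc1234'
```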

deploy - shipping our image to Azure

The deploy job runs the az deployment group create command which performs a deployment of our main.bicep file.

- name: Deploy bicep
uses: azure/[email protected]
if: github.event_name != 'pull_request'
with:
inlineScript: |
REF_SHA='${{ github.ref }}.${{ github.sha }}'
DEPLOYMENT_NAME="${REF_SHA////-}"
echo "DEPLOYMENT_NAME=$DEPLOYMENT_NAME"

TAGS='{"owner":"johnnyreilly", "email":"[email protected]"}'
az deployment group create \
--resource-group ${{ env.RESOURCE_GROUP }} \
--name "$DEPLOYMENT_NAME" \
--template-file ./infra/main.bicep \
--parameters \
branchName='${{ github.event.number == 0 && 'main' || format('pr-{0}', github.event.number) }}' \
webServiceImage='${{ needs.build.outputs.image-node }}' \
webServicePort=3000 \
webServiceIsExternalIngress=true \
weatherServiceImage='${{ needs.build.outputs.image-dotnet }}' \
weatherServicePort=5000 \
weatherServiceIsExternalIngress=false \
containerRegistry=${{ env.REGISTRY }} \
containerRegistryUsername=${{ github.actor }} \
containerRegistryPassword=${{ secrets.PACKAGES_TOKEN }} \
tags="$TAGS"

The script passes the following parameters:

branchName='${{ github.event.number == 0 && 'main' || format('pr-{0}', github.event.number) }}' \
webServiceImage='${{ needs.build.outputs.image-node }}' \
webServicePort=3000 \
webServiceIsExternalIngress=true \
weatherServiceImage='${{ needs.build.outputs.image-dotnet }}' \
weatherServicePort=5000 \
weatherServiceIsExternalIngress=false \
containerRegistry=${{ env.REGISTRY }} \
containerRegistryUsername=${{ github.actor }} \
containerRegistryPassword=${{ secrets.PACKAGES_TOKEN }} \
tags="$TAGS"

These are either:

  • secrets we set up earlier
  • special github variables
  • environment variables declared at the start of the script or
  • outputs from the build step - this is where we acquire our node and dotnet images
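One small piece of the inline script worth unpacking is `DEPLOYMENT_NAME="${REF_SHA////-}"`. That's bash parameter expansion: `${var//pattern/replacement}` replaces every occurrence of the pattern, so every `/` in the ref becomes a `-`, yielding a string that's valid as an ARM deployment name. A sketch of the same transformation (function name and sample values are illustrative only):

```typescript
// Sketch of the bash expansion:
//   REF_SHA='<ref>.<sha>'
//   DEPLOYMENT_NAME="${REF_SHA////-}"
// ${var//pattern/replacement} replaces *every* match, hence the /g flag.
function deploymentName(ref: string, sha: string): string {
  return `${ref}.${sha}`.replace(/\//g, '-');
}

const name = deploymentName('refs/heads/main', 'abc1234');
// → 'refs-heads-main.abc1234'
```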

Running it

When the GitHub Action has been run you'll find that Azure Container Apps are now showing up inside the Azure Portal in your resource group, alongside the other resources:

screenshot of the Azure Container App's resource group in the Azure Portal

If we take a look at our web ACA we'll see:

screenshot of the web Azure Container App in the Azure Portal

And when we take a closer look at the container app, we find a URL we can navigate to:

screenshot of the Azure Container App in the Azure Portal revealing its URL

Congratulations! You've built and deployed a simple web app to Azure Container Apps with Bicep, GitHub Actions and secrets.

The subscription '***' cannot have more than 2 environments.

Before signing off, it's probably worth sharing this gotcha. If you've been playing with Azure Container Apps you may have already deployed some "environments" (Microsoft.Web/kubeEnvironments). Subscriptions are typically limited to a small number of environments (two, per the message above), which is what this error is telling you. So either delete environments you no longer need, share the one you have or arrange to raise the limit on your subscription.

· 14 min read

This post shows how to build and deploy a simple web application to Azure Container Apps using Bicep and GitHub Actions. This includes the configuration and deployment of secrets.

This post follows on from the previous post which deployed infrastructure and a "hello world" container, this time introducing the building of an image and storing it in the GitHub container registry so it can be deployed.

If you'd like to learn more about using dapr with Azure Container Apps then you might want to read this post.

title image reading "Azure Container Apps: build and deploy with Bicep and GitHub Actions" with the Bicep, Azure Container Apps and GitHub Actions logos

The containerised convent

I learn the most about a technology when I'm using it to build something. It so happens that I have an aunt that's a nun, and long ago she persuaded me to build her convent a website. I'm a good nephew and I complied. Since that time I've been merrily overengineering it for fun and non-profit.

My aunt's website is a pretty vanilla node app. Significantly, it is already containerised and runs on Azure App Service Web App for Containers. Given it already lives in a container, it's a great candidate for porting to Azure Container Apps.

So that's what we'll do in this post. But where I'm building and deploying my aunt's container, you could equally substitute your own, with some minimal changes.

Bicep

Let's begin with the Bicep required to deploy our Azure Container App.

In our repository we'll create an infra directory, into which we'll place a main.bicep file which will contain our Bicep template:

param nodeImage string
param nodePort int
param nodeIsExternalIngress bool

param containerRegistry string
param containerRegistryUsername string
@secure()
param containerRegistryPassword string

param tags object

@secure()
param APPSETTINGS_API_KEY string
param APPSETTINGS_DOMAIN string
param APPSETTINGS_FROM_EMAIL string
param APPSETTINGS_RECIPIENT_EMAIL string

var location = resourceGroup().location
var environmentName = 'env-${uniqueString(resourceGroup().id)}'
var minReplicas = 0

var nodeServiceAppName = 'node-app'
var workspaceName = '${nodeServiceAppName}-log-analytics'
var appInsightsName = '${nodeServiceAppName}-app-insights'

var containerRegistryPasswordRef = 'container-registry-password'
var mailgunApiKeyRef = 'mailgun-api-key'

resource workspace 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
  name: workspaceName
  location: location
  tags: tags
  properties: {
    sku: {
      name: 'PerGB2018'
    }
    retentionInDays: 30
    workspaceCapping: {}
  }
}

resource appInsights 'Microsoft.Insights/components@2020-02-02' = {
  name: appInsightsName
  location: location
  tags: tags
  kind: 'web'
  properties: {
    Application_Type: 'web'
    Flow_Type: 'Bluefield'
  }
}

resource environment 'Microsoft.Web/kubeEnvironments@2021-02-01' = {
  name: environmentName
  location: location
  tags: tags
  properties: {
    type: 'managed'
    internalLoadBalancerEnabled: false
    appLogsConfiguration: {
      destination: 'log-analytics'
      logAnalyticsConfiguration: {
        customerId: workspace.properties.customerId
        sharedKey: listKeys(workspace.id, workspace.apiVersion).primarySharedKey
      }
    }
    containerAppsConfiguration: {
      daprAIInstrumentationKey: appInsights.properties.InstrumentationKey
    }
  }
}

resource containerApp 'Microsoft.Web/containerApps@2021-03-01' = {
  name: nodeServiceAppName
  kind: 'containerapps'
  tags: tags
  location: location
  properties: {
    kubeEnvironmentId: environment.id
    configuration: {
      secrets: [
        {
          name: containerRegistryPasswordRef
          value: containerRegistryPassword
        }
        {
          name: mailgunApiKeyRef
          value: APPSETTINGS_API_KEY
        }
      ]
      registries: [
        {
          server: containerRegistry
          username: containerRegistryUsername
          passwordSecretRef: containerRegistryPasswordRef
        }
      ]
      ingress: {
        'external': nodeIsExternalIngress
        'targetPort': nodePort
      }
    }
    template: {
      containers: [
        {
          image: nodeImage
          name: nodeServiceAppName
          transport: 'auto'
          env: [
            {
              name: 'APPSETTINGS_API_KEY'
              secretref: mailgunApiKeyRef
            }
            {
              name: 'APPSETTINGS_DOMAIN'
              value: APPSETTINGS_DOMAIN
            }
            {
              name: 'APPSETTINGS_FROM_EMAIL'
              value: APPSETTINGS_FROM_EMAIL
            }
            {
              name: 'APPSETTINGS_RECIPIENT_EMAIL'
              value: APPSETTINGS_RECIPIENT_EMAIL
            }
          ]
        }
      ]
      scale: {
        minReplicas: minReplicas
      }
    }
  }
}

Let's talk through this template. The environment, workspace and app insights resources are fairly self-explanatory. The containerApp resource is where the action is. We'll drill into that resource and the parameters used to configure it.

The node container app

We're going to create a single container app for our node web application. This is configured with these parameters:

param nodeImage string
param nodePort int
param nodeIsExternalIngress bool

The above parameters relate to the node application that represents the website. nodeImage is the container image which should be deployed to the container app. nodePort is the port on which the app should be exposed (3000 in our case). nodeIsExternalIngress determines whether the container should be accessible from the internet (always true in our case, incidentally).

When these parameters are applied to the containerApp resource, it looks like this:

var nodeServiceAppName = 'node-app'

resource containerApp 'Microsoft.Web/containerApps@2021-03-01' = {
  // ...
  properties: {
    // ...
    ingress: {
      'external': nodeIsExternalIngress
      'targetPort': nodePort
    }
    template: {
      containers: [
        {
          image: nodeImage
          name: nodeServiceAppName
          // ...
        }
      ]
      // ...
    }
  }
}

Accessing the GitHub Container Registry

Given that we've told Bicep to deploy an image, we're going to need to tell it what registry it can use to acquire that image. Our template takes these parameters:

param containerRegistry string
param containerRegistryUsername string
@secure()
param containerRegistryPassword string

param tags object

With the exception of the tags object which is metadata to apply to resources, these parameters are related to the container registry where our images will be stored. GitHub's in our case. Remember, what we deploy to Azure Container Apps are container images. To get something running in an ACA, it first has to reside in a container registry. There's a multitude of container registries out there and we're using the one directly available in GitHub. As an alternative, we could use an Azure Container Registry, or Docker Hub - or something else entirely.

Do note the @secure() decorator. This marks the containerRegistryPassword parameter as secure. The value for a secure parameter isn't saved to the deployment history and isn't logged. Typically you'll want to mark secrets with the @secure() decorator for this very reason.

We use the parameters to configure the registries property of our container app. This tells the ACA where it can go to collect the image it needs. You can also see our first usage of secrets here. We declare the containerRegistryPassword as a secret which is stored against the ref 'container-registry-password'; captured as the variable containerRegistryPasswordRef. That variable is then referenced in the passwordSecretRef property - thus telling ACA where it can find the password.

var containerRegistryPasswordRef = 'container-registry-password'

resource containerApp 'Microsoft.Web/containerApps@2021-03-01' = {
  // ...
  properties: {
    // ...
    configuration: {
      secrets: [
        {
          name: containerRegistryPasswordRef
          value: containerRegistryPassword
        }
        // ...
      ]
      registries: [
        {
          server: containerRegistry
          username: containerRegistryUsername
          passwordSecretRef: containerRegistryPasswordRef
        }
      ]
      // ...
    }
    // ...
  }
}

Secrets / Configuration

The final collection of parameters are unrelated to the infrastructure of deployment, rather they are the things required to configure our running application:

@secure()
param APPSETTINGS_API_KEY string
param APPSETTINGS_DOMAIN string
param APPSETTINGS_FROM_EMAIL string
param APPSETTINGS_RECIPIENT_EMAIL string

Again we've got a secret marked with @secure() in the form of our APPSETTINGS_API_KEY. Just as we did with containerRegistryPassword, we declare APPSETTINGS_API_KEY to be a secret, which is stored against the ref 'mailgun-api-key'; captured as the variable mailgunApiKeyRef.

All of our configuration is exposed to the running application through environment variables. By and large this is achieved through the mechanism of key / value pairs (well technically name / value) with a slight variation for secrets. Similar to the passwordSecretRef mechanism we used for the registry password, we use a secretref in place of value when passing a secret, and the value will be the ref that was set up in the secrets section; mailgunApiKeyRef in this case.

var mailgunApiKeyRef = 'mailgun-api-key'

resource containerApp 'Microsoft.Web/containerApps@2021-03-01' = {
  // ...
  properties: {
    // ...
    configuration: {
      secrets: [
        // ...
        {
          name: mailgunApiKeyRef
          value: APPSETTINGS_API_KEY
        }
      ]
      // ...
    }
    template: {
      containers: [
        {
          // ...
          env: [
            {
              name: 'APPSETTINGS_API_KEY'
              secretref: mailgunApiKeyRef
            }
            {
              name: 'APPSETTINGS_DOMAIN'
              value: APPSETTINGS_DOMAIN
            }
            {
              name: 'APPSETTINGS_FROM_EMAIL'
              value: APPSETTINGS_FROM_EMAIL
            }
            {
              name: 'APPSETTINGS_RECIPIENT_EMAIL'
              value: APPSETTINGS_RECIPIENT_EMAIL
            }
          ]
        }
      ]
      // ...
    }
  }
}
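To make the secret / ref indirection concrete, here's a toy shell sketch. This is not ACA itself, just the shape of the lookup (the names mirror the template; the values are fake): the secret is stored once under a name, and both passwordSecretRef and secretref point at that name rather than at the value.

```shell
#!/usr/bin/env bash
# Toy model of ACA's secret store: name -> value (values here are fake)
declare -A aca_secrets=(
  ['container-registry-password']='ghp_fake_token'
  ['mailgun-api-key']='key-fake'
)

# A secretref holds the *name* of the secret, not the value...
mailgun_api_key_ref='mailgun-api-key'

# ...and the platform resolves it into the container's environment
export APPSETTINGS_API_KEY="${aca_secrets[$mailgun_api_key_ref]}"
echo "$APPSETTINGS_API_KEY" # key-fake
```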

Setting up a resource group

With our Bicep in place, we're going to need a resource group to send it to. Right now, Azure Container Apps aren't available everywhere. So we're going to create ourselves a resource group in North Europe which does support ACAs:

az group create -g rg-aca -l northeurope

Secrets for GitHub Actions

We're aiming to set up a GitHub Action to handle our deployment. This will depend upon a number of secrets:

Screenshot of the secrets in the GitHub website that we need to create

We'll need to create each of these secrets.

AZURE_CREDENTIALS - GitHub logging into Azure

So GitHub Actions can interact with Azure on our behalf, we need to provide it with some credentials. We'll use the Azure CLI to create these:

az ad sp create-for-rbac --name "myApp" --role contributor \
  --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} \
  --sdk-auth

Remember to replace the {subscription-id} with your subscription id and {resource-group} with the name of your resource group (rg-aca if you're following along). This command will pump out a lump of JSON that looks something like this:

{
  "clientId": "a-client-id",
  "clientSecret": "a-client-secret",
  "subscriptionId": "a-subscription-id",
  "tenantId": "a-tenant-id",
  "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
  "resourceManagerEndpointUrl": "https://management.azure.com/",
  "activeDirectoryGraphResourceId": "https://graph.windows.net/",
  "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
  "galleryEndpointUrl": "https://gallery.azure.com/",
  "managementEndpointUrl": "https://management.core.windows.net/"
}

Take this and save it as the AZURE_CREDENTIALS secret in GitHub.
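As a quick sanity check, the --scopes value is just string concatenation of your subscription id and resource group; a sketch with placeholder values:

```shell
#!/usr/bin/env bash
subscription_id='00000000-0000-0000-0000-000000000000' # placeholder
resource_group='rg-aca'

scope="/subscriptions/${subscription_id}/resourceGroups/${resource_group}"
echo "$scope"
# /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-aca
```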

PACKAGES_TOKEN - Azure accessing the GitHub container registry

We also need a secret for accessing packages from Azure. We're going to be publishing packages to the GitHub container registry. Azure is going to need to be able to access this when we're deploying. ACA deployment works by telling Azure where to look for an image and providing any necessary credentials to do the acquisition. To facilitate this we'll set up a PACKAGES_TOKEN secret. This is a GitHub personal access token with the read:packages scope. Follow the instructions here to create the token.

Secrets for the app

Alongside these infrastructure / deployment related secrets, we'll need ones to configure the app at runtime:

  • APPSETTINGS_API_KEY - an API key for Mailgun which will be used to send emails
  • APPSETTINGS_DOMAIN - the domain for the email eg mg.poorclaresarundel.org
  • APPSETTINGS_FROM_EMAIL - who automated emails should come from eg [email protected]
  • APPSETTINGS_RECIPIENT_EMAIL - the email address emails should be sent to

Strictly speaking, only the API key is a secret. But to simplify this post we'll configure all of these as secrets in GitHub.

Deploying with GitHub Actions

With our secrets configured, we're now well placed to write our GitHub Action. We'll create a .github/workflows/deploy.yaml file in our repository and populate it thusly:

# yaml-language-server: $schema=./build.yaml
name: Build and Deploy
on:
  # Trigger the workflow on push or pull request,
  # but only for the main branch
  push:
    branches:
      - main
    # Publish semver tags as releases.
    tags: ['v*.*.*']
  pull_request:
    branches:
      - main
  workflow_dispatch:

env:
  RESOURCE_GROUP: rg-aca
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        services:
          [{ 'imageName': 'node-service', 'directory': './node-service' }]
    permissions:
      contents: read
      packages: write
    outputs:
      containerImage-node: ${{ steps.image-tag.outputs.image-node-service }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      # Login against a Docker registry except on PR
      # https://github.com/docker/login-action
      - name: Log into registry ${{ env.REGISTRY }}
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # Extract metadata (tags, labels) for Docker
      # https://github.com/docker/metadata-action
      - name: Extract Docker metadata
        id: meta
        uses: docker/metadata-action@v3
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}/${{ matrix.services.imageName }}
          tags: |
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=ref,event=branch
            type=sha

      # Build and push Docker image with Buildx (don't push on PR)
      # https://github.com/docker/build-push-action
      - name: Build and push Docker image
        uses: docker/build-push-action@v2
        with:
          context: ${{ matrix.services.directory }}
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

      - name: Output image tag
        id: image-tag
        run: echo "::set-output name=image-${{ matrix.services.imageName }}::${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}/${{ matrix.services.imageName }}:sha-$(git rev-parse --short HEAD)" | tr '[:upper:]' '[:lower:]'

  deploy:
    runs-on: ubuntu-latest
    needs: [build]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: Deploy bicep
        uses: azure/CLI@v1
        if: github.event_name != 'pull_request'
        with:
          inlineScript: |
            tags='{"owner":"johnnyreilly", "email":"[email protected]"}'
            az deployment group create \
              --resource-group ${{ env.RESOURCE_GROUP }} \
              --template-file ./infra/main.bicep \
              --parameters \
                nodeImage='${{ needs.build.outputs.containerImage-node }}' \
                nodePort=3000 \
                nodeIsExternalIngress=true \
                containerRegistry=${{ env.REGISTRY }} \
                containerRegistryUsername=${{ github.actor }} \
                containerRegistryPassword=${{ secrets.PACKAGES_TOKEN }} \
                tags="$tags" \
                APPSETTINGS_API_KEY="${{ secrets.APPSETTINGS_API_KEY }}" \
                APPSETTINGS_DOMAIN="${{ secrets.APPSETTINGS_DOMAIN }}" \
                APPSETTINGS_FROM_EMAIL="${{ secrets.APPSETTINGS_FROM_EMAIL }}" \
                APPSETTINGS_RECIPIENT_EMAIL="${{ secrets.APPSETTINGS_RECIPIENT_EMAIL }}"

      - name: What-if bicep
        uses: azure/CLI@v1
        if: github.event_name == 'pull_request'
        with:
          inlineScript: |
            tags='{"owner":"johnnyreilly", "email":"[email protected]"}'
            az deployment group what-if \
              --resource-group ${{ env.RESOURCE_GROUP }} \
              --template-file ./infra/main.bicep \
              --parameters \
                nodeImage='${{ needs.build.outputs.containerImage-node }}' \
                nodePort=3000 \
                nodeIsExternalIngress=true \
                containerRegistry=${{ env.REGISTRY }} \
                containerRegistryUsername=${{ github.actor }} \
                containerRegistryPassword=${{ secrets.PACKAGES_TOKEN }} \
                tags="$tags" \
                APPSETTINGS_API_KEY="${{ secrets.APPSETTINGS_API_KEY }}" \
                APPSETTINGS_DOMAIN="${{ secrets.APPSETTINGS_DOMAIN }}" \
                APPSETTINGS_FROM_EMAIL="${{ secrets.APPSETTINGS_FROM_EMAIL }}" \
                APPSETTINGS_RECIPIENT_EMAIL="${{ secrets.APPSETTINGS_RECIPIENT_EMAIL }}"

There's a lot in this workflow. Let's dig into the build and deploy jobs to see what's happening.

build - building our image

The build job is all about building our container images and pushing them to the GitHub registry. It's heavily inspired by Jeff Hollan's Azure sample app GHA. When we look at the strategy we can see a matrix of services consisting of a single service; our node app:

strategy:
  matrix:
    services: [{ 'imageName': 'node-service', 'directory': './node-service' }]

This is a matrix because a typical Azure Container Apps use case will be multi-container, so we're starting generic from the beginning. The outputs section pumps out the details of our containerImage-node image to be used later:

outputs:
  containerImage-node: ${{ steps.image-tag.outputs.image-node-service }}

With that understanding in place, let's examine what each of the steps in the build job does:

  • Log into registry - logs into the GitHub container registry
  • Extract Docker metadata - acquire tags which will be used for versioning
  • Build and push Docker image - build the docker image and if this is not a PR: tag, label and push it to the registry
  • Output image tag - write out the image tag for usage in deployment
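That last step matters because container registries require lowercase image names, while a GitHub repository name (and therefore `${{ env.IMAGE_NAME }}`) may contain uppercase. A sketch of what the step emits, using a hypothetical repository name and a stand-in for the short git sha:

```shell
#!/usr/bin/env bash
REGISTRY='ghcr.io'
IMAGE_NAME='JohnnyReilly/My-Repo' # hypothetical ${{ github.repository }}
SHORT_SHA='abc1234'               # stand-in for $(git rev-parse --short HEAD)

# Mirror of the "Output image tag" run line, minus the ::set-output plumbing
echo "${REGISTRY}/${IMAGE_NAME}/node-service:sha-${SHORT_SHA}" |
  tr '[:upper:]' '[:lower:]'
# ghcr.io/johnnyreilly/my-repo/node-service:sha-abc1234
```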

deploy - shipping our image to Azure

The deploy job does two possible things with our Bicep template, main.bicep.

In the case of a pull request, it runs the az deployment group what-if command. This allows us to see what the effect of applying a PR to our infrastructure would be.

- name: What-if bicep
  uses: azure/CLI@v1
  if: github.event_name == 'pull_request'
  with:
    inlineScript: |
      tags='{"owner":"johnnyreilly", "email":"[email protected]"}'
      az deployment group what-if \
        --resource-group ${{ env.RESOURCE_GROUP }} \
        --template-file ./infra/main.bicep \
        --parameters \
          nodeImage='${{ needs.build.outputs.containerImage-node }}' \
          nodePort=3000 \
          nodeIsExternalIngress=true \
          containerRegistry=${{ env.REGISTRY }} \
          containerRegistryUsername=${{ github.actor }} \
          containerRegistryPassword=${{ secrets.PACKAGES_TOKEN }} \
          tags="$tags" \
          APPSETTINGS_API_KEY="${{ secrets.APPSETTINGS_API_KEY }}" \
          APPSETTINGS_DOMAIN="${{ secrets.APPSETTINGS_DOMAIN }}" \
          APPSETTINGS_FROM_EMAIL="${{ secrets.APPSETTINGS_FROM_EMAIL }}" \
          APPSETTINGS_RECIPIENT_EMAIL="${{ secrets.APPSETTINGS_RECIPIENT_EMAIL }}"

When it's not a pull request, it runs the az deployment group create command which performs a deployment of our main.bicep file.

- name: Deploy bicep
  uses: azure/CLI@v1
  if: github.event_name != 'pull_request'
  with:
    inlineScript: |
      tags='{"owner":"johnnyreilly", "email":"[email protected]"}'
      az deployment group create \
        --resource-group ${{ env.RESOURCE_GROUP }} \
        --template-file ./infra/main.bicep \
        --parameters \
          nodeImage='${{ needs.build.outputs.containerImage-node }}' \
          nodePort=3000 \
          nodeIsExternalIngress=true \
          containerRegistry=${{ env.REGISTRY }} \
          containerRegistryUsername=${{ github.actor }} \
          containerRegistryPassword=${{ secrets.PACKAGES_TOKEN }} \
          tags="$tags" \
          APPSETTINGS_API_KEY="${{ secrets.APPSETTINGS_API_KEY }}" \
          APPSETTINGS_DOMAIN="${{ secrets.APPSETTINGS_DOMAIN }}" \
          APPSETTINGS_FROM_EMAIL="${{ secrets.APPSETTINGS_FROM_EMAIL }}" \
          APPSETTINGS_RECIPIENT_EMAIL="${{ secrets.APPSETTINGS_RECIPIENT_EMAIL }}"

In either case we pass the same set of parameters:

nodeImage='${{ needs.build.outputs.containerImage-node }}' \
nodePort=3000 \
nodeIsExternalIngress=true \
containerRegistry=${{ env.REGISTRY }} \
containerRegistryUsername=${{ github.actor }} \
containerRegistryPassword=${{ secrets.PACKAGES_TOKEN }} \
tags="$tags" \
APPSETTINGS_API_KEY="${{ secrets.APPSETTINGS_API_KEY }}" \
APPSETTINGS_DOMAIN="${{ secrets.APPSETTINGS_DOMAIN }}" \
APPSETTINGS_FROM_EMAIL="${{ secrets.APPSETTINGS_FROM_EMAIL }}" \
APPSETTINGS_RECIPIENT_EMAIL="${{ secrets.APPSETTINGS_RECIPIENT_EMAIL }}"

These are either:

  • secrets we set up earlier
  • environment variables declared at the start of the script or
  • outputs from the build step - this is where we acquire our node image

Running it

When the GitHub Action has been run you'll find that Azure Container App is now showing up inside the Azure Portal in your resource group, alongside the other resources:

screenshot of the Azure Container App's resource group in the Azure Portal

And when we take a closer look at the container app, we find a URL we can navigate to:

screenshot of the Azure Container App in the Azure Portal revealing its URL

Congratulations! You've built and deployed a simple web app to Azure Container Apps with Bicep, GitHub Actions and secrets.

· 4 min read

Azure Container Apps are an exciting way to deploy containers to Azure. This post shows how to deploy the infrastructure for an Azure Container App to Azure using Bicep and GitHub Actions. The Azure Container App documentation features quickstarts for deploying your first container app using both the Azure Portal and the Azure CLI. These are great, but there's a gap if you prefer to deploy using Bicep and you'd like to get your CI/CD setup right from the beginning. This post aims to fill that gap.

If you're interested in building your own containers as well, it's worth looking at this follow up post.

title image reading "Azure Container Apps, Bicep and GitHub Actions" with the Bicep, Azure Container Apps and GitHub Actions logos

Bicep

Let's begin with the Bicep required to deploy an Azure Container App.

In our new repository we'll create an infra directory, into which we'll place a main.bicep file which will contain our Bicep template.

I've pared this down to the simplest Bicep template that I can; it only requires a name parameter:

param name string
param secrets array = []

var location = resourceGroup().location
var environmentName = 'Production'
var workspaceName = '${name}-log-analytics'

resource workspace 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
  name: workspaceName
  location: location
  properties: {
    sku: {
      name: 'PerGB2018'
    }
    retentionInDays: 30
    workspaceCapping: {}
  }
}

resource environment 'Microsoft.Web/kubeEnvironments@2021-02-01' = {
  name: environmentName
  location: location
  properties: {
    type: 'managed'
    internalLoadBalancerEnabled: false
    appLogsConfiguration: {
      destination: 'log-analytics'
      logAnalyticsConfiguration: {
        customerId: workspace.properties.customerId
        sharedKey: listKeys(workspace.id, workspace.apiVersion).primarySharedKey
      }
    }
  }
}

resource containerApp 'Microsoft.Web/containerApps@2021-03-01' = {
  name: name
  kind: 'containerapps'
  location: location
  properties: {
    kubeEnvironmentId: environment.id
    configuration: {
      secrets: secrets
      registries: []
      ingress: {
        'external': true
        'targetPort': 80
      }
    }
    template: {
      containers: [
        {
          'name': 'simple-hello-world-container'
          'image': 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
          'command': []
          'resources': {
            'cpu': '.25'
            'memory': '.5Gi'
          }
        }
      ]
    }
  }
}

Some things to note from the template:

  • We're deploying three resources: a container app, a kube environment and a Log Analytics (Operational Insights) workspace.
  • Just like the official quickstarts we're going to use the containerapps-helloworld image.

Setting up a resource group

In order to deploy our Bicep, we're going to need a resource group to send it to. Right now, Azure Container Apps aren't available everywhere, so we're going to create ourselves a resource group in North Europe, which does support ACAs:

az group create -g rg-aca -l northeurope

Deploying with the Azure CLI

With this resource group in place, we could simply deploy using the Azure CLI like so:

az deployment group create \
  --resource-group rg-aca \
  --template-file ./infra/main.bicep \
  --parameters \
    name='container-app'

Deploying with GitHub Actions

However, we're aiming to set up a GitHub Action to do this for us. We'll create a .github/workflows/deploy.yaml file in our repository:

name: Deploy
on:
  push:
    branches: [main]
  workflow_dispatch:

env:
  RESOURCE_GROUP: rg-aca

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: Deploy bicep
        uses: azure/CLI@v1
        with:
          inlineScript: |
            az deployment group create \
              --resource-group ${{ env.RESOURCE_GROUP }} \
              --template-file ./infra/main.bicep \
              --parameters \
                name='container-app'

The above GitHub action is very simple. It:

  1. Logs into Azure using some AZURE_CREDENTIALS we'll set up in a moment.
  2. Invokes the Azure CLI to deploy our Bicep template.

Let's create that AZURE_CREDENTIALS secret in GitHub:

Screenshot of `AZURE_CREDENTIALS` secret in the GitHub website that we need to create

We'll use the Azure CLI once more:

az ad sp create-for-rbac --name "myApp" --role contributor \
  --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} \
  --sdk-auth

Remember to replace the {subscription-id} with your subscription id and {resource-group} with the name of your resource group (rg-aca if you're following along). This command will pump out a lump of JSON that looks something like this:

{
  "clientId": "a-client-id",
  "clientSecret": "a-client-secret",
  "subscriptionId": "a-subscription-id",
  "tenantId": "a-tenant-id",
  "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
  "resourceManagerEndpointUrl": "https://management.azure.com/",
  "activeDirectoryGraphResourceId": "https://graph.windows.net/",
  "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
  "galleryEndpointUrl": "https://gallery.azure.com/",
  "managementEndpointUrl": "https://management.core.windows.net/"
}

Take this and save it as the AZURE_CREDENTIALS secret in GitHub.

Running it

When the GitHub Action has been run you'll find that the Azure Container App is now showing up inside the Azure Portal:

screenshot of the Azure Container App in the Azure Portal

You'll see a URL is displayed; when you go to that URL you'll find the hello world image is running!

screenshot of the running Azure Container App