
Devcontainers AKA performance in a secure sandbox

Many corporate machines arrive in engineers' hands with a preponderance of pre-installed background tools: virus checkers, backup utilities, port blockers; the list is long.

The reason these tools are installed is generally noble. However, the implementation can often be problematic. The tools may be set up in such a way that they impact and interfere with one another. Really powerful machines with 8 CPUs and hardy SSDs can be slowed to a crawl. Put simply: the good people responsible for ensuring security are rarely encouraged to prioritise performance alongside it. And so they don't.

The unfortunate consequence of considering the role of security without regard to performance is this: sluggish computers. The further consequence (and this is the one I want you to think long and hard about) is low developer productivity. And that sucks. It impacts what an organisation is able to do, how fast an organisation is able to move. Put simply: it can be the difference between success and failure.

The most secure computer is off. But you won't ship much with it. Encouraging your organisation to consider tackling security with performance in mind is worthwhile. It's a long game though. In the meantime what can we do?

"Hide from the virus checkers* in a devcontainer"

Devcontainers, the infrastructure as code equivalent for developing software, have an underappreciated quality: unlocking your machine's performance.

Devcontainers are isolated secure sandboxes in which you can build software. To quote the docs:

A devcontainer.json file in your project tells VS Code how to access (or create) a development container with a well-defined tool and runtime stack. This container can be used to run an application or to sandbox tools, libraries, or runtimes needed for working with a codebase.

Workspace files are mounted from the local file system or copied or cloned into the container.

We're going to set up a devcontainer to code an ASP.NET Core application with a JavaScript (well, TypeScript) front end. If there's one thing that's sure to catch a virus checker's beady eye, it's node_modules. node_modules contains more files than a black hole has mass. Consider a project with 5,000 source files. One trusty yarn install later and the folder now has a tidy 250,000 files. The virus checker is now really sitting up and taking notice.
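If you want to see how much your scanners have to chew on, you can count the files yourself. A rough sketch, run from your repo root, assuming you've already installed dependencies:

```shell
# Count the files an on-access virus scanner has to consider in node_modules
# (assumes yarn / npm install has already run in this directory)
find node_modules -type f | wc -l
```

On a real project, expect a number in the hundreds of thousands.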

Our project has a git commit hook set up with Husky that formats our TypeScript files with Prettier. Every commit the files are formatted to align with the project standard. With all the virus checkers in place a git commit takes around 45 seconds. Inside a devcontainer we can drop this to 5 seconds. That's nine times faster. I'll repeat that: that's nine times faster!
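You can feel hook overhead in isolation with a throwaway repo. Everything below is illustrative, not our actual hook: the `sleep` stands in for the Prettier formatting work (plus whatever the scanners add on top):

```shell
# Throwaway repo with a deliberately slow pre-commit hook, to isolate hook cost
rm -rf /tmp/hook-demo
git init -q /tmp/hook-demo
cd /tmp/hook-demo
git config "demo"
git config "[email protected]"
# the sleep stands in for Prettier formatting (plus any scanner overhead)
printf '#!/bin/sh\nsleep 1\n' > .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
# wall time here is roughly hook time; compare runs on the host vs in the devcontainer
time git commit -q --allow-empty -m "hook timing probe"
```

Run the same probe on the host and inside the devcontainer and the difference is the tax your background tools are charging.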

The "cloned into the container" above is key to what we're going to do. We're not going to mount our local file system into the devcontainer. Oh no. We're going to build a devcontainer with ASP.NET Core and JavaScript installed. Then, inside there, we're going to clone our repo. Then we can develop, build and debug all inside the container. It will feel like we're working on our own machine because VS Code does such a damn fine job. In reality, we're connecting to another computer (a Linux computer to boot) that is running in isolation from our own. In our case that machine is sharing our hardware; but that's just an implementation detail. It could be anywhere (and in the future may well be).

Make me a devcontainer...

Enough talk... We're going to need a .devcontainer/devcontainer.json:

```json
{
  "name": "my devcontainer",
  "dockerComposeFile": "../docker-compose.devcontainer.yml",
  "service": "my-devcontainer",
  "workspaceFolder": "/workspace",
  // Set *default* container specific settings.json values on container create.
  "settings": {
    "terminal.integrated.shell.linux": "/bin/zsh"
  },
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [],
  // Use 'postCreateCommand' to clone the repo into the workspace folder when the devcontainer starts
  // and copy in the .env file
  "postCreateCommand": "git clone [email protected]:my-org/my-repo.git . && cp /.env /workspace/.env",
  // "remoteUser": "vscode"
}
```

Now the docker-compose.devcontainer.yml which lives in the root of the project. It provisions a SQL Server container (using the official image) and our devcontainer:

```yaml
version: "3.7"
services:
  my-devcontainer:
    image: my-devcontainer
    build:
      context: .
      dockerfile: Dockerfile.devcontainer
    command: /bin/zsh -c "while sleep 1000; do :; done"
    volumes:
      # mount .zshrc from home - make sure it doesn't contain Windows line endings
      - ~/.zshrc:/root/.zshrc
    # user: vscode
    ports:
      - "5000:5000"
      - "8080:8080"
    depends_on:
      - db
  db:
    # the official SQL Server image
    image: mcr.microsoft.com/mssql/server:2019-latest
    privileged: true
    ports:
      - "1433:1433"
    environment:
      SA_PASSWORD: "Your_password123"
      # SQL Server won't start unless the EULA is accepted
      ACCEPT_EULA: "Y"
```
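That line-endings warning on the .zshrc mount is worth taking seriously: a ~/.zshrc that was ever touched by a Windows editor can carry CRLF endings that break zsh inside the Linux container. A quick check-and-fix sketch (GNU sed assumed):

```shell
# Detect and strip Windows (CRLF) line endings from the file you're about to mount
file=~/.zshrc
if grep -q $'\r' "$file" 2>/dev/null; then
  sed -i 's/\r$//' "$file"   # GNU sed; on macOS use: sed -i '' 's/\r$//'
  echo "stripped CRLF line endings from $file"
fi
```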

The devcontainer will be built with the Dockerfile.devcontainer in the root of our repo. It relies upon your SSH keys and a .env file being available to be copied in:

```dockerfile
# Based upon the .NET Core devcontainer definition in the vscode-dev-containers repo
ARG VARIANT="3.1-bionic"
FROM mcr.microsoft.com/dotnet/core/sdk:${VARIANT}

# Because MITM certificates
COPY ./docker/certs/. /usr/local/share/ca-certificates/
ENV NODE_EXTRA_CA_CERTS=/usr/local/share/ca-certificates/mitm.pem
RUN update-ca-certificates

# This Dockerfile adds a non-root user with sudo access. Use the "remoteUser"
# property in devcontainer.json to use it. On Linux, the container user's GID/UIDs
# will be updated to match your local UID/GID (when using the dockerFile property).
# See the vscode-dev-containers repo for details.

# Options for the common package install script; point the *_SOURCE args at the
# install scripts you use (e.g. those from the vscode-dev-containers repo)
ARG INSTALL_ZSH="true"
ARG UPGRADE_PACKAGES="false"
ARG USERNAME=vscode
ARG USER_UID=1000
ARG USER_GID=$USER_UID
ARG COMMON_SCRIPT_SOURCE
ARG COMMON_SCRIPT_SHA="dev-mode"

# Settings for installing Node.js.
ARG NODE_SCRIPT_SOURCE
ARG NODE_SCRIPT_SHA="dev-mode"
ARG NODE_VERSION="lts/*"
ENV NVM_DIR=/usr/local/share/nvm
# Have nvm create a "current" symlink and add it to PATH
ENV PATH=${NVM_DIR}/current/bin:${PATH}

# Configure apt and install packages
RUN apt-get update \
    && export DEBIAN_FRONTEND=noninteractive \
    # Verify git, common tools / libs installed, add/modify non-root user, optionally install zsh
    && apt-get -y install --no-install-recommends curl ca-certificates 2>&1 \
    && curl -sSL ${COMMON_SCRIPT_SOURCE} -o /tmp/ \
    && ([ "${COMMON_SCRIPT_SHA}" = "dev-mode" ] || (echo "${COMMON_SCRIPT_SHA} */tmp/" | sha256sum -c -)) \
    && /bin/bash /tmp/ "${INSTALL_ZSH}" "${USERNAME}" "${USER_UID}" "${USER_GID}" "${UPGRADE_PACKAGES}" \
    # Install Node.js
    && curl -sSL ${NODE_SCRIPT_SOURCE} -o /tmp/ \
    && ([ "${NODE_SCRIPT_SHA}" = "dev-mode" ] || (echo "${NODE_SCRIPT_SHA} */tmp/" | sha256sum -c -)) \
    && /bin/bash /tmp/ "${NVM_DIR}" "${NODE_VERSION}" "${USERNAME}" \
    # Clean up
    && apt-get autoremove -y \
    && apt-get clean -y \
    && rm -f /tmp/ /tmp/ \
    && rm -rf /var/lib/apt/lists/* \
    # Workspace
    && mkdir /workspace \
    && chown -R ${USERNAME}:root /workspace

# Install Vim
RUN apt-get update && apt-get install -y \
    vim \
    && rm -rf /var/lib/apt/lists/*

# Set up a timezone in the devcontainer - necessary for anything timezone dependent
ENV TZ=Europe/London
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone \
    && apt-get update \
    && apt-get install --no-install-recommends -y \
    apt-utils \
    tzdata \
    && apt-get autoremove -y \
    && apt-get clean -y \
    && rm -rf /var/lib/apt/lists/*

# Copy across SSH keys so you can git clone
RUN mkdir /root/.ssh && chmod 700 /root/.ssh
COPY .ssh/id_rsa /root/.ssh
RUN chmod 600 /root/.ssh/id_rsa
COPY .ssh/ /root/.ssh
RUN chmod 644 /root/.ssh/
COPY .ssh/known_hosts /root/.ssh
RUN chmod 644 /root/.ssh/known_hosts

# Disable initial git clone prompt
RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config

# Copy across .env file so you can customise environment variables
# This will be copied into the root of the repo post git clone
COPY .env /.env
RUN chmod 644 /.env

# Install dotnet entity framework tools
RUN dotnet tool install dotnet-ef --tool-path /usr/local/bin --version 3.1.2
```
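A note on all those chmod calls: OpenSSH refuses to use a private key with loose permissions, so it's worth stamping the right modes onto the .ssh material you're about to copy in. A host-side sketch (paths illustrative; in reality these files are your real keys, not empty placeholders):

```shell
# Mirror the permissions the Dockerfile sets, host-side, before building
mkdir -p .ssh
chmod 700 .ssh                                    # the directory: owner-only
touch .ssh/id_rsa && chmod 600 .ssh/id_rsa        # private key: owner read/write only
touch .ssh/known_hosts && chmod 644 .ssh/known_hosts
ls -l .ssh
```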

With this devcontainer you're good to go for an ASP.NET Core / JavaScript developer setup that is blazing fast! Remember to fire up Docker and give it goodly access to the resources of your host machine. All the CPUs, lots of memory and all the performance that there ought to be.

* "virus checkers" is a euphemism here for all the background tools that may be running. It was that or calling them "we are legion".

Devcontainers and SSL interception

Devcontainers are cool. They are the infrastructure as code equivalent for developing software.

Imagine your new starter joins the team; you'd like them to be contributing code on day 1. But if the first thing that happens is you hand them a sheaf of paper upon which are the instructions for how to get their machine set up for development, well, maybe it's going to be a while. But if your project has a devcontainer then you're off to the races. One trusty git clone, fire up VS Code, and they can get going.

That's the dream right?

I've recently been doing some work getting a project I work on set up with a devcontainer. As I've worked on that I've become aware of some of the hurdles that might hamper your adoption of devcontainers in a corporate environment.

Certificates: I'm starting with the man in the middle#

It is a common practice in company networks to perform SSL interception. Not SSL inception; that'd be more fun.

SSL interception is the practice of installing a "man-in-the-middle" (MITM) CA certificate on users' machines. When SSL traffic leaves a user's machine, it goes through a proxy. That proxy performs the SSL on behalf of the user and, if it's happy, supplies a certificate back to the user's machine which satisfies the MITM CA certificate. So rather than seeing, for example, Google's own certificate, you'd see the one resulting from the SSL interception.
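To make this less abstract, here's a sketch: we mint a throwaway self-signed CA standing in for the corporate MITM certificate (the name is made up) and inspect it the way you'd inspect the real one.

```shell
# Mint a throwaway "corporate" CA certificate - a stand-in for the real MITM cert
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/mitm-key.pem -out /tmp/mitm.pem \
  -subj "/CN=Example Corp MITM CA" 2>/dev/null

# A CA certificate is self-signed: subject and issuer are identical.
# On an intercepted network, the issuer of the certificate your machine
# receives is this corporate CA rather than the site's real CA.
openssl x509 -in /tmp/mitm.pem -noout -subject -issuer
```

Running the second command against a certificate served on your corporate network is a quick way to confirm interception is in play: the issuer will name your company, not a public CA.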

Now this is a little known and less understood practice. I barely understand it myself. Certificates are hard. Even having read the above you may be none the wiser about why this is relevant. Let's get to the broken stuff.

"Devcontainers don't work at work!"

So, you're ready to get going with your first devcontainer. You fire up the vscode-dev-containers repo and find the container definition that's going to work for you. Copy pasta the .devcontainer into your repo, install the Remote Development extension in VS Code and run the Remote-Containers: Reopen Folder in Container command. Here comes the future!

But when it comes to performing SSL inside the devcontainer, trouble awaits. Here's what a yarn install results in:

```
yarn install v1.22.4
[1/4] Resolving packages...
[2/4] Fetching packages...
error An unexpected error occurred: "self signed certificate in certificate chain".
```

Oh no!

Gosh but it's okay - you're just bumping on the SSL interception. Why though? Well it's like this: when you fire up your devcontainer it builds a new Docker container. It's as well to imagine the container as a virtual operating system. So what's the difference between this operating system and the one our machine is running? Well a number of things, but crucially our host operating system has the MITM CA certificate installed. So when we SSL, we have the certificate that will match up with what the proxy sends back to us certificate-wise. And inside our trusty devcontainer we don't have that. Hence the sadness.

Devcontainer + MITM cert = working

We need to do two things to get this working:

  1. Acquire the requisite CA certificate(s) from your friendly neighbourhood networking team. Place them in a certs folder inside the .devcontainer folder of your repo.
  2. Add the following lines to your .devcontainer/Dockerfile, just after the initial FROM statement:
```dockerfile
# Because MITM certificates
COPY certs/. /usr/local/share/ca-certificates/
ENV NODE_EXTRA_CA_CERTS=/usr/local/share/ca-certificates/mitm.pem
RUN update-ca-certificates
```

Which does the following:

  • Copies the certs into the devcontainer
  • Sets the NODE_EXTRA_CA_CERTS environment variable (this is a Node example) to point at the MITM CA certificate file inside the devcontainer
  • Runs update-ca-certificates, which updates the /etc/ssl/certs directory and regenerates the consolidated ca-certificates.crt bundle so the OS trusts the new CA

With these in place, you should be able to build your devcontainer with no SSL trauma. Enjoy!