December 22, 2016

Day 22 - Building a pipeline for Azure Deployments

Written by: Sam Cogan (@samcogan)
Edited by: Michelle Carroll (@miiiiiche)

Introduction

In my SysAdvent article from last year I talked about automating deployments to Azure using Azure Resource Manager (ARM) and PowerShell Desired State Configuration (DSC). This year, I wanted to take this a few steps further and talk about taking this “infrastructure as code” concept and using it to build a deployment pipeline for your Azure infrastructure.

In a world where infrastructure is just another set of code, we have an exciting opportunity to apply techniques developers have been using for a long time to refine our deployment process. Developers have been using a version of the pipeline below for years, and we can apply the same technique to infrastructure.

[Figure: the deployment pipeline]

By implementing a deployment pipeline we gain a number of significant benefits:

  1. Better collaboration and sharing of work between distributed teams.
  2. Increased security, reliability, and reusability of your code.
  3. Repeatable and reliable packaging and distribution of your code and artifacts.
  4. The ability to catch errors early, and fix them before time and money is wasted during deployment.
  5. Reliable and repeatable deployments.
  6. Absolute control over when a deployment occurs, because of the ability to add gating and security controls to the process.
  7. Moving closer to the concepts of continuous, automated deployment.

The process described in this article focuses on Azure deployments and the tools available for Microsoft Azure — however, this process could easily be applied to other platforms and tools.

Source Control

The first step, and one that I believe everyone writing any sort of code should be doing, is to make sure you are using some sort of version control. Once you’ve jumped in and started writing ARM templates and DSC files you’ve got artifacts that could (and should) be in a version control system. Using version control helps us in a number of areas:

  1. Collaboration. As soon as more than one person is involved in creating, editing, or reviewing deployment files, we hit the problems of passing files around by email, not knowing which is the most recent version, trying to merge conflicting changes, and so on. Version control is a simple, well-tested solution to this problem. It provides a single point of truth for everyone, and an easy way to collaboratively edit files and merge the results.
  2. Versioning. One of the big benefits of ARM and DSC is that the code is also the documentation of your infrastructure deployment. With version control, it is also the history of your infrastructure. You can easily see how your infrastructure changed over time, and even roll back to a previous version.
  3. Repository. A lot of the techniques we will discuss in this article require a central repository to store and access files. Your version control repository can be used for this, and provides a way to access a specific version of the files.

The choice of which version control system to use is really up to you and how you like to work (distributed vs. client/server). If you work with developers, it is very likely they will already have a system in place, and it’s often easier to take advantage of the existing infrastructure. If you don’t have a system in place (and don’t want the overhead of managing one), then you can look at hosted providers like GitHub, Visual Studio Team Services, or Bitbucket.

Build

At first glance this may seem like a bit of an odd step: none of the script types we are using require compiling, so what is there to build? In this process, “build” is the transformation and composition of files into the appropriate format for later steps, and getting them to the right place. For example, my deployment system expects my ARM templates and DSC files to be delivered in a NuGet package, so I have a build step that takes those files, brings them together in the right folder structure, and creates a NuGet package. Another build step looks at the software installer files required for deployment and, if needed, uploads these to Azure Blob storage. This could include MSI or EXE files for installers, but also things like NuGet packages for web applications.
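To make this concrete, here is a minimal sketch of what such a build step might look like in PowerShell. The .nuspec file, the build number variable, and the storage account details are all placeholders for illustration; the storage cmdlets come from the Azure PowerShell module.

# Package the templates and DSC files, assuming nuget.exe is on the path
# and deployment.nuspec describes the folder structure to pack.
# $env:BUILD_NUMBER is a placeholder for whatever your build server provides.
nuget pack .\deployment.nuspec -OutputDirectory .\Artifacts -Version "1.0.$env:BUILD_NUMBER"

# Upload installer files to Azure Blob storage.
# The storage account name, key variable, and container name are placeholders.
$context = New-AzureStorageContext -StorageAccountName "mydeploystorage" `
    -StorageAccountKey $storageKey
Get-ChildItem -Path .\Installers -File | ForEach-Object {
    Set-AzureStorageBlobContent -File $_.FullName -Container "installers" `
        -Blob $_.Name -Context $context -Force
}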

Again, the tools used for this stage are really up to you. At a very basic level you could use PowerShell or even batch scripts to run this process. Alternatively you could look at build tools like VSTS, TeamCity, or Jenkins to coordinate the process, which provides the additional benefits of:

  1. Many build systems come with pre-built processes that will do a lot of this work for you.
  2. It’s usually easy to integrate your build system with version control, so that when a new version is committed (or any other type of trigger) a build will be started.
  3. The build systems usually provide some sort of built-in reporting and notification.
  4. Build systems often have workflows that can be used as part of the testing and deployment process.

Test

This step is possibly the most alien for system administrators. Structured code testing is often left to developers, with infrastructure testing limited to things like disaster recovery and performance tests. However, because our infrastructure deployments are now effectively just more code, we can apply testing frameworks to that code and try to find problems before we start a deployment. Given that these deployments can take many hours, finding problems early can be a real benefit in terms of time and money.

There are various testing frameworks out there that you could use, so I recommend picking the one you are comfortable with. My preference is Pester, the PowerShell testing framework. By using Pester, I can write my tests in PowerShell (with some Pester-specific language added on), and I gain Pester’s ability to natively test PowerShell modules out of the box. I split my testing into two phases: pre-deployment and post-deployment testing.

Pre-Deployment Testing

As the name suggests, these are the tests that run before I deploy, and are aimed at catching errors in my scripts before I start a deployment. This can be a big time saver, especially when deployment scripts take hours. I tend to run:

  1. Syntax Checks. I parse all my JSON and PowerShell files to look for simple syntax errors, missing commas, quotation marks, and other typos, to ensure that the scripts will make it through the parser. I have a simple Pester test that loops through all my JSON files and runs the PowerShell ConvertFrom-Json command — if this throws an error, I know it failed (see the sketch after this list).
  2. Best Practices. To get an idea of how my PowerShell conforms to best practices, I run a Pester test that runs the PowerShell Script Analyzer, and fails if there are any errors. These tests are based on the code in Ben Taylor’s “Script Analyzer” article.
  3. Unit Tests. Pester’s initial purpose was to run unit tests against PowerShell scripts, so I run any available unit tests before deployment. It’s not really possible to unit test DSC or ARM templates, but you can run tests against any DSC resources. These can be downloaded from the PowerShell Gallery (and usually come with tests), or you can write tests for custom DSC resources. This article on DSC unit tests is a great starting point for building generic tests for DSC resources.
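As an illustration, here is a minimal sketch of the first two checks as Pester tests. The folder paths are assumptions, and Invoke-ScriptAnalyzer comes from the PSScriptAnalyzer module:

Describe "Pre-deployment checks" {
    # Syntax check: every JSON file must survive a round trip through the parser
    Get-ChildItem -Path .\Templates -Filter *.json -Recurse | ForEach-Object {
        It "$($_.Name) is valid JSON" {
            { Get-Content -Raw -Path $_.FullName | ConvertFrom-Json } | Should Not Throw
        }
    }

    # Best practices: fail if Script Analyzer reports any errors
    It "has no Script Analyzer errors" {
        $results = Invoke-ScriptAnalyzer -Path .\Scripts -Recurse -Severity Error
        @($results).Count | Should Be 0
    }
}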

At this point I’m in a pretty good state to run the deployment. Assuming it doesn’t fail, I can move on to my next set of tests. If any tests do fail, my pipeline stops and I don’t progress any further until the tests pass.

Post Deployment Testing

Once the deployment is complete, I want to be able to check that the environment I just deployed matches the state I defined in my ARM templates and DSC files. The declarative nature of these files means it should be in compliance, but it is good to confirm that nothing has gone wrong, and that what I thought I had modelled in DSC is actually what came out the other end. For example, I have a DSC script that installs IIS, so I have a corresponding test that checks that IIS has been installed. It looks like this when written in Pester:

Describe "Web Server"  {
    It "Is Installed" {
        $Output = Get-WindowsFeature web-server
        $Output.InstallState | Should Be "Installed"
    }
}

You can be as simple or as complex as you want in the tests checking your infrastructure, based on your criteria for a successful deployment.
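Beyond checking individual features, you could also ask DSC itself whether the node still matches its configuration. Here is a minimal sketch, assuming a node named webserver01 that is reachable over PowerShell remoting (the name is a placeholder):

Describe "DSC compliance" {
    It "matches the desired state" {
        # Test-DscConfiguration returns $true when the node's current state
        # matches the configuration last applied to it
        Test-DscConfiguration -ComputerName "webserver01" | Should Be $true
    }
}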

Deploy

The whole point of this exercise is to actually get some infrastructure deployed, which we could have done without any of the previous steps. Following this process gives us several benefits:

  1. We have a copy of the deployment files in a known-good state, at the specific version that we know we want to deploy.
  2. We have packaged these files in the right format for our deployment process, so there is no need to manually zip or arrange files.
  3. We have already performed necessary pre-deployment tasks like uploading installers, config files, etc.
  4. We have tested our deployment files to make sure they are syntactically correct, and know we won’t have to stop the deployment halfway through because of a missing comma.

At this point, you would kick off your ARM deployment process — this may mean downloading or copying the appropriate files from your build output, and running New-AzureResourceGroupDeployment. However, just like we used a build tool to tie our build process to a new version control check-in, we can also use deployment tools to tie our deployment process to a new build. Once a build completes, your deployment software can create a release, and even deploy it automatically. Some examples of tools that can do this include VSTS (again), Octopus Deploy, and Jenkins.
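As a sketch, kicking off the deployment from PowerShell might look like the following. The cmdlet names shown here are from the AzureRM module; the resource group name, location, and template paths are placeholders:

# Log in and make sure the target resource group exists
Login-AzureRmAccount
New-AzureRmResourceGroup -Name "MyApp-Test" -Location "West Europe" -Force

# Deploy the ARM template with its parameter file
New-AzureRmResourceGroupDeployment -ResourceGroupName "MyApp-Test" `
    -TemplateFile ".\Templates\azuredeploy.json" `
    -TemplateParameterFile ".\Templates\azuredeploy.parameters.json" `
    -Verbose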

The Deployment Pipeline

Each of the steps we’ve discussed can be implemented on their own and will add benefit to your process straight away. As you gain familiarity with the techniques, you can layer on more steps until you have a pipeline that runs from code commit to deployment, similar to this:

[Figure: the deployment workflow]

Each step in the process will be a gate in your deployment, and if it fails you don’t move on to the next. This can be controlled by something as simple as some PowerShell or CMD scripts, or something as complex as VSTS or Jenkins — there’s no one right tool or process to use. The process is going to differ markedly depending on what you are trying to deploy, what opportunity there is for testing, which pieces are automated and which are done manually, and how agile your business is.
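For example, a bare-bones set of PowerShell gates might look like this sketch (the test paths and deployment script name are placeholders):

# Stop at the first gate that fails; each step only runs if the previous one passed
$preTests = Invoke-Pester -Script ".\Tests\PreDeployment" -PassThru
if ($preTests.FailedCount -gt 0) { throw "Pre-deployment tests failed" }

.\Deploy-Environment.ps1   # wraps the ARM deployment shown earlier

$postTests = Invoke-Pester -Script ".\Tests\PostDeployment" -PassThru
if ($postTests.FailedCount -gt 0) { throw "Post-deployment tests failed" }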

Your ultimate goal might be to deploy your software and infrastructure on every new commit, for true continuous deployment. In many industries, this may not be practical. Even if that is the case, implementing this pipeline means you can still be in a position where you could deploy a new release from any commit. With the pipeline, you gain confidence in your code. What comes out at the end of the process should be in a known good state, and is ready to go if you want to deploy it.

To give you an example, in one of my environments each commit triggers an automatic deployment to the development environment, as this always needs to be the very latest version. However, the test environment needs to be more stable than the dev environment, so while a release is created automatically in the deployment tool, it is still deployed manually by the development team, with multiple human approvals. Moving the release through to production requires successful deployments to development and test; once that requirement is met, a member of the team can trigger a manual deployment.

Useful Resources

Pester Testing Framework

Continuous deployment with Visual Studio Team Services

Devops on Windows with Jenkins and Azure Resource Manager
