Delivery Pipelines

When you think pipeline, you usually think "I put something in one end and get something out of the other". True, but it's what happens while it's in the "pipe" that's the exciting bit. You can add as many or as few gateways/checkpoints/hurdles as you like: whatever you feel is necessary to ensure the quality and dependability of your software. Maybe you have multiple pipelines, or just one. There isn't a one-size-fits-all pattern, which is great. The more innovative ideas the better.
In this blog I will cover how we created a Continuous Delivery pipeline, and the steps, technologies and methodologies involved.
Some of the bits covered are Chef (linting, testing), SPK (AMI creation), AWS CloudFormation and Selenium testing. I've added links so that you can read and digest those technologies in your own time. This blog will not cover what they do, and the assumption is that you have some basic understanding of each of them.
So where does it all start? Typically there has been some form of change to our Chef code: a bug fix or new feature that needs to be tested out in the wilds of the 'net. Our build agent checks out the code and runs some checks before moving on to the next stage; these make up our "pre-AMI build" stage.
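As a rough illustration of what that stage can look like (the exact tooling will vary), here's a sketch using common Chef linting and unit-testing commands. foodcritic, cookstyle and the spec layout are assumptions about typical Chef checks, not a definitive list of what we run:

```bash
#!/usr/bin/env bash
# Hypothetical pre-AMI checks for a Chef cookbook repo (tool choice is an assumption).
set -euo pipefail

# Lint the cookbook for correctness and style issues.
foodcritic .
cookstyle .

# Run the ChefSpec unit tests under spec/.
rspec spec
```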
Great! The checks passed! So now we can move on to building our AMI. For this we use an in-house tool called the SPK (link above). We have pre-defined templates that dictate the Chef properties, attributes and Packer configurations needed to build our image. Once the AMI has been created, we take a snapshot and return the AMI ID to our build agent.
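The SPK itself is in-house, so I can't show its exact interface, but since it drives Packer under the hood the general shape of this step looks roughly like the following. The template name and variable are made up purely for illustration:

```bash
# Hypothetical Packer invocation; the SPK wraps something along these lines.
# web-app.json and the chef_run_list variable are illustrative names only.
packer build -var 'chef_run_list=recipe[web-app]' web-app.json
```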
This is where it gets interesting...
Within our repo we also have the CloudFormation template, which contains, amongst other things, a list of mappings of the AMIs available in specific regions. Right now, once the AMI ID has been returned from AWS, we scrape the SPK log files, grab the ID, parse the template and update the AMI ID. This triggers a commit on that repo, which another build job is watching and waiting for. It will only fire IF a change has been made to the template file, not the Chef code. This part could probably be replaced with a Lambda that returns the latest AMI ID.
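A minimal sketch of that glue step might look like this, assuming a JSON CloudFormation template, a build log containing the AMI ID, and jq available on the build agent. The file names, log format and mapping keys are all illustrative:

```bash
#!/usr/bin/env bash
# Hypothetical "update the AMI mapping" step; names and paths are assumptions.
set -euo pipefail

# Pull the most recent AMI ID out of the SPK/Packer build log.
AMI_ID=$(grep -oE 'ami-[0-9a-f]+' spk-build.log | tail -n 1)

# Rewrite the region mapping in the CloudFormation template with the new ID.
jq --arg ami "$AMI_ID" '.Mappings.RegionMap."eu-west-1".AMI = $ami' \
  template.json > template.json.tmp && mv template.json.tmp template.json

# Commit the change; the downstream build job watches this file.
git add template.json
git commit -m "Update AMI ID to ${AMI_ID}"
git push
```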
The next build steps are actually building the CloudFormation stack on AWS. This is triggered using the AWS command line by passing in the template and any parameters needed. One of the outputs of the template is the publicly accessible URL of the ELB in that stack. Once this is available and all the relevant services have started, we use that URL to run a selection of Selenium tests. If the tests pass, we use the AWS command line again to delete the stack. Once this has returned an expected status the pipeline is complete.
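Roughly, and with the stack name, parameter and output key invented for illustration, that stage looks something like this:

```bash
#!/usr/bin/env bash
# Hypothetical stack build / test / teardown stage; names are assumptions.
set -euo pipefail

STACK=pipeline-test-stack

# Stand the stack up and wait for creation to complete.
aws cloudformation create-stack \
  --stack-name "$STACK" \
  --template-body file://template.json \
  --parameters ParameterKey=Environment,ParameterValue=test
aws cloudformation wait stack-create-complete --stack-name "$STACK"

# Grab the public ELB URL from the stack outputs (output key is illustrative).
ELB_URL=$(aws cloudformation describe-stacks --stack-name "$STACK" \
  --query "Stacks[0].Outputs[?OutputKey=='ElbUrl'].OutputValue" --output text)

# Run the Selenium tests against the running stack.
TARGET_URL="$ELB_URL" ./run-selenium-tests.sh

# Tear the stack down and wait for deletion to finish.
aws cloudformation delete-stack --stack-name "$STACK"
aws cloudformation wait stack-delete-complete --stack-name "$STACK"
```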
There are lots more opportunities for improvement here: once the stack is up and running, we could run more thorough tests such as load testing, stress testing and security testing (the list is endless). We also have a whole bunch of integration tests written using the ServerSpec DSL, which we have configured to output their results as JUnit-formatted XML. Perfect for pipelines that can read these file types, and perfect for reporting on build-over-time statistics. These still need to be fed into the pipeline.
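For context, getting JUnit XML out of ServerSpec is usually just a matter of how you invoke RSpec; the rspec_junit_formatter gem here is an assumption about how that output gets produced:

```bash
# Hypothetical ServerSpec run emitting JUnit XML (requires the rspec_junit_formatter gem).
rspec --format RspecJunitFormatter --out serverspec-results.xml
```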
This pipeline currently gives us the confidence that our Chef code passes its checks, that an AMI can be built from it, that the CloudFormation stack stands up cleanly, and that the running application passes our Selenium tests.
Thanks for reading it all! Comments and feedback are always welcome!