Tutorial: Continuous Delivery in the Cloud, Part 1 of 6
We help companies deliver software reliably and repeatedly using Continuous Delivery in the Cloud. With Continuous Delivery (CD), teams can deliver new versions of software to production by flattening the software delivery process and decreasing the cycle time between an idea and usable software through the automation of the entire delivery system: build, deployment, test, and release. CD is enabled through a delivery pipeline. With CD, our customers can choose when and how often to release to production. On top of this, we utilize the cloud so that customers can scale their infrastructure up and down and deliver software to users on demand.
We offer a solution called Elastic Operations, which provides a Continuous Delivery platform along with expert engineering support and monitoring of a delivery pipeline that builds, tests, provisions and deploys software to target environments – as often as our customers choose. We’re in the process of open sourcing the platform utilized by Elastic Operations. In this six-part blog series, I am going to go over how we built out a Continuous Delivery solution for one of our customers, the Sea to Shore Alliance:
- Part 1: Introduction – What you’re reading now
- Part 2: CD Pipeline – Automated pipeline to build, test, deploy, and release software continuously
- Part 3: CloudFormation – Scripted virtual resource provisioning
- Part 4: Dynamic Configuration – “Property file less” infrastructure
- Part 5: Deployment Automation – Scripted deployment orchestration
- Part 6: Infrastructure Automation – Scripted environment provisioning
This year, we delivered this Continuous Delivery in the Cloud solution to the Sea to Shore Alliance. The Sea to Shore Alliance is a non-profit organization whose mission is to protect and conserve the world’s fragile coastal ecosystems and their endangered species, such as manatees, sea turtles, and right whales. One of their first software systems tracks and monitors manatees. Prior to Stelligent’s involvement, the application was running on a single instance that was manually provisioned and deployed. Because the processes were manual, there were no automated tests for the infrastructure or deployment, which made it impossible to reproduce environments or deployments the same way every time. Moreover, the knowledge to recreate these environments, builds and deployments was locked in the heads of a few key individuals. The production application for tracking these manatees, developed by Sarvatix, is located here.
In this case study, I describe how we went from an untested manual process, in which the development team manually built software artifacts, created environments, and deployed by hand, to a completely automated delivery pipeline that is triggered with every change.
Figure 1 illustrates the AWS architecture of the infrastructure that we designed for this Continuous Delivery solution.
Two CloudFormation stacks are used: the Jenkins stack – or Jenkins environment – shown on the left, and the Manatee stack – or target environment – shown on the right.
The Jenkins Stack
- Creates the jenkins.example.com Route53 Hosted Zone
- Creates an EC2 instance with Tomcat and Jenkins installed and configured on it
- Runs the CD pipeline
The Manatee stack is slightly different: it uses configuration stored in SimpleDB to create itself. This stack defines the target environment to which the application software is deployed; a sketch of launching such a stack programmatically follows the list below.
The Manatee Stack
- Creates the manatee.example.com Route53 Hosted Zone
- Creates an EC2 instance with Tomcat, Apache, and PostgreSQL installed on it
- Runs the Manatee application.
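Part 3 covers the actual CloudFormation templates in detail. As a rough illustration of the stack-creation step, a pipeline job might launch a stack like this using the boto library for Python; the stack name, template file, and parameter names below are placeholders, not this project’s actual values, and the project’s own tooling may differ.

```python
import boto

# Connect to CloudFormation; boto reads AWS credentials from the
# environment or its config file.
cfn = boto.connect_cloudformation()

# Read the stack template from version control. The file name
# 'manatee.template' is hypothetical.
with open('manatee.template') as f:
    template_body = f.read()

# Launch the target environment stack. The parameter names are
# illustrative, not the real template's.
stack_id = cfn.create_stack(
    'manatee-target',
    template_body=template_body,
    parameters=[('HostedZone', 'manatee.example.com')],
    capabilities=['CAPABILITY_IAM'],  # the stack creates IAM resources
)
print(stack_id)
```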
The Manatee stack is configured with CPU alarms that send an email notification to the developers/administrators when the instance becomes over-utilized. We’re working toward automatically scaling out to additional instances when these alarms are triggered.
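For illustration, a CPU alarm like the ones described above could be created with boto as follows. The instance ID, SNS topic ARN, and threshold values are hypothetical placeholders rather than this project’s actual settings.

```python
import boto.ec2.cloudwatch
from boto.ec2.cloudwatch import MetricAlarm

cw = boto.ec2.cloudwatch.connect_to_region('us-east-1')

# Alarm when average CPU over two consecutive 5-minute periods
# exceeds 80%, notifying an SNS topic that emails the team.
alarm = MetricAlarm(
    name='manatee-cpu-high',
    namespace='AWS/EC2',
    metric='CPUUtilization',
    statistic='Average',
    comparison='>',
    threshold=80,
    period=300,
    evaluation_periods=2,
    dimensions={'InstanceId': 'i-12345678'},          # placeholder
    alarm_actions=['arn:aws:sns:us-east-1:123456789012:manatee-alerts'],
)
cw.create_alarm(alarm)
```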
Both instances are encapsulated behind a security group so that they can communicate with each other over the internal AWS network.
Fast Facts
- Industry: Non-profit
- Profile: The customer tracks and monitors endangered species such as manatees.
- Key Business Issues: The customer’s development team needed unencumbered access to resources along with automated environment creation and deployment.
- Stakeholders: The development team, scientists, and others from the Sea to Shore Alliance
- Solution: Continuous Delivery in the Cloud (Elastic Operations)
- Key Tools/Technologies: AWS – Amazon Web Services (CloudFormation, EC2, S3, SimpleDB, IAM, CloudWatch, SNS), Jenkins, Capistrano, Puppet, Subversion, Cucumber, Liquibase
The Business Problem
The customer needed an operations team that could be scaled up or down depending on the application’s needs. Their main requirement was unencumbered access to resources such as virtual hardware. Specifically, they wanted the ability to create a target environment and run an automated deployment to it without going to a separate team and submitting tickets, emails, etc. In addition to being able to create environments, the customer wanted more control over the resources being used, including the ability to terminate unused resources. To address these requirements we introduced an entirely automated solution that utilizes the AWS cloud to provide resources on demand, along with tooling for testing, environment provisioning and deployment.
On the Manatee project, we have five key objectives for the delivery infrastructure. The development team should be able to:
- Deliver new software or updates to users on demand
- Reprovision target environment configuration on demand
- Provision environments on demand
- Remove configuration bottlenecks
- Terminate instances when they are no longer needed
Our Team
Stelligent’s team consisted of an account manager and one polyskilled DevOps engineer who built, managed, and supported the Continuous Delivery pipeline.
Our Solution
Our solution is a single delivery pipeline that gives our customer (developers, testers, etc.) unencumbered access to resources and single-click automated deployment to production. To enable this, the pipeline needed to include:
- The ability for any authorized team member to create a new target environment using a single click
- Automated deployment to the target environment
- End-to-end testing
- The ability to terminate unnecessary environments
- Automated deployment into production with a single click
The delivery pipeline improves efficiency and reduces costs by removing constraints on the development team. The solution includes:
On-Demand Provisioning – All hardware is provided via EC2’s virtual instances in the cloud, on demand. As part of the CD pipeline, any authorized team member can use the Jenkins CreateTargetEnvironment job to order target environments for development work.
Continuous Delivery Solution so that the team can deliver software to users on demand:
- Dependency management using Ivy (through Grails)
- Database integration/change management using Liquibase
- Testing using Cucumber
- Custom Capistrano scripts for remote deployment
- Continuous Integration server using Jenkins
- Continuous Delivery pipeline system – we customized Jenkins to build a delivery pipeline
Development Infrastructure – Consists of:
- Tomcat: hosts the Manatee application
- Apache: hosts the front-end website and uses virtual hosts for proxying and redirection
- PostgreSQL: the database for the Manatee application
- Groovy: the application is written in Grails, which uses Groovy
Instance Management – Any authorized team member can monitor virtual instance usage from Jenkins. Per policy, test instances are automatically terminated every two days, which promotes ephemeral environments and test automation.
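A minimal sketch of how such a cleanup policy might be scripted with boto appears below; the `Environment=test` tag convention and the region are assumptions for illustration, not the project’s actual configuration.

```python
from datetime import datetime, timedelta

import boto.ec2

ec2 = boto.ec2.connect_to_region('us-east-1')
cutoff = datetime.utcnow() - timedelta(days=2)

# Find running instances tagged as test environments (assumed tag).
reservations = ec2.get_all_instances(
    filters={'tag:Environment': 'test',
             'instance-state-name': 'running'})

stale = []
for reservation in reservations:
    for instance in reservation.instances:
        # launch_time is an ISO 8601 string, e.g. '2012-06-01T12:00:00.000Z'
        launched = datetime.strptime(instance.launch_time,
                                     '%Y-%m-%dT%H:%M:%S.%fZ')
        if launched < cutoff:
            stale.append(instance.id)

# Terminate anything older than the two-day policy window.
if stale:
    ec2.terminate_instances(instance_ids=stale)
```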
Deployment to Production – A Boolean parameter (i.e., a checkbox the user selects) in the delivery pipeline determines whether to deploy to production.
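Jenkins exposes build parameters to build steps as environment variables, with a Boolean parameter arriving as the string 'true' or 'false', so a deployment script can gate on it. The parameter name DEPLOY_TO_PRODUCTION below is illustrative.

```python
import os
import sys

# Jenkins passes the checkbox value through the environment.
if os.environ.get('DEPLOY_TO_PRODUCTION', 'false') != 'true':
    print('Production deployment not selected; stopping after staging.')
    sys.exit(0)

# ...otherwise, invoke the production deployment (e.g. the Capistrano
# task) from here...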
System Monitoring and Disaster Recovery – The AWS CloudWatch service provides detailed monitoring that notifies us of instance errors or anomalies through statistics such as CPU utilization, network I/O, and disk utilization. On top of this monitoring we’ve implemented an automated disaster recovery solution.
The tools and services we utilized are enumerated below.
Tool: AWS EC2 What is it? Cloud-based virtual hardware instances Our Use: We use EC2 for all of our virtual hardware needs. All instances, from development to production, run on EC2.
Tool: AWS S3 What is it? Cloud-based storage Our Use: We use S3 as both a binary repository and a place to store successful build artifacts.
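As a rough sketch of this usage, pushing a successful build artifact and pulling it back down later in the pipeline might look like the following with boto; the bucket and key names are placeholders.

```python
import boto

s3 = boto.connect_s3()
bucket = s3.get_bucket('manatee-artifacts')  # hypothetical bucket

# Push a successful build artifact to the binary repository...
key = bucket.new_key('builds/manatee-42.war')
key.set_contents_from_filename('target/manatee.war')

# ...and retrieve it in a later pipeline stage for deployment.
key.get_contents_to_filename('/tmp/manatee.war')
```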
Tool: AWS IAM What is it? User-based access to AWS resources Our Use: We create users dynamically and use their AWS access and secret access keys so we don’t have to store credentials as properties.
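A minimal boto sketch of creating a user and its access keys on the fly is shown below. The user name is hypothetical, and boto returns the new key as nested dictionaries mirroring the XML response structure.

```python
import boto

iam = boto.connect_iam()

# Create a pipeline-specific user and generate its key pair.
iam.create_user('manatee-pipeline')
response = iam.create_access_key('manatee-pipeline')

# Dig the key material out of the nested response.
access_key = (response['create_access_key_response']
                      ['create_access_key_result']['access_key'])
print(access_key['access_key_id'])
print(access_key['secret_access_key'])
```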
Tool: AWS CloudWatch What is it? System monitoring Our Use: Monitors all instances in production. If an instance comes under abnormal strain or shuts down unexpectedly, SNS sends an email to designated parties.
Tool: AWS SNS What is it? Email notifications Our Use: When an environment is created or a deployment is run, SNS is used to send notifications to affected parties.
Tool: Cucumber What is it? Acceptance testing Our Use: Cucumber is used for testing at almost every step of the way. We use Cucumber to test infrastructure, deployments and application code to ensure correct functionality. Cucumber’s plain-English syntax allows both technical personnel and customers to communicate through executable tests.
Tool: Liquibase What is it? Automated database change management Our Use: Liquibase is used for all database changesets. When a change is necessary within the database, it is made in a Liquibase changelog.xml file.
Tool: AWS CloudFormation What is it? Templating language for orchestrating all AWS resources Our Use: CloudFormation is used for creating a fully working Jenkins environment and target environment. For instance, for the Jenkins environment it creates the EC2 instance with CloudWatch monitoring alarms, the associated IAM user, and the SNS notification topic: everything required for Jenkins to build. This, along with Jenkins, makes up the major pieces of the infrastructure.
Tool: AWS SimpleDB What is it? Cloud-based NoSQL database Our Use: SimpleDB is used for storing dynamic property configuration and passing properties through the CD pipeline. As part of the environment creation process, we store values such as IP addresses that we need when deploying the application to the newly created environment.
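To make this concrete, here is a sketch of storing and retrieving pipeline properties in SimpleDB with boto; the domain, item, and attribute names are invented for illustration, and Part 4 covers the real configuration scheme.

```python
import boto

sdb = boto.connect_sdb()

# At environment creation time, record the values that later pipeline
# stages will need. Domain and attribute names are hypothetical.
domain = sdb.create_domain('manatee-properties')  # no-op if it exists
item = domain.new_item('target-environment')
item['public_ip'] = '10.0.0.12'
item['database_url'] = 'jdbc:postgresql://10.0.0.12:5432/manatee'
item.save()

# A downstream deployment job reads the properties back instead of
# relying on a checked-in property file.
props = domain.get_item('target-environment')
print(props['public_ip'])
```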
Tool: Jenkins What is it? Continuous Integration server Our Use: Jenkins runs the CD pipeline, which we implemented using the Build Pipeline plugin; it does the building, testing, environment creation and deploying. Since the CD pipeline is also code (i.e. configuration code), we version our Jenkins configuration.
Tool: Capistrano What is it? Deployment automation Our Use: Capistrano orchestrates and automates deployments. It is a Ruby-based deployment DSL that can be used to deploy to multiple platforms including Java, Ruby and PHP. It is called as part of the CD pipeline and deploys to the target environment.
Tool: Puppet What is it? Infrastructure automation Our Use: Puppet takes care of the environment provisioning. CloudFormation requests the environment and then calls Puppet to do the dynamic configuration. We configured Puppet to install, configure, and manage the packages, files and services.
Tool: Subversion What is it? Version control system Our Use: Subversion is the version control repository where every piece of the Manatee infrastructure is stored. This includes the environment scripts such as the Puppet modules, the CloudFormation templates, Capistrano deployment scripts, etc.
We combined the on-demand capability of the cloud with a proven Continuous Delivery approach to build an automated, one-click method for building and deploying software into scripted production environments.
In the rest of this blog series, I will describe the technical implementation of how we built this infrastructure into a complete solution for continuously delivering software. The series will consist of the following:
Part 2 of 6 – CD Pipeline: I will go through the technical implementation of the CD pipeline using Jenkins. I will also cover Jenkins versioning, pulling and pushing artifacts from S3, and Continuous Integration.
Part 3 of 6 – CloudFormation: I will go through a CloudFormation template we’re using to orchestrate the creation of AWS resources and to build the Jenkins and target infrastructure.
Part 4 of 6 – Dynamic Configuration: I will cover dynamic property configuration using SimpleDB.
Part 5 of 6 – Deployment Automation: I will explain Capistrano in detail, along with how we used Capistrano to deploy build artifacts and run Liquibase database changesets against target environments.
Part 6 of 6 – Infrastructure Automation: I will describe the features of Puppet in detail, along with how we’re using Puppet to build and configure the target environments to which the software is deployed.