Tutorial: Continuous Delivery in the Cloud Part 6 of 6
In part 1 of this series, I introduced the Continuous Delivery (CD) pipeline for the Manatee Tracking application. In part 2, I went over how we use this CD pipeline to deliver software from check-in to production. In part 3, we focused on how CloudFormation is used to script the virtual AWS components that create the Manatee infrastructure. Then in part 4, we focused on a “property file less” environment by dynamically setting and retrieving properties. Part 5 explained how we use Capistrano for scripting our deployment. A list of topics for each of the articles is summarized below:
Part 1: Introduction – Introduction to continuous delivery in the cloud and the rest of the articles
Part 2: CD Pipeline – In-depth look at the CD Pipeline
Part 3: CloudFormation – Scripted virtual resource provisioning
Part 4: Dynamic Configuration – “Property file less” infrastructure
Part 5: Deployment Automation – Scripted deployment orchestration
Part 6: Infrastructure Automation – What you’re reading now
In this part of the series, I am going to show how we use Puppet in combination with CloudFormation to script our target environment infrastructure, preparing it for a Manatee application deployment.
What is Puppet?
Puppet is a Ruby-based infrastructure automation tool, primarily used for provisioning environments and managing configuration. Puppet supports multiple operating systems, making your infrastructure automation cross-platform.
How does Puppet work?
Puppet uses a library called Facter which collects facts about your system. Facter returns details such as the operating system, architecture, IP address, etc. Puppet uses these facts to make decisions for provisioning your environment. Below is an example of the facts returned by Facter.
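The original listing is not reproduced here, but typical facter output has the shape shown below (all values are illustrative, not from the Manatee environment):

```
architecture => x86_64
domain => ec2.internal
fqdn => ip-10-38-11-222.ec2.internal
hostname => ip-10-38-11-222
ipaddress => 10.38.11.222
kernel => Linux
memorysize => 1.70 GB
operatingsystem => CentOS
operatingsystemrelease => 5.6
```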
Puppet uses the operating system fact to decide the service name, as shown below:
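A sketch of such a case statement, consistent with the description that follows (the surrounding class is omitted):

```puppet
case $operatingsystem {
  centos, redhat: {
    $service_name = 'ntpd'
    $conf_file    = 'ntp.conf.el'
  }
}
```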
With this case statement, if the operating system is either centos or redhat, the service name ntpd and the configuration file ntp.conf.el are used.
Puppet is declarative by nature. Inside a Puppet module you define the desired end state of the environment after the Puppet run. Puppet enforces this state during the run. If at any point the environment does not conform to the desired state, the Puppet run fails.
Anatomy of a Puppet Module
To script the infrastructure, Puppet uses modules for organizing related code to perform a specific task. A Puppet module has multiple subdirectories that contain the resources for performing the intended task:
manifests/: Contains the manifest class files for defining how to perform the intended task
files/: Contains static files that the node can download during the installation
lib/: Contains plugins
templates/: Contains templates which can be used by the module’s manifests
tests/: Contains tests for the module
Puppet also uses a site-wide manifest, site.pp, to manage multiple modules together, and another manifest, default.pp, to define what to install on each node.
How to run Puppet
Puppet can be run using either a master agent configuration or a solo installation (puppet apply).
Master Agent: With a master agent installation, you configure one main master Puppet node which manages and configures all of your agent nodes (target environments). The master initiates the installation of the agent and manages it throughout its lifecycle. This model enables you to roll out infrastructure changes to all of your agents in parallel by controlling the master node.
Solo: In a solo Puppet run, it’s up to the user to place the desired Puppet module on the target environment. Once the module is on the target environment, the user needs to run puppet apply --modulepath=/path/to/modules/ /path/to/site.pp. Puppet will then provision the server with the provided modules and site.pp without relying on another node.
Why do we use Puppet?
We use Puppet to script and automate our infrastructure — making our environment provisioning repeatable, fully automated, and less error prone. Furthermore, scripting our environments gives us complete control over our infrastructure and the ability to terminate and recreate environments as often as we choose.
Puppet for Manatees
In the Manatee infrastructure, we use Puppet for provisioning our target environments. I am going to go through our manifests and modules while explaining their use and purpose. In our Manatee infrastructure, we create a new target environment as part of the CD pipeline – discussed in part 2 of the series, CD Pipeline. Below I provide a high-level summary of the environment provisioning process:
1. CloudFormation dynamically creates a params.pp manifest with AWS variables.
2. CloudFormation runs puppet apply as part of UserData.
3. Puppet runs the modules defined in hosts/default.pp.
4. Cucumber acceptance tests are run to verify the infrastructure was provisioned correctly.
Now that we know at a high-level what’s being done during the environment provisioning, let’s take a deeper look at the scripts in more detail. The actual scripts can be found here: Puppet
First we will start off with the manifests.
The site.pp manifest (shown below) serves two purposes: it loads the other manifests, default.pp and params.pp, and it sets the stages pre, main and post.
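A minimal site.pp along these lines might look like the following sketch (everything except the default.pp and params.pp names is an assumption):

```puppet
# site.pp -- hypothetical reconstruction of the staged layout
stage { 'pre':  before  => Stage['main'] }
stage { 'post': require => Stage['main'] }
import 'default.pp'
import 'params.pp'
```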
These stages define the order in which Puppet modules should be run. If a Puppet module is assigned to the pre stage, it will run before modules assigned to main or post. If stages aren’t defined, Puppet determines the order of execution itself. The default.pp manifest (referenced below) shows how stages are assigned when executing Puppet modules.
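A hypothetical sketch of what hosts/default.pp might contain, using the modules discussed in this article (the exact class declarations are assumptions):

```puppet
# hosts/default.pp -- hypothetical; module list matches this article
node default {
  class { 'java':   stage => 'pre' }
  class { 'system': stage => 'pre' }
  class { 'httpd': }
  class { 'tomcat': }
  class { 'postgresql': }
  class { 'subversion': }
  class { 'groovy': }
}
```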
The default.pp manifest also defines which Puppet modules to use for provisioning the target environment.
params.pp (shown below), loaded from site.pp, is dynamically created by CloudFormation and sets the AWS property values that are used later in the Puppet modules.
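A params.pp of roughly this shape is sketched below; the property names and values are placeholders, since the real ones are injected by CloudFormation at stack-creation time:

```puppet
# params.pp -- shape only; names and values are placeholders
# (the real file is generated by CloudFormation)
class params {
  $access_key        = 'ACCESS_KEY_FROM_CLOUDFORMATION'
  $secret_access_key = 'SECRET_KEY_FROM_CLOUDFORMATION'
  $s3_bucket         = 'S3_BUCKET_FROM_CLOUDFORMATION'
  $domain            = 'DOMAIN_FROM_CLOUDFORMATION'
}
```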
Now that we have an overview of the manifests used, let’s take a look at the Puppet modules themselves.
In our java module, which is run in the pre stage, we perform a simple installation using packages. This is easily handled in Puppet with the package resource, which relies on Puppet’s knowledge of the operating system and its package manager: Puppet simply installs the package that is declared.
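A minimal sketch of such a java module (the package name is an assumption):

```puppet
# modules/java/manifests/init.pp -- package name is an assumption
class java {
  package { 'java-1.6.0-openjdk': ensure => installed }
}
```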
The next module we’ll discuss is system. System is also run during the pre
stage and is used for the setup of all the extra operations that don’t necessarily need their own module. These actions include setting up general packages (gcc, make, etc.), installing ruby gems (AWS sdk, bundler, etc.), and downloading custom scripts used on the target environment.
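An abridged, hypothetical sketch of the system module, with package and gem names drawn from the description above (they are assumptions, not the original code):

```puppet
# modules/system/manifests/init.pp -- abridged, hypothetical sketch
class system {
  include params

  # general build tooling
  package { ['gcc', 'make']: ensure => installed }

  # rubygems, installed via the gem provider
  package { ['aws-sdk', 'bundler']:
    ensure   => installed,
    provider => gem,
  }

  # the file, download_file and exec resources discussed in the
  # rest of this section would also live here
}
```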
First I want to point out that at the top we specify include params. This gives the system module access to params.pp, so we can use the properties defined there.
This enables us to define the parameters in one central location and then reference that location from the other modules.
As we move through the script, we use the package resource as in the previous modules. For each rubygem we use the package resource and explicitly tell Puppet to use the gem provider. You can specify other providers, such as rpm and yum.
We use the file resource to create files from templates.
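A hedged example of such a file resource (the path and template name are assumptions):

```puppet
# hypothetical excerpt -- path and template name are assumptions
file { '/home/ec2-user/aws.config':
  ensure  => file,
  owner   => 'ec2-user',
  content => template('system/aws.config.erb'),
}
```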
In the aws.config.erb template (referenced above) we use the properties defined in params.pp to dynamically create an aws.config credential file. This file is then used by our database_update.rb script for connecting to S3.
Speaking of the database_update.rb script, we need to get it onto the target environment. To do this, we define a download_file resource.
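One plausible way to implement such a download_file defined resource is with a wget-based exec; the parameter names and URL here are assumptions, not the original code:

```puppet
# hypothetical implementation of a download_file defined resource
define download_file($site = '', $cwd = '') {
  exec { "download-${name}":
    command => "wget ${site}/${name}",
    cwd     => $cwd,
    creates => "${cwd}/${name}",   # makes the download idempotent
    path    => ['/usr/bin', '/bin'],
  }
}

# usage (bucket URL is a placeholder)
download_file { 'database_update.rb':
  site => 'https://s3.amazonaws.com/manatee-bucket',
  cwd  => '/home/ec2-user',
}
```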
This creates a new resource type for Puppet to use. With it, we are able to download both the database_update.rb script and the id_rsa.pub public SSH key.
As a final step for setting up the system, we execute a bash line that copies the id_rsa.pub contents into the authorized_keys file for the ec2-user. This enables clients with the corresponding id_rsa private key to SSH into the target environment as ec2-user.
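That final step could be expressed as an exec resource along these lines (file paths and the guard condition are assumptions):

```puppet
# hypothetical excerpt -- paths are assumptions
exec { 'authorize-key':
  command => 'cat /home/ec2-user/id_rsa.pub >> /home/ec2-user/.ssh/authorized_keys',
  path    => ['/bin', '/usr/bin'],
  # only append once, so repeated Puppet runs stay idempotent
  unless  => 'grep -q -f /home/ec2-user/id_rsa.pub /home/ec2-user/.ssh/authorized_keys',
}
```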
The Manatee infrastructure uses Apache for the web server, Tomcat for the app server, and PostgreSQL for its database. Puppet sets these up as part of the main stage, meaning they run, in order, after the pre stage modules have completed.
In our httpd module, we perform several steps discussed previously: the httpd package is installed and a new configuration file is created from a template.
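An abridged, hypothetical sketch of the httpd module (the config path and template name are assumptions):

```puppet
# modules/httpd/manifests/init.pp -- abridged, hypothetical sketch
class httpd {
  include params

  package { 'httpd': ensure => installed }

  file { '/etc/httpd/conf/httpd.conf':
    ensure  => file,
    content => template('httpd/httpd.conf.erb'),  # template name assumed
    require => Package['httpd'],
    notify  => Service['httpd'],
  }

  service { 'httpd':
    ensure  => running,
    enable  => true,
    require => Package['httpd'],
  }
}
```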
The new piece of functionality used in our httpd module is the service resource, which lets us define the state the httpd service should be in at the end of our run. In this case, we declare that it should be running.
The Tomcat module again uses package to define what to install and service to declare the end state of the tomcat service.
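An abridged, hypothetical sketch of the tomcat module (package name, directory path and ownership are assumptions):

```puppet
# modules/tomcat/manifests/init.pp -- abridged, hypothetical sketch
class tomcat {
  package { 'tomcat6': ensure => installed }  # package name assumed

  # file used to create a directory rather than a regular file
  file { '/usr/share/tomcat6/webapps':
    ensure  => 'directory',
    owner   => 'tomcat',
    require => Package['tomcat6'],
  }

  service { 'tomcat6':
    ensure  => running,
    enable  => true,
    require => Package['tomcat6'],
  }
}
```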
Tomcat uses the file resource differently than the previous modules: tomcat uses file for creating directories, defined with ensure => “directory”.
In the postgresql module we use the package resource for installing PostgreSQL, build files from templates using the file resource, perform bash executions with exec, and declare the intended state of the PostgreSQL service using the service resource.
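An abridged, hypothetical sketch of that module's skeleton (package name, paths and template name are all assumptions):

```puppet
# modules/postgresql/manifests/init.pp -- abridged, hypothetical sketch
class postgresql {
  include params

  package { 'postgresql-server': ensure => installed }  # name assumed

  exec { 'initdb':
    command => 'service postgresql initdb',
    creates => '/var/lib/pgsql/data/PG_VERSION',  # skip if already initialized
    path    => ['/sbin', '/usr/sbin', '/bin', '/usr/bin'],
    require => Package['postgresql-server'],
  }

  file { '/var/lib/pgsql/data/pg_hba.conf':
    content => template('postgresql/pg_hba.conf.erb'),  # template assumed
    require => Exec['initdb'],
    notify  => Service['postgresql'],
  }

  service { 'postgresql':
    ensure  => running,
    require => Exec['initdb'],
  }
}
```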
In this module we are creating a new user on the PostgreSQL database:
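This could look something like the following excerpt; the user name and the password property are placeholders, not the original code:

```puppet
# hypothetical excerpt -- user and password names are placeholders
exec { 'create-db-user':
  command => "psql -c \"CREATE USER manatee_user WITH PASSWORD '${params::database_password}'\"",
  user    => 'postgres',
  path    => ['/usr/bin', '/bin'],
  require => Service['postgresql'],
}
```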
In this next section we download the latest Manatee database SQL dump.
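Using the custom download_file resource from the system section, this might look like the excerpt below (bucket and file names are placeholders):

```puppet
# hypothetical excerpt -- bucket and file names are placeholders
download_file { 'manatee.sql':
  site => "https://s3.amazonaws.com/${params::s3_bucket}",
  cwd  => '/tmp',
}
```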
In the section below, we load the database with the SQL file. This builds our target environments with the production database content, giving developers an exact-replica sandbox to work in.
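A sketch of that load step (database name and dump path are placeholders):

```puppet
# hypothetical excerpt -- database and path names are placeholders
exec { 'load-db-dump':
  command => 'psql manatee -f /tmp/manatee.sql',
  user    => 'postgres',
  path    => ['/usr/bin', '/bin'],
  require => Download_file['manatee.sql'],
}
```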
Lastly in our Puppet run, we install subversion and groovy on the target node. We could have just included these in the system module, but they seemed general-purpose enough to warrant individual modules.
Subversion manifest:
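A sketch of what it likely amounts to:

```puppet
# modules/subversion/manifests/init.pp -- hypothetical sketch
class subversion {
  package { 'subversion':
    ensure => installed,
  }
}
```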
Groovy manifest:
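An abridged sketch of the download-and-extract pattern described below (the version number, download URL and install path are all assumptions):

```puppet
# modules/groovy/manifests/init.pp -- abridged, hypothetical sketch
class groovy {
  $version = '1.8.2'  # hypothetical version

  download_file { "groovy-${version}.tar.gz":
    site => 'http://dist.groovy.codehaus.org/distributions',  # URL assumed
    cwd  => '/tmp',
  }

  exec { 'extract-groovy':
    command => "tar -xzf /tmp/groovy-${version}.tar.gz -C /usr/local",
    creates => "/usr/local/groovy-${version}",  # skip if already extracted
    path    => ['/bin', '/usr/bin'],
    require => Download_file["groovy-${version}.tar.gz"],
  }
}
```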
The Subversion manifest is relatively straightforward, as we are just using the package resource. The Groovy manifest is slightly different: we download the Groovy tar, place it on the filesystem, and then extract it.
We’ve gone through how the target environment is provisioned. We do, however, have one more task: testing. It’s not enough to assume that if Puppet doesn’t error out, everything was installed successfully. For this reason, we use Cucumber to run acceptance tests against our environment. Our tests check that services are running, configuration files are present, and the right packages have been installed.
Puppet allows us to completely script and version our target environments. Consequently, this enables us to treat environments as disposable entities. As a practice, we create a new target environment every time our CD pipeline is run. This way we are always deploying against a known state.
As our blog series comes to a close, let’s recap what we’ve gone through. In the Manatee infrastructure we use CloudFormation for scripting AWS resources, Puppet for scripting target environments, Capistrano for deployment automation, SimpleDB and CloudFormation for dynamic properties, and Jenkins for coordinating all the resources into one cohesive unit that moves a Manatee application change from check-in to production in just a single click.