
Creating a CI/CD Pipeline on AWS - Part II: CodeBuild

5. May 2017 00:25 by Aaron Medacco | 0 Comments

Welcome to Part II of this series on setting up your own continuous integration and continuous delivery pipeline on Amazon Web Services. If you missed Part I where we created a source control repository using AWS CodeCommit, you can check it out here.

In this post, we'll be creating the build and test stages of our pipeline using Amazon's fully managed build service, AWS CodeBuild. All commands will be issued using the AWS CLI so that the fast-paced updates of the web console don't date this series.

[Image: AWS CodeBuild logo]

Last time, we created a CodeCommit repository, but only pushed some test files to it. Since we're going to need a meaningful project to build, I'll be using the demo application provided by expressjs.com. I've committed this project to the CodeCommit repository created in Part I. You should already be familiar with committing and pushing code to CodeCommit, so for brevity I won't spell out those steps in detail.

Granting permissions for your user account to use AWS CodeBuild:

  1. Open a command prompt or terminal window.
  2. Run the following commands substituting your user's name for [username]:
    aws iam attach-user-policy --user-name [username] --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
    aws iam attach-user-policy --user-name [username] --policy-arn arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess

This grants your user the ability to invoke CodeBuild commands, assuming they didn't already have it. We also need to create a bucket on S3 to store our build artifacts.

Note: Best practice would dictate that fewer permissions be granted to the user, but I am being lenient with access for this tutorial. 
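If you'd like to verify that the policies were attached, the following command lists the managed policies on the user (same [username] placeholder as above):

    aws iam list-attached-user-policies --user-name [username]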

Creating an S3 bucket to store our build artifacts:

  1. In your command prompt or terminal window, run the following command to create an S3 bucket, substituting your bucket's name for [bucketname]:
    aws s3 mb s3://[bucketname]
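Keep in mind that CodeBuild expects the artifact bucket to live in the same region as the build project. If your CLI's default region isn't the one you're building in, you can be explicit (us-east-1 below is just an example):

    aws s3 mb s3://[bucketname] --region us-east-1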

Creating a service role for the AWS CodeBuild service:

We need to create an IAM role that will allow CodeBuild to access other services on our behalf. While the management console can create this role for you automatically, the CLI offers no such shortcut. The following is taken straight from Amazon's documentation for setting up a service role for CodeBuild:

  1. Make an empty directory on your file system and create the following files:
    create-role.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "codebuild.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    put-role-policy.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "CloudWatchLogsPolicy",
          "Effect": "Allow",
          "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
          ],
          "Resource": [
            "*"
          ]
        },
        {
          "Sid": "CodeCommitPolicy",
          "Effect": "Allow",
          "Action": [
            "codecommit:GitPull"
          ],
          "Resource": [
            "*"
          ]
        },
        {
          "Sid": "S3GetObjectPolicy",
          "Effect": "Allow",
          "Action": [
            "s3:GetObject",
            "s3:GetObjectVersion"
          ],
          "Resource": [
            "*"
          ]
        },
        {
          "Sid": "S3PutObjectPolicy",
          "Effect": "Allow",
          "Action": [
            "s3:PutObject"
          ],
          "Resource": [
            "*"
          ]
        }
      ]
    }
  2. In your command prompt or terminal window, switch your working directory to the directory where these files live.
  3. Run the following commands:
    aws iam create-role --role-name CodeBuildServiceRole --assume-role-policy-document file://create-role.json
    aws iam put-role-policy --role-name CodeBuildServiceRole --policy-name CodeBuildServiceRolePolicy --policy-document file://put-role-policy.json
  4. Write down the ARN value of the created role output by the first command. We'll need it to create our build project.
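If you lose track of the ARN, you can retrieve it at any time; the --query and --output flags below simply trim the response down to the ARN itself:

    aws iam get-role --role-name CodeBuildServiceRole --query Role.Arn --output text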

Creating your build project in AWS CodeBuild:

  1. Using the same command prompt or terminal window, run the following command, substituting the values listed below:

    Name for your project -> [ProjectName]
    Description for your project -> [ProjectDescription]
    HTTPS clone URL for your CodeCommit repository -> [CloneUrlHttp]
    Bucket name of the bucket created earlier -> [BucketName]
    ARN for the service role you created in the preceding step -> [ServiceRoleARN]
    aws codebuild create-project --name [ProjectName] --description "[ProjectDescription]" --source type="CODECOMMIT",location="[CloneUrlHttp]" --artifacts type="S3",location="[BucketName]",name="BuildOutput.zip",packaging="ZIP" --environment type="LINUX_CONTAINER",computeType="BUILD_GENERAL1_SMALL",image="aws/codebuild/nodejs:7.0.0" --service-role "[ServiceRoleARN]"
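To confirm the project was created with the settings you intended, you can fetch its definition back out of CodeBuild:

    aws codebuild batch-get-projects --names [ProjectName]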

Adding tests to your project:

No CI/CD pipeline is complete without testing. We'll include testing using Mocha. Our tests will be trivial, but they illustrate how you can incorporate testing with AWS CodeBuild.

  1. Create a file named test.js that contains the following code:
    var assert = require('assert');
    describe('String Tests', function() {
      describe('Comparison', function() {
        it('Should be equal when strings are the same.', function() {
          assert.equal("No mercy for the misguided.", "No mercy for the misguided.");
        });
      });
    });
  2. Commit this file to the root of the master branch of the CodeCommit repository we created in Part I. (If you'd like to run the test locally first, see the sketch below.)
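If you have Node.js installed, you can sanity-check the test locally before pushing; Mocha should report one passing test:

    npm install -g mocha
    mocha test.js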

Running a build of your project:

AWS CodeBuild requires a build spec in order to run a build of your project. Build specs are YAML files that tell CodeBuild what to do when it builds your project, such as any tests to run, dependencies to install, environment variables to set, and build outputs to collect. You can read more about build specs here. Since we didn't define a build spec when we created the project, we'll include one in our project's root. The following is the build spec YAML file we'll use:

version: 0.1

phases:
  install:
    commands:
      - echo Installing Express...
      - npm install express
      - echo Installing Mocha...
      - npm install -g mocha
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
  build:
    commands:
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - echo Running tests...
      - mocha test.js
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - '**/*'
  1. Make sure to commit this file as buildspec.yml (the default file name CodeBuild looks for) in the root of your CodeCommit repository.
  2. To start a build of the project, run the following command in your prompt or terminal window substituting your project's name for [ProjectName]:
    aws codebuild start-build --project-name [ProjectName]
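Builds run asynchronously, so start-build returns immediately with metadata that includes a build ID. You can check on progress from the CLI as well, substituting the ID returned by start-build for [BuildId]:

    aws codebuild list-builds-for-project --project-name [ProjectName]
    aws codebuild batch-get-builds --ids [BuildId]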

If we log in to the CodeBuild management console, we'll see this:

[Image: CodeBuild console showing a succeeded build]

Our build succeeded!

If we break the test on purpose by changing test.js to the following, recommit to the repository, and run another build, we get:

var assert = require('assert');
describe('String Tests', function() {
  describe('Comparison', function() {
    it('Should be equal when strings are the same.', function() {
      assert.equal("No mercy for the misguided.", "No mercy for the wretched.");
    });
  });
});

[Image: CodeBuild console showing a failed build]

The build fails, which is what we want when tests fail.

If you want to dive deeper into CodeBuild, I encourage you to head over to the AWS documentation. In the next part of this series, we'll continue building our pipeline by incorporating CodeDeploy, Amazon's automated code deployment service, to deploy code to EC2 instances in our AWS environment.

Cheers!


Creating a CI/CD Pipeline on AWS - Part I: CodeCommit

2. May 2017 16:57 by Aaron Medacco | 0 Comments

This will be the first in a series of posts on setting up your own continuous integration and continuous delivery pipeline on Amazon Web Services. I'll be using the Developer Tools provided by Amazon (CodeCommit, CodeBuild, CodeDeploy, CodePipeline) for this series and will stick to using commands with the AWS CLI instead of the management console. 

The first part of our CI/CD pipeline entails setting up our source control repository using AWS CodeCommit. CodeCommit is Amazon's managed source control service that we'll be pushing code changes to over HTTPS using Git credentials.

[Image: AWS CodeCommit logo]

Before I get started, make sure you've created an AWS account with a privileged user who has access to invoke commands for setup. Also be sure to install the latest version of the AWS CLI and Git on your computer. I'll assume you've already configured the AWS CLI to operate in the region you'd like to set up your pipeline in.

Granting permissions for your user account to use AWS CodeCommit:

  1. Open a command prompt or terminal window.
  2. Run the following commands substituting your user's name for [username]:
    aws iam attach-user-policy --user-name [username] --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitFullAccess
    aws iam attach-user-policy --user-name [username] --policy-arn arn:aws:iam::aws:policy/IAMReadOnlyAccess
    aws iam attach-user-policy --user-name [username] --policy-arn arn:aws:iam::aws:policy/IAMSelfManageServiceSpecificCredentials

This grants your user full access to CodeCommit and allows them to provision their own Git credentials, which we'll use later.

Creating Git Credentials for HTTPS for your user account:

  1. Using the same command prompt or terminal window, run the following command substituting your user's name for [username]:
    aws iam create-service-specific-credential --user-name [username] --service-name codecommit.amazonaws.com
  2. You'll receive your new Git credentials as output when the command finishes. Copy these and save them in a secure place for later.
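The password portion of the credentials is only displayed at creation time, but you can list (and later deactivate or delete) the credentials that exist for a user:

    aws iam list-service-specific-credentials --user-name [username] --service-name codecommit.amazonaws.com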

Creating your CodeCommit repository:

  1. Using the same command prompt or terminal window, run the following command substituting what you'd like to name your CodeCommit repository for [repositoryname]:
    aws codecommit create-repository --repository-name [repositoryname]
  2. Upon completion, you'll receive output detailing attributes about your new repository. Copy these, too. We'll be using the HTTPS URL to clone our repository.

Cloning your CodeCommit repository to a local repository:

  1. Using the same command prompt or terminal window, run the following command, substituting your repository's HTTPS clone URL for [CloneUrlHttp] and the local directory you want to use for the local repository for [localrepository]:
    git clone [CloneUrlHttp] [localrepository]
  2. You should be prompted to enter the Git credentials you generated in the earlier step. Your prompt may look different depending on your computer's configuration and operating system.

    [Image: Git credentials prompt]
  3. Enter the Git credentials you generated in the earlier step and proceed. 
  4. Your clone should now be complete. In our case, we just cloned an empty repository, so there isn't much to see yet.

Committing to your CodeCommit repository:

  1. Change your working directory to that of the local repository you just cloned.
  2. Add a file or a collection of files and stage them with a git add command. (The full sequence is sketched after this list.)
  3. Commit the changes to your local repository with a git commit command.
  4. To push the commit from your local repository to your CodeCommit repository, issue the following command:
    git push origin
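Put together, the whole sequence looks something like this (the file name is just an example; on a freshly cloned empty repository you may need to name the branch explicitly when pushing):

    cd [localrepository]
    echo "Hello, CodeCommit" > readme.txt
    git add readme.txt
    git commit -m "Initial commit."
    git push origin master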

Creating branches for your CodeCommit repository:

  1. Using a command prompt or terminal window, run the following command substituting the name of your repository for [repositoryname] and the name of your new branch for [branchname]. You'll need to specify a commit identifier for [commitid]:
    aws codecommit create-branch --repository-name [repositoryname] --branch-name [branchname] --commit-id [commitid]
    For instance, to get the commit identifier for the commit we just made, we can run the following command substituting our repository name for [repositoryname]:
    aws codecommit get-branch --repository-name [repositoryname] --branch-name master
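For scripting, you can trim that get-branch response down to just the commit identifier with a JMESPath query, since the branch object in the output carries a commitId field:

    aws codecommit get-branch --repository-name [repositoryname] --branch-name master --query branch.commitId --output text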

If you want to learn how to work with CodeCommit in more depth, I suggest reading the AWS documentation. However, for this series, we have the minimum necessary setup to proceed. In the next part of this series, we'll provision the next part of our delivery pipeline using Amazon's managed build service, AWS CodeBuild, which will build and test our projects using Amazon's infrastructure.

Cheers!


Testing if Your AWS Application Load Balancer is Relaying Traffic

24. April 2017 21:01 by Aaron Medacco | 0 Comments

A common requirement before deploying an elastic load balancer into production on AWS is to test whether traffic is being relayed from the load balancer to the EC2 instances in the assigned target group. Those familiar with developing web applications know that you can modify your hosts file to force a domain name to resolve to a specific address. Therefore, we just need to find out the load balancer's public-facing IP address, update our hosts file, and validate that our site still loads.

Let's get started.

Retrieving the DNS name of your elastic load balancer:

  1. Navigate to your load balancer in your AWS management console under the EC2 service.
  2. With your load balancer selected, copy the DNS name value under the description tab.
    [Image: Application load balancer DNS name in the console]

Identifying the IP address of your elastic load balancer:

  1. Open up a command prompt or terminal window.
  2. Run the following command, substituting your DNS name value for mine.
    nslookup Test-Load-Balancer-148217235.us-east-1.elb.amazonaws.com
  3. You should see something similar to the following:
    [Image: nslookup output showing the load balancer's IP addresses]
  4. We'll take one of the IP addresses returned by our nslookup command and plug it into our hosts file.

Modifying your hosts file and requesting your application:

  1. Navigate to your hosts file.
    For Windows, this is located at C:\Windows\System32\drivers\etc\hosts.
    For Linux, this is located at /etc/hosts.
  2. Add an entry for your domain with the IP address you took from the previous step (a sample entry is shown after this list):
    [Image: hosts file entry]
  3. Save the file.
  4. Use a browser and navigate to the domain being serviced by your elastic load balancer.
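For reference, a finished hosts entry simply pairs the address with the domain on one line, something like the following (both values are placeholders; 203.0.113.25 is a documentation-reserved example address):

    203.0.113.25    www.example.com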

If you receive a response, that means your load balancer is forwarding traffic to instances in your target group. If your request hangs and times out, that means that something still needs to be done before your load balancer is ready. Keep in mind that the IP addresses you looked up in step 2 are subject to change and should not be treated as if they are static.

Cheers!


Using Nested Stacks w/ AWS CloudFormation

22. April 2017 23:50 by Aaron Medacco | 0 Comments

When describing your cloud infrastructure using AWS CloudFormation, your templates can become large and difficult to manage as your desired stacks grow. CloudFormation allows you to nest templates, giving you the ability to break different chunks of your infrastructure into smaller modules. For instance, suppose you have several templates that involve creating an elastic load balancer. Rather than copying and pasting the same JSON between each template, you can write one template for provisioning the load balancer and then reference that template in "parent" templates that require it.

This has several advantages. Code concerning the load balancer is consolidated in one place, so when changes need to be made to its configuration, you don't need to revisit each template where you copied the code at one point in time. This saves you both time and grief, removing the chance that human error leaves one template's ELB configured differently from another's when they should be identical. It also enhances your ability to develop and test your CloudFormation templates. When writing templates, it's common to make incremental changes to the JSON, create a stack from the template to validate structure and behavior, tear down the stack, and rinse and repeat. For large templates, provisioning and deleting stacks will slow you down as you wait for feedback. Smaller, more modular templates allow more focused testing and provide feedback in faster iterations.

[Image: AWS CloudFormation logo]

In this post, I'll share two CloudFormation templates: one that provisions an application load balancer, and one that creates a hosted zone with a record set pointing to the load balancer being provisioned. I'll do this by nesting the template for the load balancer inside the template that creates a Route 53 hosted zone. Keep in mind, I won't be setting up listeners or target groups for the load balancer. I'm only demonstrating how to nest CloudFormation templates.

Here is my template for provisioning an application load balancer without any configuration:

Note: You can download this template here.

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Simple Load Balancer",
    "Parameters": {
        "VPC": {
            "Type": "AWS::EC2::VPC::Id",
            "Description": "VPC for the load balancer."
        },
        "PublicSubnet1": {
            "Type": "AWS::EC2::Subnet::Id",
            "Description": "First public subnet."
        },
        "PublicSubnet2": {
            "Type": "AWS::EC2::Subnet::Id",
            "Description": "Second public subnet."
        }
    },
    "Resources": {
        "ElasticLoadBalancer": {
            "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
            "Properties": {
                "Name": "Load-Balancer",
                "Scheme": "internet-facing",
                "Subnets": [ { "Ref": "PublicSubnet1" }, { "Ref": "PublicSubnet2" } ],
                "Tags": [ { "Key": "Name", "Value": "CloudFormation Load Balancer" } ]
            }
        }
    },
    "Outputs": {
        "LoadBalancerDNS": {
            "Description": "Public DNS For Load Balancer",
            "Value": { "Fn::GetAtt": [ "ElasticLoadBalancer", "DNSName" ] }
        },
        "LoadBalancerHostedZoneID": {
            "Description": "Canonical Hosted Zone ID of load balancer.",
            "Value": { "Fn::GetAtt": [ "ElasticLoadBalancer", "CanonicalHostedZoneID" ] }
        }
    }
}

Notice that some networking-related parameters are being asked for. In this case, I'm requesting two public subnets for the ELB since I intend to have it exposed to the internet. I've also defined some outputs: one is the public DNS name of the load balancer I'm creating, and the other is its canonical hosted zone ID. These values will come in handy when my parent template sets up an A record set pointing to the newly made ELB.

The following is the template for provisioning a hosted zone given a domain name:

Note: You can download this template here.

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Generate internet-facing load balancer.",
    "Parameters": {
        "Domain": {
            "Type": "String",
            "Description": "Domain serviced by load balancer."
        },
        "VPC": {
            "Type": "AWS::EC2::VPC::Id",
            "Description": "VPC for the load balancer."
        },
        "PublicSubnet1": {
            "Type": "AWS::EC2::Subnet::Id",
            "Description": "First public subnet."
        },
        "PublicSubnet2": {
            "Type": "AWS::EC2::Subnet::Id",
            "Description": "Second public subnet."
        }
    },
    "Resources": {
        "HostedZone": {
            "Type": "AWS::Route53::HostedZone",
            "Properties": {
                "Name": { "Ref": "Domain" }
            }
        },
        "HostedZoneRecords": {
            "Type": "AWS::Route53::RecordSetGroup",
            "Properties": {
                "HostedZoneId": { "Ref": "HostedZone" },
                "RecordSets": [{
                    "Name": { "Ref": "Domain" },
                    "Type": "A",
                    "AliasTarget": {
                        "DNSName": { "Fn::GetAtt": [ "LoadBalancerStack", "Outputs.LoadBalancerDNS" ] },
                        "HostedZoneId": { "Fn::GetAtt": [ "LoadBalancerStack", "Outputs.LoadBalancerHostedZoneID" ] }
                    }
                }]
            }
        },
        "LoadBalancerStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "Parameters": {
                    "VPC": { "Ref": "VPC" },
                    "PublicSubnet1": { "Ref": "PublicSubnet1" },
                    "PublicSubnet2": { "Ref": "PublicSubnet2" }
                },
                "TemplateURL": "https://s3.amazonaws.com/cf-templates-1bc7bmahm5ald-us-east-1/loadbalancer.json"
            }
        }
    }
}

You can see I've added a parameter asking for a domain in addition to the values required by the previous template. Then, in the resources section of the template, I'm creating an AWS::CloudFormation::Stack that references the location where my nested template is stored and passes the parameters required to invoke it. In the section where I define my hosted zone records, I need to know the DNS name and canonical hosted zone ID of the application load balancer. These are retrieved by referencing the outputs returned by the nested template.

Creating a CloudFormation stack using the parent template, I now have a Route 53 hosted zone for the input domain pointing to the newly created load balancer. From here, I could reference the load balancer template in any number of templates requiring it without bloating each of them with pasted JSON. The next step would be to create listeners and target groups with EC2 instance targets, but that is a separate exercise.
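For completeness, creating the parent stack from the CLI looks something like this (the stack name, parameter values, and template URL are all placeholders for your own):

    aws cloudformation create-stack --stack-name HostedZoneStack --template-url https://s3.amazonaws.com/[bucketname]/hostedzone.json --parameters ParameterKey=Domain,ParameterValue=example.com ParameterKey=VPC,ParameterValue=[VpcId] ParameterKey=PublicSubnet1,ParameterValue=[SubnetId1] ParameterKey=PublicSubnet2,ParameterValue=[SubnetId2]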

Cheers!


Simple Web Hosting w/ AWS Lightsail

19. April 2017 01:36 by Aaron Medacco | 0 Comments

What if you just want to host a WordPress blog or a simple website on AWS?

Maybe you don't want to learn all the tools necessary to configure your cloud environment from scratch. Perhaps you're a business owner or a web designer who knows enough to get a site running on GoDaddy, but got overwhelmed by this when you created an AWS account:

[Image: AWS management console service list]

Note: The list of services actually keeps going, but this is all I could fit in a barely readable image.

Amazon Web Services has recently released AWS Lightsail to serve this type of customer. Many hosting providers like WinHost, GoDaddy, BlueHost, etc. offer cheap hosting packages that allow you to get a simple website running quickly. They typically come with their own management console or control panel for users to manage items like DNS, domains, billing, SSL certificates, email, etc. AWS Lightsail offers a similar experience, where setup is fast, easy, and cheap. In other words, you won't have to hire an AWS expert or sink your time into learning about Amazon Web Services in order to get up and running. The key to Lightsail is simplicity: you aren't overwhelmed by the flexibility and options thrown at you when setting up a typical AWS environment.

For example, I was able to get a minimal WordPress environment running within a few clicks:

[Image: AWS Lightsail console]

If you're used to most hosting provider consoles, this should look pretty familiar. You can see how Amazon has peeled back the complexity of the normal AWS management console in order to make Lightsail more accessible for the "less" technically-minded customer (newbs). Readers who are familiar with Amazon Web Services will identify how items like security groups and elastic IPs are presented differently in Lightsail's simplified user interface. 

For those that are curious, it appears the instances running within Lightsail are EC2 instances of the t2 instance family under the covers. Resources provisioned with Lightsail do not appear in the normal AWS management console. At the time of this writing, it does not appear that you can "graduate" your Lightsail environment to normal AWS; however, I believe this will be a common customer request, so it may become an option in the future. Instance-level firewalls (security groups), DNS, monitoring, static (elastic) IPs, and volume snapshots can all be leveraged within AWS Lightsail. However, you should think of AWS Lightsail as AWS Lite. You're not going to have all of the options available that you would normally within AWS, but the intern you hired from the local community college to make your website might be able to figure it out.

If you're interested in this service, check out Jeff Barr's launch post where he shows how easy it is to get started with AWS Lightsail.

Cheers!
