7. December 2017 02:49
by Aaron Medacco
0 Comments

AWS re:Invent 2017 - Day 4 Experience


The following is my Day 4 re:Invent 2017 experience. Missed Day 3? Check it out here.

The common theme of not waking up early continues. I missed the entire Werner Vogels keynote, which is fine since, from what others were saying, it was a storm to get in. I didn't have it reserved anyway; AWS added it to the event catalog after I had already decided my schedule. Not sure why I thought you were just supposed to show up.

First session of the day, AWS Database and Analytics State of the Union - 2017 (DAT201). This took place in the Venetian Theatre.

AWS re:Invent 2017

Love this venue for sessions.

I wasn't sure what to expect from a "State of the Union" session. For the most part, this was a combination of sales pitch, history lesson on the origins of some of the AWS database services, and explanation of miscellaneous features. After an explanation of what the RDS Multi-AZ feature does (really? who doesn't know what this is?), the session moved on to the motivations for building Aurora and DynamoDB. Essentially, AWS wanted to combine the commercial-grade performance of products like Oracle and SQL Server with the low cost of MySQL, PostgreSQL, and MariaDB. The result of these efforts became Aurora.

AWS re:Invent 2017

Horrible quality pic. I don't know why.

After sharing some history, a few of the newest Aurora features came up: Amazon Aurora Multi-Master and Amazon Aurora Serverless. Amazon Aurora Multi-Master brings the recovery time from a failed master instance down to almost nothing, around 100 ms. The single-region version of this feature is available in preview now, with the multi-region version coming later in 2018. Amazon Aurora Serverless essentially gives you an on-demand database that scales for you, aimed at applications with unpredictable workloads. Being serverless, it leaves the customer with hardly anything to manage.

The origins of DynamoDB trace back to a disaster affecting Amazon.com. I didn't write down the exact year, but basically Amazon.com was leveraging Oracle to power the online retail site, and the site became unavailable during Christmas. The cause was traced back to limitations in Oracle or in Amazon.com's implementation of it; I wasn't clear which. In response, Amazon built its NoSQL database service, DynamoDB, to handle the massive transaction volumes it was experiencing.

AWS re:Invent 2017

DynamoDB. But does it work at scale? Haha.

The remainder of the session focused on general overviews of the rest of the data service catalog, so a lot of it was review for anyone who regularly spends time learning Amazon Web Services. Several attendees started leaving during this time. There's a lot going on during re:Invent, so while I understand time is precious during the week, I always stay for the whole session. Maybe it's just out of respect for the presenter. Either way, I did learn a few tidbits about Aurora and some origin history of services I didn't know before.

AWS re:Invent 2017

Big Amazon Echo.

Grabbed a shuttle to the Aria to check out what was going on at the Quad. I was expecting it to be as large as the Venetian Expo. Boy, was that off the mark! The Quad was very underwhelming, especially after visiting the Expo. There was a small handful of booths, but nothing like the Expo, which was aisles and aisles full of them. I smiled when I saw the Certification Lounge there; it looked like a large-sized cubicle. At this point, it became clear to me that the Venetian was definitely the primary hub for re:Invent. Good to know for next year's hotel room reservations.

They did have a cool Lego section, though.

During my video filming of the re:Invent areas of the Aria, I got yelled at by some woman telling me I couldn't record. I waited until she stopped looking and turned it back on. What a joke! Now, I actually was told not to video record prior to this while in the Venetian casino, which, while I'd argue it's unenforceable, makes more sense to me. However, back in the Aria, what is there to film that can cause harm? It's a bunch of nerds with laptops walking around banners, session rooms, and re:Invent help desks a quarter mile from the casino floor! Ridiculous. It's 2017, and there's a tech conference going on here. Are you going to watch every attendee's laptop, tablet, and smartphone, too? Because guess what: those devices can record video as well. Unenforceable and moronic. Anyways...

AWS re:Invent 2017

In case the re:Invent app fails you.

Returned to my room after grabbing a Starbucks and did some blogging until my next session at the Venetian, Taking DevOps Closer to the AWS Edge (CTD401). 

AWS re:Invent 2017

Emptier session. Was actually really good.

This session was possibly my favorite of the conference. And I certainly wasn't expecting that. The session title, in my opinion, is misleading, however. Now, the presenter did say that the same session took place last year and included a demo involving saving CloudFormation templates to CodeCommit and managing a delivery pipeline with CodePipeline to push modifications to a development CloudFront distribution, perform some testing, and then do the same to a production CloudFront distribution. That seems more DevOps to me. What we got instead was an in-depth overview of how to incorporate the edge services of AWS into application design and development and how to use CloudFront.

AWS re:Invent 2017

More terrible quality pics.

AWS re:Invent 2017

Logic determining the TTL CloudFront uses.

Most of the session was a deep dive into CloudFront and how it works between your origin (EC2, S3, etc.) and your clients (users). The presenter explained how the TCP connections function, both from the user to the edge location and from the edge location to the origin, and provided some tips for keeping your cache-hit ratio high, like using headers such as cloudfront-is-mobile-viewer to reduce variability. Plus, there were some cool examples of Lambda@Edge taking headers and custom-modifying them in between.
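For what it's worth, the TTL logic in that slide matches CloudFront's documented behavior as I understand it: the origin's Cache-Control max-age gets clamped between the cache behavior's minimum and maximum TTL, and the default TTL applies when the origin sends no caching headers. A rough sketch of that rule (my paraphrase, not the session's slides; the real logic has more cases, such as Expires headers):

```shell
# Hypothetical behavior-level settings (these are the console defaults).
min_ttl=0; default_ttl=86400; max_ttl=31536000

# effective_ttl: approximate the TTL CloudFront would use for a response,
# given the origin's Cache-Control max-age ("" = no caching headers sent).
effective_ttl() {
  local max_age=$1
  if [ -z "$max_age" ]; then echo "$default_ttl"; return; fi
  if [ "$max_age" -lt "$min_ttl" ]; then echo "$min_ttl"
  elif [ "$max_age" -gt "$max_ttl" ]; then echo "$max_ttl"
  else echo "$max_age"; fi
}

effective_ttl 3600   # origin asked for an hour -> 3600
effective_ttl ""     # no caching headers -> default TTL, 86400
```

Check the CloudFront developer guide for the exact rules before relying on this.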

AWS re:Invent 2017

Lambda@Edge examples.

I've not used CloudFront a lot, but I'm more confident about it after this session. A lot of people walked out throughout this session, probably hoping for something different. Can't say I wouldn't have wanted to do the same thing if I knew CloudFront inside and out already. Being a 400-level course, it was surprisingly easy to grasp, perhaps due to the presenter.

AWS re:Invent 2017

Lego blocks.

Back at the Bellagio, stopped in for some grub at the FIX Restaurant & Bar. Snatched a cocktail, salmon, and mashed potatoes.

AWS re:Invent 2017

9/10. Would have been 10/10 if the waitress had brought the mac n cheese I ordered. Maybe she didn't hear me? Don't know. The food coma was instant, so I took a power nap before going out to check out the re:Play party.

Which brings us to the re:Play party! AWS goes all out for this. You could only enter through Venetian Hall A, even though the party was behind the LINQ.

AWS re:Invent 2017

Oomce, oomce, oomce.

Food and drinks were included and the place was packed. It took place under a set of large tents, one being totally dedicated to games like glow-in-the-dark pool, glow-in-the-dark ping pong, an adult-sized ball pit, putt-putt, batting cages, dodgeball, and more.

AWS re:Invent 2017

Guy got lost in the balls. They couldn't find him.

AWS re:Invent 2017

Another tent is where the rave was going on with DJ Snake.

AWS re:Invent 2017

Oomce, oomce, oomce.

AWS re:Invent 2017

Not sure what these were about.

And then a final tent was packed full of arcade-style games. There were some other areas I didn't explore since the lines were ridiculous or it wasn't clear how to get in.

AWS re:Invent 2017

Ancient video games.

I didn't end up staying too long since everything had huge lines and I'm not one for live music anyways.

AWS re:Invent 2017

Walked back to the Venetian casino and played poker with other attendees from re:Invent. Lost another $300. What is going on, man! I'm just the worst, although I turned Aces up when a guy got a set. Understandable, but annoying. Came home with some Taco Bell (I know, classy) and turned in for the night.

Cheers!

16. May 2017 23:57
by Aaron Medacco
0 Comments

Creating a CI/CD Pipeline on AWS - Part IV: CodePipeline


Welcome to Part IV of this series on setting up your own continuous integration and delivery pipeline on Amazon Web Services. In the previous part, we set up the deployment stage of our pipeline using AWS CodeDeploy.

In this final segment, we'll take each stage of the pipeline we've already built and combine them with AWS CodePipeline. Our pipeline will consist of three stages: the source stage, representing the CodeCommit repository we set up in Part I; the build stage, representing the CodeBuild project we set up in Part II; and the staging stage, representing the application deployment via CodeDeploy we set up in Part III. Like the previous parts, I'll only be using the AWS CLI to complete the task, as the web console changes frequently.

AWS CodePipeline

Additionally, I assume the reader has already viewed Part I, Part II, and Part III of this series, thus anything involving interactions discussed in that content will not be detailed again here. 

Granting permissions for your user account to use AWS CodePipeline:

  1. Open a command prompt or terminal window.
  2. Run the following command, substituting your user's name for [username]:
    aws iam attach-user-policy --user-name [username] --policy-arn arn:aws:iam::aws:policy/AWSCodePipelineFullAccess

This gives your user permission to interact with the CodePipeline service if you didn't already have sufficient privileges.

Creating a service role for the AWS CodePipeline service:

Like the prior posts, we need a service role that allows CodePipeline to act on our behalf. Again, there is usually a simple method for doing this in the management console, but since we're sticking to CLI commands, you can accomplish the same thing by doing the following:

  1. Make an empty directory on your file system and create the following files:
    create-role.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "codepipeline.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    put-role-policy.json
    {
      "Statement": [
        {
          "Action": [
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:GetBucketVersioning"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": [
            "s3:PutObject"
          ],
          "Resource": [
            "arn:aws:s3:::codepipeline*",
            "arn:aws:s3:::elasticbeanstalk*"
          ],
          "Effect": "Allow"
        },
        {
          "Action": [
            "codecommit:CancelUploadArchive",
            "codecommit:GetBranch",
            "codecommit:GetCommit",
            "codecommit:GetUploadArchiveStatus",
            "codecommit:UploadArchive"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": [
            "codedeploy:CreateDeployment",
            "codedeploy:GetApplicationRevision",
            "codedeploy:GetDeployment",
            "codedeploy:GetDeploymentConfig",
            "codedeploy:RegisterApplicationRevision"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": [
            "elasticbeanstalk:*",
            "ec2:*",
            "elasticloadbalancing:*",
            "autoscaling:*",
            "cloudwatch:*",
            "s3:*",
            "sns:*",
            "cloudformation:*",
            "rds:*",
            "sqs:*",
            "ecs:*",
            "iam:PassRole"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": [
            "lambda:InvokeFunction",
            "lambda:ListFunctions"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": [
            "opsworks:CreateDeployment",
            "opsworks:DescribeApps",
            "opsworks:DescribeCommands",
            "opsworks:DescribeDeployments",
            "opsworks:DescribeInstances",
            "opsworks:DescribeStacks",
            "opsworks:UpdateApp",
            "opsworks:UpdateStack"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": [
            "cloudformation:CreateStack",
            "cloudformation:DeleteStack",
            "cloudformation:DescribeStacks",
            "cloudformation:UpdateStack",
            "cloudformation:CreateChangeSet",
            "cloudformation:DeleteChangeSet",
            "cloudformation:DescribeChangeSet",
            "cloudformation:ExecuteChangeSet",
            "cloudformation:SetStackPolicy",
            "cloudformation:ValidateTemplate",
            "iam:PassRole"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": [
            "codebuild:BatchGetBuilds",
            "codebuild:StartBuild"
          ],
          "Resource": "*",
          "Effect": "Allow"
        }
      ],
      "Version": "2012-10-17"
    }
  2. In your command prompt or terminal window, switch your working directory to the directory where these files live.
  3. Run the following commands:
    aws iam create-role --role-name CodePipelineServiceRole --assume-role-policy-document file://create-role.json
    aws iam put-role-policy --role-name CodePipelineServiceRole --policy-name CodePipelineServiceRolePolicy --policy-document file://put-role-policy.json
  4. Write down the ARN value of the created role output by the first command. We'll require it when we create our pipeline.
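An optional local sanity check before running those commands (a sketch, not an official step): malformed JSON is a common cause of create-role failures, and json.tool catches it without touching AWS. The trust policy is re-created inline here so the snippet runs on its own; in practice, point json.tool at the files from step 1.

```shell
# Re-create the trust policy from step 1 so this snippet stands alone.
cat > create-role.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "codepipeline.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Validate the document locally before handing it to "aws iam create-role".
python3 -m json.tool create-role.json > /dev/null \
  && echo "create-role.json: valid JSON" \
  || echo "create-role.json: malformed, fix it before running create-role"
```

The same check works for put-role-policy.json or any other policy document.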

Creating your pipeline in CodePipeline:

  1. In the same directory where you created files in the last step (so you don't have to switch again), create a new file, substituting your values for the following:

    ARN for the service role you just created in the preceding step -> [ServiceRoleARN]
    Repository name you created in Part I -> [RepositoryName]
    Project name you created in Part II -> [ProjectName]
    Deployment group name you created in Part III -> [DeploymentGroupName]
    Application name you created in Part III -> [ApplicationName]
    An S3 bucket name for CodePipeline to store artifacts -> [ArtifactStoreBucketName]
    A name for your pipeline -> [PipelineName]

    pipeline.json
    {
        "pipeline": {
            "roleArn": "[ServiceRoleARN]",
            "stages": [
                {
                    "name": "Source",
                    "actions": [
                        {
                            "inputArtifacts": [],
                            "name": "Source",
                            "actionTypeId": {
                                "category": "Source",
                                "owner": "AWS",
                                "version": "1",
                                "provider": "CodeCommit"
                            },
                            "outputArtifacts": [
                                {
                                    "name": "MyApp"
                                }
                            ],
                            "configuration": {
                                "BranchName": "master",
                                "RepositoryName": "[RepositoryName]"
                            },
                            "runOrder": 1
                        }
                    ]
                },
                {
                    "name": "Build",
                    "actions": [
                        {
                            "inputArtifacts": [
                                {
                                    "name": "MyApp"
                                }
                            ],
                            "name": "CodeBuild",
                            "actionTypeId": {
                                "category": "Build",
                                "owner": "AWS",
                                "version": "1",
                                "provider": "CodeBuild"
                            },
                            "outputArtifacts": [
                                {
                                    "name": "MyAppBuild"
                                }
                            ],
                            "configuration": {
                                "ProjectName": "[ProjectName]"
                            },
                            "runOrder": 1
                        }
                    ]
                },
                {
                    "name": "Staging",
                    "actions": [
                        {
                            "inputArtifacts": [
                                {
                                    "name": "MyAppBuild"
                                }
                            ],
                            "name": "[DeploymentGroupName]",
                            "actionTypeId": {
                                "category": "Deploy",
                                "owner": "AWS",
                                "version": "1",
                                "provider": "CodeDeploy"
                            },
                            "outputArtifacts": [],
                            "configuration": {
                                "ApplicationName": "[ApplicationName]",
                                "DeploymentGroupName": "[DeploymentGroupName]"
                            },
                            "runOrder": 1
                        }
                    ]
                }
            ],
            "artifactStore": {
                "type": "S3",
                "location": "[ArtifactStoreBucketName]"
            },
            "name": "[PipelineName]",
            "version": 1
        }
    }
  2. In your command prompt or terminal window, switch your working directory to the directory where this file lives if you aren't already there.
  3. Run the following command: 
    aws codepipeline create-pipeline --cli-input-json file://pipeline.json
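If create-pipeline rejects your pipeline.json, mismatched artifact names between stages are a common culprit: each stage's inputArtifacts must be produced as outputArtifacts by an earlier stage. A small local check, with the structure inlined to mirror the tutorial's pipeline.json (point it at your real file in practice):

```shell
# Verify that every consumed artifact is produced by an earlier stage.
result=$(python3 - <<'EOF'
pipeline = {
    "stages": [
        {"name": "Source",
         "actions": [{"inputArtifacts": [], "outputArtifacts": [{"name": "MyApp"}]}]},
        {"name": "Build",
         "actions": [{"inputArtifacts": [{"name": "MyApp"}], "outputArtifacts": [{"name": "MyAppBuild"}]}]},
        {"name": "Staging",
         "actions": [{"inputArtifacts": [{"name": "MyAppBuild"}], "outputArtifacts": []}]},
    ]
}

produced = set()
for stage in pipeline["stages"]:
    for action in stage["actions"]:
        for artifact in action["inputArtifacts"]:
            # Every input must already exist as some earlier stage's output.
            assert artifact["name"] in produced, (stage["name"], artifact["name"])
        for artifact in action["outputArtifacts"]:
            produced.add(artifact["name"])
print("artifact wiring OK")
EOF
)
echo "$result"
```

Renaming "MyAppBuild" in only one of the two places it appears, for example, would trip the assertion.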

Testing your pipeline in CodePipeline:

Congratulations if you made it this far! This is where we see the culmination of everything we've built so far work in a fully automated fashion. Make sure the instance(s) in your deployment group are running and change your app.js file to display something other than "Hello World!":

var express = require('express')
var app = express()

app.get('/', function (req, res) {
  res.send('You suffer in measure to your authority.')
})

app.listen(3000, function () {
  console.log('Example app listening on port 3000!')
})

and commit and push your code to your CodeCommit repository.

If you log in to the management console for CodePipeline and view the pipeline you created, you should see the following:

AWS CodePipeline Visual

or you can run the following command, substituting the name you gave your pipeline for [PipelineName]:

aws codepipeline get-pipeline-state --name [PipelineName]
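get-pipeline-state returns JSON, so it's easy to pull out just the stage names and statuses. The response below is a trimmed, made-up example of the documented shape, used here so the parsing runs without an AWS account:

```shell
# A trimmed, hypothetical response in the documented get-pipeline-state shape.
cat > state.json <<'EOF'
{
  "pipelineName": "MyPipeline",
  "stageStates": [
    {"stageName": "Source", "latestExecution": {"status": "Succeeded"}},
    {"stageName": "Build", "latestExecution": {"status": "InProgress"}}
  ]
}
EOF

# Print one "stage status" line per stage.
python3 - <<'EOF'
import json

with open("state.json") as f:
    state = json.load(f)
for stage in state["stageStates"]:
    print(stage["stageName"], stage["latestExecution"]["status"])
EOF
```

With the real CLI, you'd pipe the output of the get-pipeline-state command into the same kind of script instead of reading it from a file.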

You now have a fully automated delivery pipeline on Amazon Web Services! Feel free to add more steps to your pipeline or experiment with other projects you might want to implement this with. For additional know-how on using AWS CodePipeline, check out the AWS documentation.

Cheers!

14. May 2017 23:00
by Aaron Medacco
0 Comments

Creating a CI/CD Pipeline on AWS - Part III: CodeDeploy


Welcome to Part III of this series on setting up your own continuous integration and continuous delivery pipeline on Amazon Web Services. Last time in Part II, we created a build process with testing using AWS CodeBuild.

In this part, we'll be setting up the deployment stage of our pipeline that will push build artifacts created by our build project to a deployment group using Amazon's automated deployment service, AWS CodeDeploy. Like the previous posts in the series, I'll be sticking to AWS CLI commands as the web console is subject to rapid change.

AWS CodeDeploy

Since we already created an S3 bucket to store our build artifacts from CodeBuild, we've already done some of the setup necessary for building the deployment stage of our pipeline. We'll need to specify this location when it comes time to configure CodeDeploy. I'll assume the reader has already gone through Part I and Part II of the series, so any steps involving pushing source changes to CodeCommit or running builds of the CodeBuild project will not be detailed.

Granting permissions for your user account to use AWS CodeDeploy:

  1. Open a command prompt or terminal window.
  2. Run the following command, substituting your user's name for [username]:
    aws iam attach-user-policy --user-name [username] --policy-arn arn:aws:iam::aws:policy/AWSCodeDeployFullAccess

This gives your user access to interact with CodeDeploy, assuming you didn't already have sufficient privileges.

Creating a service role for the AWS CodeDeploy service:

Just like with CodeBuild, we need to create a service role that grants the CodeDeploy service permission to use other resources and services on our behalf. Taken from Amazon's documentation, we need to:

  1. Make an empty directory on your file system and create the following file:
    CodeDeployDemo-Trust.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": [
              "codedeploy.amazonaws.com"
            ]
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  2. In your command prompt or terminal window, switch your working directory to the directory where this file lives.
  3. Run the following commands:
    aws iam create-role --role-name CodeDeployServiceRole --assume-role-policy-document file://CodeDeployDemo-Trust.json
    aws iam attach-role-policy --role-name CodeDeployServiceRole --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole
  4. Write down the ARN value of the created role output by the first command. We'll need it later when we configure CodeDeploy.

Creating an instance profile for your EC2 instance(s):

Taken from Amazon's documentation, we need to:

  1. Make an empty directory on your file system and create the following files:
    CodeDeployDemo-EC2-Trust.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "ec2.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    CodeDeployDemo-EC2-Permissions.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": [
            "s3:Get*",
            "s3:List*"
          ],
          "Effect": "Allow",
          "Resource": "*"
        }
      ]
    }
  2. In your command prompt or terminal window, switch your working directory to the directory where these files live.
  3. Run the following commands:
    aws iam create-role --role-name CodeDeployDemo-EC2-Instance-Profile --assume-role-policy-document file://CodeDeployDemo-EC2-Trust.json
    aws iam put-role-policy --role-name CodeDeployDemo-EC2-Instance-Profile --policy-name CodeDeployDemo-EC2-Permissions --policy-document file://CodeDeployDemo-EC2-Permissions.json
    aws iam create-instance-profile --instance-profile-name CodeDeployDemo-EC2-Instance-Profile
    aws iam add-role-to-instance-profile --instance-profile-name CodeDeployDemo-EC2-Instance-Profile --role-name CodeDeployDemo-EC2-Instance-Profile

Provisioning your EC2 instance(s):

  1. Follow the instructions for deploying an EC2 instance if you don't already have instances to deploy to. You will need to attach the instance profile you just created to the instance(s).
  2. Be sure to install Node.js on the instance once it's done initializing.
  3. Tag the instance with the key-value pair (CodeDeploy, Yes). This is important because it tells CodeDeploy which instances to deploy to. You can use a different tag if you want, but a tag is required.

For this tutorial, I provisioned a t2.micro instance running Windows Server Base 2016. 

Installing and running the AWS CodeDeploy agent on your instance(s):

In order for CodeDeploy to work properly, the AWS CodeDeploy agent must be running on the instances you want to deploy to. Follow these instructions for installing the agent:

  1. Since I am using Windows, I pulled the .msi file from https://s3.amazonaws.com/aws-codedeploy-us-east-1/latest/codedeploy-agent.msi. As the documentation states, you may need to change the region to the one you are working in or you won't be able to access the file.
  2. Run the .msi file and validate it's running using the following command in PowerShell:
    Get-Service -Name codedeployagent

Creating an application in CodeDeploy:

  1. In your command prompt or terminal window, run the following command, substituting the name you want to give to your application for [ApplicationName]:
    aws deploy create-application --application-name [ApplicationName]

Adding an Application Spec File to your project:

Application spec files give CodeDeploy information on how to deploy your application. We'll need to add one to our project for CodeDeploy to function.

  1. Navigate to your local repository we set up in Part I of this series.
  2. Create a file named appspec.yml with the following contents:
    version: 0.0
    os: windows
    files:
      - source: \app.js
        destination: c:\host
      - source: \node_modules
        destination: c:\host\node_modules
  3. Commit this file to your local Git repository and push it to your CodeCommit repository from Part I.
  4. Run another build of the CodeBuild project from Part II using the latest source.

This ensures the appspec.yml file appears in our build artifact zip file. If this file were missing, our deployments would fail because CodeDeploy wouldn't know what to do with the bundle.
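To see why this matters, here's a local sketch of the check: CodeDeploy fails a deployment when appspec.yml is missing from the root of the bundle. A throwaway BuildOutput.zip is assembled here so the snippet runs on its own; in practice, you'd inspect the artifact CodeBuild uploaded to S3 the same way.

```shell
# Stand-in appspec.yml; CodeBuild packages the real one from your repository.
printf 'version: 0.0\nos: windows\n' > appspec.yml

python3 - <<'EOF'
import zipfile

# Assemble a stand-in bundle (the real BuildOutput.zip comes from CodeBuild).
with zipfile.ZipFile("BuildOutput.zip", "w") as bundle:
    bundle.write("appspec.yml")

# Inspect it: appspec.yml must sit at the root of the archive.
with zipfile.ZipFile("BuildOutput.zip") as bundle:
    assert "appspec.yml" in bundle.namelist(), "deployment would fail"
print("appspec.yml present at bundle root")
EOF
```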

Creating a deployment group in CodeDeploy:

  1. In your command prompt or terminal window, run the following command, substituting for the following values:

    Application name you created earlier in the post -> [ApplicationName]
    A name for your deployment group -> [DeploymentGroupName]
    The ARN for the service role you created earlier in this post -> [ServiceRoleARN]
    aws deploy create-deployment-group --application-name [ApplicationName] --deployment-group-name [DeploymentGroupName] --deployment-config-name CodeDeployDefault.OneAtATime --ec2-tag-filters Key=CodeDeploy,Value=Yes,Type=KEY_AND_VALUE --service-role-arn [ServiceRoleARN]

    Notice that I specified the key-value pair (CodeDeploy, Yes) for the --ec2-tag-filters argument. If you deviated from what I used for tagging, you'll need to change this command to use your values.

Deploying your application using CodeDeploy:

  1. Using the same command prompt or terminal window, run the following command, substituting for the following values:
    Application name you created earlier in the post -> [ApplicationName]
    Deployment group name you used earlier in the post -> [DeploymentGroupName]
    The name of the S3 bucket you chose to send build artifacts to in Part II -> [BucketName]
    aws deploy create-deployment --application-name [ApplicationName] --deployment-config-name CodeDeployDefault.OneAtATime --deployment-group-name [DeploymentGroupName] --s3-location bucket=[BucketName],bundleType=zip,key=BuildOutput.zip

Validating your application deployed successfully:

Assuming your deployment was successful, you can validate it by RDP'ing into your instance(s) and checking the host directory on the C: drive for the project files. If you find them there, your CodeDeploy setup is configured correctly!

Deployment Successful

To explore CodeDeploy in more detail, check out the documentation provided by AWS. In the next and final part of this series, we'll finish our pipeline by incorporating all the pieces we've built so far using AWS CodePipeline. After which, we'll have a fully automated pipeline triggered by source code commits and ending with a deployment to our instances without having to manually invoke each part of the process.

Cheers!

5. May 2017 00:25
by Aaron Medacco
0 Comments

Creating a CI/CD Pipeline on AWS - Part II: CodeBuild


Welcome to Part II of this series on setting up your own continuous integration and continuous delivery pipeline on Amazon Web Services. If you missed Part I where we created a source control repository using AWS CodeCommit, you can check it out here.

In this post, we'll be creating the build and test stages of our pipeline using Amazon's fully managed build service, AWS CodeBuild. All commands will be done using the AWS CLI to keep the fast-paced updates of the web console from dating this series.

AWS CodeBuild

Last time, we created a CodeCommit repository, but only pushed some test files to it. Since we're going to need a meaningful project to build, I'll be using the demo application provided by expressjs.com. I've committed this project to the CodeCommit repository created in Part I. You should already be familiar with committing and pushing code to CodeCommit, so I won't spell out any CodeCommit steps in detail for brevity.

Granting permissions for your user account to use AWS CodeBuild:

  1. Open a command prompt or terminal window.
  2. Run the following commands, substituting your user's name for [username]:
    aws iam attach-user-policy --user-name [username] --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
    aws iam attach-user-policy --user-name [username] --policy-arn arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess

This allows your user to invoke CodeBuild commands, assuming they couldn't already. We also need to create a bucket on S3 to store our build artifacts.

Note: Best practice would dictate that fewer permissions be granted to the user, but I am being lenient with access for this tutorial. 

Creating an S3 bucket to store our build artifacts:

  1. With your command prompt or terminal window, run the following command to create an S3 bucket, substituting a name for your bucket for [bucketname]:
    aws s3 mb s3://[bucketname]

Creating a service role for the AWS CodeBuild service:

We need to create an IAM role that will allow CodeBuild to access other services on our behalf. While you can have this done automatically when using the management console, there is no way to do it automatically with the CLI. The following is taken straight from Amazon's documentation for setting up a service role for CodeBuild:

  1. Make an empty directory on your file system and create the following files:
    create-role.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "codebuild.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    put-role-policy.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "CloudWatchLogsPolicy",
          "Effect": "Allow",
          "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
          ],
          "Resource": [
            "*"
          ]
        },
        {
          "Sid": "CodeCommitPolicy",
          "Effect": "Allow",
          "Action": [
            "codecommit:GitPull"
          ],
          "Resource": [
            "*"
          ]
        },
        {
          "Sid": "S3GetObjectPolicy",
          "Effect": "Allow",
          "Action": [
            "s3:GetObject",
            "s3:GetObjectVersion"
          ],
          "Resource": [
            "*"
          ]
        },
        {
          "Sid": "S3PutObjectPolicy",
          "Effect": "Allow",
          "Action": [
            "s3:PutObject"
          ],
          "Resource": [
            "*"
          ]
        }
      ]
    }
  2. In your command prompt or terminal window, switch your working directory to the directory where these files live.
  3. Run the following commands:
    aws iam create-role --role-name CodeBuildServiceRole --assume-role-policy-document file://create-role.json
    aws iam put-role-policy --role-name CodeBuildServiceRole --policy-name CodeBuildServiceRolePolicy --policy-document file://put-role-policy.json
  4. Write down the ARN value of the created role output by the first command. We'll need it to create our build project.
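If you misplace the ARN, you can look it up again later by role name (a quick sketch using the role name created above; requires your configured AWS credentials):

```shell
# Look up the service role's ARN by name.
aws iam get-role --role-name CodeBuildServiceRole --query "Role.Arn" --output text
```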

Creating your build project in AWS CodeBuild:

  1. Using the same command prompt or terminal window, run the following command, making these substitutions:

    Name for your project -> [ProjectName]
    Description for your project -> [ProjectDescription]
    HTTPS clone URL for your CodeCommit repository -> [CloneUrlHttp]
    Bucket name of the bucket created earlier -> [BucketName]
    ARN for the service role you created in the preceding step -> [ServiceRoleARN]
    aws codebuild create-project --name [ProjectName] --description "[ProjectDescription]" --source type="CODECOMMIT",location="[CloneUrlHttp]" --artifacts type="S3",location="[BucketName]",name="BuildOutput.zip",packaging="ZIP" --environment type="LINUX_CONTAINER",computeType="BUILD_GENERAL1_SMALL",image="aws/codebuild/nodejs:7.0.0" --service-role "[ServiceRoleARN]"

Adding tests to your project:

No CI/CD pipeline is complete without testing. We'll include testing using Mocha. Our test will be trivial, but it illustrates how you can incorporate testing with AWS CodeBuild.

  1. Create a file named test.js that contains the following code:
    var assert = require('assert');
    describe('String Tests', function() {
      describe('Comparison', function() {
        it('Should be equal when strings are the same.', function() {
          assert.equal("No mercy for the misguided.", "No mercy for the misguided.");
        });
      });
    });
  2. Commit this file to the root of the master branch of the CodeCommit repository we created in Part I.
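If you have Node.js installed, you can sanity-check the assertion itself before committing, using Node's built-in assert module (Mocha only adds the describe/it structure around it):

```shell
# Exercise the same string comparison with Node's built-in assert module.
node -e '
const assert = require("assert");
assert.equal("No mercy for the misguided.", "No mercy for the misguided.");
console.log("assertion passed");
'
```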

Running a build of your project:

CodeBuild requires a build spec in order to run a build of your project. Build specs are YAML files that tell CodeBuild what to do when it builds your project, such as any tests to run, dependencies to install, environment variables, and build outputs. Since we didn't define a build spec when we created the project, we'll include one in our project's root. You can read more about build specs in the AWS documentation. The following is the build spec YAML file we'll use:

version: 0.1

phases:
  install:
    commands:
      - echo Installing Express...
      - npm install express
      - echo Installing Mocha...
      - npm install -g mocha
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
  build:
    commands:
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - echo Running tests...
      - mocha test.js
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - '**/*'
  1. Make sure to commit this file as buildspec.yml in the root of your CodeCommit repository. (CodeBuild looks for a file named buildspec.yml by default.)
  2. To start a build of the project, run the following command in your prompt or terminal window substituting your project's name for [ProjectName]:
    aws codebuild start-build --project-name [ProjectName]
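The start-build command returns the new build's id; you can then poll its status from the CLI instead of the console (a sketch; substitute your project's name for [ProjectName] and the returned build id for [BuildId]):

```shell
# List recent build ids for the project.
aws codebuild list-builds-for-project --project-name [ProjectName]

# Inspect the status of a specific build (SUCCEEDED, FAILED, IN_PROGRESS, etc.).
aws codebuild batch-get-builds --ids [BuildId] --query "builds[0].buildStatus"
```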

If we log in to the CodeBuild management console, we'll see this:

CodeBuild Build Success

Our build succeeded!

If we break the test on purpose by changing test.js to the following, recommit to the repository, and run another build, we get:

var assert = require('assert');
describe('String Tests', function() {
  describe('Comparison', function() {
    it('Should be equal when strings are the same.', function() {
      assert.equal("No mercy for the misguided.", "No mercy for the wretched.");
    });
  });
});

CodeBuild Build Failure

The build fails, which is what we want when tests fail.

If you want to dive deeper into CodeBuild, I encourage you to head over to the AWS documentation. In the next part of this series, we'll continue building our pipeline by incorporating CodeDeploy, Amazon's automated code deployment service, to deploy code to EC2 instances in our AWS environment.

Cheers!

2. May 2017 16:57
by Aaron Medacco
0 Comments

Creating a CI/CD Pipeline on AWS - Part I: CodeCommit

This will be the first in a series of posts on setting up your own continuous integration and continuous delivery pipeline on Amazon Web Services. I'll be using the Developer Tools provided by Amazon (CodeCommit, CodeBuild, CodeDeploy, CodePipeline) for this series and will stick to using commands with the AWS CLI instead of the management console. 

The first part of our CI/CD pipeline entails setting up our source control repository using AWS CodeCommit, Amazon's managed source control service. We'll push code changes to it over HTTPS using Git credentials.

AWS CodeCommit

Before I get started, make sure you've created an AWS account with a privileged user who has access to invoke commands for setup. Also be sure to install the latest version of the AWS CLI and Git on your computer. I'll assume you've already configured the AWS CLI to operate in the region you'd like to setup your pipeline. 

Granting permissions for your user account to use AWS CodeCommit:

  1. Open a command prompt or terminal window.
  2. Run the following commands substituting your user's name for [username]:
    aws iam attach-user-policy --user-name [username] --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitFullAccess
    aws iam attach-user-policy --user-name [username] --policy-arn arn:aws:iam::aws:policy/IAMReadOnlyAccess
    aws iam attach-user-policy --user-name [username] --policy-arn arn:aws:iam::aws:policy/IAMSelfManageServiceSpecificCredentials

This grants your user full access to CodeCommit and allows them to provision their own Git credentials, which we'll use later.

Creating Git Credentials for HTTPS for your user account:

  1. Using the same command prompt or terminal window, run the following command substituting your user's name for [username]:
    aws iam create-service-specific-credential --user-name [username] --service-name codecommit.amazonaws.com
  2. You'll receive your new Git credentials as output when the command finishes. Copy these and save them in a secure place for later.

Creating your CodeCommit repository:

  1. Using the same command prompt or terminal window, run the following command substituting what you'd like to name your CodeCommit repository for [repositoryname]:
    aws codecommit create-repository --repository-name [repositoryname]
  2. Upon completion, you'll receive output detailing attributes about your new repository. Copy these, too. We'll be using the HTTPS URL to clone our repository.

Cloning your CodeCommit repository to a local repository:

  1. Using the same command prompt or terminal window, run the following command substituting your repository's HTTPS clone URL for [CloneUrlHttp] and the directory you want to use for the local repository for [localrepository]:
    git clone [CloneUrlHttp] [localrepository]
  2. You should be prompted to enter the Git credentials you generated in the earlier step. Your prompt may look different depending on your computer's configuration and operating system.

    Git Credentials Prompt
  3. Enter the Git credentials you generated in the earlier step and proceed. 
  4. Your clone should finish quickly. In our case, we just cloned an empty repository, so there's nothing special to see yet.

Committing to your CodeCommit repository:

  1. Change your working directory to that of the local repository you just cloned.
  2. Add a file or a collection of files, and stage them with the git add command.
  3. Commit the staged changes to your local repository with the git commit command.
  4. To push the commit from your local repository to your CodeCommit repository, issue the following command:
    git push origin
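Concretely, the steps above might look like this inside the cloned repository (app.js is just a hypothetical example file):

```shell
# Run these inside your local repository. app.js is an example file name.
echo "console.log('Hello');" > app.js

git add app.js                 # stage the new file
git commit -m "Add app.js"     # commit to the local repository
git push origin                # push the commit to CodeCommit
```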

Creating branches for your CodeCommit repository:

  1. First, get the identifier of the commit you want to branch from. To get the identifier of the commit we just pushed, run the following command substituting your repository's name for [repositoryname]:
    aws codecommit get-branch --repository-name [repositoryname] --branch-name master
  2. Then create the branch, substituting your repository's name for [repositoryname], the name of your new branch for [branchname], and the commit identifier from the previous step for [commitid]:
    aws codecommit create-branch --repository-name [repositoryname] --branch-name [branchname] --commit-id [commitid]
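The lookup and the branch creation can also be combined in one bash snippet using the CLI's --query option (a sketch; substitute [repositoryname], and note the branch name "dev" is just an example):

```shell
# Capture the tip commit of master, then branch from it.
COMMIT_ID=$(aws codecommit get-branch --repository-name [repositoryname] --branch-name master \
  --query "branch.commitId" --output text)

aws codecommit create-branch --repository-name [repositoryname] --branch-name dev --commit-id "$COMMIT_ID"
```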

If you want to learn how to work with CodeCommit in more depth, I suggest reading the AWS documentation. However, for this series, we have the minimum necessary setup to proceed. In the next part of this series, we'll provision the next part of our delivery pipeline using Amazon's managed build service, AWS CodeBuild, which will build and test our projects using Amazon's infrastructure.

Cheers!

Copyright © 2016-2017 Aaron Medacco