AWS re:Invent 2017 Public Registration Now Open

21. June 2017 20:10 by Aaron Medacco | 0 Comments

Public registration for AWS re:Invent 2017 is now open! For those who don't know, AWS re:Invent is the annual conference Amazon Web Services throws in Las Vegas, where participants can attend labs, breakout sessions, hackathons, bootcamps, and games.

AWS re:Invent 2017

The conference runs from Nov. 27th to Dec. 1st this year, and conference passes do sell out, so if you want to attend, you'll need to move quickly. Everything I've read says the most difficult part is finding adequate accommodations, not necessarily getting your conference pass, especially as the event gets closer. Fortunately, this year's registration lets you book your hotel along with your conference pass, which is convenient. Apparently, this is the first year AWS has allowed that. I tossed around the idea of flying, but since I'm fairly close in Scottsdale, AZ, I've decided to make the 6-7 hour drive instead.

Needless to say, I booked my pass and hotel already. I'll be arriving Monday morning and staying at the Bellagio for all 5 nights, which puts me right in the middle of where all the AWS action is happening for the week. And even though I'll definitely be doing some gambling (poker) while I'm there, I'm going to do my best to stay focused and hit as many of the breakout sessions as possible. Because of the sheer volume of content and activities taking place each day, you can only attend a fraction of the events, so I'll have to figure out my schedule once Amazon releases the dates and times for everything going on that week. I'll put a post up once I know the exact schedule. For now, AWS has a rough map of the strip so you can get an idea of what will be where.

AWS re:Invent 2017 Receipt

AWS also proctors certification exams each day of the event, so I'll be taking my shot at the AWS Certified Solutions Architect - Professional and possibly the AWS Certified DevOps Engineer - Professional certifications while I'm there. Between work, exercise, blogging, and a course on Amazon Athena I'm creating for Pluralsight, it'll be fun finding the time to study for those. Fortunately, the exams are offered on all 5 days of the event, so I'll be able to pick when I want to take them.

Additionally, given the announcements, crowd, and overall spectacle of the event, I'll be putting together posts for each day, as well as taking pictures and possibly recording vlogs while I'm there. I think the internet could do with more content showcasing the event firsthand from participants. Most of what I've found so far is AWS's own marketing material.

Again, as more details come out about the exact schedule of everything, I'll do a post on the events I'll be at. If anyone else who's going wants to meet up, leave a comment. Super excited for November!

Cheers!


New AWS Training Portal, Beta Certifications & Certification Swag Available

16. June 2017 22:58 by Aaron Medacco | 4 Comments

There's been quite a bit of news about AWS training and certification in the last few months. Jeff Barr announced the release of the new AWS training portal on the AWS blog. This change makes it easier to manage your certification and training in one location. Personally, I haven't needed to schedule a certification exam since January, but the process certainly could have used improvement back then. Anything that requires me to create more online accounts and remember more URLs is tedious.

Upon signing in to the new portal, I did some exploring, and it looks like, as part of this update, AWS offers benefits to those who become certified, including free practice exam vouchers and access to their Certification Store. Even though practice exam fees only run up to $40, providing these small, extra incentives to keep progressing through their certification roadmap is certainly welcome.

The store had me a bit excited, too. I'm always complaining that companies or brands I love don't offer enough merchandise for users to buy and walk around in. I mean, come on... even if you don't make your money selling the products, you'd think the free advertising alone would be worth it. I ordered all the Associate-level gear available except for the laptop case, which wasn't in stock. I'll have to remember to grab that once it's back in stock. Since I don't hold either of the two Professional-level certifications, I couldn't tell whether the store carries exclusive items available only to that group of the AWS community. However, the store interface seemed to suggest that's the case.

AWS Certification Gear

In other news, AWS has released 2 of the 3 long-awaited specialty certifications: the AWS Certified Big Data - Specialty and the AWS Certified Advanced Networking - Specialty.

The third, the AWS Certified Security - Specialty, was also in beta, but I suspect it's not yet ready.

Exciting stuff; however, I'll still be pursuing the Professional certifications before studying for the specialty ones. Having the opportunity to validate expertise with these exams is great, both as a competency self-check and (more importantly) as a way for customers to know the people they trust their cloud resources with have a clue.

After all, no one wants to hire this guy:

This is fine

Cheers!


Scheduling URL Requests w/ AWS Lambda

4. June 2017 20:42 by Aaron Medacco | 0 Comments

There's often a need to request URLs on a schedule. Whether it's performing a health check on a page, hitting an endpoint that crunches some data, or interacting with a public API of some kind, automating this kind of behavior is frequently desirable. Fortunately, this can be done quite easily with AWS Lambda. And because Lambda is serverless, you don't need to worry about machine failure the way you would if you brewed your own solution and ran it on a specific server or instance.

Automated URLs

In this post, I'll demonstrate how to build this by using AWS's Lambda service in conjunction with the HTTP and HTTPS libraries available from Node.js. We'll cover permission setup, function creation, and how to input the URLs you'd like to request on a schedule.

Let's get started.

Creating an IAM policy for access permissions:

  1. Navigate to IAM in your management console.
  2. Select "Policies" in the sidebar.
  3. Click "Create Policy".
  4. Select "Create Your Own Policy".
  5. Enter an appropriate policy name and description.
  6. Paste the following JSON into the policy document:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
          ],
          "Resource": "arn:aws:logs:*:*:*"
        }
      ]
    }
  7. Click "Create Policy".
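
If you'd rather script this step, the same policy can be created from the CLI. This is a minimal sketch assuming you've saved the JSON above to a file; the file name and policy name here are placeholders of my own choosing:

    aws iam create-policy --policy-name LambdaUrlRequestLogging --policy-document file://lambda-logging-policy.json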

Creating the IAM role for the Lambda function:

  1. Select "Roles" in the sidebar.
  2. Click "Create New Role".
  3. Enter an appropriate role name and click "Next Step".
  4. Select "AWS Lambda" within the AWS Service Roles.
  5. Change the filter to "Customer Managed", check the box of the policy you just created, and click "Next Step".
  6. Click "Create Role".
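
The role can also be created from the CLI if you prefer. A sketch, assuming a trust policy file and role name of my own choosing, with [PolicyARN] being the ARN of the policy created in the previous section:

    lambda-trust.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    aws iam create-role --role-name LambdaUrlRequestRole --assume-role-policy-document file://lambda-trust.json
    aws iam attach-role-policy --role-name LambdaUrlRequestRole --policy-arn [PolicyARN]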

Creating the Lambda function:

  1. Navigate to Lambda in your management console.
  2. Click "Create a Lambda function".
  3. Select the "Blank Function" blueprint.
  4. Under "Configure triggers", click the grey box and select "CloudWatch Events - Schedule".
  5. Enter an appropriate rule name and description.
  6. Select the frequency you'd like your collection of URLs to be hit in the expression input. For instance, for daily, use "rate(1 day)".
  7. Check the box to "Enable trigger" and click "Next".
  8. Click "Next".
  9. Enter an appropriate function name and description. Select Node.js 6.10 for the runtime.
  10. Under "Lambda function code", select "Edit code inline" for the Code entry type and paste the following code in the box:
    exports.handler = (event, context, callback) => {
        var http = require("http");
        var https = require("https");
        var urls = event.urls;
        urls.forEach(function(entry) {
            var protocol = entry.Protocol.toLowerCase();
            var url = protocol + "://" + entry.Domain + entry.QueryString;
            var lib = (protocol === "https") ? https : http;
            lib.get(url, function(res) {
                // Each forEach callback closes over its own "url" variable, so the
                // response is logged against the correct URL even though the
                // requests complete asynchronously.
                console.log("Response from " + url + ": ");
                console.log(res.statusCode);
                console.log(res.statusMessage);
            }).on("error", function(e) {
                console.log("Error requesting " + url + ": " + e.message);
            });
        });
        // Lambda waits for the event loop to drain by default, so the in-flight
        // requests finish before the function exits.
    };
  11. Leave Handler as "index.handler".
  12. Choose to use an existing role and select the role you created earlier.
  13. Leave the other default values and click "Next".
  14. Click "Create function".
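
Once the function exists, you can smoke-test it from the CLI without waiting for the schedule to fire. A hedged example, with [FunctionName] standing in for the name you chose (quoting rules for the payload vary by shell; this form works in bash):

    aws lambda invoke --function-name [FunctionName] --payload '{"urls": [{"Protocol": "HTTPS", "Domain": "www.google.com", "QueryString": ""}]}' output.json

The response logging lands in the function's CloudWatch Logs group rather than in output.json, since the handler only writes to the console.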

Configuring the URLs to automate in CloudWatch:

  1. Navigate to CloudWatch in your management console.
  2. Select "Rules" in the sidebar.
  3. Click the name of the rule you created when creating the Lambda function.
  4. Click "Actions" -> "Edit".
  5. Under Targets, find your Lambda function and click "Configure input".
  6. Select "Constant (JSON text)".
  7. Paste JSON that conforms to this structure. Fill in the details for the URLs you would like to schedule. The following is an example:
    {
      "urls": [{
          "Protocol": "HTTP",
          "Domain": "www.aaronmedacco.com",
          "QueryString": ""
      }, {
          "Protocol": "HTTPS",
          "Domain": "www.google.com",
          "QueryString": "?key=value"
      }]
    }
  8. Remember to replace the above with your own URLs.
  9. Click "Configure details".
  10. Click "Update rule".
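
If you'd rather configure the input from the CLI, the same effect can be had with put-targets, which adds or updates the rule's target along with its constant input. A sketch, where [RuleName] and [LambdaFunctionARN] are your own values and the Input string is the escaped JSON from above (abbreviated here):

    aws events put-targets --rule [RuleName] --targets '[{"Id": "1", "Arn": "[LambdaFunctionARN]", "Input": "{\"urls\": [...]}"}]'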

What if you want to configure different URLs on different schedules?

I'd recommend creating a different Lambda function similar to this one and using a different CloudWatch event rule to schedule it. Unless you want to get some kind of persistent storage involved or implement your own scheduling logic, it's probably easier to leverage the tools AWS has built, namely event rules in CloudWatch.
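
For reference, CloudWatch event rules accept both rate and cron expressions, so each rule (and therefore each function) can run on a completely independent schedule. A few examples:

    rate(5 minutes)      # every five minutes
    rate(1 day)          # once a day
    cron(0 12 * * ? *)   # every day at 12:00pm UTC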

Cheers!


Creating a CI/CD Pipeline on AWS - Part IV: CodePipeline

16. May 2017 23:57 by Aaron Medacco | 0 Comments

Welcome to Part IV of this series on setting up your own continuous integration and delivery pipeline on Amazon Web Services. In the previous part, we set up the deployment stage of our pipeline using AWS CodeDeploy.

In this final segment, we'll take each stage of the pipeline we've already built and combine them with AWS CodePipeline. Our pipeline will consist of 3 stages: a source stage for the CodeCommit repository we set up in Part I, a build stage for the CodeBuild project we set up in Part II, and a staging stage for the application deployment via CodeDeploy we set up in Part III. Like the previous steps, I'll only be using the AWS CLI to complete the task, as the web console changes frequently.

AWS CodePipeline

Additionally, I assume the reader has already viewed Part I, Part II, and Part III of this series, thus anything involving interactions discussed in that content will not be detailed again here. 

Granting permissions for your user account to use AWS CodePipeline:

  1. Open a command prompt or terminal window.
  2. Run the following command, substituting your user's name for [username]:
    aws iam attach-user-policy --user-name [username] --policy-arn arn:aws:iam::aws:policy/AWSCodePipelineFullAccess

This gives your user permission to interact with the CodePipeline service if you didn't already have sufficient privileges.

Creating a service role for the AWS CodePipeline service:

Like the prior posts, we need a service role that allows CodePipeline to act on our behalf. Again, there is usually a simple method for doing this in the management console, but since we're sticking to CLI commands, you can accomplish the same thing by doing the following:

  1. Make an empty directory on your file system and create the following files:
    create-role.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "codepipeline.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    put-role-policy.json
    {
      "Statement": [
        {
          "Action": [
            "s3:GetObject",
            "s3:GetObjectVersion",
            "s3:GetBucketVersioning"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": [
            "s3:PutObject"
          ],
          "Resource": [
            "arn:aws:s3:::codepipeline*",
            "arn:aws:s3:::elasticbeanstalk*"
          ],
          "Effect": "Allow"
        },
        {
          "Action": [
            "codecommit:CancelUploadArchive",
            "codecommit:GetBranch",
            "codecommit:GetCommit",
            "codecommit:GetUploadArchiveStatus",
            "codecommit:UploadArchive"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": [
            "codedeploy:CreateDeployment",
            "codedeploy:GetApplicationRevision",
            "codedeploy:GetDeployment",
            "codedeploy:GetDeploymentConfig",
            "codedeploy:RegisterApplicationRevision"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": [
            "elasticbeanstalk:*",
            "ec2:*",
            "elasticloadbalancing:*",
            "autoscaling:*",
            "cloudwatch:*",
            "s3:*",
            "sns:*",
            "cloudformation:*",
            "rds:*",
            "sqs:*",
            "ecs:*",
            "iam:PassRole"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": [
            "lambda:InvokeFunction",
            "lambda:ListFunctions"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": [
            "opsworks:CreateDeployment",
            "opsworks:DescribeApps",
            "opsworks:DescribeCommands",
            "opsworks:DescribeDeployments",
            "opsworks:DescribeInstances",
            "opsworks:DescribeStacks",
            "opsworks:UpdateApp",
            "opsworks:UpdateStack"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": [
            "cloudformation:CreateStack",
            "cloudformation:DeleteStack",
            "cloudformation:DescribeStacks",
            "cloudformation:UpdateStack",
            "cloudformation:CreateChangeSet",
            "cloudformation:DeleteChangeSet",
            "cloudformation:DescribeChangeSet",
            "cloudformation:ExecuteChangeSet",
            "cloudformation:SetStackPolicy",
            "cloudformation:ValidateTemplate",
            "iam:PassRole"
          ],
          "Resource": "*",
          "Effect": "Allow"
        },
        {
          "Action": [
            "codebuild:BatchGetBuilds",
            "codebuild:StartBuild"
          ],
          "Resource": "*",
          "Effect": "Allow"
        }
      ],
      "Version": "2012-10-17"
    }
  2. In your command prompt or terminal window, switch your working directory to the directory where these files live.
  3. Run the following commands:
    aws iam create-role --role-name CodePipelineServiceRole --assume-role-policy-document file://create-role.json
    aws iam put-role-policy --role-name CodePipelineServiceRole --policy-name CodePipelineServiceRolePolicy --policy-document file://put-role-policy.json
  4. Write down the ARN value of the created role output by the first command. We'll require it when we create our pipeline.
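
If you need to retrieve the ARN again later, it's a one-liner:

    aws iam get-role --role-name CodePipelineServiceRole --query Role.Arn --output text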

Creating your pipeline in CodePipeline:

  1. In the same directory where you created files in the last step (so you don't have to change directories again), create a new file, substituting your values for the following:

    ARN for the service role you just created in the preceding step -> [ServiceRoleARN]
    Repository name you created in Part I -> [RepositoryName]
    Project name you created in Part II -> [ProjectName]
    Deployment group name you created in Part III -> [DeploymentGroupName]
    Application name you created in Part III -> [ApplicationName]
    An S3 bucket name for Pipeline to store artifacts -> [ArtifactStoreBucketName]
    A name for your pipeline -> [PipelineName]

    pipeline.json
    {
        "pipeline": {
            "roleArn": "[ServiceRoleARN]",
            "stages": [
                {
                    "name": "Source",
                    "actions": [
                        {
                            "inputArtifacts": [],
                            "name": "Source",
                            "actionTypeId": {
                                "category": "Source",
                                "owner": "AWS",
                                "version": "1",
                                "provider": "CodeCommit"
                            },
                            "outputArtifacts": [
                                {
                                    "name": "MyApp"
                                }
                            ],
                            "configuration": {
                                "BranchName": "master",
                                "RepositoryName": "[RepositoryName]"
                            },
                            "runOrder": 1
                        }
                    ]
                },
                {
                    "name": "Build",
                    "actions": [
                        {
                            "inputArtifacts": [
                                {
                                    "name": "MyApp"
                                }
                            ],
                            "name": "CodeBuild",
                            "actionTypeId": {
                                "category": "Build",
                                "owner": "AWS",
                                "version": "1",
                                "provider": "CodeBuild"
                            },
                            "outputArtifacts": [
                                {
                                    "name": "MyAppBuild"
                                }
                            ],
                            "configuration": {
                                "ProjectName": "[ProjectName]"
                            },
                            "runOrder": 1
                        }
                    ]
                },
                {
                    "name": "Staging",
                    "actions": [
                        {
                            "inputArtifacts": [
                                {
                                    "name": "MyAppBuild"
                                }
                            ],
                            "name": "[DeploymentGroupName]",
                            "actionTypeId": {
                                "category": "Deploy",
                                "owner": "AWS",
                                "version": "1",
                                "provider": "CodeDeploy"
                            },
                            "outputArtifacts": [],
                            "configuration": {
                                "ApplicationName": "[ApplicationName]",
                                "DeploymentGroupName": "[DeploymentGroupName]"
                            },
                            "runOrder": 1
                        }
                    ]
                }
            ],
            "artifactStore": {
                "type": "S3",
                "location": "[ArtifactStoreBucketName]"
            },
            "name": "[PipelineName]",
            "version": 1
        }
    }
  2. In your command prompt or terminal window, switch your working directory to the directory where this file lives if you aren't already there.
  3. Run the following command: 
    aws codepipeline create-pipeline --cli-input-json file://pipeline.json
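
If the command succeeds, the CLI echoes the pipeline structure back to you. You can also confirm the pipeline exists afterward with:

    aws codepipeline get-pipeline --name [PipelineName]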

Testing your pipeline in CodePipeline:

Congratulations if you made it this far! This is where we see the culmination of everything we've built so far working in a fully automated fashion. Make sure the instance(s) in your deployment group are running, then change your app.js file to display something other than "Hello World!":

var express = require('express')
var app = express()

app.get('/', function (req, res) {
  res.send('You suffer in measure to your authority.')
})

app.listen(3000, function () {
  console.log('Example app listening on port 3000!')
})

and commit and push the change to your CodeCommit repository.
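
If you're following along from Part I, the push looks something like this from your local repository (the commit message is just an example):

    git add app.js
    git commit -m "Change home page message"
    git push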

If you log in to the management console for CodePipeline and view the pipeline you created, you should see the following:

AWS CodePipeline Visual

or you can run the following command, substituting the name you gave your pipeline for [PipelineName]:

aws codepipeline get-pipeline-state --name [PipelineName]

You now have a fully automated delivery pipeline on Amazon Web Services! Feel free to add more steps to your pipeline or experiment with other projects you might want to implement this with. For additional know-how on using AWS CodePipeline, check out the AWS documentation.

Cheers!


Creating a CI/CD Pipeline on AWS - Part III: CodeDeploy

14. May 2017 23:00 by Aaron Medacco | 0 Comments

Welcome to Part III of this series on setting up your own continuous integration and continuous delivery pipeline on Amazon Web Services. Last time in Part II, we created a build process with testing using AWS CodeBuild.

In this part, we'll be setting up the deployment stage of our pipeline that will push build artifacts created by our build project to a deployment group using Amazon's automated deployment service, AWS CodeDeploy. Like the previous posts in the series, I'll be sticking to AWS CLI commands as the web console is subject to rapid change.

AWS CodeDeploy

Since we already created an S3 bucket to store our build artifacts from CodeBuild, we've already done some of the setup necessary for building the deployment stage of our pipeline. We'll need to specify this location when it comes time to configure CodeDeploy. I'll assume the reader has already gone through Part I and Part II of the series, so any steps involving pushing source changes to CodeCommit or running builds of the CodeBuild project will not be detailed.

Granting permissions for your user account to use AWS CodeDeploy:

  1. Open a command prompt or terminal window.
  2. Run the following command, substituting your user's name for [username]:
    aws iam attach-user-policy --user-name [username] --policy-arn arn:aws:iam::aws:policy/AWSCodeDeployFullAccess

This gives your user access to interact with CodeDeploy, assuming you didn't already have sufficient privileges.

Creating a service role for the AWS CodeDeploy service:

Just like with CodeBuild, we need to create a service role that grants the CodeDeploy service permission to use other resources and services on our behalf. Taken from Amazon's documentation, we need to:

  1. Make an empty directory on your file system and create the following file:
    CodeDeployDemo-Trust.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": [
              "codedeploy.amazonaws.com"
            ]
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  2. In your command prompt or terminal window, switch your working directory to the directory where this file lives.
  3. Run the following commands:
    aws iam create-role --role-name CodeDeployServiceRole --assume-role-policy-document file://CodeDeployDemo-Trust.json
    aws iam attach-role-policy --role-name CodeDeployServiceRole --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole
  4. Write down the ARN value of the created role output by the first command. We'll need it later when we configure CodeDeploy.

Creating an instance profile for your EC2 instance(s):

Taken from Amazon's documentation, we need to:

  1. Make an empty directory on your file system and create the following files:
    CodeDeployDemo-EC2-Trust.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "ec2.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    CodeDeployDemo-EC2-Permissions.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": [
            "s3:Get*",
            "s3:List*"
          ],
          "Effect": "Allow",
          "Resource": "*"
        }
      ]
    }
  2. In your command prompt or terminal window, switch your working directory to the directory where these files live.
  3. Run the following commands:
    aws iam create-role --role-name CodeDeployDemo-EC2-Instance-Profile --assume-role-policy-document file://CodeDeployDemo-EC2-Trust.json
    aws iam put-role-policy --role-name CodeDeployDemo-EC2-Instance-Profile --policy-name CodeDeployDemo-EC2-Permissions --policy-document file://CodeDeployDemo-EC2-Permissions.json
    aws iam create-instance-profile --instance-profile-name CodeDeployDemo-EC2-Instance-Profile
    aws iam add-role-to-instance-profile --instance-profile-name CodeDeployDemo-EC2-Instance-Profile --role-name CodeDeployDemo-EC2-Instance-Profile

Provisioning your EC2 instance(s):

  1. Follow the instructions for deploying an EC2 instance if you don't already have one or more instances to deploy to. You'll need to attach the instance profile you just created to each instance.
  2. Be sure to install Node.js on the instance once it's done initializing.
  3. Tag the instance with the key-value pair (CodeDeploy, Yes). This is important because it tells CodeDeploy which instances to deploy to. You can use a different tag if you want, but a tag is required.

For this tutorial, I provisioned a t2.micro instance running Windows Server Base 2016. 
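
If you'd rather provision the instance from the CLI, a minimal sketch using the instance profile and tag from above looks like this; [AmiId], [KeyName], and [SecurityGroupId] are placeholders for your own values:

    aws ec2 run-instances --image-id [AmiId] --instance-type t2.micro --key-name [KeyName] --security-group-ids [SecurityGroupId] --iam-instance-profile Name=CodeDeployDemo-EC2-Instance-Profile --tag-specifications 'ResourceType=instance,Tags=[{Key=CodeDeploy,Value=Yes}]'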

Installing and running the AWS CodeDeploy agent on your instance(s):

In order for CodeDeploy to work properly, the AWS CodeDeploy agent must be running on the instances you want to deploy to. Follow these instructions for installing the agent:

  1. Since I am using Windows, I pulled the .msi file from https://s3.amazonaws.com/aws-codedeploy-us-east-1/latest/codedeploy-agent.msi. As the documentation states, you may need to change the region to the one you are working in or you won't be able to access the file.
  2. Run the .msi file and validate it's running using the following command in PowerShell:
    Get-Service -Name codedeployagent

Creating an application in CodeDeploy:

  1. In your command prompt or terminal window, run the following command, substituting the name you want to give to your application for [ApplicationName]:
    aws deploy create-application --application-name [ApplicationName]

Adding an Application Spec File to your project:

Application spec files give CodeDeploy information on how to deploy your application. We'll need to add one to our project for CodeDeploy to function.

  1. Navigate to your local repository we set up in Part I of this series.
  2. Create a file named appspec.yml with the following contents:
    version: 0.0
    os: windows
    files:
      - source: \app.js
        destination: c:\host
      - source: \node_modules
        destination: c:\host\node_modules
  3. Commit this file to your local Git repository and push it to your CodeCommit repository from Part I.
  4. Run another build of the CodeBuild project from Part II using the latest source.

This will ensure the appspec.yml file appears in our build artifacts zip file. If this file were to be missing, our deployments would fail since CodeDeploy wouldn't know what to do.

Creating a deployment group in CodeDeploy:

  1. In your command prompt or terminal window, run the following command, substituting for the following values:

    Application name you created earlier in the post -> [ApplicationName]
    A name for your deployment group -> [DeploymentGroupName]
    The ARN for the service role you created earlier in this post -> [ServiceRoleARN]
    aws deploy create-deployment-group --application-name [ApplicationName] --deployment-group-name [DeploymentGroupName] --deployment-config-name CodeDeployDefault.OneAtATime --ec2-tag-filters Key=CodeDeploy,Value=Yes,Type=KEY_AND_VALUE --service-role-arn [ServiceRoleARN]

    Notice that I specified the key value pairs for (CodeDeploy, Yes) for the --ec2-tag-filters argument. If you deviated from what I used for tagging, you'll need to change this command to use your values.

Deploying your application using CodeDeploy:

  1. Using the same command prompt or terminal window, run the following command, substituting for the following values:
    Application name you created earlier in the post -> [ApplicationName]
    Deployment group name you used earlier in the post -> [DeploymentGroupName]
    The name of the S3 bucket you chose to send build artifacts to in Part II -> [BucketName]
    aws deploy create-deployment --application-name [ApplicationName] --deployment-config-name CodeDeployDefault.OneAtATime --deployment-group-name [DeploymentGroupName] --s3-location bucket=[BucketName],bundleType=zip,key=BuildOutput.zip
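
The create-deployment command returns a deployment ID, which you can use to poll the deployment's progress from the CLI (substitute the returned ID for [DeploymentId]):

    aws deploy get-deployment --deployment-id [DeploymentId] --query deploymentInfo.status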

Validating your application deployed successfully:

Assuming your deployment succeeded, you can validate it by RDP'ing into your instance(s) and checking the host directory on the C: drive for the project files. If you find them there, CodeDeploy is configured correctly!
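
If you'd rather not eyeball the directory, a quick PowerShell check on the instance does the same job:

    Test-Path c:\host\app.js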

Deployment Successful

To explore CodeDeploy in more detail, check out the documentation provided by AWS. In the next and final part of this series, we'll finish our pipeline by incorporating all the pieces we've built so far using AWS CodePipeline. After that, we'll have a fully automated pipeline triggered by source code commits and ending with a deployment to our instances, without having to manually invoke each part of the process.

Cheers!
