1. December 2017 17:26
by Aaron Medacco
0 Comments

AWS re:Invent 2017 - Day 1 Experience

Welcome to the first in a series of blog posts detailing my experience at AWS re:Invent 2017. If you're someone who is considering going to an AWS re:Invent conference, hopefully what follows will give you a flavor of what you can expect should you choose to fork over the cash for a ticket. The following content contains my personal impressions and experience, and may not (probably doesn't?) reflect the typical experience. There will also be some non-AWS fluff, as I had never been to Las Vegas before.

AWS re:Invent 2017

My adventure starts at about midnight. Yes, midnight. Living in Scottsdale, AZ, I figured, "Why not just drive instead of fly? After all, it's only a 6 hour drive and there won't be any traffic in the middle of the night." While that was true, what a mistake in retrospect. Arriving in Las Vegas with hardly any sleep after the road trip left me in pretty ragged shape for Monday's events. Next year, I'll definitely be flying and will get there on Sunday so I can get settled prior to Monday. I actually arrived so early, I couldn't check into my room and needed to burn some time. What better activity to do when exhausted than sit down at the poker tables? Lost a quick $900 in short order. Hahaha! Truth be told, I got "coolered" back to back, but I probably played bad, too.

Once I got checked into my room at the Bellagio around 9:00am, I headed back to the Aria to get registered and pick up my re:Invent hoodie. Unfortunately, they didn't have my size, only had up to a Small. I couldn't help but smile about that. I ended up going to the Venetian later to exchange my Small for a Medium. Anyways, got my badge, ready to go! Or was I?

By the way, kudos to the Bellagio for putting these in every room. Forgot my phone charger. Well, the correct phone charger at least...

 AWS re:Invent 2017

...except it didn't have a charger compatible with my Samsung Galaxy S8. Kind of funny, but I wasn't laughing. Alright, maybe a little. I would end up getting one at a phone store in one of the malls on the Strip. Oh yeah, and I also forgot to buy a memory card for my video recorder prior to leaving. Picked up one of those from a Best Buy Express vending machine. Vegas knows.

By this time I was crashing. Came back to my room, fell asleep, and missed 2 breakout sessions I was reserved for. Great job, Aaron! Off to a great start! 

Walked to the Aria to go check out the Certification Lounge. They had tables set up, food and drink, and some goodies available depending on what certifications you'd achieved. The registration badges have indicators on them that tell people if you're AWS certified or not, which they use to allow or deny access. I didn't end up staying too long, but there were a decent number of attendees with laptops open, working and networking. Here are some of the things I collected this year by walking around to the events:

AWS re:Invent 2017

The re:Invent hoodie picked up at Registration (left) and the certification t-shirt inside the Certification Lounge (right).

AWS re:Invent 2017

Water bottle and AWS pins were given away at the Venetian Expo (top-left), badge and info packet at Registration (right), and the certification stickers at the Certification Lounge depending on which ones you've completed (bottom-left).

Headed over to the MGM Grand for my first breakout session, GPS: Anti Patterns: Learning From Failure (GPSTEC302). Before I discuss the session, I have to talk about something I severely underestimated about re:Invent. Walking! My body was definitely NOT ready. And I'm not an out-of-shape or big guy, either. The walking is legit! I remember tweeting about what I imagined would be my schedule weeks before re:Invent and Eric Hammond telling me I was being pretty optimistic about what I would actually be able to attend. No joke. Okay, enough of my complaining.

AWS re:Invent 2017

Waiting for things to get started.

AWS re:Invent 2017

Session about half-full. Plenty of room to get comfortable.

AWS re:Invent 2017

Presenter's shirt says, "got root?". Explaining methods for ensuring account resource compliance and using AWS account best practices when it comes to logging, backups, and fast reaction to nefarious changes.

This was an excellent session. The presenters were fantastic and poked fun at mistakes they themselves have made, or that customers they've talked to have made, regarding automation (or lack thereof), compliance, and just overall bone-headedness (is that a word?). The big takeaways I found were to consider using services like CloudWatch, CloudTrail, and Config to monitor and log activity in your AWS accounts so you become aware when stupid raises its ugly head. They threw out questions like, "What would happen if the root account's credentials were compromised and you didn't know about it until it was too late?", and "You have an automated process for creating backups, but do you actually test those backups?". From this came suggestions to regularly store and test backups in another account in case an account gets compromised, and to use things like MFA, especially for root and privileged users.

Additionally, the presenters made a good argument for not using the management console for activities once you become more familiar with AWS, particularly if you're leveraging the automation tools AWS provides like OpsWorks and CloudFormation, as that kind of manual mucking around via the console can leave stacks deployed with those services in funny states. Along those lines, they also suggested dividing the different tiers of your application infrastructure into their own stacks so that when you need to make changes to something or scale, you don't end up changing the whole system; instead, you only modify or scale the relevant stack. Overall, a good session. If they have it again next year, I would recommend it. You'll get some laughs, if nothing else. The guys were pretty funny.

Once out, I had a meeting scheduled to talk with a company (presumably about upcoming Pluralsight work) at the Global Partner Summit Welcome Reception. Now, I'll admit I got a little frustrated trying to find where the **** this was taking place! AWS did a great job sending lots of guides with re:Invent flags everywhere to answer questions and direct attendees to their events, and these guys were godsends every time except when it came to finding this event. I think I just got unlucky with a few that were misinformed.

AWS re:Invent 2017

These guys were scattered all over the strip and inside the hotels. Very helpful!

First, I was told to go to one of the ballrooms. Found what appeared to be some kind of Presenter's Registration there. Then, found another guide who said to go to the Grand Garden Arena. Walked over there, total graveyard, and ironically, a random dude there who wasn't even one of the re:Invent guides told me where it actually was. He also said, "Oh yeah, and unless you want to be standing in line all night, you might want to reconsider." It was late enough at this point, I figured I'd just head back to the Bellagio for a much needed poker session, so that's what I did. However, on the way back, holy ****, he was right. I've never seen a line as long as the one to get into the GPS Welcome Reception in my life. It went from the food court, through the entire casino, out of the casino, and further back to I couldn't tell where. Apparently, I was the only one who missed the memo, since everyone else knew where to go, but still, that line.

Long hike back to the Bellagio, played poker for about 3 hours, lost $200 (man, I suck), and on my way back to my room discovered I didn't eat anything all day. LOL! Picked up a couple pizza slices and crashed for the night. A good night's sleep? Yes, please. Tomorrow would be better.

Cheers!

10. September 2017 14:04
by Aaron Medacco
0 Comments

Tracking Request Counts of Your S3 Objects

There are cases where you might be interested in knowing how many times your content hosted in S3 is requested by end users. Maybe you want to determine what content you serve is most popular or just want to have more metrics available for an application that heavily relies on S3 storage. Your knee-jerk reaction when trying to find this kind of information should be to look within CloudWatch under the S3 metrics. However, you might be surprised to find there's nothing there, as this is something you need to enable.

 S3 Request Metrics

Additionally, there is a charge for enabling these metrics which is identical to that of custom CloudWatch metrics. CloudWatch costs are pretty cheap, but you can review them here.

Amazon explains how to enable these metrics in their documentation.
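For reference, the operation behind the console toggle is a single API call, PutBucketMetricsConfiguration. Below is a minimal sketch of the parameters in Node.js; the bucket name and configuration ID are placeholders of my own, and the resulting object would be passed to s3.putBucketMetricsConfiguration() in the AWS SDK:

```javascript
// Builds parameters for S3's PutBucketMetricsConfiguration API call.
// With no Filter specified, the metrics configuration applies to all
// objects in the bucket.
var buildMetricsParams = function(bucket, configurationId) {
    return {
        Bucket: bucket,
        Id: configurationId,
        MetricsConfiguration: { Id: configurationId }
    };
};

console.log(JSON.stringify(buildMetricsParams("my-bucket", "EntireBucket")));
```

The same thing can be done from the CLI with aws s3api put-bucket-metrics-configuration.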

Keep in mind that these metrics are what Amazon defines as "best-effort":

"The completeness and timeliness of metrics is not guaranteed. The data point for a particular request might be returned with a timestamp that is later than when the request was actually processed, or the data point for a minute might be delayed before being available through CloudWatch, or it might not be delivered at all. CloudWatch request metrics give you an idea of the nature of traffic against your bucket in near real time. It is not meant to be a complete accounting of all requests."

Cheers!

22. July 2017 19:05
by Aaron Medacco
0 Comments

Scheduling Notifications for Rotating Old IAM Access Keys

Access keys allow you to give programmatic access to a user so they can accomplish tasks and interact with services within your AWS environment. These keys should be heavily guarded and kept secret. Once exposed, anyone can wreak havoc in your AWS account using whatever permissions were granted to the user whose keys were exposed.

A best practice for security is to rotate these keys regularly. We want to keep our access keys fresh and not have old or unused keys running about waiting to be abused. Therefore, I've created an automated method for notifying an administrator when user access keys are old and should be rotated. Since the number of days for keys to be considered "old" varies across organizations, I've included a variable that can be configured to fit the reader's requirements.
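At its core, the check is just date arithmetic: compare a key's CreateDate against the threshold. A minimal sketch (the helper name keyNeedsRotation is mine, not part of the function built later in this post):

```javascript
// Returns true when an access key created at creationDate is older
// than maxAgeInDays and is due for rotation.
var keyNeedsRotation = function(creationDate, maxAgeInDays) {
    var ageInMs = Date.now() - new Date(creationDate).getTime();
    return ageInMs > maxAgeInDays * 24 * 60 * 60 * 1000;
};

console.log(keyNeedsRotation("2017-01-01T00:00:00Z", 90)); // true: well past 90 days
```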

IAM Access Key Rotation

Like my recent posts, this will make use of the AWS CLI, which I assume you've installed and configured already. We'll be using (of course) Lambda backed by a CloudWatch event rule. Notifications will be sent using the SNS service when keys should be rotated. Enjoy.

Creating an SNS topic:

  1. Open up a command prompt or terminal window, and invoke the following command:
    aws sns create-topic --name IAM-Access-Key-Rotation-Topic
  2. You'll get a Topic ARN value back which you want to keep.
  3. Then invoke the following command. Substitute Email for the email address you want to receive the notifications.
    Note: You don't have to use email if you don't want to. Feel free to use whichever protocol/endpoint fits you.
    aws sns subscribe --topic-arn ARN --protocol email --notification-endpoint Email
  4. The email address you provided will receive a message asking you to confirm the subscription. You'll need to confirm it before moving on if you want notifications to go out.

Creating an IAM policy for access permissions:

  1. Create a file named iam-policy.json with the following contents and save it in your working directory. Substitute Your Topic ARN for the ARN of the topic you just created:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "sns:Publish"
                ],
                "Resource": [
                    "Your Topic ARN"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "iam:ListAccessKeys",
                    "iam:ListUsers"
                ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }
  2. In your command prompt or terminal window, execute the following command:
    aws iam create-policy --policy-name rotate-old-access-keys-notification-policy --policy-document file://iam-policy.json
  3. You'll receive details for the policy you just created. Write down the ARN value. You will need it in a later step.

Creating the IAM role for the Lambda function:

  1. Create a file named role-trust-policy.json with the following contents and save it in your working directory:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  2. In your command prompt or terminal window, invoke the following command:
    aws iam create-role --role-name rotate-old-access-keys-notification-role --assume-role-policy-document file://role-trust-policy.json
  3. You'll get back information about the role you just made. Write down the ARN value for the role. You will need it when we go to create the Lambda function.
  4. Invoke this command to attach the policy to the role. Substitute ARN for the policy ARN you received when you created the IAM policy.
    aws iam attach-role-policy --policy-arn ARN --role-name rotate-old-access-keys-notification-role

Creating the Lambda function:

  1. Create a file named index.js with the following contents and save it in your working directory. Make sure you substitute Topic ARN with the ARN of the topic you created in the first step.
    Note: Notice you can configure the number of days allowed before rotations should be done. Default value I set is 90.
    var AWS = require("aws-sdk");
    var iam = new AWS.IAM();
    var sns = new AWS.SNS();
    
    var daysBeforeRotationRequired = 90; // Number of days from access creation before action is taken on access keys.
    
    exports.handler = (event, context, callback) => {
        var listUsersParams = {};
        iam.listUsers(listUsersParams, function(err, data) {
            if (err) {
                console.log(err, err.stack);
            }
            else {
                for (var i = 0; i < data.Users.length; i++) {
                    var userName = data.Users[i].UserName;
                    var listAccessKeysParams = { UserName: userName };
                    iam.listAccessKeys(listAccessKeysParams, function(err, data) {
                        if (err) {
                            console.log(err, err.stack);
                        } 
                        else {
                            for (var j = 0; j < data.AccessKeyMetadata.length; j++) {
                                var accessKeyId = data.AccessKeyMetadata[j].AccessKeyId;
                                var creationDate = data.AccessKeyMetadata[j].CreateDate;
                                var accessKeyUserName = data.AccessKeyMetadata[j].UserName;
                                var now = new Date();
                                var whenRotationShouldOccur = dateAdd(creationDate, "day", daysBeforeRotationRequired);
                                if (now > whenRotationShouldOccur) {
                                    var message = "You need to rotate access key: " + accessKeyId + " for user: " + accessKeyUserName;
                                    console.log(message);
                                    var publishParams = {
                                        Message: message,
                                        Subject: "Access Keys Need Rotating For User: " + accessKeyUserName,
                                        TopicArn: "Topic ARN"
                                    };
                                    // Log publish failures rather than passing context.done,
                                    // which would end execution after the first notification.
                                    sns.publish(publishParams, function(err, data) {
                                        if (err) {
                                            console.log(err, err.stack);
                                        }
                                    });
                                }
                                else {
                                    console.log("No access key rotation necessary for: " + accessKeyId + " for user: " + accessKeyUserName);
                                }
                            }
                        }
                    });
                }
            }
        });
    };
    
    var dateAdd = function(date, interval, units) {
        var ret = new Date(date); // don't change original date
        switch(interval.toLowerCase()) {
            case 'year'   :  ret.setFullYear(ret.getFullYear() + units);  break;
            case 'quarter':  ret.setMonth(ret.getMonth() + 3*units);  break;
            case 'month'  :  ret.setMonth(ret.getMonth() + units);  break;
            case 'week'   :  ret.setDate(ret.getDate() + 7*units);  break;
            case 'day'    :  ret.setDate(ret.getDate() + units);  break;
            case 'hour'   :  ret.setTime(ret.getTime() + units*3600000);  break;
            case 'minute' :  ret.setTime(ret.getTime() + units*60000);  break;
            case 'second' :  ret.setTime(ret.getTime() + units*1000);  break;
            default       :  ret = undefined;  break;
        }
        return ret;
    }
  2. Zip this file to a zip called index.zip.
  3. Bring your command prompt or terminal window back up, and execute the following command. Substitute ARN for the role ARN you received from the step where we created the role:
    aws lambda create-function --function-name rotate-old-access-keys-notification-sender --runtime nodejs6.10 --handler index.handler --role ARN --zip-file fileb://index.zip --timeout 30
  4. Once the function is created, you'll get back details for the function. Write down the ARN value for the function for when we schedule the function execution.

Scheduling the Lambda function:

  1.  In your command prompt or terminal window, execute the following command:
    Note: Feel free to adjust the schedule expression for your own purposes.
    aws events put-rule --name Rotate-Old-Access-Keys-Notification-Scheduler --schedule-expression "rate(1 day)"
  2. Write down the ARN value for the rule upon creation finishing.
  3. Run the following command. Substitute ARN for the Rule ARN you just received:
    aws lambda add-permission --function-name rotate-old-access-keys-notification-sender --statement-id LambdaPermission --action "lambda:InvokeFunction" --principal events.amazonaws.com --source-arn ARN
  4. Now run the following command. Substitute ARN for that of the Lambda function you created in the previous step:
    aws events put-targets --rule Rotate-Old-Access-Keys-Notification-Scheduler --targets "Id"="1","Arn"="ARN"

For information regarding ways to rotate your access keys without affecting your applications, click here.

Cheers!

3. July 2017 22:25
by Aaron Medacco
15 Comments

Scheduling Automated AMI Backups of Your EC2 Instances

When considering disaster recovery options for systems or applications running on Amazon Web Services, a frequent solution is to use AMIs to restore instances to a known acceptable state in the event of failure or catastrophe. If your team has decided on this approach, you'll want to automate the creation and maintenance of these AMIs to prevent mistakes or somebody "forgetting" to do the task. In this post, I'll walk through how to set this up in AWS within a matter of minutes using Amazon's serverless compute offering, Lambda.

Automated AMI Backups

In a departure from many of my other posts involving Lambda, the following steps will make use of the AWS CLI, so I will assume that you've already installed and configured it on your machine. This is to grant these posts longevity and to protect their relevance from slipping due to the browser-based console's rate of change.

I have also added some flexibility in the maintenance of these backups, which I encourage the reader to configure to their liking. These options include the number of backups to maintain for each EC2 instance you wish to have AMIs taken of, a customizable tag you can assign to EC2 instances you'd like backed up, and the option to also delete snapshots of the AMIs being deregistered once they exit the backup window. I have included default values, but I still encourage you to read the options before implementing this solution.
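The retention logic is easiest to see in isolation from the AWS calls: sort the backups newest-first, keep the most recent ones up to the retention count, and everything past that is eligible for deregistration. A small sketch (the sample data is made up; field names match the Lambda code later in this post):

```javascript
// Given AMI records with a CreationDate, return the ones that fall
// outside the retention window (i.e., everything but the newest
// numToRetain backups).
var selectExpiredBackups = function(images, numToRetain) {
    var sorted = images.slice().sort(function(a, b) {
        return new Date(b.CreationDate) - new Date(a.CreationDate);
    });
    return sorted.slice(numToRetain);
};

var backups = [
    { ImageId: "ami-1", CreationDate: "2017-07-01T00:00:00Z" },
    { ImageId: "ami-2", CreationDate: "2017-07-02T00:00:00Z" },
    { ImageId: "ami-3", CreationDate: "2017-07-03T00:00:00Z" }
];
console.log(selectExpiredBackups(backups, 2)); // only ami-1 falls outside the window
```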

Let's get started.

Creating an IAM policy for access permissions:

  1. Create a file named iam-policy.json with the following contents and save it in your working directory:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Stmt1499061014000",
                "Effect": "Allow",
                "Action": [
                    "ec2:CreateImage",
                    "ec2:CreateTags",
                    "ec2:DeleteSnapshot",
                    "ec2:DeregisterImage",
                    "ec2:DescribeImages",
                    "ec2:DescribeInstances"
                ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }
  2. In your command prompt or terminal window, invoke the following command:
    aws iam create-policy --policy-name ami-backup-policy --policy-document file://iam-policy.json
  3. You'll receive output with details of the policy you've just created. Write down the ARN value as you will need it later.

Creating the IAM role for the Lambda function:

  1. Create a file named role-trust-policy.json with the following contents and save it in your working directory:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  2. In your command prompt or terminal window, invoke the following command:
    aws iam create-role --role-name ami-backup-role --assume-role-policy-document file://role-trust-policy.json
  3. You'll receive output with details of the role you've just created. Be sure to write down the role ARN value provided. You'll need it later.
  4. Run the following command to attach the policy to the role. You must substitute ARN for the policy ARN you wrote down from the prior step: 
    aws iam attach-role-policy --policy-arn ARN --role-name ami-backup-role

Creating the Lambda function:

  1.  Create a file named index.js with the following contents and save it in your working directory:
    Note: The following file is the code managing your AMI backups. There are a number of configurable options to be aware of and I have commented descriptions of each in the code. 
    var AWS = require("aws-sdk");
    var ec2 = new AWS.EC2();
    
    var numBackupsToRetain = 2; // The Number Of AMI Backups You Wish To Retain For Each EC2 Instance.
    var instancesToBackupTagName = "BackupAMI"; // Tag Key Attached To Instances You Want AMI Backups Of. Tag Value Should Be Set To "Yes".
    var imageBackupTagName = "ScheduledAMIBackup"; // Tag Key Attached To AMIs Created By This Process. This Process Will Set Tag Value To "True".
    var imageBackupInstanceIdentifierTagName = "ScheduledAMIInstanceId"; // Tag Key Attached To AMIs Created By This Process. This Process Will Set Tag Value To The Instance ID.
    var deleteSnaphots = true; // True if you want to delete snapshots during cleanup. False if you want to only delete AMI, and leave snapshots intact.
    
    exports.handler = function(event, context) {
        var describeInstancesParams = {
            DryRun: false,
            Filters: [{
                Name: "tag:" + instancesToBackupTagName,
                Values: ["Yes"]
            }]
        };
        ec2.describeInstances(describeInstancesParams, function(err, data) {
            if (err) {
                console.log("Failure retrieving instances.");
                console.log(err, err.stack); 
            }
            else {
                for (var i = 0; i < data.Reservations.length; i++) {
                    for (var j = 0; j < data.Reservations[i].Instances.length; j++) {
                        var instanceId = data.Reservations[i].Instances[j].InstanceId;
                        createImage(instanceId);
                    }
                }
            }
        });
        cleanupOldBackups();
    };
    
    var createImage = function(instanceId) {
        console.log("Found Instance: " + instanceId);
        var createImageParams = {
            InstanceId: instanceId,
            Name: "AMI Scheduled Backup I(" + instanceId + ") T(" + new Date().getTime() + ")",
            Description: "AMI Scheduled Backup for Instance (" + instanceId + ")",
            NoReboot: true,
            DryRun: false
        };
        ec2.createImage(createImageParams, function(err, data) {
            if (err) {
                console.log("Failure creating image request for Instance: " + instanceId);
                console.log(err, err.stack);
            }
            else {
                var imageId = data.ImageId;
                console.log("Success creating image request for Instance: " + instanceId + ". Image: " + imageId);
                var createTagsParams = {
                    Resources: [imageId],
                    Tags: [{
                        Key: "Name",
                        Value: "AMI Backup I(" + instanceId + ")"
                    },
                    {
                        Key: imageBackupTagName,
                        Value: "True"
                    },
                    {
                        Key: imageBackupInstanceIdentifierTagName,
                        Value: instanceId
                    }]
                };
                ec2.createTags(createTagsParams, function(err, data) {
                    if (err) {
                        console.log("Failure tagging Image: " + imageId);
                        console.log(err, err.stack);
                    }
                    else {
                        console.log("Success tagging Image: " + imageId);
                    }
                });
            }
        });
    };
    
    var cleanupOldBackups = function() {
        var describeImagesParams = {
            DryRun: false,
            Filters: [{
                Name: "tag:" + imageBackupTagName,
                Values: ["True"]
            }]
        };
        ec2.describeImages(describeImagesParams, function(err, data) {
            if (err) {
                console.log("Failure retrieving images for deletion.");
                console.log(err, err.stack); 
            }
            else {
                var images = data.Images;
                var instanceDictionary = {};
                var instances = [];
                for (var i = 0; i < images.length; i++) {
                    var currentImage = images[i];
                    for (var j = 0; j < currentImage.Tags.length; j++) {
                        var currentTag = currentImage.Tags[j];
                        if (currentTag.Key === imageBackupInstanceIdentifierTagName) {
                            var instanceId = currentTag.Value;
                            if (instanceDictionary[instanceId] === null || instanceDictionary[instanceId] === undefined) {
                                instanceDictionary[instanceId] = [];
                                instances.push(instanceId);
                            }
                            instanceDictionary[instanceId].push({
                                ImageId: currentImage.ImageId,
                                CreationDate: currentImage.CreationDate,
                                BlockDeviceMappings: currentImage.BlockDeviceMappings
                            });
                            break;
                        }
                    }
                }
                for (var t = 0; t < instances.length; t++) {
                    var imageInstanceId = instances[t];
                    var instanceImages = instanceDictionary[imageInstanceId];
                    if (instanceImages.length > numBackupsToRetain) {
                        instanceImages.sort(function (a, b) {
                           return new Date(b.CreationDate) - new Date(a.CreationDate); 
                        });
                        for (var k = numBackupsToRetain; k < instanceImages.length; k++) {
                            var imageId = instanceImages[k].ImageId;
                            var creationDate = instanceImages[k].CreationDate;
                            var blockDeviceMappings = instanceImages[k].BlockDeviceMappings;
                            deregisterImage(imageId, creationDate, blockDeviceMappings);
                        }   
                    }
                    else {
                        console.log("AMI Backup Cleanup not required for Instance: " + imageInstanceId + ". Not enough backups in window yet.");
                    }
                }
            }
        });
    };
    
    var deregisterImage = function(imageId, creationDate, blockDeviceMappings) {
        console.log("Found Image: " + imageId + ". Creation Date: " + creationDate);
        var deregisterImageParams = {
            DryRun: false,
            ImageId: imageId
        };
        console.log("Deregistering Image: " + imageId + ". Creation Date: " + creationDate);
        ec2.deregisterImage(deregisterImageParams, function(err, data) {
           if (err) {
               console.log("Failure deregistering image.");
               console.log(err, err.stack);
           } 
           else {
               console.log("Success deregistering image.");
               if (deleteSnaphots) {
                    for (var p = 0; p < blockDeviceMappings.length; p++) {
                       var snapshotId = blockDeviceMappings[p].Ebs.SnapshotId;
                       if (snapshotId) {
                           deleteSnapshot(snapshotId);
                       }
                   }    
               }
           }
        });
    };
    
    var deleteSnapshot = function(snapshotId) {
        var deleteSnapshotParams = {
            DryRun: false,
            SnapshotId: snapshotId
        };
        ec2.deleteSnapshot(deleteSnapshotParams, function(err, data) {
            if (err) {
                console.log("Failure deleting snapshot. Snapshot: " + snapshotId + ".");
                console.log(err, err.stack);
            }
            else {
                console.log("Success deleting snapshot. Snapshot: " + snapshotId + ".");
            }
        })
    };
  2. Zip this file to a zip called index.zip.
  3. In your command prompt or terminal window, invoke the following command. You must substitute ARN for the role ARN you wrote down from the prior step: 
    aws lambda create-function --function-name ami-backup-function --runtime nodejs6.10 --handler index.handler --role ARN --zip-file fileb://index.zip --timeout 30
  4. You'll receive output details about the Lambda function you've just created. Write down the Function ARN value for later use.

Scheduling the Lambda function:

  1. In your command prompt or terminal window, invoke the following command:
    Note: Feel free to adjust the schedule expression for your own use.
    aws events put-rule --name ami-backup-event-rule --schedule-expression "rate(1 day)"
  2. You'll get the Rule ARN value back as output. Write this down for later.
  3. Run the following command. Substitute ARN for the Rule ARN you just wrote down:
    aws lambda add-permission --function-name ami-backup-function --statement-id LambdaPermission --action "lambda:InvokeFunction" --principal events.amazonaws.com --source-arn ARN
  4. Run the following command. Substitute ARN for the Function ARN of the Lambda function you wrote down:
    aws events put-targets --rule ami-backup-event-rule --targets "Id"="1","Arn"="ARN"

Remember, you must assign the appropriate tag to each EC2 instance you want windowed AMI backups for. Leave a comment if you run into any issues using this solution.

Cheers!

4. June 2017 20:42
by Aaron Medacco
0 Comments

Scheduling URL Requests w/ AWS Lambda

There's often a need to request URLs on a schedule. Whether it's performing a health check on a page, hitting an endpoint that crunches some data, or interacting with a public API of some kind, being able to automate this kind of behavior is often desired. Fortunately, this can be done quite easily with AWS Lambda. And because Lambda is serverless, you don't need to worry about machine failure, as you would if you brewed your own solution and ran it on a specific server or instance.

Automated URLs

In this post, I'll demonstrate how to build this by using AWS's Lambda service in conjunction with the HTTP and HTTPS libraries available from Node.js. We'll cover permission setup, function creation, and how to input the URLs you'd like to request on a schedule.

Let's get started.

Creating an IAM policy for access permissions:

  1. Navigate to IAM in your management console.
  2. Select "Policies" in the sidebar.
  3. Click "Create Policy".
  4. Select "Create Your Own Policy".
  5. Enter an appropriate policy name and description.
  6. Paste the following JSON into the policy document:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents"
          ],
          "Resource": "arn:aws:logs:*:*:*"
        }
      ]
    }
  7. Click "Create Policy".

Creating the IAM role for the Lambda function:

  1. Select "Roles" in the sidebar.
  2. Click "Create New Role".
  3. Enter an appropriate role name and click "Next Step".
  4. Select "AWS Lambda" within the AWS Service Roles.
  5. Change the filter to "Customer Managed", check the box of the policy you just created, and click "Next Step".
  6. Click "Create Role".

Creating the Lambda function:

  1. Navigate to Lambda in your management console.
  2. Click "Create a Lambda function".
  3. Select the "Blank Function" blueprint.
  4. Under "Configure triggers", click the grey box and select "CloudWatch Events - Schedule".
  5. Enter an appropriate rule name and description.
  6. Select the frequency you'd like your collection of URLs to be hit in the expression input. For instance, for daily, use "rate(1 day)".
  7. Check the box to "Enable trigger" and click "Next".
  8. Click "Next".
  9. Enter an appropriate function name and description. Select Node.js 6.10 for the runtime.
  10. Under "Lambda function code", select "Edit code inline" for the Code entry type and paste the following code in the box:
    exports.handler = (event, context, callback) => {
        var http = require("http");
        var https = require("https");
        // forEach gives each response handler its own closure over "entry",
        // so the URL logged always matches the request that completed. (A
        // plain for loop with "var" would share one loop variable across
        // every async callback.)
        event.urls.forEach(function(entry) {
            var protocol = entry.Protocol.toLowerCase();
            var url = protocol + "://" + entry.Domain + entry.QueryString;
            var client = (protocol === "https") ? https : http;
            client.get(url, function(res) {
                console.log("Response from " + url + ": ");
                console.log(res.statusCode);
                console.log(res.statusMessage);
            }).on("error", function(e) {
                console.log("Error requesting " + url + ": " + e.message);
            });
        });
    };
  11. Leave Handler as "index.handler".
  12. Choose to use an existing role and select the role you created earlier.
  13. Leave the other default values and click "Next".
  14. Click "Create function".
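The schedule expression from step 6 accepts either rate() or cron() syntax. A few common examples (cron times are UTC):

```text
rate(5 minutes)        # every 5 minutes
rate(1 hour)           # every hour
rate(1 day)            # once a day
cron(0 12 * * ? *)     # every day at 12:00 UTC
cron(0/15 * * * ? *)   # every 15 minutes
```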

Configuring the URLs to automate in CloudWatch:

  1. Navigate to CloudWatch in your management console.
  2. Select "Rules" in the sidebar.
  3. Click the name of the rule you created when creating the Lambda function.
  4. Click "Actions" -> "Edit".
  5. Under Targets, find your Lambda function and click "Configure input".
  6. Select "Constant (JSON text)".
  7. Paste JSON that conforms to this structure. Fill in the details for the URLs you would like to schedule. The following is an example:
    {
      "urls": [{
          "Protocol": "HTTP",
          "Domain": "www.aaronmedacco.com",
          "QueryString": ""
      }, {
          "Protocol": "HTTPS",
          "Domain": "www.google.com",
          "QueryString": "?key=value"
      }]
    }
  8. Remember to replace the above with your own URLs.
  9. Click "Configure details".
  10. Click "Update rule".
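If you want to sanity-check the JSON before pasting it into the rule, here's a small sketch (buildUrl is my own helper name, not part of the Lambda function) that assembles the same request URLs the function will hit from that structure:

```javascript
// Build the request URL for one entry of the constant JSON input.
// Uses the same Protocol/Domain/QueryString structure shown above.
function buildUrl(entry) {
    return entry.Protocol.toLowerCase() + "://" + entry.Domain + entry.QueryString;
}

var input = {
    "urls": [
        { "Protocol": "HTTP",  "Domain": "www.aaronmedacco.com", "QueryString": "" },
        { "Protocol": "HTTPS", "Domain": "www.google.com",       "QueryString": "?key=value" }
    ]
};

// Print each URL the scheduled function would request.
input.urls.forEach(function(entry) {
    console.log(buildUrl(entry));
});
```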

What if you want to configure different URLs on different schedules?

I'd recommend creating another Lambda function similar to this one and scheduling it with a separate CloudWatch event rule. Unless you want to get some kind of persistent storage involved or implement your own scheduling logic, it's easier to leverage the tools AWS has already built, namely event rules in CloudWatch.
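That said, multiple CloudWatch event rules can also target the same function, each with its own schedule and its own constant JSON input. For example, an hourly rule could pass a JSON payload like the following (status.example.com is a placeholder domain):

```json
{
  "urls": [{
      "Protocol": "HTTPS",
      "Domain": "status.example.com",
      "QueryString": ""
  }]
}
```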

Cheers!

Copyright © 2016-2017 Aaron Medacco