Automatic Backups for Your AWS Account's IAM Configuration

23. January 2017 23:15 by Aaron Medacco | 0 Comments

Having a backup of your IAM configuration on Amazon Web Services can be handy for a number of reasons. Like I said in my post on backing up your Route 53 hosted zones, there would need to be some kind of travesty for you to actually need an IAM backup to restore your access management. However, if you're able to take down the rogue admin who decides to trash your IAM configuration (and not the backups we'll create in this post), at least you'll have something to refer back to. You might also modify IAM policies, mess something up, and need to roll back. Finally, IAM config snapshots like these can be useful for auditing purposes, too. Either way, this solution is pretty simple: we'll leverage Lambda to call IAM via the Node.js SDK and save the configuration info as a JSON file in a designated S3 bucket. The function's trigger will be a simple, configurable CloudWatch Events schedule whose frequency you can change to suit your own requirements.

IAM Backup Solution

Let's get started.

Creating an IAM policy for access permissions:

  1. Navigate to IAM in your management console.
  2. Select "Policies" in the sidebar.
  3. Click "Create Policy".
  4. Select "Create Your Own Policy".
  5. Enter an appropriate policy name and description.
  6. Paste the following JSON into the policy document:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "iam:GetAccountAuthorizationDetails"
                ],
                "Resource": [
                    "*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject"
                ],
                "Resource": [
                    "arn:aws:s3:::Your Bucket ARN/*"
                ]
            }
        ]
    }
  7. Substitute "Your Bucket ARN" with the ARN for the S3 bucket you will be uploading the backups to. Make sure you add "/*" after the bucket ARN. For instance, if your bucket ARN was "arn:aws:s3:::skullthrone", you would use "arn:aws:s3:::skullthrone/*".
  8. Click "Create Policy".

Creating the IAM role for the Lambda function:

  1. Select "Roles" in the sidebar.
  2. Click "Create New Role".
  3. Enter an appropriate role name and click "Next Step".
  4. Select "AWS Lambda" within the AWS Service Roles.
  5. Change the filter to "Customer Managed", check the box of the policy you just created, and click "Next Step".
  6. Click "Create Role".

Creating the Lambda function:

  1. Navigate to Lambda in your management console.
  2. Click "Create a Lambda function".
  3. Select the "Blank Function" blueprint.
  4. Under "Configure triggers", click the grey box and select "CloudWatch Events - Schedule".
  5. Enter an appropriate rule name and description.
  6. Select how frequently you'd like Lambda to back up your account's IAM configuration using the Schedule expression input. I chose "rate(15 days)" for my usage.
  7. Check the box to "Enable trigger" and click "Next".
  8. Click "Next".
  9. Enter an appropriate function name and description. Select Node.js for the runtime.
  10. Under "Lambda function code", select "Edit code inline" for the Code entry type and paste the following code in the box:
    var AWS = require("aws-sdk");
    
    exports.handler = (event, context, callback) => {
        var iam = new AWS.IAM();
        var s3 = new AWS.S3();
        // Snapshot every user, group, role, and policy in the account.
        iam.getAccountAuthorizationDetails({}, function(err, data){
            if (err) {
                console.log(err, err.stack);
                callback(err);
            }
            else {
                // Build a YYYY-MM-DD prefix for the backup object's key.
                var today = new Date();
                var dd = today.getDate();
                var mm = today.getMonth() + 1;
                var yyyy = today.getFullYear();
                if (dd < 10) {
                    dd = "0" + dd;
                }
                if (mm < 10) {
                    mm = "0" + mm;
                }
                var destinationBucket = "Your Bucket Name";
                var objectName = yyyy + "-" + mm + "-" + dd + "-IAM-Config-Backup.json";
                var body = JSON.stringify(data);
                var uploadParams = { Bucket: destinationBucket, Key: objectName, Body: body, ContentType: "application/json", StorageClass: "STANDARD" };
                s3.upload(uploadParams, function(err, data) {
                    if (err) {
                        console.log(err, err.stack);
                        callback(err);
                    } else {
                        console.log("IAM configuration backup upload successful.");
                        callback(null, "Backup complete.");
                    }
                });
            }
        });
    };
  11. Substitute "Your Bucket Name" with the name of your bucket.
  12. Leave Handler as "index.handler".
  13. Choose to use an existing role and select the role you created earlier.
  14. Leave the other default values and click "Next".
  15. Click "Create function".

There are a few things to consider regarding this Lambda function. If you're managing an AWS account with a very substantial configuration (several hundred groups, users, roles, custom policies, etc.), you may discover that you need to increase the function's timeout and/or the amount of memory allocated to it. If you still run into issues after raising these values to their maximums, you can pass the Filter parameter to getAccountAuthorizationDetails and paginate the results to break the payload down into smaller pieces.
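If you go that route, here's a rough sketch (not part of the function above) of pulling the details one entity type at a time and following the pagination marker; "Role" is just an example, and the other valid Filter values are "User", "Group", "LocalManagedPolicy", and "AWSManagedPolicy":

    var AWS = require("aws-sdk");
    var iam = new AWS.IAM();
    
    // Fetch authorization details for a single entity type, following the
    // pagination marker until every page has been retrieved.
    var fetchDetails = function(entityType, marker, pages, done) {
        var params = { Filter: [entityType] };
        if (marker) {
            params.Marker = marker;
        }
        iam.getAccountAuthorizationDetails(params, function(err, data) {
            if (err) {
                return done(err);
            }
            pages.push(data);
            if (data.IsTruncated) {
                fetchDetails(entityType, data.Marker, pages, done);
            } else {
                done(null, pages);
            }
        });
    };
    
    fetchDetails("Role", null, [], function(err, pages) {
        if (err) {
            console.log(err, err.stack);
        } else {
            console.log("Retrieved " + pages.length + " page(s) of role details.");
        }
    });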

And like all backups stored in S3, configuring a lifecycle policy is something you should consider if storage costs are a concern. I'd recommend keeping your backups indefinitely but using a lifecycle rule that moves them to the Infrequent Access storage class of S3. This way, you don't pay the full cost of S3 Standard for files you hopefully never have to access.
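If you'd rather script the lifecycle rule than click through the S3 console, here's a minimal sketch using the Node.js SDK; the bucket name and rule ID are placeholders, and 30 days is just an example (it's also the minimum age S3 allows for a transition to Infrequent Access):

    var AWS = require("aws-sdk");
    var s3 = new AWS.S3();
    
    // Transition backup objects to Standard - Infrequent Access after 30 days.
    var params = {
        Bucket: "your-backup-bucket",
        LifecycleConfiguration: {
            Rules: [
                {
                    ID: "ArchiveIamBackups",
                    Filter: { Prefix: "" }, // apply to every object in the bucket
                    Status: "Enabled",
                    Transitions: [
                        { Days: 30, StorageClass: "STANDARD_IA" }
                    ]
                }
            ]
        }
    };
    s3.putBucketLifecycleConfiguration(params, function(err, data) {
        if (err) {
            console.log(err, err.stack);
        } else {
            console.log("Lifecycle rule applied.");
        }
    });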

Cheers!

Automatically Converting Your Text Files to Speech MP3s w/ AWS Polly

21. January 2017 21:12 by Aaron Medacco | 0 Comments

AWS Polly is a new service announced at AWS re:Invent 2016 that converts text to speech almost instantly. If you have this kind of business requirement, you probably don't want to pull it off by manually going file by file. Therefore, I've created a solution that automatically converts text files (.txt) you upload to an S3 bucket into audio files (.mp3) with the same name in a separate bucket. A definite time-saver if you're doing this to thousands of files a day.

Lambda Polly Diagram

Let's get started.

Creating an IAM policy for access permissions:

  1. Navigate to IAM in your management console.
  2. Select "Policies" in the sidebar.
  3. Click "Create Policy".
  4. Select "Create Your Own Policy".
  5. Enter an appropriate policy name and description.
  6. Paste the following JSON into the policy document:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "polly:SynthesizeSpeech"
                ],
                "Resource": [
                    "*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject"
                ],
                "Resource": [
                    "Your Source Bucket ARN/*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject"
                ],
                "Resource": [
                    "Your Destination Bucket ARN/*"
                ]
            }
        ]
    }
  7. Substitute "Your Source Bucket ARN" with the ARN for the S3 bucket you will be uploading text files to. Make sure you add "/*" after the bucket ARN. For instance, if your bucket ARN was "arn:aws:s3:::clownshoes", you would use "arn:aws:s3:::clownshoes/*".
  8. Substitute "Your Destination Bucket ARN" with the ARN for the S3 bucket where you want the generated speech files to be placed. Make sure you add "/*". For instance, if your bucket ARN was "arn:aws:s3:::clownshoes", you would use "arn:aws:s3:::clownshoes/*".
  9. Click "Create Policy".

Creating the IAM role for the Lambda function:

  1. Select "Roles" in the sidebar.
  2. Click "Create New Role".
  3. Enter an appropriate role name and click "Next Step".
  4. Select "AWS Lambda" within the AWS Service Roles.
  5. Change the filter to "Customer Managed", check the box of the policy you just created, and click "Next Step".
  6. Click "Create Role".

Creating the Lambda function:

  1. Navigate to Lambda in your management console.
  2. Click "Create a Lambda function".
  3. Select the "Blank Function" blueprint.
  4. Under "Configure triggers", click the grey box and select "S3".
  5. Select the source bucket you'll be uploading text files to for the Bucket.
  6. Select "Put" for the Event type.
  7. Check the box to "Enable trigger" and click "Next".
  8. Click "Next".
  9. Enter an appropriate function name and description. Select Node.js for the runtime.
  10. Under "Lambda function code", select "Edit code inline" for the Code entry type and paste the following code in the box:
    var AWS = require("aws-sdk");
    
    exports.handler = (event, context, callback) => {
        var s3 = new AWS.S3();
        var polly = new AWS.Polly();
        var destinationBucket = "Destination Bucket Name";
        // S3 event notifications URL-encode object keys, so decode before using.
        var objectKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
        var params = {
            Bucket: event.Records[0].s3.bucket.name,
            Key: objectKey
        };
        // Read the uploaded text file from the source bucket.
        s3.getObject(params, function(err, data) {
            if (err) {
                console.log(err, err.stack);
            }
            else {
                var objectNameMp3 = objectKey.replace(".txt", ".mp3");
                var pollyParams = {
                    OutputFormat: "mp3",
                    SampleRate: "8000",
                    Text: data.Body.toString("utf-8"),
                    TextType: "text",
                    VoiceId: "Joanna"
                };
                // Convert the text to speech, then upload the audio to the destination bucket.
                polly.synthesizeSpeech(pollyParams, function(err, data) {
                    if (err) {
                        console.log(err, err.stack);
                    }
                    else {
                        var uploadParams = { Bucket: destinationBucket, Key: objectNameMp3, Body: data.AudioStream, ContentType: "audio/mpeg", StorageClass: "STANDARD" };
                        s3.upload(uploadParams, function(err, data) {
                            if (err) {
                                console.log(err, err.stack);
                            } else {
                                console.log("Speech file upload successful.");
                            }
                        });
                    }
                });
            }
        });
    };
  11. Substitute "Destination Bucket Name" with the name of the bucket you want the audio files to be placed in.
  12. Leave Handler as "index.handler".
  13. Choose to use an existing role and select the IAM role you created earlier.
  14. Leave the other default values and click "Next".
  15. Click "Create function".

Let's test it out!

We'll create a text file with some text.

Text File Contents

Then upload the file to our source bucket. Navigate to the destination bucket in S3, and...
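If you'd prefer to script the test instead of using the console, a quick sketch like the following uploads a sample text file to the source bucket; the bucket name and file contents here are made up:

    var AWS = require("aws-sdk");
    var s3 = new AWS.S3();
    
    // Upload a sample text file to the source bucket to trigger the function.
    var params = {
        Bucket: "your-source-bucket",
        Key: "hello.txt",
        Body: "Hello from Polly!",
        ContentType: "text/plain"
    };
    s3.upload(params, function(err, data) {
        if (err) {
            console.log(err, err.stack);
        } else {
            console.log("Test file uploaded. Look for hello.mp3 in the destination bucket.");
        }
    });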

Side Note: While playing with AWS Polly, I was actually reminded of the accreditation courses found within the APN portal, which use similar-sounding voices for their audio. Now I'm curious whether Amazon "dogfooded" Polly and used it when creating their partner training. I guess if you weren't interested in recording audio for video courses, you could stick to PowerPoint for the visuals, write the script, and then use Polly to convert the text into audio played during the slides.

Cheers!

Patterns for Utilizing Resource Groups Within AWS

19. January 2017 23:31 by Aaron Medacco | 0 Comments

Resource groups are a common way to organize and manage assets inside your AWS account. If you haven't provisioned a significant amount of infrastructure within one account, there's a good chance you haven't found it necessary to use resource groups yet. Admittedly, they aren't terribly useful when you're only leveraging AWS to host a small web application, API project, or just storing some photos on S3. However, as you deploy more and more resources on Amazon Web Services, things can quickly become unwieldy. Once this happens, resource groups can be a great way to fight the chaos. The following is a list of patterns I came up with for organizing resources when choosing to use resource groups:

AWS Resource Groups

  • Organize infrastructure based on type of environment. (Development, Staging, Production)
  • Organize infrastructure based on which department resources apply to. (Human Resources, Management, Marketing)
  • Organize infrastructure based on what project resources belong to. (Company Online Store, Joe's Calculator App, Proprietary Business Intelligence)
  • Organize infrastructure based on versions of an architecture. (Legacy, Current, Experimental)
  • Organize infrastructure based on architecture role. (All Databases, All Storage, Compute, Networking)
  • Organize infrastructure based on continent. (Combine US Regions, Combine EU, Combine Asia)
  • Organize infrastructure based on who it services. (Internal Applications vs. Customer-Facing Applications)
  • Organize infrastructure based on maintenance frequency. ("I always have to change this." vs. "What? We still have this server?")
  • Organize infrastructure based on different development team environments. (Java Team, .NET Team)

I should mention you won't get a whole lot of utility out of resource groups without regular use of tagging. In fact, this feature definitely reinforces the best practice of naming your stuff. Not only will it be easier to discern what everything is used for when browsing your list of EC2 instances or security groups, but it will also allow you to be more granular in how you define resource groups. While you can separate groups based on a resource's type and/or the region it exists in out of the box, the real customization lies in being able to filter based on tags.

Simply using a "Name" tag may not be enough depending on how much you have deployed and the level of organization you're trying to achieve. Suppose you have Amazon Web Services powering 20 different projects that all have unique architectures and maintain development, staging, and production environments across 10 different regions. If you haven't set up any resource groups by that time, yikes. And worse, if you're still not naming and tagging everything according to some convention, well, good luck to you.
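The tagging itself is easy to script if you have a lot of catching up to do. Here's a hedged sketch using the Node.js SDK to apply the kind of tags these groups would filter on; the instance ID and tag values are hypothetical:

    var AWS = require("aws-sdk");
    var ec2 = new AWS.EC2();
    
    // Apply Environment and Project tags so resource groups can filter on them.
    var params = {
        Resources: ["i-0123456789abcdef0"],
        Tags: [
            { Key: "Environment", Value: "Production" },
            { Key: "Project", Value: "Company Online Store" }
        ]
    };
    ec2.createTags(params, function(err, data) {
        if (err) {
            console.log(err, err.stack);
        } else {
            console.log("Tags applied.");
        }
    });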

If anyone has a pattern that isn't on here, leave it in the comments.

Cheers!

Attaching An EBS Volume to a Windows Server EC2 Instance

17. January 2017 21:44 by Aaron Medacco | 0 Comments

Here's a quick tutorial for anyone who's never had to set up an EBS volume beyond what was already attached:

Creating the EBS volume:

  1. Navigate to EC2 in your management console.
  2. Select "Volumes" under Elastic Block Store in the sidebar.
  3. Click "Create Volume".
  4. Enter the details for the new volume. For this demo, I'm using a 1 GiB GP2 volume.
  5. Click "Create".

You may have to wait a moment for the volume to be created. I had to refresh my EC2 dashboard until the new volume showed up.

Attaching the EBS volume:

  1. (Optional) Name your volume so you and others managing the account keep their sanity.
  2. Select the volume you just created.
  3. Click "Actions" and click "Attach Volume".
  4. Select your Windows EC2 instance and enter a device name. For Windows, this is "xvdf" through "xvdp".
  5. Click "Attach".

Again, you will have to wait a bit before the volume is usable from the instance.
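Everything above is done through the console, but if you end up doing this often, the create/attach steps can be scripted. Here's a sketch using the Node.js SDK; the Availability Zone and instance ID are placeholders for your own:

    var AWS = require("aws-sdk");
    var ec2 = new AWS.EC2();
    
    // Create a 1 GiB gp2 volume, wait for it to become available, then attach it.
    ec2.createVolume({ AvailabilityZone: "us-east-1a", Size: 1, VolumeType: "gp2" }, function(err, volume) {
        if (err) {
            return console.log(err, err.stack);
        }
        ec2.waitFor("volumeAvailable", { VolumeIds: [volume.VolumeId] }, function(err, data) {
            if (err) {
                return console.log(err, err.stack);
            }
            var params = {
                Device: "xvdf", // "xvdf" through "xvdp" for Windows instances
                InstanceId: "i-0123456789abcdef0",
                VolumeId: volume.VolumeId
            };
            ec2.attachVolume(params, function(err, data) {
                if (err) {
                    console.log(err, err.stack);
                } else {
                    console.log("Volume attached as " + params.Device + ".");
                }
            });
        });
    });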

Initializing the EBS volume:

  1. Remote Desktop to your EC2 instance.
  2. Start Server Manager.
  3. Click "Tools" on the top-right and click "Computer Management".
  4. Click "Disk Management" on the left.
    Disk Management 1
  5. Find your new disk in the bottom panel.
  6. If it is offline, right click it and set it to online.
  7. Right click the left panel of the disk, and click "Initialize Disk".
    Disk Management 2
  8. Select the partition style and click "OK".
  9. Right click the right panel of the disk, and select what type of volume you want. For mine, I selected "New Simple Volume...".
  10. Complete the wizard, and you're ready to use your new volume.
    Disk Management 2

Yes, that's almost 1 whole gigabyte of space.

Cheers!

AWS Lambda Functions That Dynamically Schedule Their Next Runtime

15. January 2017 15:41 by Aaron Medacco | 3 Comments

I saw a question on Stack Overflow the other day where someone asked if they could run Lambda functions on a dynamic schedule. AWS Lambda is event-driven, and Amazon accepts a variety of event types for triggering any particular function. For functions that should run on a schedule, you can use CloudWatch event rules, but those rules must adhere to a fixed pattern; for instance, once every 15 minutes, or every Friday at 3:00am.
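For reference, the two fixed forms look like this; the first runs every 15 minutes, and the second runs every Friday at 3:00am UTC (the cron fields are minutes, hours, day-of-month, month, day-of-week, and year):

    rate(15 minutes)
    cron(0 3 ? * FRI *)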

What if you want code to execute on a consistent basis where the amount of time between executions is variable and isn't known until the prior run executes? That is, you might wait 10 minutes to execute a function the first time, then determine the next execution should occur in 40 minutes, then the next in 3 days, then 4 hours, etc.

Lambda Dynamic Schedule

Creating a solution for this without resorting to continually polling for the next determined run time (from, say, a database or S3 object) seemed like a cool problem to solve. Fortunately, since we can modify CloudWatch event rules using the AWS SDKs, we can change our event schedule on the fly each time our Lambda function is invoked. We insert the logic we need to determine subsequent run times, perform whatever work is required, then configure the event rule to fire again when we need it to. In this case, I'm choosing to randomly set the next run time to either 3, 5, or 7 minutes in the future on each run.

Let's get started.

Creating an IAM policy for access permissions:

  1. Navigate to IAM in your management console.
  2. Select "Policies" in the sidebar.
  3. Click "Create Policy".
  4. Select "Create Your Own Policy".
  5. Enter an appropriate policy name and description.
  6. Paste the following JSON into the policy document:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "events:PutRule"
                ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }
  7. Click "Create Policy".

Creating the IAM role for the Lambda function:

  1. Select "Roles" in the sidebar.
  2. Click "Create New Role".
  3. Enter an appropriate role name and click "Next Step".
  4. Select "AWS Lambda" within the AWS Service Roles.
  5. Change the filter to "Customer Managed", check the box of the policy you just created, and click "Next Step".
  6. Click "Create Role".

Creating the Lambda function:

  1. Navigate to Lambda in your management console.
  2. Click "Create a Lambda function".
  3. Select the "Blank Function" blueprint.
  4. Under "Configure triggers", click the grey box and select "CloudWatch Events - Schedule".
  5. Enter an appropriate rule name and description.
  6. Enter a placeholder expression for the Schedule expression input. We'll be changing this later.
  7. Check the box to "Enable trigger" and click "Next".
  8. Enter an appropriate function name and description. Select Node.js for the runtime.
  9. Under "Lambda function code", select "Edit code inline" for the Code entry type and paste the following code in the box:
    var AWS = require("aws-sdk");
    
    exports.handler = function(event, context) {
        var cloudwatchevents = new AWS.CloudWatchEvents();
        // Randomly pick the next interval: 3, 5, or 7 minutes from now.
        var intervals = [3, 5, 7];
        var nextInterval = intervals[Math.floor(Math.random() * intervals.length)];
        var currentTime = new Date().getTime(); // UTC time (Lambda runs in UTC)
        var nextTime = dateAdd(currentTime, "minute", nextInterval);
        var nextMinutes = nextTime.getMinutes();
        var nextHours = nextTime.getHours();
        
        //  =================================
        //  DO YOUR WORK HERE
        //  =================================
        
        // Rewrite the rule's schedule so it fires again at the computed time.
        var scheduleExpression = "cron(" + nextMinutes + " " + nextHours + " * * ? *)";
        var params = {
            Name: "Your CloudWatch Rule Name",
            ScheduleExpression: scheduleExpression
        };
        cloudwatchevents.putRule(params, function(err, data) {
            if (err) {
                console.log(err, err.stack);
            }
            else {
                console.log(data);
            }
        });
    };
    
    // Returns a new Date offset from the given date by the specified units.
    var dateAdd = function(date, interval, units) {
        var ret = new Date(date); // don't change original date
        switch (interval.toLowerCase()) {
            case "year"   :  ret.setFullYear(ret.getFullYear() + units);  break;
            case "quarter":  ret.setMonth(ret.getMonth() + 3 * units);  break;
            case "month"  :  ret.setMonth(ret.getMonth() + units);  break;
            case "week"   :  ret.setDate(ret.getDate() + 7 * units);  break;
            case "day"    :  ret.setDate(ret.getDate() + units);  break;
            case "hour"   :  ret.setTime(ret.getTime() + units * 3600000);  break;
            case "minute" :  ret.setTime(ret.getTime() + units * 60000);  break;
            case "second" :  ret.setTime(ret.getTime() + units * 1000);  break;
            default       :  ret = undefined;  break;
        }
        return ret;
    };
  10. Leave Handler as "index.handler".
  11. Choose to use an existing role and select the IAM role you created earlier.
  12. Leave the other default values and click "Next".
  13. Click "Create function".

Modifying the Lambda function:

Now that the Lambda function and the CloudWatch event rule are created, you need to revisit the function code and substitute "Your CloudWatch Rule Name" with the name of the CloudWatch rule that was just created.

Optional:

You should go back into the IAM policy you created earlier and change the resource value for the statement granting "PutRule" permissions from "*" to the ARN of your CloudWatch event rule. This follows the best practice of granting the least required permissions to IAM entities.
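For example, the tightened statement would look something like the following, where the region, account ID, and rule name are made up and should be swapped for your own:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "events:PutRule"
                ],
                "Resource": [
                    "arn:aws:events:us-east-1:123456789012:rule/Your-Rule-Name"
                ]
            }
        ]
    }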

Cheers!

Copyright © 2016-2017 Aaron Medacco