4. August 2017 01:44
by Aaron Medacco
0 Comments

Use Bzip2 Compression w/ AWS Athena

For those using Amazon Athena to query their S3 data, it's no secret that you can save money and boost performance by using compression and columnar data formats whenever possible. Currently, Amazon's documentation doesn't yet list Bzip2 as a supported compression format for Athena; however, it is absolutely supported.

This was confirmed in Jeff Barr's post on optimizing performance with Athena, which shows that Bzip2 is a splittable data format, allowing you to take advantage of multiple readers. Compression formats that are not splittable don't have this benefit, so it stands to reason that you should use Bzip2 if you aren't using a columnar format such as ORC or Parquet.

Amazon Athena

In this post, I'll show how easy it is to compress your data to this format. Once this is done, just upload your data to S3, define a schema within the Athena catalog which points to the location you upload the compressed files to, and query away.

For Windows users:

  1. Download 7-zip here. Once installed, you'll be able to invoke 7-zip from File Explorer by right-clicking on files > 7-Zip > Add to archive....
  2. Select "bzip2" as the Archive format with "BZip2" as the Compression method: 

    Compress To Bzip2
  3. Click OK. 

For Linux users:

  1. Open a command prompt and change your working directory to the directory containing the files you want to compress.
  2. Invoke the following command: 
    bzip2 file.csv
    If you want to compress multiple files, you can list them out:
    bzip2 file.csv file2.csv file3.csv
    For more information on other bzip2 options, check this out.

Those of you interested in using the recommended columnar storage formats should check out the AWS documentation, which shows how you can spin up an EMR cluster to convert data to Parquet.

Cheers!

22. July 2017 19:05
by Aaron Medacco
0 Comments

Scheduling Notifications for Rotating Old IAM Access Keys

Access keys allow you to give programmatic access to a user so they can accomplish tasks and interact with services within your AWS environment. These keys should be heavily guarded and kept secret. Once exposed, anyone can wreak havoc in your AWS account using any permissions that were granted to the user whose keys were exposed.

A best practice for security is to rotate these keys regularly. We want to keep our access keys fresh and not have old or unused keys running about waiting to be abused. Therefore, I've created an automated method for notifying an administrator when user access keys are old and should be rotated. Since the number of days for keys to be considered "old" varies across organizations, I've included a variable that can be configured to fit the reader's requirements.

IAM Access Key Rotation

Like my recent posts, this will make use of the AWS CLI, which I assume you've installed and configured already. We'll be using (of course) Lambda backed by a CloudWatch event rule. Notifications will be sent using the SNS service when keys should be rotated. Enjoy.

Creating an SNS topic:

  1. Open up a command prompt or terminal window, and invoke the following command:
    aws sns create-topic --name IAM-Access-Key-Rotation-Topic
  2. You'll get back a Topic ARN value, which you'll want to keep for the following steps.
  3. Then invoke the following command. Substitute Email for the email address you want to receive the notifications.
    Note: You don't have to use email if you don't want to. Feel free to use whichever protocol/endpoint fits you.
    aws sns subscribe --topic-arn ARN --protocol email --notification-endpoint Email
  4. The email address will receive a message asking you to confirm the subscription. You'll need to confirm the subscription before moving on if you want notifications to go out.

Creating an IAM policy for access permissions:

  1. Create a file named iam-policy.json with the following contents and save it in your working directory. Substitute Your Topic ARN for the ARN of the topic you just created:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "sns:Publish"
                ],
                "Resource": [
                    "Your Topic ARN"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "iam:ListAccessKeys",
                    "iam:ListUsers"
                ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }
  2. In your command prompt or terminal window, execute the following command:
    aws iam create-policy --policy-name rotate-old-access-keys-notification-policy --policy-document file://iam-policy.json
  3. You'll receive details for the policy you just created. Write down the ARN value. You will need it in a later step.

Creating the IAM role for the Lambda function:

  1. Create a file named role-trust-policy.json with the following contents and save it in your working directory:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  2. In your command prompt or terminal window, invoke the following command:
    aws iam create-role --role-name rotate-old-access-keys-notification-role --assume-role-policy-document file://role-trust-policy.json
  3. You'll get back information about the role you just made. Write down the ARN value for the role. You will need it when we go to create the Lambda function.
  4. Invoke this command to attach the policy to the role. Substitute ARN for the policy ARN you received when you created the IAM policy.
    aws iam attach-role-policy --policy-arn ARN --role-name rotate-old-access-keys-notification-role

Creating the Lambda function:

  1. Create a file named index.js with the following contents and save it in your working directory. Make sure you substitute Topic ARN with the ARN of the topic you created in the first step.
    Note: Notice you can configure the number of days allowed before rotations should be done. The default value I set is 90.
    var AWS = require("aws-sdk");
    var iam = new AWS.IAM();
    var sns = new AWS.SNS();
    
    var daysBeforeRotationRequired = 90; // Number of days from access creation before action is taken on access keys.
    
    exports.handler = (event, context, callback) => {
        var listUsersParams = {};
        iam.listUsers(listUsersParams, function(err, data) {
            if (err) {
                console.log(err, err.stack);
            }
            else {
                for (var i = 0; i < data.Users.length; i++) {
                    var userName = data.Users[i].UserName;
                    var listAccessKeysParams = { UserName: userName };
                    iam.listAccessKeys(listAccessKeysParams, function(err, data) {
                        if (err) {
                            console.log(err, err.stack);
                        } 
                        else {
                            for (var j = 0; j < data.AccessKeyMetadata.length; j++) {
                                var accessKeyId = data.AccessKeyMetadata[j].AccessKeyId;
                                var creationDate = data.AccessKeyMetadata[j].CreateDate;
                                var accessKeyUserName = data.AccessKeyMetadata[j].UserName;
                                var now = new Date();
                                var whenRotationShouldOccur = dateAdd(creationDate, "day", daysBeforeRotationRequired);
                                if (now > whenRotationShouldOccur) {
                                    var message = "You need to rotate access key: " + accessKeyId + " for user: " + accessKeyUserName;
                                    console.log(message);
                                    var publishParams = {
                                        Message: message,
                                        Subject: "Access Keys Need Rotating For User: " + accessKeyUserName,
                                        TopicArn: "Topic ARN"
                                    };
                                    // Use a plain callback rather than context.done, which would end
                                    // the function after the first notification is published.
                                    sns.publish(publishParams, function(err, data) {
                                        if (err) {
                                            console.log(err, err.stack);
                                        }
                                    });
                                }
                                else {
                                    console.log("No access key rotation necessary for: " + accessKeyId + " for user: " + accessKeyUserName);
                                }
                            }
                        }
                    });
                }
            }
        });
    };
    
    var dateAdd = function(date, interval, units) {
        var ret = new Date(date); // don't change original date
        switch(interval.toLowerCase()) {
            case 'year'   :  ret.setFullYear(ret.getFullYear() + units);  break;
            case 'quarter':  ret.setMonth(ret.getMonth() + 3*units);  break;
            case 'month'  :  ret.setMonth(ret.getMonth() + units);  break;
            case 'week'   :  ret.setDate(ret.getDate() + 7*units);  break;
            case 'day'    :  ret.setDate(ret.getDate() + units);  break;
            case 'hour'   :  ret.setTime(ret.getTime() + units*3600000);  break;
            case 'minute' :  ret.setTime(ret.getTime() + units*60000);  break;
            case 'second' :  ret.setTime(ret.getTime() + units*1000);  break;
            default       :  ret = undefined;  break;
        }
        return ret;
    }
  2. Zip this file into an archive named index.zip.
  3. Bring your command prompt or terminal window back up, and execute the following command. Substitute ARN for the role ARN you received from the step where we created the role:
    aws lambda create-function --function-name rotate-old-access-keys-notification-sender --runtime nodejs6.10 --handler index.handler --role ARN --zip-file fileb://index.zip --timeout 30
  4. Once the function is created, you'll get back details for the function. Write down the ARN value for the function for when we schedule the function execution.
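
The age check in the function above boils down to simple date arithmetic. Here is a standalone sketch of that logic without the AWS SDK (isRotationDue is a hypothetical helper name, equivalent to the dateAdd comparison in the handler):

```javascript
// Returns true when an access key created on creationDate is more than
// maxAgeDays days old as of the supplied "now" date.
function isRotationDue(creationDate, maxAgeDays, now) {
    var due = new Date(creationDate); // don't mutate the original date
    due.setDate(due.getDate() + maxAgeDays); // same as dateAdd(creationDate, "day", maxAgeDays)
    return now > due;
}

// A key created 120 days ago exceeds a 90-day window but not a 180-day one.
var now = new Date("2017-07-22T00:00:00Z");
var created = new Date("2017-03-24T00:00:00Z");
console.log(isRotationDue(created, 90, now));  // true
console.log(isRotationDue(created, 180, now)); // false
```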

Scheduling the Lambda function:

  1.  In your command prompt or terminal window, execute the following command:
    Note: Feel free to adjust the schedule expression for your own purposes.
    aws events put-rule --name Rotate-Old-Access-Keys-Notification-Scheduler --schedule-expression "rate(1 day)"
  2. Write down the ARN value for the rule upon creation finishing.
  3. Run the following command. Substitute ARN for the Rule ARN you just received:
    aws lambda add-permission --function-name rotate-old-access-keys-notification-sender --statement-id LambdaPermission --action "lambda:InvokeFunction" --principal events.amazonaws.com --source-arn ARN
  4. Now run the following command. Substitute ARN for that of the Lambda function you created in the previous step:
    aws events put-targets --rule Rotate-Old-Access-Keys-Notification-Scheduler --targets "Id"="1","Arn"="ARN"

For information regarding ways to rotate your access keys without affecting your applications, click here.

Cheers!

14. July 2017 03:22
by Aaron Medacco
0 Comments

Copying Multiple AMIs From One AWS Region to Another

It's common to want to move AMIs between regions to provision copies of instances in a new region. You can copy an image from one region to another in the web console, but only one at a time. At least this was the case for me. Therefore, I wrote the following batch file, which will allow you to copy any number of AMIs to another region.

This post assumes you have installed and configured the AWS CLI appropriately and that you have permission to invoke the copy-image action for EC2.

Creating a batch script and text file:

  1. Create a file named amis.txt and write the IDs of each AMI you want to copy on a new line and save it in your working directory. For example:
    ami-xxxxxxxx
    ami-xxxxxxxx
  2. Create a file named copy-amis.bat with the following contents and substitute the appropriate values for the source region you're copying AMIs from and the destination region you're copying AMIs to:
    REM Delayed expansion is required so !ami-name! is re-evaluated on each loop iteration.
    SETLOCAL ENABLEDELAYEDEXPANSION
    SET source-region=us-east-1
    SET dest-region=us-east-2
    FOR /F "tokens=*" %%A IN (amis.txt) DO (
    	ECHO Copying AMI %%A from %source-region% to %dest-region%
    	SET ami-name=%%A-copy-from-%source-region%
    	aws ec2 copy-image --source-image-id %%A --source-region %source-region% --region %dest-region% --name !ami-name!
    )
    ECHO Finished copying AMIs from %source-region% to %dest-region%
  3. Run the batch file.

That's it. You should see copies of your images in the destination region shortly after.
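
If you'd rather script this with the AWS SDK for Node.js instead of the CLI, the parameter construction is straightforward. A minimal sketch (buildCopyImageParams is a hypothetical helper mirroring the batch script's naming convention):

```javascript
// Build one ec2.copyImage parameter object per AMI ID, using the same
// "<ami-id>-copy-from-<region>" naming convention as the batch script.
function buildCopyImageParams(amiIds, sourceRegion) {
    return amiIds.map(function (id) {
        return {
            SourceImageId: id,
            SourceRegion: sourceRegion,
            Name: id + "-copy-from-" + sourceRegion
        };
    });
}

var params = buildCopyImageParams(["ami-11111111", "ami-22222222"], "us-east-1");
console.log(params[0].Name); // ami-11111111-copy-from-us-east-1
// Each element could then be passed to ec2.copyImage(p, callback) on an EC2
// client constructed for the destination region.
```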

Cheers!

3. July 2017 22:25
by Aaron Medacco
15 Comments

Scheduling Automated AMI Backups of Your EC2 Instances

When considering disaster recovery options for systems or applications running on Amazon Web Services, a frequent solution is to use AMIs to restore instances to a known acceptable state in the event of failure or catastrophe. If your team has decided on this approach, you'll want to automate the creation and maintenance of these AMIs to prevent mistakes or somebody "forgetting" to do the task. In this post, I'll walk through how to set this up in AWS within a matter of minutes using Amazon's serverless compute offering, Lambda.

Automated AMI Backups

In a departure from many of my other posts involving Lambda, the following steps will make use of the AWS CLI, so I will assume that you've already installed and configured it on your machine. This is to grant longevity to these posts and to protect their relevance from slipping due to the browser-based console's rate of change.

I have also added some flexibility in the maintenance of these backups, which I encourage the reader to configure to their liking. These include a configurable number of backups to maintain for each EC2 instance you wish to have AMIs taken of, a customizable tag you can assign to the EC2 instances you'd like to back up, and the option to also delete the snapshots of the AMIs being deregistered once they exit the backup window. I have included default values, but I still encourage you to read the options before implementing this solution.

Let's get started.

Creating an IAM policy for access permissions:

  1. Create a file named iam-policy.json with the following contents and save it in your working directory:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Stmt1499061014000",
                "Effect": "Allow",
                "Action": [
                    "ec2:CreateImage",
                    "ec2:CreateTags",
                    "ec2:DeleteSnapshot",
                    "ec2:DeregisterImage",
                    "ec2:DescribeImages",
                    "ec2:DescribeInstances"
                ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }
  2. In your command prompt or terminal window, invoke the following command:
    aws iam create-policy --policy-name ami-backup-policy --policy-document file://iam-policy.json
  3. You'll receive output with details of the policy you've just created. Write down the ARN value as you will need it later.

Creating the IAM role for the Lambda function:

  1. Create a file named role-trust-policy.json with the following contents and save it in your working directory:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  2. In your command prompt or terminal window, invoke the following command:
    aws iam create-role --role-name ami-backup-role --assume-role-policy-document file://role-trust-policy.json
  3. You'll receive output with details of the role you've just created. Be sure to write down the role ARN value provided. You'll need it later.
  4. Run the following command to attach the policy to the role. You must substitute ARN for the policy ARN you wrote down from the prior step: 
    aws iam attach-role-policy --policy-arn ARN --role-name ami-backup-role

Creating the Lambda function:

  1.  Create a file named index.js with the following contents and save it in your working directory:
    Note: The following file is the code managing your AMI backups. There are a number of configurable options to be aware of and I have commented descriptions of each in the code. 
    var AWS = require("aws-sdk");
    var ec2 = new AWS.EC2();
    
    var numBackupsToRetain = 2; // The Number Of AMI Backups You Wish To Retain For Each EC2 Instance.
    var instancesToBackupTagName = "BackupAMI"; // Tag Key Attached To Instances You Want AMI Backups Of. Tag Value Should Be Set To "Yes".
    var imageBackupTagName = "ScheduledAMIBackup"; // Tag Key Attached To AMIs Created By This Process. This Process Will Set Tag Value To "True".
    var imageBackupInstanceIdentifierTagName = "ScheduledAMIInstanceId"; // Tag Key Attached To AMIs Created By This Process. This Process Will Set Tag Value To The Instance ID.
    var deleteSnaphots = true; // True if you want to delete snapshots during cleanup. False if you want to only deregister the AMI and leave its snapshots intact.
    
    exports.handler = function(event, context) {
        var describeInstancesParams = {
            DryRun: false,
            Filters: [{
                Name: "tag:" + instancesToBackupTagName,
                Values: ["Yes"]
            }]
        };
        ec2.describeInstances(describeInstancesParams, function(err, data) {
            if (err) {
                console.log("Failure retrieving instances.");
                console.log(err, err.stack); 
            }
            else {
                for (var i = 0; i < data.Reservations.length; i++) {
                    for (var j = 0; j < data.Reservations[i].Instances.length; j++) {
                        var instanceId = data.Reservations[i].Instances[j].InstanceId;
                        createImage(instanceId);
                    }
                }
            }
        });
        cleanupOldBackups();
    };
    
    var createImage = function(instanceId) {
        console.log("Found Instance: " + instanceId);
        var createImageParams = {
            InstanceId: instanceId,
            Name: "AMI Scheduled Backup I(" + instanceId + ") T(" + new Date().getTime() + ")",
            Description: "AMI Scheduled Backup for Instance (" + instanceId + ")",
            NoReboot: true,
            DryRun: false
        };
        ec2.createImage(createImageParams, function(err, data) {
            if (err) {
                console.log("Failure creating image request for Instance: " + instanceId);
                console.log(err, err.stack);
            }
            else {
                var imageId = data.ImageId;
                console.log("Success creating image request for Instance: " + instanceId + ". Image: " + imageId);
                var createTagsParams = {
                    Resources: [imageId],
                    Tags: [{
                        Key: "Name",
                        Value: "AMI Backup I(" + instanceId + ")"
                    },
                    {
                        Key: imageBackupTagName,
                        Value: "True"
                    },
                    {
                        Key: imageBackupInstanceIdentifierTagName,
                        Value: instanceId
                    }]
                };
                ec2.createTags(createTagsParams, function(err, data) {
                    if (err) {
                        console.log("Failure tagging Image: " + imageId);
                        console.log(err, err.stack);
                    }
                    else {
                        console.log("Success tagging Image: " + imageId);
                    }
                });
            }
        });
    };
    
    var cleanupOldBackups = function() {
        var describeImagesParams = {
            DryRun: false,
            Filters: [{
                Name: "tag:" + imageBackupTagName,
                Values: ["True"]
            }]
        };
        ec2.describeImages(describeImagesParams, function(err, data) {
            if (err) {
                console.log("Failure retrieving images for deletion.");
                console.log(err, err.stack); 
            }
            else {
                var images = data.Images;
                var instanceDictionary = {};
                var instances = [];
                for (var i = 0; i < images.length; i++) {
                    var currentImage = images[i];
                    for (var j = 0; j < currentImage.Tags.length; j++) {
                        var currentTag = currentImage.Tags[j];
                        if (currentTag.Key === imageBackupInstanceIdentifierTagName) {
                            var instanceId = currentTag.Value;
                            if (instanceDictionary[instanceId] === null || instanceDictionary[instanceId] === undefined) {
                                instanceDictionary[instanceId] = [];
                                instances.push(instanceId);
                            }
                            instanceDictionary[instanceId].push({
                                ImageId: currentImage.ImageId,
                                CreationDate: currentImage.CreationDate,
                                BlockDeviceMappings: currentImage.BlockDeviceMappings
                            });
                            break;
                        }
                    }
                }
                for (var t = 0; t < instances.length; t++) {
                    var imageInstanceId = instances[t];
                    var instanceImages = instanceDictionary[imageInstanceId];
                    if (instanceImages.length > numBackupsToRetain) {
                        instanceImages.sort(function (a, b) {
                           return new Date(b.CreationDate) - new Date(a.CreationDate); 
                        });
                        for (var k = numBackupsToRetain; k < instanceImages.length; k++) {
                            var imageId = instanceImages[k].ImageId;
                            var creationDate = instanceImages[k].CreationDate;
                            var blockDeviceMappings = instanceImages[k].BlockDeviceMappings;
                            deregisterImage(imageId, creationDate, blockDeviceMappings);
                        }   
                    }
                    else {
                        console.log("AMI Backup Cleanup not required for Instance: " + imageInstanceId + ". Not enough backups in window yet.");
                    }
                }
            }
        });
    };
    
    var deregisterImage = function(imageId, creationDate, blockDeviceMappings) {
        console.log("Found Image: " + imageId + ". Creation Date: " + creationDate);
        var deregisterImageParams = {
            DryRun: false,
            ImageId: imageId
        };
        console.log("Deregistering Image: " + imageId + ". Creation Date: " + creationDate);
        ec2.deregisterImage(deregisterImageParams, function(err, data) {
           if (err) {
               console.log("Failure deregistering image.");
               console.log(err, err.stack);
           } 
           else {
               console.log("Success deregistering image.");
               if (deleteSnaphots) {
                    for (var p = 0; p < blockDeviceMappings.length; p++) {
                       var snapshotId = blockDeviceMappings[p].Ebs.SnapshotId;
                       if (snapshotId) {
                           deleteSnapshot(snapshotId);
                       }
                   }    
               }
           }
        });
    };
    
    var deleteSnapshot = function(snapshotId) {
        var deleteSnapshotParams = {
            DryRun: false,
            SnapshotId: snapshotId
        };
        ec2.deleteSnapshot(deleteSnapshotParams, function(err, data) {
            if (err) {
                console.log("Failure deleting snapshot. Snapshot: " + snapshotId + ".");
                console.log(err, err.stack);
            }
            else {
                console.log("Success deleting snapshot. Snapshot: " + snapshotId + ".");
            }
        })
    };
  2. Zip this file into an archive named index.zip.
  3. In your command prompt or terminal window, invoke the following command. You must substitute ARN for the role ARN you wrote down from the prior step: 
    aws lambda create-function --function-name ami-backup-function --runtime nodejs6.10 --handler index.handler --role ARN --zip-file fileb://index.zip --timeout 30
  4. You'll receive output details about the Lambda function you've just created. Write down the Function ARN value for later use.
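
The retention logic inside cleanupOldBackups can be isolated and checked on its own: sort each instance's images newest-first by CreationDate and deregister everything past the retention count. A standalone sketch (selectImagesToDeregister is a hypothetical helper name):

```javascript
// Return the images that fall outside the retention window, i.e. everything
// except the `retain` most recent by CreationDate.
function selectImagesToDeregister(images, retain) {
    var sorted = images.slice().sort(function (a, b) {
        return new Date(b.CreationDate) - new Date(a.CreationDate); // newest first
    });
    return sorted.slice(retain);
}

var backups = [
    { ImageId: "ami-1", CreationDate: "2017-07-01T00:00:00Z" },
    { ImageId: "ami-2", CreationDate: "2017-07-03T00:00:00Z" },
    { ImageId: "ami-3", CreationDate: "2017-07-02T00:00:00Z" }
];
// With a retention count of 2, only the oldest image is selected for removal.
console.log(selectImagesToDeregister(backups, 2).map(function (i) { return i.ImageId; })); // [ 'ami-1' ]
```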

Scheduling the Lambda function:

  1. In your command prompt or terminal window, invoke the following command:
    Note: Feel free to adjust the schedule expression for your own use.
    aws events put-rule --name ami-backup-event-rule --schedule-expression "rate(1 day)"
  2. You'll get the Rule ARN value back as output. Write this down for later.
  3. Run the following command. Substitute ARN for the Rule ARN you just wrote down:
    aws lambda add-permission --function-name ami-backup-function --statement-id LambdaPermission --action "lambda:InvokeFunction" --principal events.amazonaws.com --source-arn ARN
  4. Run the following command. Substitute ARN for the Function ARN of the Lambda function you wrote down:
    aws events put-targets --rule ami-backup-event-rule --targets "Id"="1","Arn"="ARN"

Remember, you must assign the appropriate tag to each EC2 instance you want windowed AMI backups for. Leave a comment if you run into any issues using this solution.

Cheers!

1. July 2017 20:41
by Aaron Medacco
3 Comments

Automating Image Compression Using S3 & Lambda

For those who leverage images heavily, there are cases where you might want to serve compressed images instead of the originals to boost performance. For images stored on Amazon S3, it'd be nice if you didn't have to compress these manually as they get uploaded. In this post, I'll show how you can automate the compression of your images in S3 using AWS Lambda and the ImageMagick Node.js library that's already built into the Lambda runtime for Node.js.

S3 Image Compression

The following assumes you've created the buckets in S3 where your images should be managed. Feel free to leverage suffixes and prefixes for your own usage where applicable.

Creating an IAM policy for access permissions:

  1. Navigate to IAM in your management console.
  2. Select "Policies" in the sidebar.
  3. Click "Create Policy".
  4. Select "Create Your Own Policy".
  5. Enter an appropriate policy name and description.
  6. Paste the following JSON into the policy document:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject"
                ],
                "Resource": [
                    "Your Source Bucket ARN/*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject"
                ],
                "Resource": [
                    "Your Destination Bucket ARN/*"
                ]
            }
        ]
    }
  7. Substitute "Your Source Bucket ARN" with the ARN for the S3 bucket that you'll be uploading your original, uncompressed objects to. Make sure you add "/*" after the bucket ARN. For instance, if your bucket ARN was "arn:aws:s3:::sufferforyourbetrayal", you would use "arn:aws:s3:::sufferforyourbetrayal/*".
  8. Substitute "Your Destination Bucket ARN" with the ARN for the S3 bucket where you want your compressed objects to end up. Make sure you add "/*" after the bucket ARN. For instance, if your bucket ARN was "arn:aws:s3:::sufferforyourbetrayal", you would use "arn:aws:s3:::sufferforyourbetrayal/*".
  9. Click "Create Policy".

Creating the IAM role for the Lambda function:

  1. Select "Roles" in the sidebar.
  2. Click "Create New Role".
  3. Configure the role type such that it is an AWS Service Role for AWS Lambda, attach the policy you just created to it, name it, and continue.

Creating the Lambda function:

  1. Navigate to Lambda in your management console.
  2. Click "Create a Lambda function".
  3. Select the "Blank Function" blueprint.
  4. Under "Configure triggers", click the grey box and select "S3".
  5. Select the source bucket where original, uncompressed objects will be uploaded for the Bucket.
  6. Select the appropriate Event type. For example, "Put".
  7. Enter a Prefix and/or Suffix if you want. I left mine blank.
  8. Check the box to "Enable trigger" and click "Next".
  9. Click "Next".
  10. Enter an appropriate function name and description. Select Node.js 6.10 for the runtime.
  11. Under "Lambda function code", select "Edit code inline" for the Code entry type and paste the following code in the box:
    var AWS = require("aws-sdk");
    var IM = require('imagemagick');
    var FS = require('fs');
    var compressedJpegFileQuality = 0.80;
    var compressedPngFileQuality = 0.95;
    
    exports.handler = (event, context, callback) => {
        var s3 = new AWS.S3();
        var sourceBucket = "Source Bucket Name";
        var destinationBucket = "Destination Bucket Name";
        var objectKey = event.Records[0].s3.object.key;
        var getObjectParams = {
    		Bucket: sourceBucket,
    		Key: objectKey
    	};
    	s3.getObject(getObjectParams, function(err, data) {
    		if (err) {
    			console.log(err, err.stack);
    		} else {
    			console.log("S3 object retrieval get successful.");
    			var resizedFileName = "/tmp/" + objectKey;
    			var quality;
    			// endsWith(".png") avoids matching keys that merely contain "png".
    			if (resizedFileName.toLowerCase().endsWith(".png")) {
    			    quality = compressedPngFileQuality;
    			}
    			else {
    			    quality = compressedJpegFileQuality;
    			}
    			var resize_req = { width:"100%", height:"100%", srcData:data.Body, dstPath: resizedFileName, quality: quality, progressive: true, strip: true };
    			IM.resize(resize_req, function(err, stdout) {
                    if (err) {
                        throw err;
                    }
                    console.log('stdout:', stdout);
                    var content = new Buffer(FS.readFileSync(resizedFileName));
                    var uploadParams = { Bucket: destinationBucket, Key: objectKey, Body: content, ContentType: data.ContentType, StorageClass: "STANDARD" };
                    s3.upload(uploadParams, function(err, data) {
                        if (err) {
                            console.log(err, err.stack);
                        } else{
                            console.log("S3 compressed object upload successful.");
                        }
                    });
                });
    		}
    	});
    };
  12. Substitute "Source Bucket Name" with the name of the bucket the original, uncompressed objects will be uploaded to.
  13. Substitute "Destination Bucket Name" with the name of the bucket the compressed objects should end up in.
  14. Leave Handler as "index.handler".
  15. Choose to use an existing role and select the IAM role you created earlier.
  16. Under the Advanced Settings, you may want to allocate additional memory or increase the timeout to fit your usage.
  17. Finish the wizard.
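
The quality selection in the function above depends only on the object key's extension, so it can be sketched and verified in isolation (qualityForKey is a hypothetical helper name; the values mirror the two quality variables at the top of the function):

```javascript
// Choose a compression quality based on the object key's file extension:
// PNG keys get the PNG quality, everything else gets the JPEG quality.
function qualityForKey(key, pngQuality, jpegQuality) {
    return key.toLowerCase().endsWith(".png") ? pngQuality : jpegQuality;
}

console.log(qualityForKey("photos/Cat.PNG", 0.95, 0.80)); // 0.95
console.log(qualityForKey("photos/dog.jpg", 0.95, 0.80)); // 0.8
```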

I did run into a lot of issues getting PNG images to compress down appropriately. If someone knows a solution for this when using the Node.js library for ImageMagick, leave a comment. I'd like to extend this solution to work with other image file types if possible. 

Cheers!

Copyright © 2016-2017 Aaron Medacco