
Setting up Alerts for When Critical EBS Volumes Start Running Out of Space on Windows EC2 Instances

28. February 2017 23:41 by Aaron Medacco | 0 Comments

Note: This solution is for AWS users running EC2 instances with a Windows operating system. Linux users can employ a similar design, but won't be able to use the exact pieces in this solution.

If you're like me, you'd like a heads-up when your EC2 instances are running out of space. For example, if you're managing your own database server instead of using RDS and have the data written to a particular EBS volume, it sure would be nice to know it's running out of space before, well...it's run out of space. In this post, we'll take the proactive approach by setting up monitoring and an alarm on our volumes' available space. That way, we'll know we need to provision extra storage ahead of time, instead of in response to an angry phone call.

Unfortunately, while Amazon Web Services does track several metrics for your instances and EBS volumes in CloudWatch, available disk space is not one of them. However, CloudWatch does accept custom metrics and lets you use all of its features on them, so we can calculate how much free disk space we have ourselves and leverage CloudWatch for the monitoring.

EBS Volume Usage Alarm

Specifically, this solution employs a simple PowerShell script that, when scheduled using Task Scheduler, calculates and submits the percentage of our volume that has already been written to. Once this metric is provided to CloudWatch on a recurring basis, we'll set up a CloudWatch alarm to monitor it and send out a notification via an SNS topic whenever our specified threshold is crossed (i.e. volume usage has passed 80%).

The PowerShell script uses the AWS CLI which you may need to configure if you haven't already on your EC2 instance(s). You can find the instructions for that here.

Creating an SNS topic to send notifications:

  1. Navigate to SNS in your management console.
  2. Select "Topics" in the sidebar.
  3. Click the "Create new topic" button.
  4. Enter an appropriate topic name and display name and click "Create topic".
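If you'd rather work from the AWS CLI, the console steps above roughly boil down to a single call; the topic name below is just an example:

$topicName = "ebs-volume-usage-alerts"; #Example name; use whatever you entered in the console
$topicArn = aws sns create-topic --name $topicName --output text --query "TopicArn";
Write-Host "Created SNS Topic: $($topicArn)";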

Subscribing to the SNS topic:

  1. Select "Topics" in the sidebar.
  2. Click the ARN link for the topic you just created.
  3. Under Subscriptions, click "Create subscription".
  4. Select Email as the Protocol and enter your email address as the Endpoint.
  5. Repeat steps 3 and 4 for each email address you want to receive notifications.
  6. Each email address endpoint will receive an email asking to confirm the subscription. Confirm the subscriptions.
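The subscription has a CLI equivalent as well; the ARN and email address below are placeholders, and the confirmation email still has to be accepted:

$topicArn = "arn:aws:sns:us-east-1:123456789012:ebs-volume-usage-alerts"; #Placeholder; use your topic's ARN
$email = "you@example.com"; #Placeholder
aws sns subscribe --topic-arn $topicArn --protocol email --notification-endpoint $email;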

Setting up the PowerShell script that will submit our CloudWatch metric:

  1. Create a PowerShell script file and paste the following code:
    #Parameters
    $computerName = "Your EC2 Instance Hostname"; #For example, "EC2AMAZ-XXXXXXX"
    $deviceId = "Your Volume Device ID"; #For example, "C:"
    $instanceId = "Your EC2 Instance ID"; #For example, "i-xxxxxxxxxxxxxxxxx"
    
    #Send To CloudWatch
    $metricName = "EBS Volume Usage";
    $deviceCommand = "DeviceID='" + $deviceId + "'";
    $unit = "Percent"; #The value submitted below is a percentage of the volume's capacity
    $disk = Get-WmiObject Win32_LogicalDisk -ComputerName $computerName -Filter $deviceCommand | Select-Object Size,FreeSpace;
    $value = (100 - $disk.FreeSpace / $disk.Size * 100);
    Write-Host "Posting Volume Usage To CloudWatch: $($value)";
    aws cloudwatch put-metric-data --metric-name $metricName --namespace "Custom Metrics" --unit $unit --value $value --dimensions InstanceId=$instanceId,DeviceID=$deviceId;
    Write-Host "Done";
    You can also download the script here.
  2. Substitute the parameter values with those that apply to your EC2 instance and volume. 
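Before scheduling anything, it's worth running the script once by hand and confirming the data point actually landed in CloudWatch. Something like the following should list the custom metric once at least one value has been submitted (it can take a minute or two to show up):

#Lists the custom metric once CloudWatch has received at least one data point
aws cloudwatch list-metrics --namespace "Custom Metrics" --metric-name "EBS Volume Usage";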

Configure Task Scheduler to run the PowerShell script at 5 minute intervals:

  1. Start up Task Scheduler on your EC2 instance.
  2. Navigate to the Task Scheduler Library.
  3. On the right hand side, click "Create Task...".
  4. On the General tab, enter an appropriate Name and Description.
  5. Under Security options, select "Run whether user is logged on or not" and select the appropriate Windows Server version in the "Configure for:" drop-down.
    Task Scheduler 1
  6. On the Triggers tab, click "New...".
  7. Select "On a schedule" for Begin the task.
  8. Under Settings, select "One time" and select a start time.
  9. Under Advanced settings, select "Repeat task every:", 5 minutes, and "Indefinitely" for the duration.
  10. Make sure the Enabled checkbox is checked.
    Task Scheduler 2
  11. Click "OK".
  12. On the Actions tab, click "New...".
  13. Select "Start a program" for the Action drop-down.
  14. Under Program/script, enter the path to the executable for PowerShell on your machine.
  15. In the Add arguments (optional) field, enter the absolute path to the PowerShell script.
    Task Scheduler 3
  16. Click "OK".
  17. On the Conditions tab, I left all highest level checkboxes unchecked.
    Task Scheduler 4
  18. On the Settings tab, configure the options as you desire.
  19. Click "OK".
  20. Run your newly scheduled task.

At this point, you should begin seeing the custom metric in your CloudWatch management console. If this isn't the case, there may be an issue where the task is not running properly. 

Note: When you created the Action in your scheduled task, if you instead chose to enter the path of the PowerShell script under Program/script, it's possible that Task Scheduler is opening the script in Notepad instead of actually running the commands.
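If you'd rather script the scheduled task than click through the console, a rough PowerShell equivalent of the steps above is shown below. The task name and script path are placeholders, and the exact parameters can vary slightly between Windows Server versions, so treat this as a starting point:

#Placeholders: adjust the script path and task name to your environment
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File C:\Scripts\Submit-VolumeUsage.ps1";
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Minutes 5) -RepetitionDuration ([TimeSpan]::MaxValue);
Register-ScheduledTask -TaskName "Submit EBS Volume Usage" -Action $action -Trigger $trigger -RunLevel Highest;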

Creating the CloudWatch alarm which will send notifications when our EBS volume is close to reaching full capacity:

  1. Navigate to CloudWatch in your management console.
  2. Select "Alarms" in the sidebar.
  3. Click "Create Alarm".
  4. Select the metric you just created. It should be within the "Custom Metrics" namespace.
  5. Click "Next".
  6. Enter an appropriate Name and Description.
  7. Set the threshold to meet your requirements. For my personal use, I configure the alarm for when space usage is >= 80 for 1 consecutive period.
  8. Under Actions, select "State is ALARM" and the name of the SNS topic you created earlier for the notification.
  9. Click "Create Alarm".

You can test that the alarm is working by temporarily setting the threshold below your current volume usage and checking that the SNS topic publishes a notification.

If anyone has any improvements to this solution or issues implementing it, feel free to leave a note in the comments.

Cheers!


PowerShell Script for Uploading a Local Directory to an S3 Bucket on AWS

25. February 2017 18:18 by Aaron Medacco | 1 Comments

For those needing to upload several files from a local directory to S3, I've put together a simple PowerShell snippet that sends all of the files in a directory to a specified S3 bucket.

PowerShell S3 Upload

If you have sub-directories within a directory, those will also be copied up as long as there are files within them.

This script makes use of the AWS CLI:

$bucketName = "Your Bucket Name"; #For example, "s3://my-bucket-name"
$relativePath = "Your Relative Path"; #For example, "objects" or "myfolder/objects"
aws s3 sync $relativePath $bucketName;
Write-Host "Done";

You can also download the file here.

Don't forget to swap in your bucket name and the relative path of the directory (from where you execute the script) in the appropriate variables. And keep in mind that you can also use an absolute path to the source directory instead of a relative one.

If instead you want to download objects from an S3 bucket to your local directory, just swap the order of the variables in the command like this:

aws s3 sync $bucketName $relativePath;

This uses the 'sync' command within the S3 library of the AWS CLI. You can find the reference for that here.
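A few sync flags are worth knowing about. --dryrun previews what would be transferred without copying anything, while --exclude and --delete let you filter files or mirror deletions; the .tmp pattern below is just an example:

aws s3 sync $relativePath $bucketName --dryrun; #Preview the upload without transferring anything
aws s3 sync $relativePath $bucketName --exclude "*.tmp" --delete; #Skip temp files and remove objects that no longer exist locally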

Cheers!


Saving Money by Automatically Stopping Non-Production Instances During Off Hours w/ AWS Lambda

18. February 2017 21:21 by Aaron Medacco | 0 Comments

For those with staging, development, or QA environments provisioned in EC2 using the on-demand billing model, you may be paying more than you need to. If your employees only work in these environments during business hours, it doesn't make sense to have them running 24/7. Now, there may be cases where you'd like a staging environment to run at all times, but for those where you don't, you might as well stop the instances and save money.

For example, suppose your QA team tests an application running on several on-demand EC2 instances from 8:00AM to 5:00PM, Monday thru Friday. If no one stops those instances, you're paying for those resources while your team isn't even at work! By stopping the instances during off-hours, you can slash your bill for those instances by more than half, especially if you include the weekend.
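
To put a rough number on that: with the 7:00AM-7:00PM, Monday-thru-Friday schedule used below, an instance runs 60 of the 168 hours in a week (12 hours x 5 days), so its on-demand compute charges drop to roughly 36% of what running it around the clock would cost.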

EC2 Off Hours Management

The following is an automated solution for stopping EC2 instances at a particular hour (7:00PM, Monday thru Friday) and starting them back up at a particular hour (7:00AM, Monday thru Friday). It operates using 2 separate Lambda functions (one for stopping, one for starting) triggered by scheduled CloudWatch events. Within each function, a search is done for EC2 instances that have a specific tag so you can designate exactly which resources you want this process to affect. Instances without this tag are ignored. In this example, I'm choosing to include any EC2 instance that has a tag of "Environment" where the value is set to "Development", but the code and values shown can easily be modified to suit your own requirements.

Environment Tag
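If the instances you want included aren't tagged yet, you can add the tag from the AWS CLI; the instance ID below is a placeholder:

aws ec2 create-tags --resources i-xxxxxxxxxxxxxxxxx --tags Key=Environment,Value=Development; #Placeholder instance ID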

Let's dig in.

Creating an IAM policy for the Lambda function stopping instances at off-hours:

  1. Navigate to IAM in your management console.
  2. Select "Policies" in the sidebar.
  3. Click "Create Policy".
  4. Select "Create Your Own Policy".
  5. Enter an appropriate policy name and description.
  6. Paste the following JSON into the policy document:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeInstances",
                    "ec2:StopInstances"
                ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }
  7. Click "Create Policy".

Creating the IAM role for the Lambda function stopping instances at off-hours:

  1. Select "Roles" in the sidebar.
  2. Click "Create New Role".
  3. Enter an appropriate role name and click "Next Step".
  4. Select "AWS Lambda" within the AWS Service Roles.
  5. Change the filter to "Customer Managed", check the box of the policy you just created, and click "Next Step".
  6. Click "Create Role".

Creating the Lambda function stopping instances at off-hours:

  1. Navigate to Lambda in your management console.
  2. Click "Create a Lambda function".
  3. Select the "Blank Function" blueprint.
  4. Under "Configure triggers", click the grey box and select "CloudWatch Events - Schedule".
  5. Enter an appropriate rule name and description.
  6. I want the instances to be stopped at 7:00PM after the workday. Since I live in Arizona (UTC-7:00), the cron expression for this is 0 2 ? * TUE-SAT * (7:00PM local on Monday is 2:00AM UTC on Tuesday, so the UTC days shift forward by one). You'll need to change this based on when you want the instances to stop and what timezone you're in.
  7. Check the box to "Enable trigger" and click "Next".
  8. Enter an appropriate function name and description. Select Node.js for the runtime.
  9. Under "Lambda function code", select "Edit code inline" for the Code entry type and paste the following code in the box:
    var AWS = require("aws-sdk");
    
    exports.handler = (event, context, callback) => {
        var ec2 = new AWS.EC2();
        var describeParams = { Filters: [
            {
                Name:"tag:Environment",
                Values: [
                    "Development"
                ]
            },
            {
                Name:"instance-state-name",
                Values: [
                    "running"
                ]
            }
        ]};
        var instances = [];
        ec2.describeInstances(describeParams, function(err, data) {
            if (err) {
                console.log(err, err.stack);
            } else {
                console.log(data);
                for (var i = 0; i < data.Reservations.length; i++) {
                    for (var j = 0; j < data.Reservations[i].Instances.length; j++) {
                        var instanceId = data.Reservations[i].Instances[j].InstanceId;
                        if (instanceId != undefined && instanceId != null && instanceId != "") {
                            instances.push(instanceId);   
                        }
                    }
                }
                if (instances.length > 0){
                    var stopParams = { InstanceIds: instances };
                    ec2.stopInstances(stopParams, function(err,data) {
                        if (err) {
                           console.log(err, err.stack);
                        } else {
                           console.log(data);
                        }
                    });   
                }
           }
        });
    };
  10. Leave Handler as "index.handler".
  11. Choose to use an existing role and select the IAM role you created earlier for stopping instances.
  12. You may want to select a higher value for the Timeout depending on how many instances will be involved in this process.
  13. Leave the other default values and click "Next".
  14. Click "Create function".

Creating the IAM policy for the Lambda function starting instances at on-hours:

  1. Navigate to IAM in your management console.
  2. Select "Policies" in the sidebar.
  3. Click "Create Policy".
  4. Select "Create Your Own Policy".
  5. Enter an appropriate policy name and description.
  6. Paste the following JSON into the policy document:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeInstances",
                    "ec2:StartInstances"
                ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }
  7. Click "Create Policy".

Creating the IAM role for the Lambda function starting instances at on-hours:

  1. Select "Roles" in the sidebar.
  2. Click "Create New Role".
  3. Enter an appropriate role name and click "Next Step".
  4. Select "AWS Lambda" within the AWS Service Roles.
  5. Change the filter to "Customer Managed", check the box of the policy you just created, and click "Next Step".
  6. Click "Create Role".

Creating the Lambda function starting instances at on-hours:

  1. Navigate to Lambda in your management console.
  2. Click "Create a Lambda function".
  3. Select the "Blank Function" blueprint.
  4. Under "Configure triggers", click the grey box and select "CloudWatch Events - Schedule".
  5. Enter an appropriate rule name and description.
  6. I want the instances to be started at 7:00AM before the workday. Since I live in Arizona (UTC-7:00), the correct cron expression for this is 0 14 ? * MON-FRI *. You'll need to change this based on when you want the instances to start and what timezone you're in.
  7. Check the box to "Enable trigger" and click "Next".
  8. Enter an appropriate function name and description. Select Node.js for the runtime.
  9. Under "Lambda function code", select "Edit code inline" for the Code entry type and paste the following code in the box:
    var AWS = require("aws-sdk");
    
    exports.handler = (event, context, callback) => {
        var ec2 = new AWS.EC2();
        var describeParams = { Filters: [
            {
                Name:"tag:Environment",
                Values: [
                    "Development"
                ]
            },
            {
                Name:"instance-state-name",
                Values: [
                    "stopped"
                ]
            }
        ]};
        var instances = [];
        ec2.describeInstances(describeParams, function(err, data) {
            if (err) {
                console.log(err, err.stack);
            } else {
                console.log(data);
                for (var i = 0; i < data.Reservations.length; i++) {
                    for (var j = 0; j < data.Reservations[i].Instances.length; j++) {
                        var instanceId = data.Reservations[i].Instances[j].InstanceId;
                        if (instanceId != undefined && instanceId != null && instanceId != "") {
                            instances.push(instanceId);   
                        }
                    }
                }
                if (instances.length > 0){
                    var startParams = { InstanceIds: instances };
                    ec2.startInstances(startParams, function(err,data) {
                        if (err) {
                           console.log(err, err.stack);
                        } else {
                           console.log(data);
                        }
                    });   
                }
           }
        });
    };
  10. Leave Handler as "index.handler".
  11. Choose to use an existing role and select the IAM role you created earlier for starting instances.
  12. You may want to select a higher value for the Timeout depending on how many instances will be involved in this process.
  13. Leave the other default values and click "Next".
  14. Click "Create function".

Feel free to modify this solution to achieve your specific needs. Hopefully, this will allow you to save some money on your AWS bill every month.

Cheers!


PowerShell Script for Deleting Unused IAM Customer Managed Policies on AWS

16. February 2017 23:43 by Aaron Medacco | 0 Comments

When assigning IAM permissions within your AWS environment, you may find yourself with numerous customer managed policies. This is especially true if you are following the best practice of least privilege when granting access to services and resources managed under your AWS account.

As time passes, many of these policies become obsolete as newer policies replace them or the users, groups, and roles they were attached to no longer exist. This adds clutter and can make managing your IAM environment more difficult.

I've written a PowerShell script that will select the customer managed policies that are not being used (unattached) and delete them to help keep your IAM environment clean.

IAM Customer Managed Policy Cleanup

The script makes use of the AWS CLI:

$policies = aws iam list-policies --scope Local --output text --query "Policies[?AttachmentCount<``1``].{ARN:Arn, PolicyName:PolicyName}";
Write-Host "AWS CLI Output: $($policies)";
Foreach($i in $policies){
	$arn = $i.Split("`t")[0];
	$policyName = $i.Split("`t")[1];
	Write-Host "Deleting IAM Policy: $($policyName)";
	$defaultPolicyVersion = aws iam list-policy-versions --policy-arn $arn --output text --query "Versions[?IsDefaultVersion==``true``].{VersionId:VersionId}";
	$policyVersions = aws iam list-policy-versions --no-paginate --policy-arn $arn --output text --query "Versions[?IsDefaultVersion==``false``].{VersionId:VersionId}";
	Foreach($j in $policyVersions){
		Write-Host "Deleting Policy Version: $($j)";
		aws iam delete-policy-version --policy-arn $arn --version-id $j
	}
	Write-Host "Deleting Policy Version: $($defaultPolicyVersion) (Default)";
	aws iam delete-policy --policy-arn $arn;
	Write-Host "Done";
}

You can also download the file here.

Because this makes use of the AWS CLI, you will have to install and configure that on your machine before running this script. You can find instructions for doing so here.
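If you want to see what the script would remove before actually deleting anything, you can run the same query on its own:

#Lists unattached customer managed policies without deleting anything
aws iam list-policies --scope Local --output table --query "Policies[?AttachmentCount<``1``].{PolicyName:PolicyName, ARN:Arn}";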

Cheers!


PowerShell Script for Removing Unassociated Elastic IPs on AWS

12. February 2017 16:13 by Aaron Medacco | 0 Comments

A big selling point of adopting Amazon Web Services is the "pay only for what you use" and "no minimum purchase" billing model. While this holds true of just about every service provided by AWS, Elastic IPs are an exception to the rule.

When allocating static IP addresses for your EC2 instances, Amazon charges you for addresses you have allocated but are NOT using. Addresses associated with a running instance are free of charge (unless you associate more than one address with a single resource). This encourages you not to waste the ever-dwindling pool of available IPv4 addresses.

I've written a PowerShell script that will check the addresses you have allocated and release those that are not associated with an instance.

Elastic IP Cleanup

The script is quite simple and makes use of the AWS CLI:

$unassociatedAddresses = aws ec2 describe-addresses --output text --query "Addresses[?InstanceId==null].{PublicIp:PublicIp,AllocationId:AllocationId}";
Write-Host "AWS CLI Output: $($unassociatedAddresses)";
Foreach($i in $unassociatedAddresses){
	$ip = $i.Split("`t")[1];
	$allocationId = $i.Split("`t")[0];
	Write-Host "Releasing Elastic IP: $($ip)";
	aws ec2 release-address --allocation-id $allocationId;
	Write-Host "Done";
}

You can also download the file here.

Keep in mind that this will only be useful if you have installed and configured the AWS CLI in your environment. If you haven't done this, you can read the instructions provided by Amazon.
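To preview which addresses the script would release, you can run the describe call on its own first:

#Lists unassociated Elastic IPs without releasing anything
aws ec2 describe-addresses --output table --query "Addresses[?InstanceId==null].{PublicIp:PublicIp, AllocationId:AllocationId}";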

You may still find Elastic IP charges on your bill after running this script:

  • If you have Elastic IPs associated with instances that are not running, you will still be charged; this script will not release those addresses so as not to disturb your environment.
  • If you've elected to assign more than one Elastic IP to the same instance, this script will not release any of those addresses, but you will still be charged for using multiple addresses on a single resource.

Cheers!

Copyright © 2016-2017 Aaron Medacco