
Automating Alerts for Unassociated Elastic IPs w/ AWS

30. December 2016 19:00 by Aaron Medacco | 0 Comments

Amazon charges for Elastic IP addresses that are allocated but not associated with a running instance. This is to discourage AWS customers from wasting the dwindling pool of available IPv4 addresses. Wouldn't it be nice if, as someone who manages AWS resources, you received alerts when your account's allocated Elastic IPs are being wasted?

I've created an automated process to send out an email when this occurs. Using a simple Lambda function (triggered by a CloudWatch schedule) and an SNS topic, notifications can be sent to the appropriate employees when someone forgets to clean up after terminating their instances.

Elastic IP Waste Diagram

Creating the SNS topic:

  1. Navigate to SNS in your management console.
  2. Select "Topics" in the sidebar.
  3. Click the "Create new topic" button.
  4. Enter an appropriate topic name and display name and click "Create topic".
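
If you'd rather script this step, the same topic can be created with the AWS SDK for JavaScript. Here's a minimal sketch; the region and topic name are just examples:

    var AWS = require("aws-sdk");
    AWS.config.update({ region: "us-east-1" }); // substitute your region
    
    var sns = new AWS.SNS();
    
    // Create the topic; SNS returns the ARN you'll need in the following steps.
    sns.createTopic({ Name: "unassociated-elastic-ips" }, function(err, data) {
        if (err) {
            console.log(err, err.stack);
        }
        else {
            console.log("Topic ARN: " + data.TopicArn);
        }
    });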

Subscribing to the SNS topic:

  1. Select "Topics" in the sidebar.
  2. Click the ARN link for the topic you just created.
  3. Under Subscriptions, click "Create subscription".
  4. Select Email as the Protocol and enter your email address as the Endpoint.
  5. Repeat steps 3 and 4 for each email address you want to receive notifications.
  6. Each email address endpoint will receive an email asking to confirm the subscription. Confirm the subscriptions.
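
The subscriptions can also be created from code. A minimal sketch, assuming the topic ARN from the previous steps and an example email address:

    var AWS = require("aws-sdk");
    AWS.config.update({ region: "us-east-1" }); // substitute your region
    
    var sns = new AWS.SNS();
    
    // Each email endpoint still has to confirm via the message SNS sends it.
    sns.subscribe({
        TopicArn: "Your Topic ARN",
        Protocol: "email",
        Endpoint: "you@example.com"
    }, function(err, data) {
        if (err) {
            console.log(err, err.stack);
        }
        else {
            console.log("Subscription requested: " + data.SubscriptionArn);
        }
    });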

Creating an IAM policy for access permissions:

  1. Navigate to IAM in your management console.
  2. Select "Policies" in the sidebar.
  3. Click "Create Policy".
  4. Select "Create Your Own Policy".
  5. Enter an appropriate policy name and description.
  6. Paste the following JSON into the policy document:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "sns:Publish",
                    "sns:Subscribe"
                ],
                "Resource": [
                    "Your Topic ARN"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeAddresses"
                ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }
  7. Substitute "Your Topic ARN" with the ARN for the SNS topic you created and click "Create Policy".

Creating an IAM role for the Lambda function:

  1. Select "Roles" in the sidebar.
  2. Click "Create New Role".
  3. Enter an appropriate role name and click "Next Step".
  4. Select "AWS Lambda" within the AWS Service Roles.
  5. Change the filter to "Customer Managed", check the box of the policy you just created, and click "Next Step".
  6. Click "Create Role".

Creating the Lambda function:

  1. Navigate to Lambda in your management console.
  2. Click "Create a Lambda function".
  3. Select the "Blank Function" blueprint.
  4. Under "Configure triggers", click the grey box and select "CloudWatch Events - Schedule".
  5. Enter an appropriate rule name and description.
  6. Select the frequency you'd like Lambda to check for unassociated Elastic IPs in the Schedule expression input. I chose "rate(1 day)" for my usage.
  7. Check the box to "Enable trigger" and click "Next".
  8. Enter an appropriate function name and description. Select Node.js for the runtime.
  9. Under "Lambda function code", select "Edit code inline" for the Code entry type and paste the following code in the box:

    var AWS = require("aws-sdk");
    
    exports.handler = function(event, context) {
        var sns = new AWS.SNS();
        var ec2 = new AWS.EC2();
        var message = "The following Elastic IPs are not associated:\n\n";
        var params = {};
        ec2.describeAddresses(params, function(err, data) {
            if (err) {
                console.log(err, err.stack); 
            }
            else {
                var unassociatedAddresses = 0;
                for (var i = 0; i < data.Addresses.length; i++){
                    if (!data.Addresses[i].hasOwnProperty("InstanceId")){
                        console.log(data.Addresses[i].PublicIp);
                        unassociatedAddresses++;
                        message += " " + data.Addresses[i].PublicIp + "\n";
                    }
                }
                if (unassociatedAddresses > 0){
                    var publishParams = {
                        Message: message, 
                        Subject: "Elastic IP Addresses Unassociated",
                        TopicArn: "Your Topic ARN"
                    };
                    sns.publish(publishParams, context.done);
                }
            }
        });
    };
  10. Substitute "Your Topic ARN" with the ARN for the SNS topic you created earlier.
  11. Leave Handler as "index.handler".
  12. Choose to use an existing role and select the IAM role you created earlier.
  13. Leave the other default values and click "Next".
  14. Click "Create function".

That's it! Now you'll at least be made aware when your Elastic IPs are being wasted. Hopefully, before whoever pays your account's AWS bill notices.

Cheers!


AWS Solution Architect - Associate Certification Tips & Advice

10. December 2016 22:45 by Aaron Medacco | 0 Comments

AWS Certified Solution Architect - Associate

A little over a month ago, I decided that earning the Amazon Web Services Certified Solution Architect - Associate certification was a great way to validate my expertise using Amazon's cloud computing platform. Those who know me know that while I come from a development background, I don't have a history as an IT professional or systems administrator. However, I rarely find myself wearing only the development hat, so being able to assume another role seemed like a great opportunity.

I set my exam date 30 days out from making my decision, and while I certainly was not "experienced" using Amazon Web Services at the time, I was confident in my ability to learn whatever was necessary quickly. What a ride. The last month has been an absolute grind of no sleep, red eyes, notes, training, videos, quizzes, whitepapers, labs, FAQs, blogs, and Mountain Dew (shower me, oh coder fuel). Last Monday, I passed with substantial time left, but definitely with my share of incorrect responses. Unfortunately, I didn't get to review what I missed after submission. The testing center only provides the percentages for how well you performed in each of the four areas the exam covers.

Anyways, I'm providing some pointers to anyone else attempting to get the AWS Solution Architect Associate certification, especially for those who do not come from a traditional IT or administrator background. Maybe someone can benefit from my experience.

1) Review the certification guide provided by Amazon Web Services here.

You should be able to check off most, if not all of the requirements provided in their guide. The instructor led training can be expensive for some and isn't necessary if you are good at self-learning. If you're a developer, brushing up on system architecture, networking and security best practices will be helpful.

2) Set a date, but give yourself more than one month to prepare.

This is especially true if you are new to cloud computing. Unless you're an impatient wretch like me and are prepared to burn the midnight oil, I'd recommend spacing it out so you feel completely confident on exam day. There's a lot of content and frankly, a ton of reading, which will consume a lot of your time. I'd recommend 3 months if you want to have time for other things while still remaining dedicated to study. However, set a date so you stay disciplined.

3) Take advantage of online video training.

I used Pluralsight, although there are several e-learning resources designed for users seeking certification. If you choose Pluralsight as well, I recommend the following courses. They benefited me most at the time of this writing:

These provide a solid overview of Amazon Web Services, address the core services, and go over many of the important features.

4) Run through as many practice questions as possible.

Amazon Web Services offers a collection of example questions which can be found here. Practice exams are available for each certification. These cost $20, are timed, and will give you a taste of what the real exam will be like. While I recommend taking one practice exam, you should take your results with a grain of salt. Don't let yourself get too comfortable just because you pass the practice exam. Keep practicing.

There are a few mobile apps available that can help you prepare, too. I used this one, although the number of questions the app offers is limited. It's a good tool to use until you start memorizing the questions, at which point it loses its value. Worth a few bucks, though.

The best resource I found for practice questions was Cloud Academy. They provide a rich volume of questions perfect for exam preparation. They also offer video training, but I mainly stuck to the quizzes. What I enjoyed about Cloud Academy was the ability to filter questions you wanted to receive by service level. For example, if you felt weak on Simple Storage Service (S3), you could start a quiz where you were asked only questions related to S3. Again, I wouldn't celebrate being able to perform well on each quiz. The question pool Cloud Academy pulls from definitely has a lot of "softball" questions that are easy and designed to throw beginners a bone. However, there isn't a better service I could find that offered a high volume of relevant questions in a timed environment.

5) Read the whitepapers and the FAQ for each service.

Not every service of Amazon Web Services will appear on the exam. However, since the vast majority of the questions are supposed to test your "ability to design and architect cloud solutions", you'd be doing yourself a disservice by not familiarizing yourself with all the available tools. The length of the FAQs vary by service, but many of them can be read in 20-30 minutes. Several questions in the exam pull directly from information found in the FAQs, and I recommend going over them the night before the exam so they are fresh in your mind.

Concerning the whitepapers, I found reading those outlined in the exam guide was enough. And while I don't want to discourage anyone from reading more of them, I believe your time would be better spent on the FAQs, quizzes, or hands on practice. Again, I recommend having at least a high level understanding of each service. There are a handful of services that you should have a complete understanding of in order to be successful on the exam. They include the obvious such as EC2, VPC, Route 53, S3, etc. I won't get more specific since all exam participants are required to agree to their NDA.

Going forward, I'm already planning to prepare for the Developer and SysOps Administrator exams. Plus, there's a huge amount of information that was just posted from the AWS re:Invent 2016 event held a short while ago. Exciting stuff.

Hope this was helpful to anyone pursuing the certification.

Cheers!


AWS VPC Basics for Dummies

4. December 2016 15:42 by Aaron Medacco | 1 Comments

AWS VPC (Virtual Private Cloud), one of the core offerings of Amazon Web Services, is a crucial service that every professional operating on AWS needs to be familiar with. It allows you to gather, connect, and protect the resources you provision on AWS. With VPC, you can configure and secure your own virtual private network(s) within the AWS cloud. As an administrator, you should put security at the top of your priority list, especially when others are trusting you with the technology that powers their business. And if you do not know how to secure resources using VPC, you have no business administering cloud infrastructure for anyone using Amazon Web Services. The following is a high-level outline (using a simple web application architecture) for those who are new to the cloud or unfamiliar with the service; it is by no means comprehensive of everything AWS VPC has to offer.

VPC 1

VPCs, which are the virtual networks you create using the service, are region-specific and can include resources across all availability zones within the same region. You can think of a VPC as an apartment unit in an apartment building. All of your belongings (your virtual instances, databases, storage) are separated from those of other tenants (other AWS customers) living in the complex (the AWS cloud). When you first create a VPC, you will be asked to provide a CIDR block to define a collection of available private IP addresses for resources within your virtual network.

In this example, we'll use 192.168.0.0/16.

Later, we'll need to partition this collection of IP addresses into groups for the subnets we'll create.

However, since we are hosting a web application that needs to be public, we'll first need a way to expose resources to the internet. AWS VPC allows you to do this via Internet Gateways. This is pretty self-explanatory using the web console: you simply create an internet gateway and attach it to your VPC. You should know that any traffic between your resources and the internet will go through the Internet Gateway.
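
If you prefer code to the console, here's a minimal sketch of these first two steps using the AWS SDK for JavaScript; the region is just an example:

    var AWS = require("aws-sdk");
    AWS.config.update({ region: "us-east-1" }); // substitute your region
    
    var ec2 = new AWS.EC2();
    
    // Create the VPC with the CIDR block from this example.
    ec2.createVpc({ CidrBlock: "192.168.0.0/16" }, function(err, vpcData) {
        if (err) { return console.log(err, err.stack); }
        var vpcId = vpcData.Vpc.VpcId;
    
        // Create an internet gateway and attach it to the VPC.
        ec2.createInternetGateway({}, function(err, igwData) {
            if (err) { return console.log(err, err.stack); }
            var igwId = igwData.InternetGateway.InternetGatewayId;
    
            ec2.attachInternetGateway({ InternetGatewayId: igwId, VpcId: vpcId }, function(err) {
                if (err) { return console.log(err, err.stack); }
                console.log("Attached " + igwId + " to " + vpcId);
            });
        });
    });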

Moving down a layer, the next step is to define our subnets. What is that?

A subnet is just a piece of a network (your VPC). It is a logical grouping of connected resources. In the apartment analogy, it's like a bedroom. AWS VPC allows you to select whether you want your subnets to be private or public. Whether a subnet is public or private really just comes down to whether it has a route to an internet gateway. Routes are defined in route tables, and each subnet needs a route table.

What is a route table, you ask? Route tables define how to connect to other resources on the network (VPC). They are like maps, giving the resources directions for how to get somewhere in the network. By default, your subnets will have a route in their route table that allows them to reach all other resources within the same VPC. However, if you do not provide a route to the Internet Gateway attached to the VPC, your subnet is considered private. If you do provide that route, your subnet is considered public. Simple.
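
Continuing the sketch from above, a subnet becomes public once its route table has a route to the internet gateway. This assumes the vpcId and igwId variables from the previous sketch; the availability zone is just an example:

    // Create the public subnet from the example.
    ec2.createSubnet({ VpcId: vpcId, CidrBlock: "192.168.0.0/24", AvailabilityZone: "us-east-1a" }, function(err, subnetData) {
        if (err) { return console.log(err, err.stack); }
        var subnetId = subnetData.Subnet.SubnetId;
    
        // Give the subnet its own route table.
        ec2.createRouteTable({ VpcId: vpcId }, function(err, rtData) {
            if (err) { return console.log(err, err.stack); }
            var routeTableId = rtData.RouteTable.RouteTableId;
    
            // The route to the internet gateway is what makes this subnet "public".
            ec2.createRoute({ RouteTableId: routeTableId, DestinationCidrBlock: "0.0.0.0/0", GatewayId: igwId }, function(err) {
                if (err) { return console.log(err, err.stack); }
    
                ec2.associateRouteTable({ RouteTableId: routeTableId, SubnetId: subnetId }, function(err) {
                    if (err) { return console.log(err, err.stack); }
                    console.log("Subnet " + subnetId + " is now public.");
                });
            });
        });
    });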

Below is a diagram of the VPC explained so far. Notice that the IP ranges of the subnets (192.168.0.0/24 and 192.168.1.0/24) are taken from the pool of IP addresses for the VPC (192.168.0.0/16). The subnet ranges cannot overlap, since no two resources in the virtual network can have the same private IP address. The public and private subnets exist in different availability zones to illustrate that your network can and should (especially as you seek high availability) span multiple zones in whichever region you provision your resources.

 VPC Diagram 1

So how do we secure the network we've established so far?

NACLs (Network Access Control Lists) are one of the tools Amazon Web Services provides for protecting resources from unwanted traffic. They are firewalls that act at the subnet level. NACLs are stateless, which means that return traffic is not implied. If communication is allowed into the subnet, it doesn't necessarily mean that the response communication back to the sender is allowed. Thus, you can define rules to allow or deny traffic that is either inbound or outbound. These rules exist in a list that is ordered by rule number. When traffic attempts to go in or out of the subnet, the NACL evaluates the rules in order and acts on the first match, whether it is allow or deny. Rules further down are not evaluated once a match is found. If no match is found, traffic is denied (by the * fallback rule). The following is what a NACL might look like in the AWS management console.

NACL Example

In this example, inbound traffic is evaluated to see if it is attempting to use SSH on port 22 from anywhere (0.0.0.0/0). If so, the traffic is denied. Then, the same traffic is checked to see if RDP on port 3389 is being attempted. If so, it is denied. If neither is true, the traffic is allowed in because of rule 200, which allows all traffic on all ports from anywhere. Notice there are gaps between rule numbers. It's good practice to leave some space between rule numbers (100, 150, 200) so you can come back later and place new rules between existing ones. That is why you don't see rules numbered 1, 2, 3, and so on; if you needed to insert a rule between two others, you would have to renumber many rules to achieve the correct configuration.

This is just an example NACL. You'd likely want to be able to SSH or RDP into your EC2 instances from a remote location, like your office network or your home, so you wouldn't use this configuration which would obviously prevent that.
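
As a sketch, the first deny rule from the example could be created with the SDK like this; the NACL ID is a placeholder:

    // Assumes the ec2 client from the earlier sketches; the NACL ID is a placeholder.
    ec2.createNetworkAclEntry({
        NetworkAclId: "acl-xxxxxxxx",
        RuleNumber: 100,
        Protocol: "6",                    // TCP
        RuleAction: "deny",
        Egress: false,                    // inbound rule
        CidrBlock: "0.0.0.0/0",           // from anywhere
        PortRange: { From: 22, To: 22 }   // SSH
    }, function(err) {
        if (err) { console.log(err, err.stack); }
        else { console.log("Inbound SSH deny rule created."); }
    });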

VPC Diagram 2

Now we can add resources to our protected subnets. We'll have one web server instance in our public subnet to handle web requests coming from the internet, and one database server our web application depends on in the private subnet. The infrastructure for a real-world web application would likely be more sophisticated than this, accounting for things such as high availability at both the database and web tiers. You'd also see an elastic load balancer combined with an auto scaling group to distribute traffic to multiple web servers so no particular resource handling requests gets overwhelmed.

Great, so how do we secure our resources once we are within the subnet? That is where security groups come in.

Security groups are the resource-level firewalls guarding against unwanted traffic. Similar to NACLs, you define rules for the traffic you want to allow in. By default, a newly created security group has no inbound rules, so all inbound traffic is denied. This helps you implement the best practice of configuring the least amount of access necessary. Unlike NACLs, security groups are stateful, so when communication is allowed in, the response communication is allowed out. When you create an inbound rule, you provide the type, protocol, port range, and source of the traffic that should be allowed in. For example, if our web server instance were running Windows Server 2016, I'd create a rule for RDP, protocol TCP (6), using port 3389, where the source is my office IP address. When the security group is updated, I'll be able to administer my web server remotely using RDP. You can also attach more than one security group to a resource, which is useful when you want to combine multiple access configurations. For instance, suppose you provisioned four EC2 instances in a subnet and wanted RDP access to all of them from your office, but also wanted FTP access from your home to two of them. You could configure one security group for the RDP access and another for the FTP access, then attach both security groups to the instances requiring both home FTP and office RDP access.
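
Here's a sketch of that RDP rule in code; the security group ID and office IP are placeholders:

    // Assumes the ec2 client from the earlier sketches.
    ec2.authorizeSecurityGroupIngress({
        GroupId: "sg-xxxxxxxx",           // your web server's security group
        IpProtocol: "tcp",
        FromPort: 3389,
        ToPort: 3389,
        CidrIp: "203.0.113.10/32"         // your office's public IP
    }, function(err) {
        if (err) { console.log(err, err.stack); }
        else { console.log("RDP allowed from the office."); }
    });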

VPC Diagram 3

Those are really the bare essentials for configuring a VPC in Amazon Web Services. If you want another layer of security, there is nothing stopping you from using a host-based firewall like Windows Firewall on your virtual instances. Additionally, you're going to want to allocate an Elastic IP for any instance you want publicly accessible, which in this case would be our EC2 instance acting as a web server. Otherwise, you'll find that the public IP address of your instance can change, which is definitely not going to be okay for your DNS if you are running a web application.
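
A minimal sketch of allocating an Elastic IP and associating it with the web server; the instance ID is a placeholder:

    // Assumes the ec2 client from the earlier sketches.
    ec2.allocateAddress({ Domain: "vpc" }, function(err, data) {
        if (err) { return console.log(err, err.stack); }
    
        ec2.associateAddress({
            AllocationId: data.AllocationId,
            InstanceId: "i-xxxxxxxx"      // your web server instance
        }, function(err) {
            if (err) { console.log(err, err.stack); }
            else { console.log("Elastic IP " + data.PublicIp + " associated with the web server."); }
        });
    });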

AWS VPC also allows you to configure dedicated connectivity from your on-premises environment to the AWS cloud using Direct Connect, set up VPN connections from your network to your cloud network, create NAT Gateways and DHCP Option Sets, and set up VPC Peering to allow connectivity between VPCs. These are great tools, and knowing what they are and when you would use them is important, but they aren't necessary for all use cases.

A final recommendation is to tag and name your VPC resources. Name your security groups, name your NACLs, name your route tables, name everything. As your infrastructure grows on Amazon Web Services, you will be driven insane if everything has names like "acl-1ad75f7c" that provide no context into its purpose. Therefore, it's a good idea to name everything from the start so things stay organized and other users of the AWS account (particularly those who weren't there when you set things up) will have a clue when they need to make changes.

For even more detail on VPCs, I'd recommend the Pluralsight course, AWS VPC Operations by Nigel Poulton which you can find here.

Cheers!
