Testing if Your AWS Application Load Balancer is Relaying Traffic

24. April 2017 21:01 by Aaron Medacco | 0 Comments

A common requirement before deploying an elastic load balancer into production on AWS is verifying that traffic is actually relayed from the load balancer to the EC2 instances in the assigned target group. Those familiar with developing web applications know that you can modify your hosts file to force a hostname to resolve to a specific address. Therefore, we just need to find the load balancer's public-facing IP address, update our hosts file, and validate that our site still loads.

Let's get started.

Retrieving the DNS name of your elastic load balancer:

  1. Navigate to your load balancer in your AWS management console under the EC2 service.
  2. With your load balancer selected, copy the DNS name value under the description tab.
    [Image: Application Load Balancer DNS Name]

Identifying the IP address of your elastic load balancer:

  1. Open up a command prompt or terminal window.
  2. Run the following command, substituting your DNS name value for mine.
    nslookup Test-Load-Balancer-148217235.us-east-1.elb.amazonaws.com
  3. You should see something similar to the following (a textual sample appears after this list):
    [Image: Application Load Balancer IP Address]
  4. We'll take one of the IP addresses returned by the nslookup command and plug it into our hosts file.
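
For reference, the nslookup output typically looks something like this. The resolver and the two addresses shown are illustrative placeholders rather than real values:

    Server:     dns.resolver.local
    Address:    10.0.0.2

    Non-authoritative answer:
    Name:       Test-Load-Balancer-148217235.us-east-1.elb.amazonaws.com
    Addresses:  52.0.113.10
                52.1.113.27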

Modifying your hosts file and requesting your application:

  1. Navigate to your hosts file.
    For Windows, this is located at C:\Windows\System32\drivers\etc\hosts.
    For Linux, this is located at /etc/hosts.
  2. Add an entry for your domain with the IP address you took from the previous step (an example entry follows this list):
    [Image: Hosts File]
  3. Save the file.
  4. Use a browser and navigate to the domain being serviced by your elastic load balancer.
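
As an example, a hosts entry mapping a domain to one of the load balancer's addresses might look like the following. Both the IP address and the domain are placeholders; use the address from your nslookup and the domain your load balancer actually serves:

    # Temporary entry for testing the load balancer directly
    52.0.113.10    www.example.com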

If you receive a response, your load balancer is forwarding traffic to instances in your target group. If your request hangs and eventually times out, something still needs to be addressed before your load balancer is ready. Keep in mind that the IP addresses you looked up in step 2 are subject to change and should not be treated as if they were static.
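
If you'd rather not touch your hosts file at all, curl's --resolve option achieves the same effect for a single request by pinning the hostname to an address of your choosing. A quick sketch using the same placeholder values as above:

    curl -I --resolve www.example.com:80:52.0.113.10 http://www.example.com/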

Cheers!

Using Nested Stacks w/ AWS CloudFormation

22. April 2017 23:50 by Aaron Medacco | 0 Comments

When describing your cloud infrastructure with AWS CloudFormation, your templates can become large and difficult to manage as your stacks grow. CloudFormation allows you to nest templates, giving you the ability to break different chunks of your infrastructure into smaller modules. For instance, suppose you have several templates that involve creating an elastic load balancer. Rather than copying and pasting the same JSON between each template, you can write one template for provisioning the load balancer and then reference that template in "parent" templates that require it.

This has several advantages. Code concerning the load balancer is consolidated in one place, so when changes need to be made to its configuration, you don't need to revisit every template where you once copied the code. This saves you both time and grief, removing the chance that human error leaves one template's ELB different from another's when they should be identical. It also improves your ability to develop and test your CloudFormation templates. When writing templates, it's common to make incremental changes to the JSON, create a stack from the template to validate structure and behavior, tear down the stack, and rinse and repeat. For large templates, provisioning and deleting stacks will slow you down as you wait for feedback. Smaller, more modular templates let you test more specific pieces of your infrastructure and get feedback in faster iterations.

[Image: AWS CloudFormation]

In this post, I'll walk through two CloudFormation templates: one that provisions an application load balancer, and one that creates a hosted zone with a record set pointing to the load balancer being provisioned. I'll do this by nesting the template for the load balancer inside the template that creates a Route 53 hosted zone. Keep in mind, I won't be setting up listeners or target groups for the load balancer. I'm only demonstrating how to nest CloudFormation templates.

Here is my template for provisioning an application load balancer without any configuration:

Note: You can download this template here.

{
    "AWSTemplateFormatVersion": "2010-09-09",
	"Description": "Simple Load Balancer",
	"Parameters": {
		"VPC": {
			"Type": "AWS::EC2::VPC::Id",
			"Description": "VPC for the load balancer."
		},
		"PublicSubnet1": {
			"Type": "AWS::EC2::Subnet::Id",
			"Description": "First public subnet."
		},
		"PublicSubnet2": {
			"Type": "AWS::EC2::Subnet::Id",
			"Description": "Second public subnet."
		}
	},
	"Resources": {
		"ElasticLoadBalancer": {
			"Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
			"Properties" : {
				"Name": "Load-Balancer",
				"Scheme": "internet-facing",
				"Subnets": [ {"Ref": "PublicSubnet1"}, {"Ref": "PublicSubnet2"} ],
				"Tags": [ { "Key": "Name", "Value": "CloudFormation Load Balancer" } ]
			}
		}
	},
	"Outputs": {
		"LoadBalancerDNS": {
			"Description": "Public DNS For Load Balancer",
			"Value": { "Fn::GetAtt": [ "ElasticLoadBalancer", "DNSName" ] }
		},
		"LoadBalancerHostedZoneID": {
			"Description": "Canonical Hosted Zone ID of load balancer.",
			"Value": { "Fn::GetAtt": [ "ElasticLoadBalancer", "CanonicalHostedZoneID" ] } 
		}
	}
}

Notice that there are some networking-related parameters being asked for. In this case, I'm requesting two public subnets for the ELB since I intend to have it exposed to the internet. I've also defined some outputs: one is the public DNS name of the load balancer I'm creating, and the other is its canonical hosted zone ID. These values will come in handy when my parent template sets up an A record pointing to the newly created ELB.

The following is the parent template, which provisions a hosted zone for a given domain name and nests the load balancer template:

Note: You can download this template here.

{
    "AWSTemplateFormatVersion": "2010-09-09",
	"Description": "Generate internet-facing load balancer.",
	"Parameters": {
		"Domain": {
			"Type": "String",
			"Description": "Domain serviced by load balancer."
		},
		"VPC": {
			"Type": "AWS::EC2::VPC::Id",
			"Description": "VPC for the load balancer."
		},
		"PublicSubnet1": {
			"Type": "AWS::EC2::Subnet::Id",
			"Description": "First public subnet."
		},
		"PublicSubnet2": {
			"Type": "AWS::EC2::Subnet::Id",
			"Description": "Second public subnet."
		}
	},
	"Resources": {
		"HostedZone": {
			"Type": "AWS::Route53::HostedZone",
			"Properties": {
				"Name": { "Ref": "Domain" }
			}
		},
		"HostedZoneRecords": {
			"Type": "AWS::Route53::RecordSetGroup",
			"Properties": {
				"HostedZoneId": { "Ref": "HostedZone" },
				"RecordSets": [{
					"Name": { "Ref": "Domain" },
					"Type": "A",
					"AliasTarget": {
						"DNSName": { "Fn::GetAtt": [ "LoadBalancerStack", "Outputs.LoadBalancerDNS" ]},
						"HostedZoneId": { "Fn::GetAtt": [ "LoadBalancerStack", "Outputs.LoadBalancerHostedZoneID" ]}
					}
				}]
			}
		},
		"LoadBalancerStack": {
			"Type": "AWS::CloudFormation::Stack",
			"Properties": {
				"Parameters": {
					"VPC": { "Ref": "VPC" },
					"PublicSubnet1": { "Ref": "PublicSubnet1" },
					"PublicSubnet2": { "Ref": "PublicSubnet2" }
				},
				"TemplateURL": "https://s3.amazonaws.com/cf-templates-1bc7bmahm5ald-us-east-1/loadbalancer.json"
			}
		}
	}
}

You can see I've added a parameter asking for a domain in addition to the values required by the previous template. Then, in the Resources section of the template, I'm creating an AWS::CloudFormation::Stack that references the S3 location where my nested template is stored and passes along the parameters required to invoke it. In the section where I define my hosted zone records, I need to know the DNS name and canonical hosted zone ID of the application load balancer. These are retrieved by referencing the outputs returned by the nested template.
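
For completeness, here's a rough sketch of how you might publish the nested template and launch the parent stack from the AWS CLI. The bucket matches the TemplateURL above, but the file names, stack name, domain, VPC ID, and subnet IDs are placeholders you'd substitute with your own values:

    # Upload the nested load balancer template so the parent's TemplateURL can reach it
    aws s3 cp loadbalancer.json s3://cf-templates-1bc7bmahm5ald-us-east-1/loadbalancer.json

    # Create the parent stack, supplying the parameters it expects
    aws cloudformation create-stack \
      --stack-name hosted-zone-with-elb \
      --template-body file://hostedzone.json \
      --parameters ParameterKey=Domain,ParameterValue=example.com \
                   ParameterKey=VPC,ParameterValue=vpc-0123456789abcdef0 \
                   ParameterKey=PublicSubnet1,ParameterValue=subnet-0123456789abcdef0 \
                   ParameterKey=PublicSubnet2,ParameterValue=subnet-0fedcba9876543210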

Creating a CloudFormation stack using the parent template, I now have a Route 53 hosted zone for the input domain pointing to the newly created load balancer. From here, I could reference the load balancer template in any number of templates requiring it without bloating each of them with pasted JSON. The next step would be to create listeners and target groups with EC2 instance targets, but that is a separate exercise.

Cheers!

Simple Web Hosting w/ AWS Lightsail

19. April 2017 01:36 by Aaron Medacco | 0 Comments

What if you just want to host a WordPress blog or a simple website on AWS?

Maybe you don't want to learn all the tools necessary to configure your cloud environment from scratch. Perhaps you're a business owner or a web designer who knows enough to get a site running on GoDaddy, but got overwhelmed with this when you created an AWS account:

[Image: AWS Management Console]

Note: The list of services actually keeps going, but this is all I could fit in a barely readable image.

Amazon Web Services has recently released AWS Lightsail to serve this type of customer. Many hosting providers like WinHost, GoDaddy, BlueHost, etc. offer cheap hosting packages that allow you to get a simple website running quickly. They typically come with their own management console or control panel for users to manage items like DNS, domains, billing, SSL certificates, email, etc. AWS Lightsail offers a similar experience, where setup is fast, easy, and cheap. In other words, you won't have to hire an AWS expert or sink your time into learning Amazon Web Services in order to get up and running. The key to Lightsail is simplicity: you aren't overwhelmed by the flexibility and options normally thrown at you when setting up an environment in AWS.

For example, I was able to get a minimal WordPress environment running within a few clicks:

[Image: AWS Lightsail Console]

If you're used to most hosting provider consoles, this should look pretty familiar. You can see how Amazon has peeled back the complexity of the normal AWS management console in order to make Lightsail more accessible for the "less" technically-minded customer (newbs). Readers who are familiar with Amazon Web Services will identify how items like security groups and elastic IPs are presented differently in Lightsail's simplified user interface. 

For those who are curious, it appears the instances running within Lightsail are EC2 instances of the t2 instance family under the covers. Resources provisioned with Lightsail do not appear in the normal AWS management console. At the time of this writing, it does not appear that you can "graduate" your Lightsail environment to normal AWS; however, I believe this will be a common customer request, so it may become an option in the future. Instance-level firewalls (security groups), DNS, monitoring, static (elastic) IPs, and volume snapshots can all be leveraged within AWS Lightsail. However, you should think of AWS Lightsail as AWS Lite. You're not going to have all of the options available that you normally would within AWS, but the intern you hired from the local community college to make your website might be able to figure it out.

If you're interested in this service, check out Jeff Barr's launch post where he shows how easy it is to get started with AWS Lightsail.

Cheers!

Testing Your Web Tier Auto Scaling Groups on AWS w/ Artillery.io

13. April 2017 22:48 by Aaron Medacco | 0 Comments

Amazon Web Services has made adding elasticity to your system architecture easy through their offering of auto scaling groups. By following a few easy steps, you can expand and retract your infrastructure to optimally service whatever volume of traffic your applications encounter. 

But how can you test your auto scaling group on AWS? How do you know if everything is configured correctly?

You'll need to simulate specific loads of traffic in order to test that the scaling rules you've set up are correct. You can do this with an easy-to-use tool called Artillery.io, an open source load testing toolkit written in Node.js. I won't go into detail on everything Artillery.io can do, but I encourage you to go check it out. For our purposes, I'm only going to run a few simple commands to show how easy testing an auto scaling group can be.

In order to use Artillery.io, you will need Node.js. You can install it by downloading the appropriate package here.

[Image: Auto Scaling Groups]

Assuming you've installed Node.js, you can test your auto scaling group by following these steps. The installation steps for Artillery.io can also be found here.

Installing Artillery.io:

  1. Open a command prompt or terminal window.
  2. Run the following command:
    npm install -g artillery
  3. Check your installation by running this command:
    artillery dino
  4. If you see the ASCII dinosaur, that means everything is ready to go.

Testing your auto scaling group:

  1. Run the following command:
    artillery quick --duration 300 --rate 10 -n 20 http://www.yourwebsite.com/resource
    Note: You'll notice I've set my duration to 5 minutes instead of the example given by Artillery.io's documentation. This is because my auto scaling group only scales out if average CPU utilization over 5 minutes is sufficiently high. You'll need to play with these values depending on how powerful the instances in your auto scaling group are and what scaling rules you've defined.
  2. Substitute the final argument with the URL of the resource you're trying to test.
  3. If you've invoked a heavy enough load, you can monitor your EC2 instances serving the requests and notice something akin to the following:

    [Image: Test Auto Scaling Group 1]
  4. This should trigger your auto scaling group to provision additional instances provided you've set up the appropriate scale out rules:

    [Image: Test Auto Scaling Group 2]

    For this example, you can see my auto scaling group scaled up to its maximum (3 instances) during the course of the test and then scaled back down. The launch configuration for the group uses t2.micro instances.

If your auto scaling group scaled appropriately, you can rest assured that it will work under real traffic and that your configuration is sound. Make sure that the scale down rules also took effect and that once your use of Artillery.io has ended, your auto scaling group terminates the unnecessary instances.
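
If you'd rather confirm this from the command line than the console, the AWS CLI can show the group's recent scaling activity and current instance count. A quick sketch; "my-web-asg" is a placeholder for your auto scaling group's name:

    # Recent scale-out / scale-in activity for the group
    aws autoscaling describe-scaling-activities --auto-scaling-group-name my-web-asg --max-items 10

    # Current desired capacity and in-service instances
    aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names my-web-asg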

Note: Some of the inspiration for this post comes from Mike Pfeiffer's Pluralsight course AWS Certified DevOps Engineer: High Availability and Elasticity which I encourage everyone to check out here.

Cheers!

AWS Phoenix Meetup - Security Strategy When Migrating to the Public Cloud

2. April 2017 20:51 by Aaron Medacco | 0 Comments

A little over a week ago, I attended an AWS Meetup in Phoenix regarding how to approach security when migrating applications to the cloud. The event was sponsored by NextNet and Alert Logic and took place at the ASU research center. Charles Johnson of Alert Logic presented and did a fantastic job. This event being my first, I was expecting a dry lecture. What I got was an engaging discussion with a lot of very smart people. 

[Image: Meetup]

As someone who develops software, my primary takeaways involved the integration of security into the software development lifecycle, and how several of the application level frameworks are becoming the most targeted surfaces used by attackers, especially for applications in the cloud.

The movement toward tearing down the walls separating development, operations, and security, commonly referred to as DevOps or DevSecOps, was core to much of the discussion. Charles talked about how everybody involved with designing, developing, and maintaining applications in the cloud needs to take ownership of security, not just the "Security Team". Additionally, security should not be an annoying thing slapped onto an application at the end of the development lifecycle. Instead, security should be discussed early and often by application developers, system admins, and security engineers so that each piece of the application and the infrastructure powering it is designed to be secure. This means incorporating security testing alongside application unit testing from an early stage, and deciding upfront how to store API keys, login credentials, etc. as a team so they aren't exposed or hard-coded by a lazy developer. It also means constantly checking for where you might be vulnerable, and deciding how to address those vulnerabilities together.

Incorporating security into the software development lifecycle also has the benefit of reducing the amount of tools you need to use after the fact in order to feel secure. If the application is designed from the ground up with security in mind, you shouldn't need to purchase tons of security tools in order to compensate. Charles mentioned that some of the teams he's assisted have bought expensive security products, but still haven't even implemented them months after purchase. Yikes!

And just because you've bought the latest and greatest tools, don't assume you are not vulnerable. In fact, you should assume the opposite. Assume you are vulnerable. Assume that the products you purchase and the frameworks you leverage are vulnerable, and monitor for breaches all the time. Additionally, consider what products you're buying. Are they really helping you? Most security products are designed to protect server operating systems, networking, the hypervisors, or aid in cloud management. But what about the application frameworks used by the developers? The databases, server-side apps, third-party libraries, etc? Do these products help secure those? Do the developers who are most intimate with these tools have the authority to purchase security products anyways? Maybe not. And who are the sales teams for these products selling to? Probably not developers. 

Note: I wish I had the graphic of the presentation, which showed that most of the attack surface for applications living in the cloud occurred higher up, i.e. the application level. SQL Injection, XSS, etc.

Lastly, use the tools provided by AWS. Use WAF. Use Inspector. Use Shield. Use Config. 

I'm sure I've left out a lot of information. I had a hard time concentrating on the presentation while taking notes on my laptop. However, I'm definitely going to the next event. (More cookies)

For those of you in the Phoenix area, consider checking out the AWS Phoenix Meetup and Blue Team - Greater Phoenix Area if you're interested in attending these kinds of events.

Cheers!
