3. December 2017 18:42
by Aaron Medacco
0 Comments

AWS re:Invent 2017 - Day 2 Experience

The following is my Day 2 re:Invent 2017 experience. Missed Day 1? Check it out here.

Day 2, I started off by waking up around 11:30am. I needed the rest after the all-nighter drive Monday morning. The cleaning lady actually woke me up while making her rounds on the 4th floor. This meant that I missed the breakout session I had reserved, Deploying Business Analytics at Enterprise Scale with Amazon QuickSight (ABD311). I'm currently in the process of recording a Pluralsight course on Amazon QuickSight, so I felt this information could be helpful as I wrap up that course. Guess I'll have to check it out later once the sessions are uploaded to YouTube. Just another reason to never drive through the night before re:Invent again.

After getting ready, I exposed my nerd skin to sunlight and walked over to the Venetian. This is where I'd be the majority of the day. I kind of lucked out because all of my sessions for the day besides the one I slept through were in the same hotel, and pretty back-to-back, so I didn't have to get creative with my downtime.

First session of the day was Deep Dive into the New Network Load Balancer (NET304). I'd been curious about this since the Network Load Balancer was announced recently, but never had a use case or a reason to go and implement one myself.

[Photo: I have to admit, I didn't know it could route to IP addresses.]

[Photo: Should have picked a better seat.]

[Photo: Putting it all together.]

The takeaways I got were that the NLB is essentially your go-to option for TCP traffic at scale, while for web applications you'd still mostly be using the Application Load Balancer or the Classic Load Balancer. The fact that it's 25% cheaper than the ALB seems significant, and it uses the same kinds of components as the ALB, like targets, target groups, and listeners. Additionally, it supports routing to ECS, EC2, and external IPs, as well as allowing for a static IP per Availability Zone.
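For reference, here's a minimal CloudFormation sketch (my own, not from the session) of what an internet-facing NLB with a static IP per Availability Zone might look like; the subnet and EIP allocation IDs are placeholders:

```json
"NetworkLoadBalancer": {
    "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
    "Properties": {
        "Name": "Network-Load-Balancer",
        "Type": "network",
        "Scheme": "internet-facing",
        "SubnetMappings": [
            { "SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-bbbb2222" },
            { "SubnetId": "subnet-cccc3333", "AllocationId": "eipalloc-dddd4444" }
        ]
    }
}
```

The SubnetMappings property is how you pin an Elastic IP to the NLB in each zone, which the ALB doesn't offer.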

As I was walking out of the session, there was a massive line hugging the wall of the hall and around the corner for the next session, which I had a reserved seat for (thank god). That session was Running Lean Architectures: How to Optimize for Cost Efficiency (ARC303). Nerds would have to squeeze in and cuddle for this one; the session was full.

[Photo: Before the madness.]

[Photo: Wasn't totally full, but pretty full.]

[Photo: People still filing in.]

[Photo: Obvious, but relevant slide.]

I had some mixed feelings about this session, but thought it was overall solid. On one hand, much of the information was definitely important for AWS users to save money on their monthly bill, but at the same time, I felt a lot of it was fairly obvious to anyone using AWS. For instance, I have to imagine everybody knows they should be using Reserved Instances. I feel like any potential or current user of AWS would have read about pricing thoroughly before even considering moving to Amazon Web Services as a platform, but perhaps I'm biased. There were a fair number of managers in the session and at re:Invent in general, so maybe they're not aware of obvious ways to save money. 

Aside from covering Spot and Reserved Instance use cases, there was some time spent on Convertible Reserved Instances, which are still fairly new. I did enjoy the tips and tricks on reducing Lambda costs: looking for ways to cut down on function idle time, using Step Functions instead of manual sleep calls, and migrating smaller applications into ECS instead of running each application on its own instance. The Lambda example highlighted that many customer functions make several API calls sequentially, each call waiting on the prior one to finish. This can rack up billed run time even though Lambda isn't actually performing work during those waiting periods. The trick they suggested was essentially to shotgun the requests all at once instead of one-by-one, but as I thought about it, that only works when the calls don't depend on each other's results. When they brought up Step Functions, it was kind of obvious you could just use that service for the dependent case, though.
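That trick is easy to demonstrate outside of Lambda. Here's a minimal Node.js sketch (not from the session; the call names and latencies are invented stand-ins for real API calls) of sequential vs. parallel awaits:

```javascript
// Simulate an API call that resolves with its label after a fixed latency.
const simulateApiCall = (label, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(label), ms));

// Sequential: billed duration is roughly the SUM of the latencies.
async function sequential() {
  const users = await simulateApiCall("users", 50);
  const orders = await simulateApiCall("orders", 50);
  const inventory = await simulateApiCall("inventory", 50);
  return [users, orders, inventory];
}

// Parallel: billed duration is roughly the SLOWEST single call.
// Only valid when no call depends on another's result.
function parallel() {
  return Promise.all([
    simulateApiCall("users", 50),
    simulateApiCall("orders", 50),
    simulateApiCall("inventory", 50),
  ]);
}

async function main() {
  let start = Date.now();
  await sequential();
  const seqMs = Date.now() - start;

  start = Date.now();
  const results = await parallel();
  const parMs = Date.now() - start;

  console.log(results, { seqMs, parMs }); // parMs lands well under seqMs
  return { results, seqMs, parMs };
}

main();
```

With three 50ms calls, the sequential version takes about 150ms of billed time while the parallel one takes about 50ms, which is the whole point of the tip.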

The presenters did a great job, and kept highlighting the need to move to a "cattle" mentality instead of a "pet" mentality when thinking about your cloud infrastructure. Essentially, they encouraged moving away from manual pushes, RDPing into instances, and naming them things like "Smeagle". Honestly, a lot of no-brainers and elementary information, but still a good session.

Had some downtime after to go get something to eat. Grabbed the Baked Rigatoni from the Grand Lux Cafe in the Venetian. The woman who directed me to it probably thought I was crazy. I was in a bit of a rush, and basically attacked her with, "OMG, where is food!? Help me!"

Overall, 7/10. Wasn't very expensive and now that I think about it, my first actual meal (some cold pizza slices don't count) since getting to Vegas. 

Next up was ElastiCache Deep Dive: Best Practices and Usage Patterns (DAT305) inside the Venetian Theatre. I was excited about this session since I haven't done much of anything with ElastiCache in practice, but I know some projects running on AWS that leverage it heavily. 

[Photo: About 10 minutes before getting mind-pwned.]

[Photo: Random guy playing some Hearthstone while waiting for the session to begin.]

[Photo: Sick seat.]

Definitely felt a little bit out of my depth on this one. I'm not someone who is familiar with Redis beyond just knowing what it does, so I was clueless during some of the session. I didn't take notes, but I recall a lot of talk about re-sharding clusters, the old backup-and-recovery method vs. the new online managed re-sharding, the pros of enabling cluster mode (I was clueless; it made sense at the time, but I couldn't explain it to someone else), security, and best practices. My favorite part of the session involved use cases and how ElastiCache can benefit them: IoT, gaming apps, chat apps, rate limiting services, big data, geolocation or recommendation apps, and ad tech. Again, I was out of my depth, but I'll be taking a closer look at this service during 2018 to fix that.

After some more Frogger in the hallways, headed over to the Expo, grabbed some swag and walked around all the vendor booths. Food and drinks were provided by AWS and the place was a lot bigger than I expected. There's a similar event occurring at the Aria (the Quad), which I'll check out later in the week. 

[Photo: Wall in front of the Expo by Registration.]

There were AWS experts and team members involved with just about everything AWS scattered around to answer attendee questions, which I thought was freaking awesome. Valuable face-time with the actual people who work on and know about the stuff being developed.

[Photo: Favorite section of the Venetian Expo.]

[Photo: More madness.]

Talked to the guys at the new PrivateLink booth to ask if QuickSight was getting an "endpoint" or a method for connecting to private AWS databases soon. Ended up getting the definitive answer from the Analytics booth, which had Quinton Alsbury at it. Once I saw him there, I'm like, "Oh, here we go, this guy is the guy!" Apparently, the feature's recently been put into public preview, which I somehow missed. Visited a few other AWS booths like Trusted Advisor and the Partner Network one, and then walked around all the vendor booths for a bit.

Unfortunately, I didn't have much time to chit-chat with a lot of them since the Expo was closing soon. I'll have to do more of that at the Quad. Walked over to the interactive search map they had towards the side of the room to look for a certain company I thought might be there. Sure enough, found something familiar:

[Photo: A wild technology learning company appears.]

Spoke with Dan Anderegg, who's the Curriculum Manager for AWS within Pluralsight. After some talk about Pluralsight path development, I finished my beer and got out, only to find I had actually stayed too long and was already super late to my final session of the day, Deep Dive on Amazon Elastic Block Store (Amazon EBS) (STG306). Did I mention how hard it is to do everything you want to at re:Invent?

Ended up walking home and wanting to just chill out, which is how this post is getting done. 

Cheers!

22. April 2017 23:50
by Aaron Medacco
0 Comments

Using Nested Stacks w/ AWS CloudFormation

When describing your cloud infrastructure using AWS CloudFormation, your templates can become large and difficult to manage as your desired stacks grow. CloudFormation allows you to nest templates, giving you the ability to break different chunks of your infrastructure into smaller modules. For instance, suppose you have several templates that involve creating an elastic load balancer. Rather than copying and pasting the same JSON between templates, you can write one template for provisioning the load balancer and then reference that template in "parent" templates that require it.

This has several advantages. Code concerning the load balancer is consolidated in one place, so when changes need to be made to its configuration, you don't need to revisit each template where you copied the code at one point in time. This saves you both time and grief by removing the chance that human error leaves one template's ELB different from another's when they should be identical. It also enhances your ability to develop and test your CloudFormation templates. When writing templates, it's common to make incremental changes to the JSON, create a stack from the template to validate structure and behavior, tear down the stack, and rinse / repeat. For large templates, provisioning and deleting stacks will slow you down as you wait for feedback. Smaller, more modular templates allow more focused testing and provide feedback in faster iterations.

In this post, I'll share two CloudFormation templates: one that provisions an application load balancer, and one that creates a hosted zone with a record set pointing to the load balancer being provisioned. I'll do this by nesting the template for the load balancer inside the template that creates a Route 53 hosted zone. Keep in mind, I won't be setting up listeners or target groups for the load balancer. I'm only demonstrating how to nest CloudFormation templates.

Here is my template for provisioning an application load balancer without any configuration:

Note: You can download this template here.

{
    "AWSTemplateFormatVersion": "2010-09-09",
	"Description": "Simple Load Balancer",
	"Parameters": {
		"VPC": {
			"Type": "AWS::EC2::VPC::Id",
			"Description": "VPC for the load balancer."
		},
		"PublicSubnet1": {
			"Type": "AWS::EC2::Subnet::Id",
			"Description": "First public subnet."
		},
		"PublicSubnet2": {
			"Type": "AWS::EC2::Subnet::Id",
			"Description": "Second public subnet."
		}
	},
	"Resources": {
		"ElasticLoadBalancer": {
			"Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
			"Properties" : {
				"Name": "Load-Balancer",
				"Scheme": "internet-facing",
				"Subnets": [ {"Ref": "PublicSubnet1"}, {"Ref": "PublicSubnet2"} ],
				"Tags": [ { "Key": "Name", "Value": "CloudFormation Load Balancer" } ]
			}
		}
	},
	"Outputs": {
		"LoadBalancerDNS": {
			"Description": "Public DNS For Load Balancer",
			"Value": { "Fn::GetAtt": [ "ElasticLoadBalancer", "DNSName" ] }
		},
		"LoadBalancerHostedZoneID": {
			"Description": "Canonical Hosted Zone ID of load balancer.",
			"Value": { "Fn::GetAtt": [ "ElasticLoadBalancer", "CanonicalHostedZoneID" ] } 
		}
	}
}

Notice that some networking-related parameters are being asked for. In this case, I'm requesting two public subnets for the ELB since I intend to have it exposed to the internet. I've also defined some outputs: one is the public DNS name for the load balancer I'm creating, and the other is its canonical hosted zone ID. These values will come in handy when my parent template sets up an A record pointing to the newly made ELB.

The following is the template for provisioning a hosted zone given a domain name:

Note: You can download this template here.

{
    "AWSTemplateFormatVersion": "2010-09-09",
	"Description": "Hosted zone with an alias record pointing to a nested load balancer.",
	"Parameters": {
		"Domain": {
			"Type": "String",
			"Description": "Domain serviced by load balancer."
		},
		"VPC": {
			"Type": "AWS::EC2::VPC::Id",
			"Description": "VPC for the load balancer."
		},
		"PublicSubnet1": {
			"Type": "AWS::EC2::Subnet::Id",
			"Description": "First public subnet."
		},
		"PublicSubnet2": {
			"Type": "AWS::EC2::Subnet::Id",
			"Description": "Second public subnet."
		}
	},
	"Resources": {
		"HostedZone": {
			"Type": "AWS::Route53::HostedZone",
			"Properties": {
				"Name": { "Ref": "Domain" }
			}
		},
		"HostedZoneRecords": {
			"Type": "AWS::Route53::RecordSetGroup",
			"Properties": {
				"HostedZoneId": { "Ref": "HostedZone" },
				"RecordSets": [{
					"Name": { "Ref": "Domain" },
					"Type": "A",
					"AliasTarget": {
						"DNSName": { "Fn::GetAtt": [ "LoadBalancerStack", "Outputs.LoadBalancerDNS" ]},
						"HostedZoneId": { "Fn::GetAtt": [ "LoadBalancerStack", "Outputs.LoadBalancerHostedZoneID" ]}
					}
				}]
			}
		},
		"LoadBalancerStack": {
			"Type": "AWS::CloudFormation::Stack",
			"Properties": {
				"Parameters": {
					"VPC": { "Ref": "VPC" },
					"PublicSubnet1": { "Ref": "PublicSubnet1" },
					"PublicSubnet2": { "Ref": "PublicSubnet2" }
				},
				"TemplateURL": "https://s3.amazonaws.com/cf-templates-1bc7bmahm5ald-us-east-1/loadbalancer.json"
			}
		}
	}
}

You can see I've added a parameter asking for a domain in addition to the values required by the previous template. Then, in the Resources section of the template, I create an AWS::CloudFormation::Stack that references the S3 location where my nested template is stored and passes along the parameters required to invoke it. In the section where I define my hosted zone records, I need to know the DNS name and canonical hosted zone ID of the application load balancer. These are retrieved by referencing the outputs returned by the nested template.

Creating a CloudFormation stack using the parent template, I now have a Route 53 hosted zone for the input domain pointing to the newly created load balancer. From here, I could reference the load balancer template in any number of templates requiring it without bloating each of them with pasted JSON. The next step would be to create listeners and target groups with EC2 instance targets, but that is a separate exercise.

Cheers!

4. February 2017 23:37
by Aaron Medacco
0 Comments

Pluralsight Audition on AWS Private Hosted Zones

It's been a while since my last post. Had quite a schedule the last couple weeks. One of the more important priorities has been recording my audition for Pluralsight authorship. I ended up changing my topic twice, as the 10-minute time limit was definitely not sufficient for my initial ideas. Eventually, I decided to stick to something simple that most AWS professionals would already know: setting up a private hosted zone. Some takeaways from participating in the audition process:

  • Camtasia has some bugs, and isn't always the most intuitive program, at least in my opinion. While editing my recordings, I was reminded of when I first learned Photoshop. "Where is this at?", "Why doesn't this do what I want it to?", "Why would they put this tool over here, are you kidding me?". Thankfully, TechSmith has provided several tutorials for common video editing tasks.
  • Write a script first. Do not just wing it. Know exactly what you will say before you record. Otherwise you will "um", "so", and awkward pause your way through each take.
  • Record audio first, then do the video. My Blue Yeti microphone was capturing all the noise from my mechanical keyboard, mouse, and any shuffling I may have done while recording demos, which was distracting from the material, forcing me to do it over.
  • Don't talk forever in front of slides. Even the final audition was guilty of this. Something I need to improve on.
  • Record each audio segment the same distance from the microphone.
  • Demos are fun, except when the audio moves too fast for the on-screen demo to keep up. Need to get better at using Camtasia so I can just speed up the visual when this happens.

Looking forward to getting feedback.

Update: This audition was accepted by the Pluralsight team.

Cheers!

13. January 2017 23:37
by Aaron Medacco
0 Comments

Creating Private DNS Zones w/ AWS Route 53

It's common to have private DNS in place so you don't have to memorize the IP addresses for your internal resources. Amazon makes this pretty simple within Route 53.

In this post, I'll be creating a domain name of "zeal.", and adding a record set for the "furyand" sub-domain. Once done, I should be able to receive a response using ping between two instances in the same VPC for the "furyand.zeal" DNS name. At the moment, if I try to send packets to "furyand.zeal" using ping, it has no idea what I'm talking about.

[Screenshot: ping to "furyand.zeal" failing to resolve]

Before we get started, you'll need to enable DNS support for your VPC by enabling the "DNS Resolution" and "DNS Hostnames" settings.

Creating the private hosted zone:

  1. Navigate to Route 53 in your management console.
  2. Select "Hosted Zones" in the sidebar.
  3. Click "Create Hosted Zone".
  4. On the right hand side, enter the domain name for your private zone and a comment if necessary.
  5. Select "Private Hosted Zone for Amazon VPC" as the Type.
  6. Select the VPC identifier of the VPC you'd like the hosted zone to apply to.

At this point you should see two record types, NS and SOA, already created for you.

Creating a record set:

  1. In the dashboard of the hosted zone you just created, click "Create Record Set".
  2. On the right hand side, enter the sub-domain. (In my case "furyand")
  3. Keep "A - IPv4 address" as the type for this example.
  4. Select "No" for alias.
  5. Set the TTL (Seconds) to whatever value suits your use case.
  6. Enter the Private IP address of the instance you want to have a DNS name in the Value field.
  7. Leave the Routing Policy as "Simple" and click "Create".
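Once created, the record set above would look roughly like this in Route 53's API representation (the IP and TTL are example values):

```json
{
    "Name": "furyand.zeal.",
    "Type": "A",
    "TTL": 300,
    "ResourceRecords": [
        { "Value": "10.0.0.25" }
    ]
}
```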

You might have to wait a little bit for the record sets to take effect, but once they do...

[Screenshot: ping to "furyand.zeal" receiving replies]

If you are receiving request time out responses after taking these steps, make sure the security groups for your instances are allowing ICMP traffic appropriately. If that appears correct, you might also check if Windows Firewall is disabling "File and Printer Sharing (Echo Request - ICMPv4-In)". You'll need to allow that for ping to work.

Cheers!

3. January 2017 21:13
by Aaron Medacco
2 Comments

Automating Backups of Your Route 53 Hosted Zone DNS Records

Not too long ago I was editing entries in a Route 53 hosted zone and wondered what would happen if the record sets for the zones were lost. A pretty epic disaster would need to occur for you to somehow lose your DNS record sets. Maybe someone accidentally gets rid of a zone believing it to no longer be necessary, or perhaps someone configures IAM permissions incorrectly and a disgruntled employee who notices he has access wreaks havoc on your DNS before he finds the door. Either way, without your DNS, your perfectly designed system architecture might as well be driftwood. Therefore, I created a serverless method for storing backups of all Route 53 hosted zone records in S3, just in case:

[Diagram: Route 53 backup architecture]

Creating the S3 bucket:

  1. Navigate to S3 in your management console.
  2. Click "Create Bucket".
  3. Enter an appropriate name and select a region.
  4. Click "Create".

Creating an IAM policy for the first Lambda function's role:

  1. Navigate to IAM in your management console.
  2. Select "Policies" in the sidebar.
  3. Click "Create Policy".
  4. Select "Create Your Own Policy".
  5. Enter an appropriate policy name and description.
  6. Paste the following JSON into the policy document:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "route53:ListResourceRecordSets"
                ],
                "Resource": [
                    "*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject"
                ],
                "Resource": [
                    "Your Bucket ARN"
                ]
            }
        ]
    }
  7. Substitute "Your Bucket ARN" with the ARN for the S3 bucket you created. Make sure you add "/*" after the bucket ARN. For instance, if your bucket ARN was "arn:aws:s3:::furyandzealbrothers", you would use "arn:aws:s3:::furyandzealbrothers/*".
  8. Click "Create Policy".

Creating the IAM role for the first Lambda function:

  1. Select "Roles" in the sidebar.
  2. Click "Create New Role".
  3. Enter an appropriate role name and click "Next Step".
  4. Select "AWS Lambda" within the AWS Service Roles.
  5. Change the filter to "Customer Managed", check the box of the policy you just created, and click "Next Step".
  6. Click "Create Role".

Creating the first Lambda function:

  1. Navigate to Lambda in your management console.
  2. Click "Create a Lambda function".
  3. Select the "Blank Function" blueprint.
  4. Click "Next".
  5. Enter an appropriate function name and description. Select Node.js for the runtime.
  6. Under "Lambda function code", select "Edit code inline" for the Code entry type and paste the following code in the box:

    var AWS = require("aws-sdk");
    
    exports.handler = (event, context, callback) => {
        var route53 = new AWS.Route53();
        var id = event.id;
        var name = event.name;
        var recordParams = { HostedZoneId: id };
        route53.listResourceRecordSets(recordParams, function(err, data){
            if (err) {
                console.log(err, err.stack);
            }
            else {
                console.log(JSON.stringify(data));
                var records = [];
                for (var j = 0; j < data.ResourceRecordSets.length; j++){
                    records.push(data.ResourceRecordSets[j]);
                }
                var zone = { id:id, name:name, records:records };
                uploadBackupToS3(zone);
            }
        });
    };
    
    var uploadBackupToS3 = function(data) {
        var s3 = new AWS.S3();
        var bucket = "Your Bucket Name";
        var timeStamp = Date.now();
        var key = data.name + "_" + data.id.replace(/\//g, '').replace("hostedzone", '') + "_backup_" + timeStamp;
        key = key.replace(/[.]/g, "_");
        var body = JSON.stringify(data);
        var param = { Bucket: bucket, Key: key, Body: body, ContentType: "text/plain", StorageClass: "STANDARD_IA" };
        s3.upload(param, function(err, data) {
            if (err){
                console.log(err, err.stack);
            } else{
            console.log("Route 53 backup successful.");
            }
        });
    };
  7. Substitute "Your Bucket Name" with the name of the bucket you created earlier.
  8. Leave Handler as "index.handler".
  9. Choose to use an existing role and select the IAM role you created earlier.
  10. Leave the other default values and click "Next".
  11. Click "Create function".
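Before wiring up the scheduler, you can smoke-test this first function from the Lambda console with a test event shaped like the payload the second function will send it. The zone ID and name here are made-up placeholders:

```json
{
    "id": "/hostedzone/Z1EXAMPLE12345",
    "name": "example.com."
}
```

The leading "/hostedzone/" prefix matters, since the function strips it when building the S3 object key.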

Creating an IAM policy for the second Lambda function's role:

  1. Navigate to IAM in your management console.
  2. Select "Policies" in the sidebar.
  3. Click "Create Policy".
  4. Select "Create Your Own Policy".
  5. Enter an appropriate policy name and description.
  6. Paste the following JSON into the policy document:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "route53:ListHostedZones"
                ],
                "Resource": [
                    "*"
                ]
            },
            {
                "Action": [
                    "lambda:InvokeFunction"
                ],
                "Effect": "Allow",
                "Resource": "Your Lambda Function ARN"
            }
        ]
    }
  7. Substitute "Your Lambda Function ARN" with the ARN of the lambda function you created earlier and click "Create Policy".

Creating the IAM role for the second Lambda function:

  1. Select "Roles" in the sidebar.
  2. Click "Create New Role".
  3. Enter an appropriate role name and click "Next Step".
  4. Select "AWS Lambda" within the AWS Service Roles.
  5. Change the filter to "Customer Managed", check the box of the second policy you just created, and click "Next Step".
  6. Click "Create Role".

Creating the second Lambda function:

  1. Navigate to Lambda in your management console.
  2. Click "Create a Lambda function".
  3. Select the "Blank Function" blueprint.
  4. Under "Configure triggers", click the grey box and select "CloudWatch Events - Schedule".
  5. Enter an appropriate rule name and description.
  6. Select the frequency you'd like Lambda to backup your Route 53 hosted zone records in the Schedule expression input. I chose "rate(30 days)" for my usage.
  7. Check the box to "Enable trigger" and click "Next".
  8. Enter an appropriate function name and description. Select Node.js for the runtime.
  9. Under "Lambda function code", select "Edit code inline" for the Code entry type and paste the following code in the box:

    var AWS = require("aws-sdk");
    
    exports.handler = (event, context, callback) => {
        var route53 = new AWS.Route53();
        var lambda = new AWS.Lambda();
        var params = {};
        route53.listHostedZones(params, function(err, data){
            if (err) {
                console.log(err, err.stack);
            } 
            else {
                for (var i = 0; i < data.HostedZones.length; i++) {
                    var id = data.HostedZones[i].Id;
                    var name = data.HostedZones[i].Name;
                    var payload = { id:id, name:name };
                    var lambdaParams = {
                        FunctionName: "Your Lambda Function Name", 
                        InvocationType: "Event",
                        Payload: JSON.stringify(payload)
                    };
                    lambda.invoke(lambdaParams, function(err, data) {
                        if (err) {
                            console.log(err, err.stack);
                        }
                        else {
                            console.log(data);  
                        }
                    });
                }
            }
        });
    };
  10. Substitute "Your Lambda Function Name" with the name of the first lambda function you created earlier.
  11. Leave Handler as "index.handler".
  12. Choose to use an existing role and select the second IAM role you created earlier.
  13. Leave the other default values and click "Next".
  14. Click "Create function".

Depending on how frequently you schedule the backups, you might also want to configure a lifecycle policy in S3 to archive or delete them after a period of time.
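For example, a lifecycle configuration along these lines (a sketch; it assumes the bucket holds nothing but these backups, so the empty prefix applies the rule to every object) would expire backups after a year:

```json
{
    "Rules": [
        {
            "ID": "ExpireOldRoute53Backups",
            "Status": "Enabled",
            "Filter": { "Prefix": "" },
            "Expiration": { "Days": 365 }
        }
    ]
}
```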

Cheers!

Copyright © 2016-2017 Aaron Medacco