3. December 2017 18:42
by Aaron Medacco
0 Comments

AWS re:Invent 2017 - Day 2 Experience


The following is my Day 2 re:Invent 2017 experience. Missed Day 1? Check it out here.

Day 2, I started off by waking up around 11:30am. Needed the rest from the all-nighter drive Monday morning. The cleaning lady actually woke me up when she was making her rounds on the 4th floor. This meant that I missed the breakout session I had reserved, Deploying Business Analytics at Enterprise Scale with Amazon QuickSight (ABD311). I'm currently in the process of recording a Pluralsight course on Amazon QuickSight, so I felt this information could be helpful as I wrap up that course. Guess I'll have to check it out later once the sessions are uploaded to YouTube. Just another reason to never drive through the night before re:Invent again.

After getting ready, I exposed my nerd skin to sunlight and walked over to the Venetian. This is where I'd be the majority of the day. I kind of lucked out because all of my sessions for the day, besides the one I slept through, were in the same hotel and pretty back-to-back, so I didn't have to get creative with my downtime. 

First session of the day was Deep Dive into the New Network Load Balancer (NET304). I was curious about this since the network load balancer's announcement recently, but never had a use case or a reason to go and implement one myself. 

AWS re:Invent 2017

I have to admit, I didn't know it could route to IP addresses.

AWS re:Invent 2017

Should have picked a better seat.

AWS re:Invent 2017

Putting it all together.

The takeaways I got were that the NLB is essentially your go-to option for TCP traffic at scale, while for web applications you'd still mostly be using the Application Load Balancer or the Classic Load Balancer. The fact that it's 25% cheaper than the ALB seems significant, and it uses the same kinds of components as the ALB: targets, target groups, and listeners. Additionally, it supports routing to ECS, EC2, and external IPs, as well as allowing for a static IP per availability zone.
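As a quick sketch of what that looks like when scripted, here are the boto3 parameter shapes for an NLB with IP targets. The names and IDs below are hypothetical, and the actual API calls are left commented out:

```python
# Parameter shapes for an NLB that routes to IP addresses (sketch only).
# All names and resource IDs here are hypothetical placeholders.
nlb_params = {
    "Name": "demo-nlb",                 # hypothetical name
    "Type": "network",                  # NLB rather than ALB ("application")
    "Scheme": "internet-facing",
    "Subnets": ["subnet-11111111"],     # hypothetical subnet ID
}
target_group_params = {
    "Name": "demo-tcp-targets",
    "Protocol": "TCP",                  # NLBs balance TCP traffic
    "Port": 443,
    "VpcId": "vpc-22222222",            # hypothetical VPC ID
    "TargetType": "ip",                 # route to IPs, not just instances
}
# client = boto3.client("elbv2")
# client.create_load_balancer(**nlb_params)
# client.create_target_group(**target_group_params)
print(target_group_params["TargetType"])  # ip
```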

As I was walking out of the session, there was a massive line hugging the wall of the hall and around the corner for the next session, which I had a reserved seat for (thank god). That session was Running Lean Architectures: How to Optimize for Cost Efficiency (ARC303). Nerds would have to squeeze in and cuddle for this one; the session was full. 

AWS re:Invent 2017

Before the madness.

AWS re:Invent 2017

Wasn't totally full, but pretty full.

AWS re:Invent 2017

People still filing in.

AWS re:Invent 2017

Obvious, but relevant slide.

I had some mixed feelings about this session, but thought it was overall solid. On one hand, much of the information was definitely important for AWS users to save money on their monthly bill, but at the same time, I felt a lot of it was fairly obvious to anyone using AWS. For instance, I have to imagine everybody knows they should be using Reserved Instances. I feel like any potential or current user of AWS would have read about pricing thoroughly before even considering moving to Amazon Web Services as a platform, but perhaps I'm biased. There were a fair number of managers in the session and at re:Invent in general, so maybe they're not aware of obvious ways to save money. 

Aside from covering Spot and Reserved Instance use cases, some time was spent on Convertible Reserved Instances, which are still fairly new. I did enjoy the tips and tricks given on reducing Lambda costs by looking for ways to cut down on function idle time, using Step Functions instead of manual sleep calls, and migrating smaller applications into ECS instead of running each application on its own instance. The Lambda example highlighted that many customer functions involve several calls to APIs which occur sequentially, where each call waits on the prior one to finish. This can rack up billed run time even though Lambda isn't actually performing work during those waiting periods. The trick they suggested was essentially to shotgun the requests all at once instead of one-by-one, but as I thought about it, that would only work if those API calls didn't depend on one another's results. When they brought up Step Functions, it was kind of obvious you could just use that service in that case, though.
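The parallelization trick is easy to sketch outside of Lambda. Assuming three independent API calls (simulated here with sleeps), issuing them through a thread pool means the function waits roughly as long as the slowest call rather than the sum of all of them:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_api(name, delay=0.2):
    """Stand-in for a blocking HTTP call to an external API."""
    time.sleep(delay)
    return f"{name}: ok"

def handler_sequential():
    # Each call waits on the prior: billed duration ~ sum of the delays.
    return [call_api(n) for n in ("users", "orders", "inventory")]

def handler_parallel():
    # Independent calls issued at once: billed duration ~ the slowest call.
    with ThreadPoolExecutor(max_workers=3) as pool:
        return list(pool.map(call_api, ("users", "orders", "inventory")))
```

`pool.map` preserves input order, so both handlers return the same results; only the wall-clock (and therefore billed) time differs. If the calls do depend on each other, this doesn't apply, and that's where Step Functions comes in.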

The presenters did a great job, and kept highlighting the need to move to a "cattle" mentality instead of a "pet" mentality when thinking about your cloud infrastructure. Essentially, they encouraged moving away from manual pushing, RDPing, naming instances things like "Smeagle", and the like. Honestly, a lot of no-brainers and elementary information, but still a good session.

Had some downtime after to go get something to eat. Grabbed the Baked Rigatoni from the Grand Lux Cafe in the Venetian. The woman who directed me to it probably thought I was crazy. I was in a bit of a rush, and basically attacked her with, "OMG, where is food!? Help me!"

AWS re:Invent 2017

Overall, 7/10. Wasn't very expensive and now that I think about it, my first actual meal (some cold pizza slices don't count) since getting to Vegas. 

Next up was ElastiCache Deep Dive: Best Practices and Usage Patterns (DAT305) inside the Venetian Theatre. I was excited about this session since I haven't done much of anything with ElastiCache in practice, but I know some projects running on AWS that leverage it heavily. 

AWS re:Invent 2017

About 10 minutes before getting mind-pwned.

AWS re:Invent 2017

Random guy playing some Hearthstone while waiting for the session to begin.

AWS re:Invent 2017

Sick seat.

Definitely felt a little bit out of my depth on this one. I'm not someone who is familiar with Redis outside of just knowing what it does, so I was clueless during some of the session. I didn't take notes, but I recall a lot of talk about re-sharding clusters, the old backup-and-recovery method vs. the new online managed re-sharding, the pros of enabling cluster mode (I was clueless; it made sense at the time, but I couldn't explain it to someone else), security, and best practices. My favorite part of the session involved use cases and how ElastiCache can benefit them: IoT, gaming apps, chat apps, rate limiting services, big data, geolocation or recommendation apps, and Ad Tech. Again, I was out of my depth, but I'll be taking a closer look at this service during 2018 to fix that.

After some more Frogger in the hallways, headed over to the Expo, grabbed some swag and walked around all the vendor booths. Food and drinks were provided by AWS and the place was a lot bigger than I expected. There's a similar event occurring at the Aria (the Quad), which I'll check out later in the week. 

AWS re:Invent 2017

Wall in front of the Expo by Registration.

There were AWS experts and team members involved with just about everything AWS scattered around to answer attendee questions, which I thought was freaking awesome. Valuable face-time with the actual people that work on and know about the stuff being developed.

AWS re:Invent 2017

Favorite section of the Venetian Expo.

AWS re:Invent 2017

More madness.

Talked to the guys at the new PrivateLink booth to ask if QuickSight was getting an "endpoint" or a method for connecting private AWS databases soon. Ended up getting the de facto answer from the Analytics booth, which had Quinton Alsbury at it. Once I saw him there, I'm like, "Oh here we go, this guy is the guy!" Apparently, the feature was recently put into public preview, which I somehow missed. Visited a few other AWS booths like Trusted Advisor and the Partner Network one, and then walked around all the vendor booths for a bit.

AWS re:Invent 2017

Unfortunately, I didn't have much time to chit chat with a lot of them since the Expo was closing soon. I'll have to do so more at the Quad. Walked over to the interactive search map they had towards the sides of the room to look for a certain company I thought might be there. Sure enough, found something familiar:

AWS re:Invent 2017

A wild technology learning company appears.

Spoke with Dan Anderegg, who's the Curriculum Manager for AWS within Pluralsight. After some talk about Pluralsight path development, I finished my beer and got out, only to find I had actually stayed too long and was already super late to my final session of the day, Deep Dive on Amazon Elastic Block Store (Amazon EBS) (STG306). Did I mention how hard it is to do everything you want to at re:Invent? 

Ended up walking home and wanting to just chill out, which is how this post is getting done. 

Cheers!

1. December 2017 17:26
by Aaron Medacco
0 Comments

AWS re:Invent 2017 - Day 1 Experience


Welcome to the first in a series of blog posts detailing my experience at AWS re:Invent 2017. If you're someone who is considering going to an AWS re:Invent conference, hopefully what follows will give you a flavor for what you can expect should you choose to fork over the cash for a ticket. The following content contains my personal impressions and experience, and may not (probably doesn't?) reflect the typical experience. Also, there will be some non-AWS fluff, as I have not been to Las Vegas before.

AWS re:Invent 2017

My adventure starts at about midnight. Yes, midnight. Living in Scottsdale, AZ, I figured, "Why not just drive instead of fly? After all, it's only a 6 hour drive and there won't be any traffic in the middle of the night." While that was true, what a mistake in retrospect. Arriving in Las Vegas with hardly any sleep after the road trip left me in pretty ragged shape for Monday's events. Next year, I'll definitely be flying and will get there on Sunday so I can get settled prior to Monday. I actually arrived so early I couldn't check into my room and needed to burn some time. What better activity to do when exhausted than sit down at the poker tables? Lost a quick $900 in short order. Hahaha! Truth be told, I got "coolered" back to back, but I probably played bad, too.

Once I got checked into my room at the Bellagio around 9:00am, I headed back to the Aria to get registered and pick up my re:Invent hoodie. Unfortunately, they didn't have my size; they only had up to a Small. I couldn't help but smile about that. I ended up going to the Venetian later to exchange my Small for a Medium. Anyways, got my badge, ready to go! Or was I?

By the way, kudos to the Bellagio for putting these in every room. Forgot my phone charger. Well, the correct phone charger at least...

 AWS re:Invent 2017

...except it didn't have a charger compatible with my Samsung Galaxy S8. Kind of funny, but I wasn't laughing. Alright, maybe a little. Would end up getting one at a Phone store among one of the malls at the Strip. Oh yeah, and I also forgot to buy a memory card for my video recorder prior to leaving. Picked up one of those from a Best Buy Express vending machine. Vegas knows.

By this time I was crashing. Came back to my room, fell asleep, and missed 2 breakout sessions I was reserved for. Great job, Aaron! Off to a great start! 

Walked to the Aria to go check out the Certification Lounge. They had tables set up, food and drink, and some goodies available depending on what certifications you'd achieved. The registration badges have indicators on them that tell people whether you're AWS certified, which they use to allow or deny access. I didn't end up staying too long, but there were a decent number of attendees with laptops open, working and networking. Here are some of the things I collected this year by walking around to the events: 

AWS re:Invent 2017

The re:Invent hoodie picked up at Registration (left) and the certification t-shirt inside the Certification Lounge (right).

AWS re:Invent 2017

Water bottle and AWS pins were given away at the Venetian Expo (top-left), badge and info packet at Registration (right), and the certification stickers at the Certification Lounge depending on which ones you've completed (bottom-left).

Headed over to the MGM Grand for my first breakout session, GPS: Anti Patterns: Learning From Failure (GPSTEC302). Before I discuss the session, I have to talk about something I severely underestimated about re:Invent. Walking! My body was definitely NOT ready. And I'm not an out-of-shape or big guy, either. The walking is legit! I remember tweeting about what I imagined would be my schedule weeks before re:Invent and Eric Hammond telling me I was being pretty optimistic about what I would actually be able to attend. No joke. Okay, enough of my complaining.

AWS re:Invent 2017

Waiting for things to get started.

AWS re:Invent 2017

Session about half-full. Plenty of room to get comfortable.

AWS re:Invent 2017

Presenter's shirt says, "got root?". Explaining methods for ensuring account resource compliance and using AWS account best practices when it comes to logging, backups, and fast reaction to nefarious changes.

This was an excellent session. The presenters were fantastic and poked fun at mistakes they themselves have made, or that customers they've talked to have made, regarding automation (or lack thereof), compliance, and just overall bone-headedness (is that a word?). The big takeaways I found were to consider using services like CloudWatch, CloudTrail, and Config to monitor and log activity in your AWS accounts so you become aware when stupid raises its ugly head. They threw out questions like, "What would happen if the root account's credentials were compromised and you didn't know about it until it was too late?", and "You have an automated process for creating backups, but do you actually test those backups?". From this came suggestions to regularly store and test backups in another account in case an account gets compromised, and to use things like MFA, especially for root and privileged users.

Additionally, the presenters made a good argument for not using the management console for activities once you become more familiar with AWS, particularly if you're leveraging the automation tools AWS provides like OpsWorks and CloudFormation, as that kind of manual mucking around via the console can leave stacks deployed with those services in funny states. Along those lines, they also suggested dividing up the different tiers of your application infrastructure into their own stacks so that when you need to make changes to something or scale, you don't end up changing the whole system. Instead, you only modify or scale the relevant stack. Overall, a good session. If they have it again next year, I would recommend it. You'll get some laughs, if nothing else. The guys were pretty funny.

Once out, I had a meeting scheduled to talk with a company (presumably about upcoming Pluralsight work) at the Global Partner Summit Welcome Reception. Now, I'll admit I got a little frustrated trying to find where the **** this was taking place! AWS did a great job sending lots of guides with re:Invent flags everywhere to answer questions and direct attendees to their events, and these guys were godsends every time except when it came to finding this event. I think I just got unlucky with a few that were misinformed.

AWS re:Invent 2017

These guys were scattered all over the strip and inside the hotels. Very helpful!

First, I was told to go to one of the ballrooms. Found what appeared to be some kind of Presenter's Registration there. Then, found another guide who said to go to the MGM Grand Garden Arena. Walked over there; total graveyard, and ironically, a random dude there who wasn't even one of the re:Invent guides told me where it actually was. He also said, "Oh yeah, and unless you want to be standing in line all night, you might want to reconsider." It was late enough at this point that I figured I'd just head back to the Bellagio for a much needed poker session, so that's what I did. However, on the way back, holy ****, he was right. I've never seen a line as long as the one to get into the GPS Welcome Reception in my life. It went from the food court, through the entire casino, out of the casino, and further back than I could see. Apparently, I was the only one who missed the memo, since everyone else knew where to go, but still, that line. 

Long hike back to the Bellagio, played poker for about 3 hours, lost $200 (man, I suck), and on my way back to my room discovered I didn't eat anything all day. LOL! Picked up a couple pizza slices and crashed for the night. A good night's sleep? Yes, please. Tomorrow would be better.

Cheers!

13. January 2017 23:37
by Aaron Medacco
0 Comments

Creating Private DNS Zones w/ AWS Route 53


It's common to have private DNS in place so you don't have to memorize the IP addresses for your internal resources. Amazon makes this pretty simple within Route 53.

In this post, I'll be creating a domain name of "zeal.", and adding a record set for the "furyand" sub-domain. Once done, I should be able to receive a response using ping between two instances in the same VPC for the "furyand.zeal" DNS name. At the moment, if I try to send packets to "furyand.zeal" using ping, it has no idea what I'm talking about.

Command Prompt 1

Before we get started, you'll need to enable DNS support for your VPC by enabling the "DNS Resolution" and "DNS Hostnames" settings.

Creating the private hosted zone:

  1. Navigate to Route 53 in your management console.
  2. Select "Hosted Zones" in the sidebar.
  3. Click "Create Hosted Zone".
  4. On the right hand side, enter the domain name for your private zone and a comment if necessary.
  5. Select "Private Hosted Zone for Amazon VPC" as the Type.
  6. Select the VPC identifier of the VPC you'd like the hosted zone to apply to.

At this point, you should see two record types, NS and SOA, already created for you.
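The same private zone can also be created through the API. Here is a sketch of the boto3 `create_hosted_zone` parameters; the region and VPC ID are hypothetical placeholders, and the actual call is left commented out:

```python
# Parameter shape for creating a private hosted zone with boto3's
# route53.create_hosted_zone (sketch only; region/VPC ID are hypothetical).
import uuid

params = {
    "Name": "zeal.",                       # the private domain from this post
    "CallerReference": str(uuid.uuid4()),  # must be unique per request
    "VPC": {
        "VPCRegion": "us-east-1",          # hypothetical region
        "VPCId": "vpc-22222222",           # hypothetical VPC ID
    },
    "HostedZoneConfig": {"PrivateZone": True},
}
# boto3.client("route53").create_hosted_zone(**params)
print(params["HostedZoneConfig"]["PrivateZone"])  # True
```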

Creating a record set:

  1. In the dashboard of the record set you just created, click "Create Record Set".
  2. On the right hand side, enter the sub-domain. (In my case "furyand")
  3. Keep "A - IPv4 address" as the type for this example.
  4. Select "No" for alias.
  5. Set the TTL (Seconds) to whatever value suits your use case.
  6. Enter the Private IP address of the instance you want to have a DNS name in the Value field.
  7. Leave the Routing Policy as "Simple" and click "Create".
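If you script this instead, the console steps above map to a `ChangeBatch` for boto3's `change_resource_record_sets`. A sketch, where the private IP and hosted zone ID are hypothetical:

```python
# ChangeBatch mirroring the console steps above (sketch only; the
# private IP and hosted zone ID are hypothetical placeholders).
change_batch = {
    "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "furyand.zeal.",      # the sub-domain from this post
            "Type": "A",                  # A - IPv4 address
            "TTL": 300,                   # pick a TTL for your use case
            "ResourceRecords": [{"Value": "192.168.0.10"}],  # hypothetical private IP
        },
    }]
}
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z111111", ChangeBatch=change_batch)
```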

You might have to wait a little bit for the record sets to take effect, but once they do...

Command Prompt 2

If you are receiving request time out responses after taking these steps, make sure the security groups for your instances are allowing ICMP traffic appropriately. If that appears correct, you might also check if Windows Firewall is disabling "File and Printer Sharing (Echo Request - ICMPv4-In)". You'll need to allow that for ping to work.

Cheers!

9. January 2017 07:56
by Aaron Medacco
0 Comments

Creating a VPC Peering Connection on AWS


If you work with Amazon Web Services long enough, you will run into scenarios where you want to establish connectivity between machines that are living in different VPCs. These VPCs might be in the same account or they might be in different accounts. Amazon makes this possible with VPC Peering Connections. Before diving in, there are a few limitations to be aware of regarding these kinds of connections. 

VPC 1

First, VPC Peering Connections establish 1-to-1 communication between a local and peer VPC. Connectivity is not transitive. That is, if you have three VPCs (A, B, and C) and want them all to connect to one another, you will need to create 3 unique peering connections. (A - B, A - C, B - C).
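Because peering is strictly pairwise, a full mesh between n VPCs needs n*(n-1)/2 connections; a two-liner makes the count concrete:

```python
from itertools import combinations

vpcs = ["A", "B", "C"]
# Peering is not transitive, so full connectivity needs one connection per pair.
needed = list(combinations(vpcs, 2))
print(needed)       # [('A', 'B'), ('A', 'C'), ('B', 'C')]
print(len(needed))  # 3 == n*(n-1)//2
```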

Second, the CIDR blocks defined for the VPCs involved in the peering connection cannot overlap. This makes sense if you think about it. If two machines have the same private IP address and you attempt to connect to that IP address from an EC2 instance, how should your network know which machine you intend to connect to?
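Python's `ipaddress` module makes it easy to check whether two candidate CIDR blocks would collide before you try to peer the VPCs:

```python
import ipaddress

def cidrs_overlap(a, b):
    """Peering requires the two VPCs' CIDR blocks to be non-overlapping."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# 10.0.1.0/24 sits inside 10.0.0.0/16, so these VPCs could not be peered.
print(cidrs_overlap("10.0.0.0/16", "10.0.1.0/24"))  # True
# Disjoint ranges are fine.
print(cidrs_overlap("10.0.0.0/16", "10.1.0.0/16"))  # False
```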

Third, peering connections can only be established between VPCs that are within the same region.

Finally, you cannot create more than one peering connection between the same two VPCs.

Okay, great. For this demo, my goal is to establish RDP access between two Windows Server instances, one living inside my local VPC, and the other in a separate VPC that is also in my account. (Don't worry, connecting between different accounts follows the same process.)

Before setting up my VPC Peering Connection, any attempt I make to RDP to the private IP address of the second instance fails. At the moment, the route table I'm using in my local VPC has no clue the second instance exists.

Remote Desktop 1

Let's change that.

Creating the VPC Peering Connection:

  1. Navigate to VPC in your management console.
  2. Select "Peering Connections" in the sidebar.
  3. Click "Create VPC Peering Connection".
  4. Enter an appropriate name tag and the VPC identifier of the local VPC you want to connect.
  5. Select whether the peer VPC exists in your account or a different account and enter the VPC identifier for it.
  6. Click "Create VPC Peering Connection" and click "OK".

Accepting the VPC Peering Connection:

  1. Navigate to the Peering Connections dashboard of the other account where the peer VPC lives. If both are in the same account, you should already be there.
  2. Select the peering connection you just created, and click "Actions" > "Accept Request".
  3. Click "Yes, Accept" and click "Close".

At this point, you should see an "Active" status on your peering connection. This means that traffic can now flow between your local and peer VPCs.

Establishing connectivity using your VPC Peering Connection:

In order for you to use the peering connection, you have to configure your networking correctly. You will want to create the least permissive rules possible while still allowing traffic to occur between your resources. Therefore, your exact configuration will vary depending on what you are trying to accomplish.

However, you will absolutely need to add a route to the appropriate route tables so traffic is directed accordingly. Here are the steps for creating a route for a route table in the local VPC so it connects to the peer VPC:

  1. Navigate to VPC in your management console.
  2. Select "Route Tables" in the sidebar.
  3. Select a route table you want to add a route to.
  4. Click the "Routes" tab and click "Edit".
  5. Click "Add another route" and enter the CIDR block for the peer VPC as the destination.
  6. Enter the peering connection identifier for the target and click "Save".

Do this on the route table(s) for the peer VPC and your resources will have the proper direction necessary to reach each other across VPCs.
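Scripted, the route from steps 5 and 6 boils down to a single `create_route` call. A sketch of the parameter shape, with hypothetical resource IDs and the call left commented out:

```python
# Route entry sending peer-VPC traffic through the peering connection
# (ec2.create_route parameter shape; all IDs here are hypothetical).
route_params = {
    "RouteTableId": "rtb-11111111",            # a route table in the local VPC
    "DestinationCidrBlock": "10.1.0.0/16",     # the peer VPC's CIDR block
    "VpcPeeringConnectionId": "pcx-22222222",  # the accepted peering connection
}
# boto3.client("ec2").create_route(**route_params)
```

A matching route pointing back at the local VPC's CIDR goes into the peer VPC's route table(s).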

If you are still not able to connect, a little network troubleshooting may be required.

The most common issue for not being able to connect is misconfigured security groups. In my example, since I am establishing RDP access from an instance in my local VPC to an instance in the peer VPC, I need to change my security groups (and possibly my NACLs, too) to allow such traffic. As you can see, I am now able to establish RDP from my local VPC instance to the instance in my peer VPC:

Remote Desktop 2

And now, I generate a password for my second instance using my .pem file. And if everything is configured the correct way...

Remote Desktop 3

For more information regarding VPC peering connections, click here. And if you're interested in learning about VPC in general, I'd recommend Nigel Poulton's course, AWS VPC Operations, on Pluralsight which you can find here.

Hope this helps.

Cheers!

4. December 2016 15:42
by Aaron Medacco
1 Comments

AWS VPC Basics for Dummies


AWS VPC (Virtual Private Cloud), one of the core offerings of Amazon Web Services, is a crucial service that every professional operating on AWS needs to be familiar with. It allows you to gather, connect, and protect the resources you provision on AWS. With VPC, you can configure and secure your own virtual private network(s) within the AWS cloud. As an administrator, security should be at the top of the priority list, especially when others are trusting you with the technology that powers their business. And if you do not know how to secure resources using VPC, you have no business administering cloud infrastructure for anyone using Amazon Web Services. The following is a high level outline (using a simple web application architecture) for those who are new to cloud or unfamiliar with the service and is by no means comprehensive of everything AWS VPC has to offer.

VPC 1

VPCs, which are the virtual networks you create using the service, are region specific and can include resources across all availability zones within the same region. You can think of a VPC as an apartment unit for an apartment building. All of your belongings (your virtual instances, databases, storage) are separated from other tenants (other AWS customers) living in the complex (the AWS cloud). When you first create a VPC, you will be asked to provide a CIDR block to define a collection of available private IP addresses for resources within your virtual network.

In this example, we'll use 192.168.0.0/16.

Later, we'll need to partition this collection of IP addresses into groups for the subnets we'll create.
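The `ipaddress` module can do this partitioning for us. Carving the /16 into /24 subnets yields ranges like the 192.168.0.0/24 and 192.168.1.0/24 used in this example:

```python
import ipaddress

vpc = ipaddress.ip_network("192.168.0.0/16")
# Carve /24 subnets (256 addresses each) out of the VPC's /16 block.
subnets = list(vpc.subnets(new_prefix=24))
print(subnets[0], subnets[1])  # 192.168.0.0/24 192.168.1.0/24
print(len(subnets))            # 256 possible /24s inside a /16
```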

However, since we are hosting a web application that needs to be public, we'll first need a way to expose resources to the internet. AWS VPC allows you to do this via Internet Gateways. This is pretty self-explanatory using the web console. You simply create an internet gateway and attach it to your VPC. You should know that any traffic going to and coming from the internet to your resources will go through the Internet Gateway.

Moving down a layer, the next step is to define our subnets. What is that?

A subnet is just a piece of a network (your VPC). It is a logical grouping of connected resources. In the apartment analogy, it's like a bedroom. AWS VPC allows you to select whether you want your subnets to be private or public. The difference between whether a subnet is public or private really just comes down to whether the subnet has a route to an internet gateway or not. Routes are defined in route tables and each subnet needs a route table.

What is a route table, you ask? Route tables define how to connect to other resources on the network (VPC). They are like maps, giving directions to the resources for how to get somewhere in the network. By default, your subnets will have a route in their route table that allows them to get to all other resources within the same VPC. However, if you do not provide a route to the Internet Gateway associated to the VPC, your subnet is considered private. If you do provide that route, your subnet is considered public. Simple.

Below is a diagram of the VPC explained so far. Notice that the IP ranges of the subnets (192.168.0.0/24 and 192.168.1.0/24) are taken from the pool of IP addresses for the VPC (192.168.0.0/16). They must be different since no two resources can have the same private IP address in the virtual network. The public and private subnets exist in different availability zones to illustrate that your network can and should (especially, as you seek high availability) span multiple zones in whichever region you provision your resources.

 VPC Diagram 1

So how do we secure the network we've established so far?

NACLs (Network Access Control Lists) are one of the tools Amazon Web Services provides for protecting resources from unwanted traffic. They are firewalls that act at the subnet level. NACLs are stateless, which means that return traffic is not implied. If communication is allowed into the subnet, it doesn't necessarily mean that the response communication back to the sender is allowed. Thus, you can define rules to allow or deny traffic that is either inbound or outbound. These rules exist in a list that is ordered by rule number. When traffic attempts to go in or out of the subnet, the NACL evaluates the rules in order and acts on the first match, whether it is allow or deny. Rules further down are not evaluated once a match is found. If no match is found, traffic is denied (by the * fallback rule). The following is what a NACL might look like in the AWS management console.

NACL Example

In this example, inbound traffic is evaluated to see if it is attempting to use SSH on port 22 from anywhere (0.0.0.0/0). If this is true, the traffic is denied. Then, the same traffic is checked to see if RDP on port 3389 is being attempted. If so, it is denied. If neither is true, the traffic is allowed in because of rule 200, which allows all traffic on all ports from anywhere. Notice there are gaps between rule numbers. It's good practice to leave some space between rule numbers (100, 150, 200) so you can come back later and place rules in between those already existing. That is why you don't see rules ordered 1, 2, 3...where if you needed to insert a rule in between two others, you would have to move a lot of rules to achieve the correct configuration.

This is just an example NACL. You'd likely want to be able to SSH or RDP into your EC2 instances from a remote location, like your office network or your home, so you wouldn't use this configuration which would obviously prevent that.
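To make the first-match semantics concrete, here is a toy model of NACL evaluation. It is greatly simplified, matching on destination port only and ignoring CIDR, protocol, and direction:

```python
def evaluate_nacl(rules, traffic_port):
    """Toy NACL: rules checked in rule-number order, first match wins.
    Each rule is (rule_number, port, action); port "*" matches anything."""
    for number, port, action in sorted(rules, key=lambda r: r[0]):
        if port == "*" or port == traffic_port:
            return action
    return "DENY"  # the implicit * fallback rule

nacl = [
    (100, 22,   "DENY"),   # deny SSH from anywhere
    (150, 3389, "DENY"),   # deny RDP from anywhere
    (200, "*",  "ALLOW"),  # allow everything else
]
print(evaluate_nacl(nacl, 22))   # DENY  (rule 100 matches first)
print(evaluate_nacl(nacl, 443))  # ALLOW (falls through to rule 200)
```

Note how rule 200 never gets a chance to allow port 22 traffic; once rule 100 matches, evaluation stops, which is exactly why rule ordering (and leaving numbering gaps) matters.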

VPC Diagram 2

Now we can add resources to our protected subnets. We'll have one web server instance in our public subnet to handle web requests coming from the internet, and one database server our web application depends on in the private subnet. The infrastructure for a real world web application would likely be more sophisticated than this, accounting for things such as high availability at both the database and web tiers. You'd also see an elastic load balancer combined with an auto scaling group to distribute traffic to multiple web servers so no particular resource handling requests gets overwhelmed.

Great, so how do we secure our resources once we are within the subnet? That is where security groups come in.

Security groups are the resource level firewalls guarding against unwanted traffic. Similar to NACLs, you define rules for the traffic you want to be allowed in. By default, when a security group is created, it has no rules so all traffic is denied. This is to help you implement best practice which is to configure the least amount of access necessary. Unlike NACLs, security groups are stateful, so when communication is allowed in, the response communication is allowed out. When you create an inbound rule, you provide the type, protocol, port range, and source of the traffic that should be allowed in. For example, if our web server instance was running Windows Server 2016, I'd create a rule for RDP, protocol TCP (6), using port 3389, where the source is my office IP address. When the security group is updated, I'll be able to administer my web server remotely using RDP. You can also attach more than one security group to a resource. This is useful when you want to combine multiple access configurations. For instance, if you provisioned four EC2 instances running in a subnet and wanted to have RDP access to all of them from your office, but you also wanted FTP access from your home to two of the instances, you could configure one security group for the RDP access, and another for the FTP access, and attach both security groups to the instances requiring both your home FTP and office RDP access.
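As a sketch, the office-RDP rule described above has this shape when scripted with boto3's `authorize_security_group_ingress`. The group ID is hypothetical, and 203.0.113.0/24 stands in for the office network (it's a documentation-reserved range):

```python
# Ingress rule shape for ec2.authorize_security_group_ingress, matching
# the RDP-from-the-office example (group ID and office CIDR hypothetical).
ingress_params = {
    "GroupId": "sg-11111111",
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 3389,   # RDP
        "ToPort": 3389,
        "IpRanges": [{
            "CidrIp": "203.0.113.0/24",   # stand-in for the office network
            "Description": "Office RDP",
        }],
    }],
}
# boto3.client("ec2").authorize_security_group_ingress(**ingress_params)
```

Because security groups are stateful, no matching outbound rule is needed for the RDP responses to make it back.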

VPC Diagram 3

Those are really the bare essentials for configuring a VPC in Amazon Web Services. If you want another layer of security, there is nothing stopping you from using a host based firewall like Windows Firewall on your virtual instances. Additionally, you're going to want to create an Elastic IP for the instances you want publicly accessible, which in this case would be our EC2 instance acting as a web server. Otherwise, you'll find that the public IP address of your instance can change, which is definitely not going to be okay for your DNS if you are running a web application.

AWS VPC also allows you to configure dedicated connectivity from your on-premises to the AWS cloud using Direct Connect, setup VPN connections from your network to your cloud network, create NAT Gateways, DHCP Option Sets, and setup VPC Peering to allow connectivity between VPCs. These are great tools and knowing what they are and when you would use them is important, but they aren't necessary for all use cases.

A final recommendation is to tag and name your VPC configured resources. Name your security groups, name your NACLs, name your route tables, name everything. As your infrastructure grows on Amazon Web Services, you will be driven insane if everything has names like "acl-1ad75f7c" that provide no context into its purpose. Therefore, it's a good idea to name everything from the start so things stay organized and other users of the AWS account (particularly those who weren't there when you set things up) will have a clue when they need to make changes.

For even more detail on VPCs, I'd recommend the Pluralsight course, AWS VPC Operations by Nigel Poulton which you can find here.

Cheers!

Copyright © 2016-2017 Aaron Medacco