5. December 2017 21:38
by Aaron Medacco
0 Comments

AWS re:Invent 2017 - Day 3 Experience


The following is my Day 3 re:Invent 2017 experience. Missed Day 2? Check it out here.

Up and out early at 8:30 (holy ****!) and famished. Decided to check out the buffet at the Bellagio. Maybe it was just me, but I was expecting a little bit more from it. Most things in Las Vegas are extravagant and contribute to an overall spectacle, but the Bellagio buffet made me think of Golden Corral a bit. Maybe it gets better after breakfast, I don't know.

[Photo: The plate of an uncultured white bread American.]

Food was pretty good, but I didn't grab anything hard to mess up. Second trip back, grabbed some watermelon that tasted like apple crisp and ice cream. Not sure what that was about. Maybe the staff used the same knife for desserts and fruit slicing. From what I could tell, half the patrons were re:Invent attendees, either wearing their badge or the hoodie.

Walked back to my room to watch the keynote by Andy Jassy, but only caught the last bit of it. After some difficulty getting the live stream to work on my laptop, watched him announce the machine learning and Internet of Things services. Those aren't really in my wheelhouse (yet?), but they seemed interesting nonetheless. Succumbed to a food coma afterwards for a short nap.

Headed over to the Venetian to go back to the Expo for a new hoodie and for my next breakout session. AWS was holding the merchandise hostage if you didn't fill out evaluations for breakout sessions, so I couldn't get the hoodie until after I came back post-session. Good to know for next year. Back where the session halls were, got in the Reserved line for the Optimizing EC2 for Fun and Profit #bigsavings #newfeatures (CMP202) session. Talked with a gentleman while in line about the new announcements, specifically the S3 Select and Glacier Select features. I wasn't clear on the difference between S3 Select and Athena, and neither was he. I'll have to go try it out for myself.
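
Having looked into it since: S3 Select runs a SQL expression against a single object, while Athena runs SQL across many objects using a table definition. A sketch of what an S3 Select call looks like with the AWS SDK for JavaScript (the bucket, key, and query are made up for illustration):

```javascript
// Hypothetical bucket, key, and query. S3 Select pulls a filtered subset
// of one object's contents; Athena queries whole datasets via a table.
var selectParams = {
    Bucket: "my-example-bucket",                  // hypothetical bucket
    Key: "logs/2017-11-29.csv",                   // hypothetical object
    ExpressionType: "SQL",
    Expression: "SELECT s._1, s._3 FROM S3Object s WHERE s._3 = 'ERROR'",
    InputSerialization: { CSV: { FileHeaderInfo: "NONE" } },
    OutputSerialization: { CSV: {} }
};

// With the AWS SDK for JavaScript loaded, the call would look like:
// new AWS.S3().selectObjectContent(selectParams, function(err, data) { ... });
console.log(JSON.stringify(selectParams, null, 2));
```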

[Photo: Awaiting new feature announcements.]

[Photo: Great speaker as always. AWS always has good speakers.]

[Photo: More talk about Reserved and Spot Instances.]

The best thing about this session was the announcement of new features. The first was a really helpful addition to the Cost Explorer within the management console that gives instance recommendations based on your account's historical usage. Having a tool that does the cost analysis and recommendations is great; it means I don't have to. I pulled up the SDK and AWS CLI references while he was demonstrating it, but couldn't find any methods for pulling those recommendations using Lambda or a batch script. I figured it'd be useful to automate a monthly email or something that tells you that month's instance billing recommendations. Ended up talking to the speaker afterwards, who said it's not available yet, but will be in the months to come. Nice!

The second announcement was regarding Spot Instances and being able to hibernate instances when they're going to be terminated. Hibernation was described as acting the same way as when you "open and close your laptop". If you're using a Spot Instance set to hibernate and it gets terminated, whether because another customer bids higher or AWS adds it back to the On-Demand instance pool, it will save state to EBS. When you receive it back, it picks up where it left off instead of needing to completely re-initialize before doing whatever work you wanted.
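
A sketch of what requesting a hibernating Spot Instance might look like. Parameter names follow the EC2 RequestSpotInstances API; the AMI, instance type, and bid are placeholders:

```javascript
// Hypothetical Spot request that hibernates instead of terminating on
// interruption.
var spotParams = {
    InstanceCount: 1,
    SpotPrice: "0.05",                            // hypothetical bid
    InstanceInterruptionBehavior: "hibernate",    // default is "terminate"
    LaunchSpecification: {
        ImageId: "ami-12345678",                  // hypothetical AMI
        InstanceType: "m4.large"
        // RAM is saved to the EBS root volume, so the volume must have
        // room for instance memory.
    }
};

// With the AWS SDK for JavaScript:
// new AWS.EC2().requestSpotInstances(spotParams, function(err, data) { ... });
console.log(JSON.stringify(spotParams, null, 2));
```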

T2 Unlimited was also covered, which essentially lets you stop worrying so much about having the credits required for burst capacity on your T2 series of EC2 instances. The rest of the session covered a lot of cost optimization techniques that have been belabored to death. Use Reserved Instances, use Spot Instances, choose the correct instance type for your workload, periodically check in to make sure you actually need the capacity you've provisioned, take advantage of serverless for cases where an always-on machine isn't necessary, and other tips of the "don't be an idiot" variety. Again, I must be biased, since most of this information seems elementary. I think next year I need to stick to the 400-level courses to get the most value. That said, the presentation was excellent like always. I won't knock it just because I knew the information coming in.
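
For reference, T2 Unlimited is just a credit specification at launch time. A minimal sketch per the EC2 RunInstances API (the AMI is a placeholder):

```javascript
// "unlimited" lets the instance burst past its credit balance for an
// extra charge; "standard" is the default T2 behavior.
var runParams = {
    ImageId: "ami-12345678",                      // hypothetical AMI
    InstanceType: "t2.micro",
    MinCount: 1,
    MaxCount: 1,
    CreditSpecification: { CpuCredits: "unlimited" }
};

// With the AWS SDK for JavaScript:
// new AWS.EC2().runInstances(runParams, function(err, data) { ... });
console.log(JSON.stringify(runParams, null, 2));
```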

Found the shuttle a short walk from the hall, and decided to be lazy (smart) for once. Got back to the Bellagio for some poker before dinner, and came out plus $105. During all the walks back from the Aria to the Bellagio, I kept eyeballing the Gordon Ramsay Burger across the street at the Planet Hollywood, so I stopped in for dinner. 

[Photo: Pretty flashy for a burger place.]

[Photo: I ate it all... No, I didn't. But I wanted to try out the dogs and the burgers.]

For a burger and hot dog place, I'd give it a 7/10. It probably would have been a bit higher if they had dill pickles / relish and, honestly, better service. You can imagine this was pretty messy to eat, especially the hot dog, so I asked one of the girls up front where the bathroom was so I could wash my hands. The one across the hall was out of order (go figure), so I had to go to the one out through part of the casino, next to P.F. Chang's. I think the tables next to me thought I just walked out without paying. Heard them say "There he is." when I returned. Really? Do I look like a criminal? Yeah, I came to Vegas for a full week to rip off a burger joint.

Cheers!

3. December 2017 18:42
by Aaron Medacco
0 Comments

AWS re:Invent 2017 - Day 2 Experience


The following is my Day 2 re:Invent 2017 experience. Missed Day 1? Check it out here.

Day 2, I started off the day waking up around 11:30am. Needed the rest from the all-nighter drive Monday morning. Cleaning lady actually woke me up when she was making her rounds on the 4th floor. This meant that I missed the breakout session I reserved, Deploying Business Analytics at Enterprise Scale with Amazon QuickSight (ABD311). I'm currently in the process of recording a Pluralsight course on Amazon QuickSight, so I felt this information could be helpful as I wrap up that course. Guess I'll have to check it out later once the sessions are uploaded to YouTube. Just another reason to never drive through the night before re:Invent again.

After getting ready, I exposed my nerd skin to sunlight and walked over to the Venetian. This is where I'd be for the majority of the day. I kind of lucked out because all of my sessions for the day, besides the one I slept through, were in the same hotel, and pretty back-to-back, so I didn't have to get creative with my downtime.

First session of the day was Deep Dive into the New Network Load Balancer (NET304). I'd been curious about this since the Network Load Balancer was announced recently, but never had a use case or a reason to go and implement one myself.

[Photo: I have to admit, I didn't know it could route to IP addresses.]

[Photo: Should have picked a better seat.]

[Photo: Putting it all together.]

The takeaways I got were that the NLB is essentially your go-to option for TCP traffic at scale, but for web applications you'd still mostly use the Application Load Balancer or the Classic Load Balancer. The fact that it's 25% cheaper than the ALB seems significant, and it uses the same kinds of components as the ALB, like targets, target groups, and listeners. Additionally, it supports routing to ECS, EC2, and external IPs, as well as allowing for static IPs per availability zone.
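
A sketch of creating an NLB via the ELBv2 API; setting Type to "network" instead of "application" is what makes it an NLB rather than an ALB. The name and IDs here are made up:

```javascript
// SubnetMappings with an Elastic IP allocation is how you get a static
// IP per availability zone.
var nlbParams = {
    Name: "example-nlb",                          // hypothetical name
    Type: "network",
    Scheme: "internet-facing",
    SubnetMappings: [
        { SubnetId: "subnet-12345678", AllocationId: "eipalloc-12345678" } // hypothetical IDs
    ]
};

// Routing to instances vs. IP addresses is chosen on the target group
// (TargetType "instance" or "ip"), not on the load balancer itself.
// new AWS.ELBv2().createLoadBalancer(nlbParams, function(err, data) { ... });
console.log(JSON.stringify(nlbParams, null, 2));
```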

As I was walking out of the session, there was a massive line hugging the wall of the hall and around the corner for the next session, which I had a reserved seat for (thank god). That session was Running Lean Architectures: How to Optimize for Cost Efficiency (ARC303). Nerds would have to squeeze in and cuddle for this one; the session was full.

[Photo: Before the madness.]

[Photo: Wasn't totally full, but pretty full.]

[Photo: People still filing in.]

[Photo: Obvious, but relevant slide.]

I had some mixed feelings about this session, but thought it was overall solid. On one hand, much of the information was definitely important for AWS users to save money on their monthly bill, but at the same time, I felt a lot of it was fairly obvious to anyone using AWS. For instance, I have to imagine everybody knows they should be using Reserved Instances. I feel like any potential or current user of AWS would have read about pricing thoroughly before even considering moving to Amazon Web Services as a platform, but perhaps I'm biased. There were a fair number of managers in the session and at re:Invent in general, so maybe they're not aware of obvious ways to save money. 

Aside from covering Spot and Reserved Instance use cases, there was some time spent on Convertible Reserved Instances, which are still fairly new. I did enjoy the tips and tricks on reducing Lambda costs by looking for ways to cut down on function idle time, using Step Functions instead of manual sleep calls, and migrating smaller applications into ECS instead of running each application on its own instance. The Lambda example highlighted that many customer functions involve several calls to APIs which occur sequentially, where each call waits on the prior one to finish. This can rack up billed run time even though Lambda isn't actually performing work during those waiting periods. The trick they suggested was essentially to shotgun the requests all at once instead of one-by-one, but as I thought about it, that would only work if those API calls don't depend on each other's results. When they brought up Step Functions, though, it was kind of obvious you could just use that service if they did.
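
In Node.js, the shotgun trick boils down to Promise.all. A toy sketch with fake 50 ms calls standing in for the downstream APIs:

```javascript
// Toy stand-ins for two independent downstream API calls, each ~50 ms.
function fakeApiCall(name) {
    return new Promise(function(resolve) {
        setTimeout(function() { resolve(name + ":done"); }, 50);
    });
}

// One-by-one: billed time is roughly the sum of the latencies (~100 ms).
function sequential() {
    return fakeApiCall("a").then(function(a) {
        return fakeApiCall("b").then(function(b) {
            return [a, b];
        });
    });
}

// Shotgunned: billed time is roughly the slowest single call (~50 ms).
// Only valid because neither call needs the other's result.
function parallel() {
    return Promise.all([fakeApiCall("a"), fakeApiCall("b")]);
}

parallel().then(function(results) {
    console.log(results.join(","));   // prints "a:done,b:done"
});
```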

The presenters did a great job and kept highlighting the need to adopt a "cattle" mentality instead of a "pet" mentality when thinking about your cloud infrastructure. Essentially, they encouraged moving away from manual pushes, RDPing in, naming instances things like "Smeagle", and the like. Honestly, a lot of no-brainers and elementary information, but still a good session.

Had some downtime after to go get something to eat. Grabbed the Baked Rigatoni from the Grand Lux Cafe in the Venetian. The woman who directed me to it probably thought I was crazy. I was in a bit of a rush, and basically attacked her with, "OMG, where is food!? Help me!"

Overall, 7/10. It wasn't very expensive and, now that I think about it, was my first actual meal (some cold pizza slices don't count) since getting to Vegas.

Next up was ElastiCache Deep Dive: Best Practices and Usage Patterns (DAT305) inside the Venetian Theatre. I was excited about this session since I haven't done much of anything with ElastiCache in practice, but I know some projects running on AWS that leverage it heavily. 

[Photo: About 10 minutes before getting mind-pwned.]

[Photo: Random guy playing some Hearthstone while waiting for the session to begin.]

[Photo: Sick seat.]

Definitely felt a little bit out of my depth on this one. I'm not someone who is familiar with Redis outside of just knowing what it does, so I was clueless during some of the session. I didn't take notes, but I recall a lot of talk about re-sharding clusters, the old backup and recovery method vs. the new online managed re-sharding, the pros of enabling cluster mode (was clueless; it made sense at the time, but I couldn't explain it to someone else), security, and best practices. My favorite part of the session covered use cases and how ElastiCache can benefit them: IoT, gaming apps, chat apps, rate limiting services, big data, geolocation or recommendation apps, and ad tech. Again, I was out of my depth, but I'll be taking a closer look at this service during 2018 to fix that.
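
Of those use cases, rate limiting was the one that clicked for me: the pattern is just counting requests per client per time window. With ElastiCache you'd keep the counter in Redis (INCR on a per-client, per-window key plus EXPIRE) so every server shares the same counts; here's an in-memory toy version of the same idea:

```javascript
// Fixed-window rate limiter, in-memory toy version. The real thing would
// swap this Map for Redis so counts are shared across servers.
function RateLimiter(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.counts = new Map();   // clientId -> { windowStart, count }
}

RateLimiter.prototype.allow = function(clientId, now) {
    var entry = this.counts.get(clientId);
    if (!entry || now - entry.windowStart >= this.windowMs) {
        entry = { windowStart: now, count: 0 };   // start a fresh window
        this.counts.set(clientId, entry);
    }
    entry.count++;
    return entry.count <= this.limit;
};

var limiter = new RateLimiter(2, 1000);           // 2 requests/second/client
console.log(limiter.allow("client-1", 0));        // true
console.log(limiter.allow("client-1", 10));       // true
console.log(limiter.allow("client-1", 20));       // false (over the limit)
console.log(limiter.allow("client-1", 1500));     // true (new window)
```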

After some more Frogger in the hallways, headed over to the Expo, grabbed some swag and walked around all the vendor booths. Food and drinks were provided by AWS and the place was a lot bigger than I expected. There's a similar event occurring at the Aria (the Quad), which I'll check out later in the week. 

[Photo: Wall in front of the Expo by Registration.]

There were AWS experts and team members involved with just about everything AWS scattered around to answer attendee questions, which I thought was freaking awesome. Valuable face time with the actual people who work on and know about the stuff being developed.

[Photo: Favorite section of the Venetian Expo.]

[Photo: More madness.]

Talked to the guys at the new PrivateLink booth to ask if QuickSight was getting an "endpoint" or a method for connecting to private AWS databases soon. Ended up getting the de facto answer from the Analytics booth, which had Quinton Alsbury at it. Once I saw him there, I'm like, "Oh here we go, this guy is the guy!" Apparently, the feature's been recently put into public preview, which I somehow missed. Visited a few other AWS booths, like Trusted Advisor and the Partner Network one, and then walked around all the vendor booths for a bit.

Unfortunately, I didn't have much time to chit chat with a lot of them since the Expo was closing soon. I'll have to do so more at the Quad. Walked over to the interactive search map they had towards the sides of the room to look for a certain company I thought might be there. Sure enough, found something familiar:

[Photo: A wild technology learning company appears.]

Spoke with Dan Anderegg, who's the Curriculum Manager for AWS within Pluralsight. After some talk about Pluralsight path development, I finished my beer and got out, only to find I'd actually stayed too long and was already super late to my final session of the day, Deep Dive on Amazon Elastic Block Store (Amazon EBS) (STG306). Did I mention how hard it is to try to do everything you want at re:Invent?

Ended up walking home and wanting to just chill out, which is how this post is getting done. 

Cheers!

5. March 2017 16:01
by Aaron Medacco
0 Comments

Configuring AWS to Send Monthly Invoices to Another Email


Sometimes you want to be able to have your Amazon Web Services invoices sent to somebody else to manage. This might be a personal assistant, your accounting department, etc. After all, nobody wants to keep forwarding emails all the time to the appropriate people.

You could create a user for your accountant and grant them access to the billing side of AWS. However, if all they need to do is see the bills, it's more secure to just have invoices sent to their email. If their email gets compromised, the damage that can be done is less than that of a compromised AWS user. Plus, you can always re-configure your account to stop the invoice emails or modify the email they're sent to.

Setting this up on AWS isn't as obvious as it could be. For something relatively simple, it's actually split into two settings within account management in the web console.

Assuming you are already logged in as the account owner, here are the steps:

Enabling invoice by email:

  1. Navigate to "My Account" via the top toolbar.
  2. Select "Preferences" in the sidebar.
  3. Check the box to "Receive PDF Invoice By Email".
  4. Click "Save preferences".

Selecting another email to receive invoices:

  1. Navigate back to the index page of My Account via the top toolbar.
  2. Scroll down to "Alternate Contacts".
  3. Click "Edit" and fill in the information for who will receive the invoice under the Billing section.
  4. Click "Update".

At the time of this writing, I'm unaware of a method for sending invoices to multiple separate emails. However, you could set up an email group and use it as the billing contact to achieve that.

Cheers!

18. February 2017 21:21
by Aaron Medacco
0 Comments

Saving Money by Automatically Stopping Non-Production Instances During Off Hours w/ AWS Lambda


For those with staging, development, or QA environments provisioned in EC2 using the on-demand billing model, you may be paying more than you need. If you have employees working in these environments during business hours, it doesn't make sense to have them running 24/7, if it's not necessary. Now, there may be cases where you would like a staging environment to run at all times, but for those where you don't, you might as well stop the instances to save money.

For example, suppose your QA team does testing on an application running on several on-demand EC2 instances from 8:00AM to 5:00PM, Monday thru Friday. If no one stops those instances, you're paying for those resources while your team isn't even at work! By stopping the instances during off-hours, you can slash your bill for those instances by more than half, especially if you include the weekend.
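
The arithmetic behind "more than half", assuming the 12-hour weekday schedule (7:00PM stop, 7:00AM start, Monday thru Friday) that the solution below uses:

```javascript
// 24/7 vs. a 12-hour weekday schedule (7:00AM-7:00PM, Mon-Fri):
var hoursAlwaysOn = 24 * 7;                 // 168 hours/week
var hoursBusiness = 12 * 5;                 // 60 hours/week
var savings = 1 - hoursBusiness / hoursAlwaysOn;
console.log((savings * 100).toFixed(1) + "%");    // prints "64.3%"
```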

The following is an automated solution for stopping EC2 instances at a particular hour (7:00PM, Monday thru Friday) and starting them back up at a particular hour (7:00AM, Monday thru Friday). It operates using 2 separate Lambda functions (one for stopping, one for starting) triggered by scheduled CloudWatch events. Within each function, a search is done for EC2 instances that have a specific tag so you can designate exactly which resources you want this process to affect. Instances without this tag are ignored. In this example, I'm choosing to include any EC2 instance that has a tag of "Environment" where the value is set to "Development", but the code and values shown can easily be modified to suit your own requirements.

Let's dig in.

Creating an IAM policy for the Lambda function stopping instances at off-hours:

  1. Navigate to IAM in your management console.
  2. Select "Policies" in the sidebar.
  3. Click "Create Policy".
  4. Select "Create Your Own Policy".
  5. Enter an appropriate policy name and description.
  6. Paste the following JSON into the policy document:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeInstances",
                    "ec2:StopInstances"
                ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }
  7. Click "Create Policy".

Creating the IAM role for the Lambda function stopping instances at off-hours:

  1. Select "Roles" in the sidebar.
  2. Click "Create New Role".
  3. Enter an appropriate role name and click "Next Step".
  4. Select "AWS Lambda" within the AWS Service Roles.
  5. Change the filter to "Customer Managed", check the box of the policy you just created, and click "Next Step".
  6. Click "Create Role".

Creating the Lambda function stopping instances at off-hours:

  1. Navigate to Lambda in your management console.
  2. Click "Create a Lambda function".
  3. Select the "Blank Function" blueprint.
  4. Under "Configure triggers", click the grey box and select "CloudWatch Events - Schedule".
  5. Enter an appropriate rule name and description.
  6. I want the instances to be stopped at 7:00PM after the workday. Since I live in Arizona (UTC-7:00) and CloudWatch schedules run in UTC, the correct cron expression for this is 0 2 ? * TUE-SAT * (7:00PM Monday thru Friday in Arizona is 2:00AM Tuesday thru Saturday in UTC). You'll need to change this based on when you want the instances to stop and what timezone you're in.
  7. Check the box to "Enable trigger" and click "Next".
  8. Enter an appropriate function name and description. Select Node.js for the runtime.
  9. Under "Lambda function code", select "Edit code inline" for the Code entry type and paste the following code in the box:
    var AWS = require("aws-sdk");
    
    exports.handler = (event, context, callback) => {
        var ec2 = new AWS.EC2();
        var describeParams = { Filters: [
            {
                Name:"tag:Environment",
                Values: [
                    "Development"
                ]
            },
            {
                Name:"instance-state-name",
                Values: [
                    "running"
                ]
            }
        ]};
        var instances = [];
        ec2.describeInstances(describeParams, function(err, data) {
            if (err) {
                console.log(err, err.stack);
            } else {
                console.log(data);
                for (var i = 0; i < data.Reservations.length; i++) {
                    for (var j = 0; j < data.Reservations[i].Instances.length; j++) {
                        var instanceId = data.Reservations[i].Instances[j].InstanceId;
                        if (instanceId != undefined && instanceId != null && instanceId != "") {
                            instances.push(instanceId);   
                        }
                    }
                }
                if (instances.length > 0){
                    var stopParams = { InstanceIds: instances };
                    ec2.stopInstances(stopParams, function(err,data) {
                        if (err) {
                           console.log(err, err.stack);
                        } else {
                           console.log(data);
                        }
                    });   
                }
           }
        });
    };
  10. Leave Handler as "index.handler".
  11. Choose to use an existing role and select the IAM role you created earlier for stopping instances.
  12. You may want to select a higher value for the Timeout depending on how many instances will be involved in this process.
  13. Leave the other default values and click "Next".
  14. Click "Create function".

Creating the IAM policy for the Lambda function starting instances at on-hours:

  1. Navigate to IAM in your management console.
  2. Select "Policies" in the sidebar.
  3. Click "Create Policy".
  4. Select "Create Your Own Policy".
  5. Enter an appropriate policy name and description.
  6. Paste the following JSON into the policy document:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeInstances",
                    "ec2:StartInstances"
                ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }
  7. Click "Create Policy".

Creating the IAM role for the Lambda function starting instances at on-hours:

  1. Select "Roles" in the sidebar.
  2. Click "Create New Role".
  3. Enter an appropriate role name and click "Next Step".
  4. Select "AWS Lambda" within the AWS Service Roles.
  5. Change the filter to "Customer Managed", check the box of the policy you just created, and click "Next Step".
  6. Click "Create Role".

Creating the Lambda function starting instances at on-hours:

  1. Navigate to Lambda in your management console.
  2. Click "Create a Lambda function".
  3. Select the "Blank Function" blueprint.
  4. Under "Configure triggers", click the grey box and select "CloudWatch Events - Schedule".
  5. Enter an appropriate rule name and description.
  6. I want the instances to be started at 7:00AM before the workday. Since I live in Arizona (UTC-7:00) and CloudWatch schedules run in UTC, the correct cron expression for this is 0 14 ? * MON-FRI * (7:00AM in Arizona is 2:00PM UTC the same day). You'll need to change this based on when you want the instances to start and what timezone you're in.
  7. Check the box to "Enable trigger" and click "Next".
  8. Enter an appropriate function name and description. Select Node.js for the runtime.
  9. Under "Lambda function code", select "Edit code inline" for the Code entry type and paste the following code in the box:
    var AWS = require("aws-sdk");
    
    exports.handler = (event, context, callback) => {
        var ec2 = new AWS.EC2();
        var describeParams = { Filters: [
            {
                Name:"tag:Environment",
                Values: [
                    "Development"
                ]
            },
            {
                Name:"instance-state-name",
                Values: [
                    "stopped"
                ]
            }
        ]};
        var instances = [];
        ec2.describeInstances(describeParams, function(err, data) {
            if (err) {
                console.log(err, err.stack);
            } else {
                console.log(data);
                for (var i = 0; i < data.Reservations.length; i++) {
                    for (var j = 0; j < data.Reservations[i].Instances.length; j++) {
                        var instanceId = data.Reservations[i].Instances[j].InstanceId;
                        if (instanceId != undefined && instanceId != null && instanceId != "") {
                            instances.push(instanceId);   
                        }
                    }
                }
                if (instances.length > 0){
                    var startParams = { InstanceIds: instances };
                    ec2.startInstances(startParams, function(err,data) {
                        if (err) {
                           console.log(err, err.stack);
                        } else {
                           console.log(data);
                        }
                    });   
                }
           }
        });
    };
  10. Leave Handler as "index.handler".
  11. Choose to use an existing role and select the IAM role you created earlier for starting instances.
  12. You may want to select a higher value for the Timeout depending on how many instances will be involved in this process.
  13. Leave the other default values and click "Next".
  14. Click "Create function".

Feel free to modify this solution to achieve your specific needs. Hopefully, this will allow you to save some money on your AWS bill every month.

Cheers!

5. January 2017 19:20
by Aaron Medacco
0 Comments

Setting up Consolidated Billing for Accounts on AWS


Whether you are a consultant managing cloud resources for customers or part of a large company where each department has its own AWS account, setting up consolidated billing on Amazon Web Services will join all charges for multiple accounts onto one bill.

This makes tracking charges per account less of a headache, and it can even save money overall by allowing the paying account to benefit from volume pricing discounts gained through aggregate account usage.

Signing up for consolidated billing on the paying account:

  1. Click on your account name on in the top right of your management console.
  2. Click "My Account".
  3. Select "Consolidated Billing" in the sidebar.
  4. Click "Sign up for Consolidated Billing".

You may need to wait before proceeding from here. Amazon will validate your payment information before allowing you to continue.

Linking another account under the paying account's bill:

  1. If you left and came back and/or Amazon validated your payment information (you'll receive an email), navigate back to the "Consolidated Billing" section of your account settings.
  2. Click "Send a Request".
  3. Enter the email address for the root user of the AWS account you want to pay for.
  4. Include notes if necessary, and click "Send".
  5. The account owner of the AWS account will receive an email asking them to verify the request. They must click the request acceptance link.
  6. They must then click "Accept Request".

At this point, you will see the linked account in the "Consolidated Billing" section of the payer account's Account Settings. For additional info on consolidated billing, click here.

Cheers!

Copyright © 2016-2017 Aaron Medacco