
Migrating SQL Server Databases to Amazon RDS

14. March 2018 22:20 by Aaron Medacco | 0 Comments

If you're interested in moving an on-premises SQL Server database, or a customer-managed SQL Server database running on EC2, to RDS and need a simple method for doing so, then this post is for you. Not everyone is familiar with working with the AWS "lego-kit", and sometimes you just need to get things done. The following is meant to get your SQL Server data migrated without requiring a large time investment reading the AWS documentation.


For this, I'm going to use the sample database backup found here. This is a small .bak file which, if you use SQL Server a lot, is something you're accustomed to working with.

As usual, please make sure you have installed and configured the AWS CLI on your workstation before following these steps. I won't be specifying the AWS region where these resources get provisioned, so you'll need to configure the CLI to place everything where you'd like or provide the appropriate options throughout this process. Additionally, the IAM user I'll be using to invoke these commands has unrestricted administrator access to the account.

Creating an RDS instance:

  1. If you've already provisioned an RDS instance and just need to move the database to it, you can skip this. In my case, I ran the following to provision a new database instance:
    aws rds create-db-instance --db-instance-identifier sample-instance --allocated-storage 20 --db-instance-class db.t2.medium --engine sqlserver-web --master-username aaron --master-user-password password --db-subnet-group-name default-vpc-07e6y461 --no-multi-az --publicly-accessible --engine-version 14.00.1000.169.v1

    Note: You'll need to provide your own subnet group and your own username and password that's more secure than my demo values.

  2. This command creates a small RDS instance running SQL Server 2017 Web Edition in a non-Multi-AZ configuration using minimal resources and standard storage.
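
    Note: Provisioning can take several minutes. If you'd rather not keep refreshing the console, the CLI's built-in waiter (a quick sketch, using my demo identifier) will block until the instance is available:
    aws rds wait db-instance-available --db-instance-identifier sample-instance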

Creating and uploading your .bak file to S3:

  1. If you already have an S3 bucket and your .bak file stored within it, you can skip this. Otherwise we need to create an S3 bucket:
    aws s3 mb s3://aarons-sqlserver-rds-demo

    Note: Of course, you'll need to choose your own bucket name that is unique and available. Remember to substitute your bucket name for mine on subsequent commands where appropriate.

  2. And to upload the object, I can navigate to the directory where my .bak file lives and invoke the following:
    aws s3 mv AdventureWorks2017.bak s3://aarons-sqlserver-rds-demo/AdventureWorks2017.bak

    Note: Swap the name of your .bak file and bucket name with my example if different.

  3. Understand that this might not work for you depending on the size of the .bak file you are attempting to upload. For very large files, you may need to use S3 multipart upload or another method to move the backup into S3.
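
    Note: To sanity-check that the upload landed, here's a quick sketch using my demo bucket and key names; the ContentLength returned should match the size of your local .bak file:
    aws s3api head-object --bucket aarons-sqlserver-rds-demo --key AdventureWorks2017.bak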

Creating an IAM role granting access to our .bak file for restore:

  1. Create a file named iam-trust-policy.json with the following contents and save it in your working directory:
    {
        "Version": "2012-10-17",
        "Statement":
        [{
            "Effect": "Allow",
            "Principal": {"Service":  "rds.amazonaws.com"},
            "Action": "sts:AssumeRole"
        }]
    }
  2. Create another file named iam-permission-policy.json with the following contents and save it in your working directory:
    {
        "Version": "2012-10-17",
        "Statement":
        [
            {
            "Effect": "Allow",
            "Action":
                [
                    "s3:ListBucket",
                    "s3:GetBucketLocation"
                ],
            "Resource": "arn:aws:s3:::aarons-sqlserver-rds-demo"
            },
            {
            "Effect": "Allow",
            "Action":
                [
                    "s3:GetObjectMetaData",
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:ListMultipartUploadParts",
                    "s3:AbortMultipartUpload"
                ],
            "Resource": "arn:aws:s3:::aarons-sqlserver-rds-demo/*"
            }
        ]
    }

    Note: This policy assumes you do not require encryption support.
    Note: Remember to swap your bucket name in for mine.

  3. Create the IAM role for RDS by running the following:
    aws iam create-role --role-name rds-backup-and-restore-role --assume-role-policy-document file://iam-trust-policy.json
  4. Write down the ARN value for the role you just created. You'll need it when we add an option to our option group.
  5. Create an IAM policy which defines the necessary permissions to grant RDS by running the following:
    aws iam create-policy --policy-name rds-backup-and-restore-policy --policy-document file://iam-permission-policy.json
  6. Write down the ARN value for the policy you just created. It should be returned to you from the command output.
  7. Now we just need to attach the IAM policy to our IAM role:
    aws iam attach-role-policy --policy-arn <policy-arn> --role-name rds-backup-and-restore-role
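
    Note: If you want to confirm the policy is attached before moving on, a quick check (a sketch, using the role name from above):
    aws iam list-attached-role-policies --role-name rds-backup-and-restore-role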

Adding the appropriate option group to enable native backup and restore:

In order to restore our .bak file into RDS we need to add an option group to the instance that enables the native backup and restore functionality. 

  1. To create an option group that enables this functionality, invoke the following command. You will need to modify it for your specific version and edition of SQL Server:
    aws rds create-option-group --option-group-name sqlserver-web-backupandrestore --engine-name sqlserver-web --major-engine-version 14.00 --option-group-description "Allow SQL Server backup and restore functionality."
  2. Now we need to add an option for the native backup and restore. This is where you need to enter the role ARN you saved from earlier:
    aws rds add-option-to-option-group --option-group-name sqlserver-web-backupandrestore --apply-immediately --options OptionName=SQLSERVER_BACKUP_RESTORE,OptionSettings=[Name="IAM_ROLE_ARN",Value="<role-arn>"]
  3. Finally, we need to modify our RDS instance to use the new option group we've setup:
    aws rds modify-db-instance --db-instance-identifier sample-instance --apply-immediately --option-group-name sqlserver-web-backupandrestore
  4. You may need to walk away for a few minutes while RDS applies the new option group to the instance. To check whether the new option group is active, run this command:
    aws rds describe-db-instances --db-instance-identifier sample-instance
    and make sure the option group you added shows a status of "in-sync".
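
    Note: If you'd rather not scan the full output, a sketch using the --query option narrows it down to just the option group membership:
    aws rds describe-db-instances --db-instance-identifier sample-instance --query "DBInstances[0].OptionGroupMemberships"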

Restoring the .bak file to your RDS instance:

At this point, we can actually begin restoring our backup. You'll need to connect to your database instance, presumably with SQL Server Management Studio or another tool. Assuming you've defined the appropriate networking settings (for example, enabling public accessibility on the database instance) and security group rules (SQL Server requires inbound traffic to be allowed on port 1433), you can connect using the endpoint for your database instance. If you don't know your database endpoint, you can find it by running the describe-db-instances command again:

aws rds describe-db-instances --db-instance-identifier sample-instance

and checking the address value for "Endpoint".
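
To grab just the address, a sketch using the --query option:

aws rds describe-db-instances --db-instance-identifier sample-instance --query "DBInstances[0].Endpoint.Address" --output text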

  1. Once you have connected to the database instance, run the following stored procedure (targeting the "rdsadmin" database is fine). Remember to swap your own values where appropriate:
    exec msdb.dbo.rds_restore_database @restore_db_name='AdventureWorks2017', @s3_arn_to_restore_from='arn:aws:s3:::aarons-sqlserver-rds-demo/AdventureWorks2017.bak';
  2. After running the stored procedure, your restore request will be queued. To see the progress of your request while it executes, you can provide your database name and run this:
    exec msdb.dbo.rds_task_status @db_name='AdventureWorks2017';
  3. Once the request has a status of "SUCCESS", your database should now be available for use.
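
    Note: If you prefer the command line to Management Studio, the same stored procedures can be run with sqlcmd. A rough sketch, assuming sqlcmd is installed locally and substituting your own endpoint and credentials:
    sqlcmd -S <your-db-endpoint>,1433 -U aaron -P <password> -Q "exec msdb.dbo.rds_task_status @db_name='AdventureWorks2017';"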

If you'd like to read the documentation yourself, you can find the relevant material here and here. I thought it would be helpful to condense everything into a step-by-step post for those developers or DBAs still getting up to speed with AWS.

I also encourage readers to review the limitations when using native backup and restore for SQL Server on RDS, which you can find here. For instance, at the time of writing you can only restore databases that are 4 TB or less in size. If this solution doesn't work for you due to size limitations, or if you require the database to remain online during migration, you may want to check out AWS Database Migration Service as recommended by AWS.

Cheers!


AWS re:Invent 2017 - Day 3 Experience

5. December 2017 21:38 by Aaron Medacco | 0 Comments

The following is my Day 3 re:Invent 2017 experience. Missed Day 2? Check it out here.

Up and out early at 8:30 (holy ****!) and famished. Decided to check out the Buffet at the Bellagio. Maybe it was just me, but I was expecting a little bit more from this. Most things in Las Vegas are extravagant and contribute to an overall spectacle, but the Bellagio Buffet made me think of Golden Corral a bit. Maybe it gets better after breakfast, I don't know.

[Photo: The plate of an uncultured white bread American.]

Food was pretty good, but I didn't grab anything hard to mess up. Second trip back, grabbed some watermelon that tasted like apple crisp and ice cream. Not sure what that was about. Maybe the staff used the same knife for desserts and fruit slicing. From what I could tell, half the patrons were re:Invent attendees, either wearing their badge or the hoodie.

Walked back to my room to watch the keynote by Andy Jassy, but only caught the last bit of it. After some difficulty getting the live stream to work on my laptop, watched him announce the machine learning and Internet of Things services. Those aren't really my wheelhouse (yet?), but they seemed interesting nonetheless. Succumbed to a food coma afterwards for a short nap.

Headed over to the Venetian to go back to the Expo for a new hoodie and for my next breakout session. AWS was holding the merchandise hostage if you didn't fill out evaluations for breakout sessions, so I couldn't get the hoodie until after I came back post-session. Good to know for next year. Back where the session halls were, got in the Reserved line for the Optimizing EC2 for Fun and Profit #bigsavings #newfeatures (CMP202) session. Talked with a gentleman while in line about the new announcements, specifically the S3 Select and Glacier Select features. I wasn't clear what the difference was between S3 Select and Athena and neither was he. I'll have to go try it out for myself.

[Photo: Awaiting new feature announcements.]

[Photo: Great speaker as always. AWS always has good speakers.]

[Photo: More talk about Reserved and Spot Instances.]

Best thing about this session was the announcements of new features. The first one was a really helpful feature AWS added to the Cost Explorer within the management console that gives instance recommendations based on your account's historical usage. Having a tool like this that does cost analysis and recommendations is great, means I don't have to. I pulled up the SDK and AWS CLI reference while he was demonstrating it, but couldn't find any methods where I could pull those recommendations using Lambda or a batch script. I figured it'd be useful to automate a monthly email or something that tells you that month's instance billing recommendations. Ended up talking to the speaker afterwards who said it's not available, but will be in the months to come. Nice!

Second announcement was regarding Spot Instances and being able to hibernate instances when they're going to be terminated. The way this was described was that hibernation acts the same way as when you "open and close your laptop". So if you are using a Spot Instance set to hibernate, when that instance gets terminated in the event another customer bids higher or AWS adds it back to the On-Demand instance pool, it will save state to EBS and when you receive it back, it will pick up where it left off instead of needing to completely re-initialize before doing whatever work you wanted. 

T2 Unlimited was also covered, which essentially allows you to not worry so much about requiring the credits for burst capacity of your T2 series of EC2 instances. The rest of the session covered a lot of cost optimization techniques that have been beaten to death. Use Reserved Instances, use Spot Instances, choose the correct instance type for your workload, periodically check in to make sure you actually need the capacity you've provisioned, take advantage of serverless for cases where an always-on machine isn't necessary, and other tips of the "don't be an idiot" variety. Again, I must be biased since most of this information seems elementary. I think next year I need to stick to the 400-level courses to get the most value. That said, the presentation was excellent like always. I won't knock it just because I knew the information coming in.

Found the shuttle a short walk from the hall, and decided to be lazy (smart) for once. Got back to the Bellagio for some poker before dinner, and came out plus $105. During all the walks back from the Aria to the Bellagio, I kept eyeballing the Gordon Ramsay Burger across the street at the Planet Hollywood, so I stopped in for dinner. 

[Photo: Pretty flashy for a burger place.]

[Photo: I ate it all... No, I didn't. But wanted to try out the dogs and the burgers.]

For a burger and hot dog place, I'd give it a 7/10. Probably would be a bit higher if they had dill pickles / relish, and honestly, better service. You can imagine this was pretty messy to eat, especially the hot dog, so I asked one of the girls up front where the bathroom was to go wash my hands. The one across the hall was out of order (go figure), so I had to go to the one out through part of the casino and next to P.F. Changs. I think the tables next to me thought I just walked out without paying. Heard them say "There he is." when I returned. Really? Do I look like a criminal? Yeah, I came to Vegas for a full week to rip off a burger joint.

Cheers!


AWS re:Invent 2017 - Day 1 Experience

1. December 2017 17:26 by Aaron Medacco | 0 Comments

Welcome to the first in a series of blog posts detailing my experience at AWS re:Invent 2017. If you're someone who is considering going to an AWS re:Invent conference, hopefully what follows will give you a flavor for what you can expect should you choose to fork over the cash for a ticket. The following content contains my personal impressions and experience, and may not (probably doesn't?) reflect the typical experience. Also, there will be some non-AWS fluff as well, since I had not been to Las Vegas before.


My adventure starts at about midnight. Yes, midnight. Living in Scottsdale, AZ, I figured, "Why not just drive instead of fly? After all, it's only a 6 hour drive and there won't be any traffic in the middle of the night." While that was true, what a mistake in retrospect. Arriving in Las Vegas with hardly any sleep after the road trip left me in pretty ragged shape for Monday's events. Next year, I'll definitely be flying and will get there on Sunday so I can get settled prior to Monday. I actually arrived so early, I couldn't check into my room and needed to burn some time. What better activity to do when exhausted than sit down at the poker tables? Lost a quick $900 in short order. Hahaha! Truth be told, I got "coolered" back to back, but I probably played bad, too.

Once I got checked into my room at the Bellagio around 9:00am, I headed back to the Aria to get registered and pick up my re:Invent hoodie. Unfortunately, they didn't have my size, only had up to a Small. I couldn't help but smile about that. I ended up going to the Venetian later to exchange my Small for a Medium. Anyways, got my badge, ready to go! Or was I?

By the way, kudos to the Bellagio for putting these in every room. Forgot my phone charger. Well, the correct phone charger at least...

[Photo: the in-room charger.]

...except it didn't have a charger compatible with my Samsung Galaxy S8. Kind of funny, but I wasn't laughing. Alright, maybe a little. Would end up getting one at a phone store in one of the malls on the Strip. Oh yeah, and I also forgot to buy a memory card for my video recorder prior to leaving. Picked up one of those from a Best Buy Express vending machine. Vegas knows.

By this time I was crashing. Came back to my room, fell asleep, and missed 2 breakout sessions I was reserved for. Great job, Aaron! Off to a great start! 

Walked to the Aria to go check out the Certification Lounge. They had tables set up, food and drink, and some goodies available depending on what certifications you'd achieved. The registration badges have indicators on them that tell people if you're AWS certified or not, which they use to allow or deny access. I didn't end up staying too long, but there were a decent number of attendees with laptops open, working and networking. Here are some of the things I collected this year by walking around to the events:

[Photo: The re:Invent hoodie picked up at Registration (left) and the certification t-shirt inside the Certification Lounge (right).]

[Photo: Water bottle and AWS pins given away at the Venetian Expo (top-left), badge and info packet at Registration (right), and the certification stickers at the Certification Lounge depending on which ones you've completed (bottom-left).]

Headed over to the MGM Grand for my first breakout session, GPS: Anti Patterns: Learning From Failure (GPSTEC302). Before I discuss the session, I have to talk about something I severely underestimated about re:Invent. Walking! My body was definitely NOT ready. And I'm not an out-of-shape or big guy, either. The walking is legit! I remember tweeting about what I imagined would be my schedule weeks before re:Invent and Eric Hammond telling me I was being pretty optimistic about what I would actually be able to attend. No joke. Okay, enough of my complaining.

[Photo: Waiting for things to get started.]

[Photo: Session about half-full. Plenty of room to get comfortable.]

[Photo: Presenter's shirt says, "got root?". Explaining methods for ensuring account resource compliance and using AWS account best practices when it comes to logging, backups, and fast reaction to nefarious changes.]

This was an excellent session. The presenters were fantastic and poked fun at mistakes they themselves have made, or that customers they've talked to have made, regarding automation (or lack thereof), compliance, and just overall bone-headedness (is that a word?). The big takeaways I found were to consider using services like CloudWatch, CloudTrail, and Config to monitor and log activity in your AWS accounts so you become aware when stupid rears its ugly head. They threw out questions like, "What would happen if the root account's credentials were compromised and you didn't know about it until it was too late?", and "You have an automated process for creating backups, but do you actually test those backups?". From this came suggestions to regularly store and test backups in another account in case an account gets compromised, and to use things like MFA, especially for root and privileged users.

Additionally, the presenters made a good argument for not using the management console for activities once you become more familiar with AWS, particularly if you're leveraging the automation tools AWS provides like OpsWorks and CloudFormation as that kind of manual mucking around via the console can leave you in funny states for stacks deployed with those services. Along those lines, they also suggested dividing up the different tiers of your application infrastructure into their own stacks so that when you need to make changes to something or scale, you don't end up changing the whole system. Instead, you only modify or scale the relevant stack. Overall good session. If they have it again next year, I would recommend it. You'll get some laughs, if nothing else. The guys were pretty funny.

Once out, I had a meeting scheduled to talk with a company (presumably about upcoming Pluralsight work) at the Global Partner Summit Welcome Reception. Now, I'll admit I got a little frustrated trying to find where the **** this was taking place! AWS did a great job sending lots of guides with re:Invent flags everywhere to answer questions and direct attendees to their events, and these guys were godsends every time except when it came to finding this event. I think I just got unlucky with a few that were misinformed.

[Photo: These guys were scattered all over the strip and inside the hotels. Very helpful!]

First, I was told to go to one of the ballrooms. Found what appeared to be some kind of Presenter's Registration there. Then, found another guide who said to go to the Grand Garden Arena. Walked over there, total graveyard, and ironically, a random dude there who wasn't even one of the re:Invent guides told me where it actually was. He also said, "Oh yeah, and unless you want to be standing in line all night, you might want to reconsider." It was late enough at this point, I figured I'd just head back to the Bellagio for a much needed poker session, so that's what I did. However, on the way back, holy ****, he was right. I've never seen a line as long as the one to get into the GPS Welcome Reception in my life. It went from the food court, through the entire casino, out of the casino, and further back to I couldn't tell where. Apparently, I was the only one who missed the memo, since everyone else knew where to go, but still, that line.

Long hike back to the Bellagio, played poker for about 3 hours, lost $200 (man, I suck), and on my way back to my room discovered I hadn't eaten anything all day. LOL! Picked up a couple pizza slices and crashed for the night. A good night's sleep? Yes, please. Tomorrow would be better.

Cheers!


New Pluralsight Course: Getting Started with AWS Athena

24. August 2017 20:32 by Aaron Medacco | 0 Comments

After a few months of developing and recording content, my first Pluralsight course, Getting Started with AWS Athena, is live and published. A lot of work went into this, especially since I've never recorded video content of professional quality before, so I'm relieved to finally cross the finish line. I never realized how much can go into producing an online course, which has given me a newfound respect for my fellow authors.


Besides learning how to produce quality video content, I underestimated how much more I would learn about AWS and Athena. There's certainly a difference between knowing enough to solve your problem with AWS, and knowing enough to teach others how to solve theirs with it. 

For those interested in checking it out, you can find the course here. You'll need to have an active Pluralsight subscription, otherwise you can start a free trial. If you work in technology, the value of a subscription is pretty crazy given the amount of content available.

The course is separated into 7 modules:

  1. Exploring AWS Athena

    Sets the stage of the course. I speak to the value proposition of Athena, why you would want to use it, the features, supported data formats, limitations, and pricing model. If you're someone who's unfamiliar with Athena, this module's designed to give you a primer.

  2. Establishing Access to Your Data

    Athena's not very useful if you can't invoke it. In this module, I show you how to upload data to S3 and configure a user account with Athena access within IAM. Many will find this to be review, especially those practiced with Amazon Web Services, but it's a prerequisite before getting your hands dirty in Athena.

  3. Understanding AWS Athena Internals

    You never want to be in a place where things seem like magic. Here I address the technologies that operate underneath Athena, namely Apache Hive and the Presto SQL engine. If you've never used these tools to query data before, knowing what they are and how they fit within Athena is important. The only real barrier to entry for using Athena is the ability to write SQL, so I imagine a lot of users with no experience with big data technologies will be trying it out and this module gives a small crash course to help offset that.

  4. Creating Databases & Tables to Define Your Schema

    We start getting our hands dirty in this module. We talk about what databases and tables are within Athena's catalog and how they compare to those of relational databases. This one's pretty hands-on heavy, as I demonstrate how to construct tables correctly using both the console and third-party tools over a JDBC connection.

  5. Retrieving Information by Querying Tables with SQL

    Here we finally get to start eating our cake. I cover the Presto SQL engine in brief detail and show how easy it is to query S3 data using ANSI-SQL. Athena's built to be easy to get started with, so by the end of this module, most will feel comfortable enough to start using Athena on their own datasets. (I've included a quick CLI sketch after this module list.)

  6. Optimizing Cost and Performance Using Best Practices

    My favorite module of the course, tailored to those who want to get more performance and keep more of their money. I review what methods you can employ to improve query times and reduce the amount of data scanned. The 3 primary ways of doing this involve compression, columnar formats, and table partitioning. In a lot of cases, it's not as simple as "Just compress your data and win." or "Columnar formats are faster so just use that." and I talk about what factors are important when deciding on an optimization strategy for Athena workloads. I also demonstrate how you would transform data into a columnar format using Amazon Elastic MapReduce for those who may have never done it before.

  7. AWS Athena vs. Other Solutions

    Finally, I thought it would be interesting to discuss how Athena stacks up against other data services within the Amazon cloud. Knowing when to use each service is vital for anyone responsible for proposing solutions within AWS, so I felt some high-level, admittedly apples-to-oranges, comparisons would help steer viewers in the right direction.
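
For the curious, here's the kind of CLI query sketch I mention in module 5. The database, table, and results bucket names are made up for illustration, so swap in your own:

aws athena start-query-execution --query-string "SELECT COUNT(*) FROM sampledb.orders WHERE year = 2017;" --query-execution-context Database=sampledb --result-configuration OutputLocation=s3://my-athena-results-bucket/
aws athena get-query-execution --query-execution-id <execution-id-returned-above>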

Right, so go watch the course! And leave feedback! I want to keep improving the quality of the content I create, and comments are extremely helpful.

Cheers!


Scheduling Notifications for Rotating Old IAM Access Keys

22. July 2017 19:05 by Aaron Medacco | 0 Comments

Access keys allow you to give programmatic access to a user so they can accomplish tasks and interact with services within your AWS environment. These keys should be heavily guarded and kept secret. Once exposed, anyone can wreak havoc in your AWS account using whatever permissions were granted to the user whose keys were exposed.

A best practice for security is to rotate these keys regularly. We want to keep our access keys fresh and not have old or unused keys running about waiting to be abused. Therefore, I've created an automated method for notifying an administrator when user access keys are old and should be rotated. Since the number of days for keys to be considered "old" varies across organizations, I've included a variable that can be configured to fit the reader's requirements.


Like my recent posts, this will make use of the AWS CLI, which I assume you've installed and configured already. We'll be using (of course) Lambda backed by a CloudWatch event rule. Notifications will be sent using the SNS service when keys should be rotated. Enjoy.

Creating an SNS topic:

  1. Open up a command prompt or terminal window, and invoke the following command:
    aws sns create-topic --name IAM-Access-Key-Rotation-Topic
  2. You'll get back a Topic ARN value, which you'll want to keep.
  3. Then invoke the following command. Substitute Email for the email address you want to receive the notifications.
    Note: You don't have to use email if you don't want to. Feel free to use whichever protocol/endpoint fits you.
    aws sns subscribe --topic-arn ARN --protocol email --notification-endpoint Email
  4. That address will receive an email asking you to confirm the subscription. You'll need to confirm the subscription before moving on if you want notifications to go out.
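
    Note: If you want to verify the state of the subscription from the CLI, here's a quick sketch using the topic ARN from step 2; the SubscriptionArn field will read "PendingConfirmation" until the link in the email is clicked:
    aws sns list-subscriptions-by-topic --topic-arn ARN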

Creating an IAM policy for access permissions:

  1. Create a file named iam-policy.json with the following contents and save it in your working directory. Substitute Your Topic ARN for the ARN of the topic you just created:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "sns:Publish"
                ],
                "Resource": [
                    "Your Topic ARN"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "iam:ListAccessKeys",
                    "iam:ListUsers"
                ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }
  2. In your command prompt or terminal window, execute the following command:
    aws iam create-policy --policy-name rotate-old-access-keys-notification-policy --policy-document file://iam-policy.json
  3. You'll receive details for the policy you just created. Write down the ARN value. You will need it in a later step.

Creating the IAM role for the Lambda function:

  1. Create a file named role-trust-policy.json with the following contents and save it in your working directory:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  2. In your command prompt or terminal window, invoke the following command:
    aws iam create-role --role-name rotate-old-access-keys-notification-role --assume-role-policy-document file://role-trust-policy.json
  3. You'll get back information about the role you just made. Write down the ARN value for the role. You will need it when we go to create the Lambda function.
  4. Invoke this command to attach the policy to the role. Substitute ARN for the policy ARN you received when you created the IAM policy.
    aws iam attach-role-policy --policy-arn ARN --role-name rotate-old-access-keys-notification-role

Creating the Lambda function:

  1. Create a file named index.js with the following contents and save it in your working directory. Make sure you substitute Topic ARN with the ARN of the topic you created in the first step.
    Note: Notice you can configure the number of days allowed before rotations should be done. Default value I set is 90.
    var AWS = require("aws-sdk");
    var iam = new AWS.IAM();
    var sns = new AWS.SNS();
    
    var daysBeforeRotationRequired = 90; // Number of days from access creation before action is taken on access keys.
    
    exports.handler = (event, context, callback) => {
        var listUsersParams = {};
        iam.listUsers(listUsersParams, function(err, data) {
            if (err) {
                console.log(err, err.stack);
            }
            else {
                for (var i = 0; i < data.Users.length; i++) {
                    var userName = data.Users[i].UserName;
                    var listAccessKeysParams = { UserName: userName };
                    iam.listAccessKeys(listAccessKeysParams, function(err, data) {
                        if (err) {
                            console.log(err, err.stack);
                        } 
                        else {
                            for (var j = 0; j < data.AccessKeyMetadata.length; j++) {
                                var accessKeyId = data.AccessKeyMetadata[j].AccessKeyId;
                                var creationDate = data.AccessKeyMetadata[j].CreateDate;
                                var accessKeyUserName = data.AccessKeyMetadata[j].UserName;
                                var now = new Date();
                                var whenRotationShouldOccur = dateAdd(creationDate, "day", daysBeforeRotationRequired);
                                if (now > whenRotationShouldOccur) {
                                    var message = "You need to rotate access key: " + accessKeyId + " for user: " + accessKeyUserName
                                    console.log(message);
                                    var publishParams = {
                                        Message: message,
                                        Subject: "Access Keys Need Rotating For User: " + accessKeyUserName,
                                        TopicArn: "Topic ARN"
                                    };
                                    sns.publish(publishParams, context.done);
                                }
                                else {
                                    console.log("No access key rotation necessary for: " + accessKeyId + "for user: " + accessKeyUserName);
                                }
                            }
                        }
                    });
                }
            }
        });
    };
    
    var dateAdd = function(date, interval, units) {
        var ret = new Date(date); // don't change original date
        switch(interval.toLowerCase()) {
            case 'year'   :  ret.setFullYear(ret.getFullYear() + units);  break;
            case 'quarter':  ret.setMonth(ret.getMonth() + 3*units);  break;
            case 'month'  :  ret.setMonth(ret.getMonth() + units);  break;
            case 'week'   :  ret.setDate(ret.getDate() + 7*units);  break;
            case 'day'    :  ret.setDate(ret.getDate() + units);  break;
            case 'hour'   :  ret.setTime(ret.getTime() + units*3600000);  break;
            case 'minute' :  ret.setTime(ret.getTime() + units*60000);  break;
            case 'second' :  ret.setTime(ret.getTime() + units*1000);  break;
            default       :  ret = undefined;  break;
        }
        return ret;
    }
  2. Zip this file into an archive named index.zip (see the sketch after this list for one way to do it).
  3. Bring your command prompt or terminal window back up, and execute the following command. Substitute ARN for the role ARN you received from the step where we created the role:
    aws lambda create-function --function-name rotate-old-access-keys-notification-sender --runtime nodejs6.10 --handler index.handler --role ARN --zip-file fileb://index.zip --timeout 30
  4. Once the function is created, you'll get back details for the function. Write down the ARN value for the function for when we schedule the function execution.
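
    Note: For reference, the zip step and a quick smoke test of the function might look like this (a sketch; the output file name is arbitrary):
    zip index.zip index.js
    aws lambda invoke --function-name rotate-old-access-keys-notification-sender out.json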

Scheduling the Lambda function:

  1.  In your command prompt or terminal window, execute the following command:
    Note: Feel free to adjust the schedule expression for your own purposes.
    aws events put-rule --name Rotate-Old-Access-Keys-Notification-Scheduler --schedule-expression "rate(1 day)"
  2. Write down the ARN value for the rule upon creation finishing.
  3. Run the following command. Substitute ARN for the Rule ARN you just received:
    aws lambda add-permission --function-name rotate-old-access-keys-notification-sender --statement-id LambdaPermission --action "lambda:InvokeFunction" --principal events.amazonaws.com --source-arn ARN
  4. Now run the following command. Substitute ARN for that of the Lambda function you created in the previous step:
    aws events put-targets --rule Rotate-Old-Access-Keys-Notification-Scheduler --targets "Id"="1","Arn"="ARN"
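
    Note: To double-check the wiring, you can confirm the rule is pointed at the function (a quick sketch):
    aws events list-targets-by-rule --rule Rotate-Old-Access-Keys-Notification-Scheduler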

For information regarding ways to rotate your access keys without affecting your applications, click here.

Cheers!

Copyright © 2016-2017 Aaron Medacco