Blog Sponsorship

21. February 2018 21:57 by Aaron Medacco | 0 Comments

When I started this blog, I knew I would always keep the content ad-free. That hasn't changed and never will. However, starting today, I'd like to offer a method for companies or organizations within the AWS ecosystem to gain exposure to my readers while also supporting the content and the time I invest in producing it. Amazon Web Services is a mammoth, ever-changing catalog of cloud offerings that can be overwhelming to learn and keep up with. The articles I write here are a collection of my own understanding and monkeying around within the Amazon cloud, intended to assist others who may find the content useful. Advertising helpful, relevant products within the AWS ecosystem aligns well with that objective.

My audience is primarily technology professionals working with Amazon Web Services in some capacity. Therefore, if you believe you offer a product or service that can serve my readers and are interested in sponsoring the material here while gaining some brand exposure to boot, please reach out to me at acmedacco@gmail.com for more details.

Cheers!

Testing Your Web Tier Auto Scaling Groups on AWS w/ Artillery.io

13. April 2017 22:48 by Aaron Medacco | 0 Comments

Amazon Web Services has made adding elasticity to your system architecture easy through its auto scaling groups. By following a few easy steps, you can expand and contract your infrastructure to optimally serve whatever volume of traffic your applications encounter.

But how can you test your auto scaling group on AWS? How do you know if everything is configured correctly?

You'll need to simulate specific traffic loads in order to test that the scaling rules you've set up are correct. An easy way to do this is with Artillery.io, an open source load testing toolkit written in Node.js. I won't go into detail on everything Artillery.io can do, but I encourage you to go check it out. For our purposes, I'm only going to run a few simple commands to show how easy testing an auto scaling group can be.

In order to use Artillery.io, you will need Node.js. You can download and install the appropriate package here.

Assuming you've installed Node.js, you can test your auto scaling group by following these steps. The installation steps for Artillery.io can also be found here.

Installing Artillery.io:

  1. Open a command prompt or terminal window.
  2. Run the following command:
    npm install -g artillery
  3. Check your installation by running this command:
    artillery dino
  4. If you see the ASCII dinosaur, that means everything is ready to go.

Testing your auto scaling group:

  1. Run the following command:
    artillery quick --duration 300 --rate 10 -n 20 http://www.yourwebsite.com/resource
    Note: You'll notice I've set my duration to 5 minutes instead of the example given in Artillery.io's documentation. This is because my auto scaling group only scales out if average CPU utilization over 5 minutes is sufficiently high. You'll need to adjust these values depending on how powerful the instances in your auto scaling group are and what scaling rules you've defined. (A sketch of a matching scale-out rule appears after this list.)
  2. Substitute the final argument with the URL of the resource you're trying to test.
  3. If you've invoked a heavy enough load, you can monitor your EC2 instances serving the requests and notice something akin to the following:

    [Screenshot: Test Auto Scaling Group 1]
  4. This should trigger your auto scaling group to provision additional instances provided you've set up the appropriate scale out rules:

    [Screenshot: Test Auto Scaling Group 2]

    For this example, you can see my auto scaling group scaled up to its maximum (3 instances) over the course of the test and then scaled back down afterward, using the t2.micro instances specified in the launch configuration.
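
For reference, here's a minimal sketch of the kind of scale-out rule described in the note above, using Python and boto3. The group name, policy name, alarm name, and threshold are hypothetical placeholders; substitute values matching your own auto scaling group.

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    GROUP = "my-web-tier-asg"  # hypothetical group name

    # Simple scaling policy: add one instance when the alarm fires.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName=GROUP,
        PolicyName="scale-out-on-cpu",
        PolicyType="SimpleScaling",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
        Cooldown=300,
    )

    # Alarm: group-average CPU above 70% over a single 5-minute period.
    cloudwatch.put_metric_alarm(
        AlarmName="web-tier-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Statistic="Average",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": GROUP}],
        Period=300,
        EvaluationPeriods=1,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )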

If your auto scaling group scaled appropriately, you can rest assured that it will work under real traffic and that your configuration is sound. Make sure that the scale-down rules also took effect and that once your use of Artillery.io has ended, your auto scaling group terminates the unnecessary instances.
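
If you'd rather watch from a terminal than from the console, here's a rough sketch (again boto3, again with a hypothetical group name) that polls the group's size while the test runs and then prints its recent scaling activity:

    import time
    import boto3

    autoscaling = boto3.client("autoscaling")
    GROUP = "my-web-tier-asg"  # hypothetical; use your group's name

    for _ in range(30):  # poll every 30 seconds for ~15 minutes
        group = autoscaling.describe_auto_scaling_groups(
            AutoScalingGroupNames=[GROUP]
        )["AutoScalingGroups"][0]
        print(f"desired={group['DesiredCapacity']} "
              f"running={len(group['Instances'])}")
        time.sleep(30)

    # Review what the group actually did during the test.
    activities = autoscaling.describe_scaling_activities(
        AutoScalingGroupName=GROUP
    )["Activities"]
    for activity in activities[:5]:
        print(activity["Description"], "-", activity["StatusCode"])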

Note: Some of the inspiration for this post comes from Mike Pfeiffer's Pluralsight course AWS Certified DevOps Engineer: High Availability and Elasticity, which I encourage everyone to check out here.

Cheers!

AWS Phoenix Meetup - Security Strategy When Migrating to the Public Cloud

2. April 2017 20:51 by Aaron Medacco | 0 Comments

A little over a week ago, I attended an AWS Meetup in Phoenix regarding how to approach security when migrating applications to the cloud. The event was sponsored by NextNet and Alert Logic and took place at the ASU research center. Charles Johnson of Alert Logic presented and did a fantastic job. This being my first such event, I was expecting a dry lecture. What I got was an engaging discussion with a lot of very smart people.

As someone who develops software, my primary takeaways involved the integration of security into the software development lifecycle, and how application-level frameworks are becoming the attack surfaces most targeted by attackers, especially for applications in the cloud.

The movement toward tearing down the walls separating development, operations, and security, commonly referred to as DevOps or DevSecOps, was core to much of the discussion. Charles talked about how everybody involved with designing, developing, and maintaining applications in the cloud needs to take ownership of security, not just the "Security Team". Additionally, security should not be an annoying thing slapped onto an application at the end of the development lifecycle. Instead, security should be discussed early and often by application developers, system admins, and security engineers so that each piece of the application, and the infrastructure powering it, is designed to be secure. This means incorporating security testing alongside application unit testing from an early stage, and deciding upfront as a team how to store API keys, login credentials, etc. so they aren't exposed or hard-coded by a lazy developer. It also means constantly checking for where you might be vulnerable, and deciding how to address those vulnerabilities together.
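
To make the "don't hard-code credentials" point concrete, here's a minimal sketch of pulling a secret from SSM Parameter Store at runtime instead of embedding it in source, using Python and boto3. The parameter name is a hypothetical placeholder:

    import boto3

    ssm = boto3.client("ssm")

    # Bad: password = "hunter2"  # hard-coded, ends up in version control
    # Better: fetch an encrypted parameter at runtime.
    password = ssm.get_parameter(
        Name="/myapp/prod/db_password",  # hypothetical parameter name
        WithDecryption=True,
    )["Parameter"]["Value"]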

Incorporating security into the software development lifecycle also has the benefit of reducing the number of tools you need to buy after the fact in order to feel secure. If the application is designed from the ground up with security in mind, you shouldn't need to purchase tons of security tools to compensate. Charles mentioned that some of the teams he's assisted have bought expensive security products but still haven't implemented them months after purchase. Yikes!

And just because you've bought the latest and greatest tools, don't assume you are not vulnerable. In fact, assume the opposite. Assume you are vulnerable. Assume that the products you purchase and the frameworks you leverage are vulnerable, and monitor for breaches all the time. Additionally, consider what products you're buying. Are they really helping you? Most security products are designed to protect server operating systems, networking, and hypervisors, or to aid in cloud management. But what about the application frameworks used by the developers? The databases, server-side apps, third-party libraries, etc.? Do these products help secure those? Do the developers who are most intimate with these tools even have the authority to purchase security products anyway? Maybe not. And who are the sales teams for these products selling to? Probably not developers.

Note: I wish I had the graphic from the presentation, which showed that most of the attack surface for applications living in the cloud occurs higher up the stack, i.e., at the application level: SQL injection, XSS, etc.

Lastly, use the tools provided by AWS. Use WAF. Use Inspector. Use Shield. Use Config. 
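
As one small example of putting those services to work, here's a minimal sketch, using Python and boto3, that asks AWS Config which of your rules are currently reporting noncompliant resources. It assumes you already have Config rules set up in your account:

    import boto3

    config = boto3.client("config")

    # List every Config rule currently reporting noncompliant resources.
    response = config.describe_compliance_by_config_rule(
        ComplianceTypes=["NON_COMPLIANT"]
    )
    for rule in response["ComplianceByConfigRules"]:
        print(rule["ConfigRuleName"])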

I'm sure I've left out a lot of information. I had a hard time concentrating on the presentation while taking notes on my laptop. However, I'm definitely going to the next event. (More cookies)

For those of you in the Phoenix area, consider checking out the AWS Phoenix Meetup and Blue Team - Greater Phoenix Area if you're interested in attending these kinds of events.

Cheers!

Third Party Tools for Diagramming Your AWS System Architecture

7. January 2017 20:00 by Aaron Medacco | 0 Comments

As AWS professionals, there are times when we're asked to provide diagrams or high-level blueprints of whatever system we are designing. Maybe you need to create a proof of concept of your solution before getting the buy-in you need to start. Perhaps you're working on a team and need an agreed-upon blueprint so team members stay on the same page and can refer back to it as they build their cloud solution. Also, for those participating or looking to participate in the AWS Partner Network as either a Technical or Consulting Partner, diagrams of your customers' system architectures are required for the reference submissions needed to meet partnership requirements.

The following third-party tools can be helpful in creating such materials. Admittedly, I used to create these using Photoshop, but for larger architectures or diagrams that require a lot of detail, it's more efficient to use tools built specifically for this task.

Cloudcraft

Cloudcraft provides an easy-to-use solution for creating AWS architecture diagrams. It features a tilted, tiled grid where you can drag and drop resources relevant to your solution. The controls are smart and provide cost estimation. For instance, when placing an EC2 instance, you can choose what kind of instance it is and see that choice reflected both on the diagram and in the cost estimate. Once you're done editing your diagram, you can export it as an image or generate a link to share with others.

[Image: Cloudcraft diagram]

However, the feature that excited me most was being able to sync real-time information about your resources to Cloudcraft directly. By creating a user with read-only access, you can allow Cloudcraft to pull information from your AWS account and generate your diagram for you. These diagrams are persistent and will update as you change your architecture over time. Certainly better than using Photoshop.
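
For the curious, creating such a read-only user is straightforward. A minimal sketch with Python and boto3, assuming a hypothetical user name and AWS's managed ReadOnlyAccess policy:

    import boto3

    iam = boto3.client("iam")
    USER = "cloudcraft-readonly"  # hypothetical user name

    iam.create_user(UserName=USER)
    iam.attach_user_policy(
        UserName=USER,
        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
    )

    # Hand these credentials to the diagramming tool; don't reuse them elsewhere.
    key = iam.create_access_key(UserName=USER)["AccessKey"]
    print(key["AccessKeyId"], key["SecretAccessKey"])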

Cloudcraft offers one free and three paid plans. For simple architectures that don't require large diagrams, you should be able to accomplish what you need without spending any money or providing credit card information. Those with larger architectures who need larger grids should look at the Pro Solo plan ($49/mo). In addition to the infinite grid size, you'll receive the awesome ability to sync with your AWS account and gain basic support from Cloudcraft. All of this is limited to one user until you reach the Pro Team plan ($245/mo), which includes priority support from Cloudcraft and allows for team collaboration. For additional customization and larger team sizes, Cloudcraft asks that you contact them for enterprise arrangements. Check them out here.

Lucidchart

Lucidchart also features a custom drag-and-drop interface for diagramming your AWS resources. With a rich library of AWS shapes and service icons, you can create easily understood top-down blueprints of your system architecture. These are strictly 2D and don't have the 3D feel of the Cloudcraft diagrams. Real-time collaboration with your team members is supported, as is team chat. For me, Lucidchart felt like Photoshop Lite for AWS blueprints. There are a lot of options available, and if it's easier, you can select one of their pre-made templates to get a head start.

[Image: Lucidchart diagram]

Lucidchart also has the ability to import resources from your AWS account using the least access required, something I find very cool (if you haven't noticed). Also, for those who have other diagramming needs, Lucidchart is not exclusive to AWS, so it can provide even more value than cloud-only diagram creation services.

A free trial exists that does not require credit card information. Paid plans start at $5/mo for Basic, which supports an unlimited number of shapes and documents within 100 MB of storage for a single user. Pro plans are $9/mo and expand the storage limit to 1 GB, with access to all shape libraries and support for Visio import/export. Team plans start at $20/mo and increase with the number of users; it works out to $6/mo per user on your team as you move beyond $20/mo. Team plans come with management tools and the ability to integrate with third-party tools like JIRA and Google Apps. Enterprise customers should contact Lucidchart directly. Check them out here.

Hava

While Hava does not allow for heavy customization of your architecture diagrams, it has additional use cases which I thought were exciting. Hava also allows you to sync your AWS resources from your account using read-only access in order to create a rough diagram of your architecture. Unfortunately, this is where the customization stops: you're pretty much stuck with the diagram they generate, which you can export as an image or PDF. Those who aren't talented artists and want a quick way to get a passing diagram will find this kind of automation useful.

[Image: Hava diagram]

Hava has other features outside of generating diagrams that are worth mentioning. Anyone interested in efficiency? AWS professionals writing their own CloudFormation templates can generate diagrams from the raw template JSON, allowing you to see a blueprint of what your stack will look like (and cost) without actually provisioning it. Additionally, Hava will generate a CloudFormation template from diagrams you import. What!? This means you can set up an environment in AWS that you're comfortable with, then import it into Hava. From there, you can generate a CloudFormation template from the diagram that you can use to replicate the environment going forward. Mmm...time.
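
To make the CloudFormation angle concrete, here's a deliberately tiny, hypothetical template of the sort such a tool could visualize, built and printed as JSON from Python. The resource name and AMI ID are placeholders:

    import json

    # A minimal, hypothetical stack: a single EC2 instance.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Minimal stack for diagram generation",
        "Resources": {
            "WebServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": "t2.micro",
                    "ImageId": "ami-12345678",  # placeholder AMI ID
                },
            },
        },
    }

    print(json.dumps(template, indent=2))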

Hava offers a two-week free trial with limited features. Paid plans are tiered by the number of resources being visualized. A resource counts as any EC2 instance, RDS instance, load balancer, or ElastiCache cluster. Every plan allows an unlimited number of users. Tiers are XS ($39/mo, 20 resources), S ($199/mo, 100 resources), M ($499/mo, 250 resources), L ($999/mo, 500 resources), and XL ($1,999/mo, 1000 resources). Again, not strictly a customizable diagramming tool, but a huge value to the right customers. Check them out here.

Cheers!
