My 2 cents on technology

Entendendo - Realmente - O que é Cloud Computing

Disclaimer: The presentation below is in Portuguese, since it was made for a Brazilian networking conference…

This presentation was given at the 13th ERRC - Escola Regional de Redes de Computadores on September 9, 2015, in Passo Fundo - RS. My goal was to give the audience an overview of what Cloud Computing is and how it has revolutionized the IT market. As such, this is an introductory-level talk.

Entendendo realmente o que é cloud computing from Igor Gentil

Automating AWS Resources With CloudFormation - Part I

This is a series of posts: Part I

One of the first things that comes to mind when I’m asked why I love working with AWS so much is, for sure, CloudFormation. It’s very much like the proverbial “gentle giant”: scary at first sight, but not anymore once you get to know it.

For those of you who don’t know CloudFormation, it’s an AWS service that allows you to allocate and manage resources based on a JSON file. The resources are allocated atomically, which means that if any one of them fails, the whole stack is rolled back (more on that later).

This is really cool for a number of reasons, but especially because you no longer need to manually allocate resources (EC2 instances, VPCs, Subnets, Route53 domains etc.). Moreover, since they’re defined in files, you can keep them in version control.

It’s a small step for a SysAdmin, but a huge leap towards Infrastructure as Code.

One thing worth noting. No matter what you read here, nothing will be more helpful than the official documentation. Trust me, I use it every day, and so should you.

Beginning with the basics

To understand CloudFormation you first need to understand its two main concepts: Templates and Stacks.

A Template is a JSON file written in the CloudFormation format that describes one or more (AWS) resources.

A Stack is the “realization” of a template: you can use one template to create many stacks.

CloudFormation is smart:

  • It creates and removes dependent resources in the necessary order, like VPC -> Subnets -> EC2 Instance
  • It knows when a resource being updated needs to be re-created or not (e.g. changing rules on a security group does not require it to be re-created, but changing the CIDR on a VPC does)
  • It runs tasks in parallel when possible

What does a Template look like?

Does He Look Like A Bitch?

A bit like this:

  {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Describe your template, it's useful",
    "Parameters": {
      "VPC": {
        "Description": "The VPC for the resources described here",
        "Type": "AWS::EC2::VPC::Id"
      }
    },
    "Resources": {
      "EFSClientSG": {
        "Type": "AWS::EC2::SecurityGroup",
        "Properties": {
          "GroupDescription": "Allow HTTP from everywhere",
          "VpcId": { "Ref": "VPC" },
          "SecurityGroupIngress": [
            {
              "CidrIp": "0.0.0.0/0",
              "IpProtocol": "tcp",
              "FromPort": "80",
              "ToPort": "80"
            }
          ]
        }
      }
    }
  }

Please keep in mind that this is one of the simplest forms of a CloudFormation template. You can extend it a lot.
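Since stack creation fails on an invalid template, it pays off to validate before uploading. Here’s a quick sketch (the file name template.json is just an example; the aws CLI call needs configured credentials, so it’s left commented out):

```shell
# Write a minimal template and check that it is well-formed JSON.
cat > template.json <<'EOF'
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal example template",
  "Resources": {
    "ExampleSG": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": { "GroupDescription": "Example group" }
    }
  }
}
EOF

# Local check: is the file syntactically valid JSON?
python3 -m json.tool template.json > /dev/null && echo "valid JSON"

# With credentials configured, CloudFormation can do a real validation:
# aws cloudformation validate-template --template-body file://template.json
```

Keep in mind the local check only catches JSON syntax errors; only validate-template knows about the CloudFormation schema itself.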

Drilling down the template

Templates are made of sections, namely:

  1. Format Version
  2. Description
  3. Parameters
  4. Resources
  5. Metadata
  6. Mappings
  7. Conditions
  8. Outputs

Sections 5 through 8 will be covered in future posts, for they are more advanced.

Format Version

This section specifies the CloudFormation template version you’re using. It’s a required parameter and is used for backward compatibility with the CloudFormation API. Consider that (like all AWS services) CloudFormation is under constant evolution. With this control, you can keep using old templates for a long time without the risk of them breaking because of changes to the service.

As of the time of this post, the only valid value is “2010-09-09”.


Description

A short description of your template. It will appear on the CloudFormation console and can help you identify your stacks and templates.


Parameters

The description and type of each parameter to be passed to your template during stack creation. This is incredibly useful, since it allows you to write generic templates and reuse your code.

Each parameter must have a name, a type, and optionally a description and conditions (we’ll go into this on future posts):

  "Parameters": {
    "VPC": {
      "Description": "The VPC for the resources described here",
      "Type": "AWS::EC2::VPC::Id"
    },
    "SourceIP": {
      "Type": "String"
    }
  }

Here, VPC and SourceIP are the parameter names, referenced on the rest of the template.
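For the record, parameters don’t have to be typed into the console: the aws CLI also accepts them from a JSON file. A sketch, assuming the parameters above (file names and values are placeholders):

```shell
# Values for the VPC and SourceIP parameters defined in the template
cat > params.json <<'EOF'
[
  { "ParameterKey": "VPC",      "ParameterValue": "vpc-11111111" },
  { "ParameterKey": "SourceIP", "ParameterValue": "203.0.113.10" }
]
EOF

# Sanity-check the file locally
python3 -m json.tool params.json > /dev/null && echo "valid JSON"

# With credentials configured, pass it at stack creation time:
# aws cloudformation create-stack --stack-name my-stack \
#     --template-body file://template.json --parameters file://params.json
```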


Resources

This is the main part of any template: it’s where the actual AWS resources are defined. A full list of supported resources can be found in the official documentation.

Moving on to creating stacks

Ok, I already have my template. How do I use it? It’s simple, my young padawan. First, go to the CloudFormation section of the AWS Dashboard:

Click on Create New Stack.

Cloud Formation Dashboard

Here, you name your stack and upload the template file. If the template is invalid, the upload will fail; only valid templates pass this step.

Create A Stack - Step 1

If your template requires any parameters, specify them here. Then click next.

Create A Stack - Step 2

Here you can specify tags for all resources defined by your stack. Tags cannot start with ‘aws:’.

Create A Stack - Step 3

Finally, the review screen shows you all information provided. If everything looks fine, proceed by clicking on Create.

Create A Stack - Step 4

After a few minutes (depending on the number of resources your template is creating), your stack should be created. While you wait, go through the tabs shown below, especially Events, Resources, Parameters and Tags.

Cloud Formation Dashboard

Wrapping up

Next post I’ll talk about mappings, outputs and (hopefully) functions ;)

And, as always: if you have any questions or suggestions, please e-mail me or tweet!

Happy Hacking!


Self-Extracting Bash Script

Update: fixed a typo. Sorry about that…

Recently I stumbled upon a problem: how to ship my Puppet manifests - which were hosted on a private GitHub repository - to different servers that were not necessarily part of the same network and, hence, did not share a Puppet Master?

One option was to add an SSH deploy key to each server and clone the repo. OK, that works, but I wanted to script the whole thing, and creating SSH keys is a bit cumbersome (IMHO).

In the end, my solution was a Self-Extracting shell script hosted on S3! To my complete surprise, it’s really easy to do this. All you need is a bit of shell and an open mind. Here’s how it’s done:

Directory structure

Say you want to self-extract the directory files/ from your project. Besides it, you’ll need three scripts (the names below are just placeholders; call them whatever you like): an extract header (extract.head), a build script (build.sh) and a setup script (setup.sh). Your directory structure for building the extracting script should look like this:

 - files/
 - extract.head
 - build.sh
 - setup.sh

Each of these is explained below.

Extract, Build and Setup

The way this whole thing works is this: the extract header (extract.head) will be the “head” of the final self-extracting script. It reads the script itself starting at a specific line and pipes the output to tar, which extracts the files to a temporary directory; it then runs the setup script (setup.sh), which should put all files in place. The build script (build.sh), on the other hand, compresses the files/ folder together with setup.sh and concatenates the extract header with the tar archive into a new file: your self-extracting script. Got it? Good!

extract.head:

FILE_MARKER=`awk '/^TAR FILE:/ { print NR + 1; exit 0; }' $0`
TMP_DIR=`mktemp -d /tmp/self-extract-bash.XXXXXX`

# Extract the appended archive using a pipe
tail -n+$FILE_MARKER $0 | tar -zx -C $TMP_DIR

# Run the setup script
(cd $TMP_DIR && bash setup.sh)

# Remove the self-extracting script and the temp directory
rm -rf $0 $TMP_DIR
exit 0
TAR FILE:

The last line of extract.head is just a marker. The exit 0 right before it ensures the script stops executing before the “TAR FILE:” line.
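The marker trick is easy to see in miniature. The sketch below builds a fake four-line script and shows how awk and tail find everything past the marker (the file name demo.txt is arbitrary):

```shell
# A stand-in for the self-extracting script: payload after the marker
printf 'echo head\nexit 0\nTAR FILE:\npayload\n' > demo.txt

# awk prints the number of the line right after the marker...
MARKER=$(awk '/^TAR FILE:/ { print NR + 1; exit 0; }' demo.txt)

# ...and tail prints everything from that line on (here, the payload)
tail -n+$MARKER demo.txt
```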

build.sh:

tar -zcf files.tar.gz files/ setup.sh
cat extract.head files.tar.gz > install.sh # <- name of your self-extracting script
chmod +x install.sh

Finally, setup.sh should copy the extracted files to wherever you need them, and/or run any other commands you require. For instance:

cp -Rv files/ /opt/my_files
chown -R some_user: /opt/my_files # <- replace with the owner you need

Easy, right?
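To convince yourself it all fits together, the whole flow can be rehearsed locally in a scratch directory. This is a sketch using the same placeholder names (extract.head, setup.sh, install.sh), with the build step inlined; unlike the real head script, it does not delete $0, so you can inspect install.sh afterwards:

```shell
set -e
WORK=$(mktemp -d /tmp/sfx-demo.XXXXXX)
cd "$WORK"

# 1. The payload to ship
mkdir files
echo "hello" > files/greeting.txt

# 2. The setup script, run from inside the extraction directory
#    ($WORK is expanded now, baking in an absolute target path)
cat > setup.sh <<EOF
cp -R files "$WORK/installed"
EOF

# 3. The head of the self-extracting script (no rm of $0 in this demo)
cat > extract.head <<'EOF'
#!/bin/bash
FILE_MARKER=$(awk '/^TAR FILE:/ { print NR + 1; exit 0; }' $0)
TMP_DIR=$(mktemp -d /tmp/self-extract-bash.XXXXXX)
tail -n+$FILE_MARKER $0 | tar -zx -C $TMP_DIR
(cd $TMP_DIR && bash setup.sh)
rm -rf $TMP_DIR
exit 0
TAR FILE:
EOF

# 4. Build: archive payload + setup script, glue head and archive together
tar -zcf files.tar.gz files/ setup.sh
cat extract.head files.tar.gz > install.sh
chmod +x install.sh

# 5. Run the self-extractor from somewhere else and check the result
cd /tmp
bash "$WORK/install.sh"
cat "$WORK/installed/greeting.txt" # prints: hello
```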

Once I got my head around this process, it seemed very straightforward. But if you’re having difficulties, just ping me on Twitter (@igorlgentil) and I’ll be happy to help!


I'm Back!

I know, I know. Last post was in March, and it wasn’t much. What can I say… been busy. Now, the goal is one post per week, discussing projects, solutions to problems, and whatever else I’ve been working on.


QConSP 2015 - Day 3

QConSP Day 3 came with a whole track about Continuous Delivery. Now we’re talking!!

The keynote by Thoughtworks’ Sam Newman (@samnewman) about Microservices was cool, despite some things about Docker and FP that I don’t quite agree with… (more on this later).

Then we had GitHub’s Ben Lavender (@bhuba) talking about Chat-Powered Continuous Delivery. That s%$t is just AMAZING. I’ve known about this practice for some time, but this was the first real-case demonstration I’ve seen, and all I can say is: I’m sold!! (Again, I’ll post more about this later; I have a huge backlog already ;)