
This is the second part of a post on using a Manifest of your infrastructure for automation. The first part described how to use your Cloud API to transform Application Definitions into an Infrastructure Manifest. This post will show examples of automation tools built using an Infrastructure Manifest. In particular, we’ll explore application deployment and load balancer configuration management.

Recall our example Infrastructure Manifest from Part 1:

{
  "prod": {
    "us-east-1": {
      "appserve01ea1": {
        "applications": [
          "appserve"
        ],
        "zone": "us-east-1a",
        "fqdn": "ec2-1-2-3-4.compute-1.amazonaws.com",
        "private ip": "10.9.8.7",
        "public ip": "1.2.3.4",
        "id": "i-a1234bc5"
      },
      ...
    }
  },
  ...
}

As I mentioned previously, this Manifest can form the basis for numerous automations. Some tools my team at Signal has built on top of this concept include automated deployment, load balancing, security group management, and DNS management.

Application Deployment

Let’s see how an Infrastructure Manifest can simplify application deployment.

Although we’ll use Fabric as the basis for our deployment system, the concept should work with Chef and many other push-based deployment systems as well.

from json import load as json_decode
from urllib2 import urlopen

from fabric.api import env

MANIFEST = json_decode(urlopen(env.manifest))

# map each application to a Fabric role whose members are the
# hosts that run that application
for hostname, meta in MANIFEST.iteritems():
  for application in meta['applications']:
    env.roledefs.setdefault(application, []).append(hostname)

Note:

For this to work, you must set the manifest URL in Fabric’s environment as env.manifest. For example, you can set it in your ~/.fabricrc file:

manifest=http://manifest:5000/api/prod/us-east-1/manifest
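
Or pass it per invocation with Fabric’s --set flag:

fab --set manifest=http://manifest:5000/api/prod/us-east-1/manifest deploy_appserve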

That’s all Fabric needs to know where to deploy each application! Given the Manifest above, this would add the “appserve” role so that you can run tasks on those instances simultaneously. For example, to deploy the “appserve” application to all the hosts with this role:

from fabric.api import roles, task

@task
@roles('appserve')
def deploy_appserve():
  # standard Fabric deploy logic here
  pass

Now calling fab deploy_appserve will run the commands to deploy the “appserve” application on each host with the “appserve” role. Easy, right?

You might want to deploy some applications to every host in your infrastructure. Instead of adding these special roles to every Application Definition, you can include them here. For example, if you have a custom monitoring application (“mymon”), then you can read the list of all hosts from the Manifest and add them to the “mymon” role.

from collections import defaultdict

# set up special cases for roledefs; do this before the Manifest
# loop above, since it replaces env.roledefs wholesale
env.roledefs = defaultdict(list, {
  'mymon': list(MANIFEST.keys()),
})

Now, after adding a deploy_mymon task, you’ll be able to easily deploy “mymon” to all hosts in your infrastructure.
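
The task itself follows the same pattern as deploy_appserve; a minimal sketch:

@task
@roles('mymon')
def deploy_mymon():
  # standard deploy logic for your monitoring application here
  pass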

Even if you auto-deploy using a specialized git receiver, Jenkins hooks, or similar, this approach will enable you to make your deployments cloud-aware, to deploy each application to the appropriate hosts in your cloud.

That’s it! Deployments can’t be much simpler than this.

Load Balancer Configuration Management

A common challenge in cloud environments is maintaining the list of all hosts in your load balancer configurations. If you don’t want to lock into a vendor- or cloud-specific solution such as Amazon ELB, you may choose an open-source software load balancer such as HAProxy. However, this leaves you with the challenge of maintaining the configurations as hosts appear and disappear in your cloud-based infrastructure. This problem is amplified when you use software load balancers between each set of services (or each tier) in your application.

Using the Infrastructure Manifest, a first-pass solution can be quite simple. You can revision-control the configuration templates and interpolate the application ports and host information from the Manifest. Then periodically update the generated configuration files and distribute them using your existing configuration management software (such as Puppet or Chef).

Let’s say you want to generate an HAProxy configuration for your load balancer. The complete configuration file might look like this:

global
        user haproxy
        group haproxy
        daemon

frontend main_vip
        bind *:80

        # ACLs for basic name-based virtual-hosts
        acl appserve_host        hdr_beg(host) -i app.example.com
        acl uiserve_host         hdr_beg(host) -i portal.example.com

        use_backend appserve     if appserve_host
        use_backend uiserve      if uiserve_host
        default_backend uiserve

backend appserve
        balance roundrobin
        option httpclose
        option httpchk GET /hc
        http-check disable-on-404
        server appserve01ea1 10.42.1.91:8080 check
        server appserve02ea1 10.42.1.92:8080 check
        server appserve03ea1 10.42.1.93:8080 check

backend uiserve
        balance roundrobin
        option httpclose
        option httpchk GET /hc
        server uiserve01ea1 10.42.1.111:8082 check
        server uiserve02ea1 10.42.1.112:8082 check

The simplest way to produce this configuration file is to generate it from a template. There are many templating solutions from which to choose. I’m fond of Jinja2, so we’ll use that for exploring this solution in Python. We want to load the template from a file located in a “templates” directory, so we start by creating a Jinja2 loader and environment:

from jinja2 import Environment, FileSystemLoader
import os

loader = FileSystemLoader(os.path.join(os.path.dirname(__file__), 'templates'))
environment = Environment(loader=loader, lstrip_blocks=True)

The template corresponding to this output could look like this. We’ll call it ‘lb.txt’ since it’s for the lb server group.

global
        user haproxy
        group haproxy
        daemon

frontend main_vip
        bind *:80

        # ACLs for basic name-based virtual-hosts
        acl appserve_host        hdr_beg(host) -i app.example.com
        acl uiserve_host         hdr_beg(host) -i portal.example.com

        use_backend appserve     if appserve_host
        use_backend uiserve      if uiserve_host
        default_backend uiserve

backend appserve
        balance roundrobin
        option httpclose
        option httpchk GET {{ vips.appserve.healthcheck_resource }}
        http-check disable-on-404
        {%- for server in vips.appserve.servers %}
        server {{ server['name'] }} {{ server.details['private ip'] }}:{{ vips.appserve.backend_port }} check
        {%- endfor %}

backend uiserve
        balance roundrobin
        option httpclose
        option httpchk GET {{ vips.uiserve.healthcheck_resource }}
        {%- for server in vips.uiserve.servers %}
        server {{ server['name'] }} {{ server.details['private ip'] }}:{{ vips.uiserve.backend_port }} check
        {%- endfor %}

You can see by examining the template that it expects only a single variable: vips. This is a map of application names to their load balancer configurations: each vip contains a backend port, a healthcheck resource (i.e., an HTTP path), and a list of servers (with a server name and private IP address for each). Conveniently, all of this information is available in the Infrastructure Manifest and Application Definitions we developed in Part 1.
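
For concreteness, here’s roughly the shape of vips that this template consumes; the values below are illustrative, borrowed from the sample configuration above:

vips = {
    'appserve': {
        'frontend_port': 80,
        'backend_port': 8080,
        'healthcheck_resource': '/hc',
        # each 'details' dict is the host's full Manifest entry
        'servers': [
            {'name': 'appserve01ea1', 'details': {'private ip': '10.42.1.91'}},
            {'name': 'appserve02ea1', 'details': {'private ip': '10.42.1.92'}},
        ],
    },
    'uiserve': {
        'frontend_port': 80,
        'backend_port': 8082,
        'healthcheck_resource': '/hc',
        'servers': [
            {'name': 'uiserve01ea1', 'details': {'private ip': '10.42.1.111'}},
        ],
    },
}

With that shape in mind, we can fetch the raw inputs from the Manifest webapp: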

from requests import get

def main(manifest_host, env, region, server_group):
    manifest = get('http://%s/api/%s/%s/manifest' % (manifest_host, env, region)).json()
    applications = get('http://%s/api/applications' % manifest_host).json()
    print generate_haproxy(manifest, applications, server_group)

Note: we didn’t actually add the /api/applications endpoint last week, so it’s left as an exercise for the reader. Hint: jsonify(config()['APPLICATIONS']).
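
Following that hint, a minimal sketch of the endpoint might look like this, assuming the Flask app object and config() helper from Part 1’s webapp:

from flask import jsonify

@app.route('/api/applications')
def applications():
    # config()['APPLICATIONS'] holds the Application Definitions from Part 1
    return jsonify(config()['APPLICATIONS'])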

Now we can dive into the meat of this tool, the generate_haproxy function. As you might guess, this uses the Jinja2 environment to render the template. But first it must merge the Application Definitions and Manifest into the vips variable that the template expects.

def generate_haproxy(manifest, applications, server_group):
    apps = {}
    for application, meta in applications.iteritems():
        # build a vip for each application from its Application Definition
        app_object = {
            'servers': [],
            'frontend_port': meta['frontend'],
            'backend_port': meta['backend'],
            'healthcheck_resource': meta['healthcheck']['resource']
        }
        # attach every host in the Manifest that runs this application
        for server in manifest:
            if application in manifest[server]['applications']:
                app_object['servers'].append({'name': server, 'details': manifest[server]})
        app_object['servers'].sort(key=lambda e: e['name'])
        apps[application] = app_object
    return environment.get_template("%s.txt" % server_group).render(vips=apps)

There’s not much going on here. We iterate through all the applications and create a vip (app_object) with all the needed variables for each one. Then we render the server_group’s template with Jinja2.

Finally, we can call the main function we defined above to see this in action:

main('localhost:5000', 'prod', 'us-east-1', 'lb')

This will print the HAProxy configuration for the lb load balancer group for your production us-east-1 region. (It assumes that the Manifest webapp is running on the same host.) Depending on what hosts you have in your cloud infrastructure, this should print something like the complete HAProxy configuration file shown at the top.

To keep your load balancer configurations up-to-date, you could run this regularly for each environment and region, then distribute the generated files using your existing configuration management system; a rough sketch follows. Alternatively, if your load balancers support programmatic rule updates, that would be even cleaner than this simple first-pass approach, which relies on configuration file updates.
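
For example, a small wrapper run from cron could regenerate a file per environment and region. This reuses get and generate_haproxy from above; the (environment, region) pairs and the filename scheme are hypothetical:

def write_configs(manifest_host, server_group, targets):
    # targets: list of (environment, region) pairs to regenerate
    for env_name, region in targets:
        manifest = get('http://%s/api/%s/%s/manifest' % (manifest_host, env_name, region)).json()
        applications = get('http://%s/api/applications' % manifest_host).json()
        filename = 'haproxy-%s-%s-%s.cfg' % (server_group, env_name, region)
        with open(filename, 'w') as f:
            f.write(generate_haproxy(manifest, applications, server_group))

write_configs('localhost:5000', 'lb', [('prod', 'us-east-1')])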

I hope this spurs your imagination and shows the benefit of using an Infrastructure Manifest to automate all the things.

About the author

Cody A. Ray is an inquisitive, tech-savvy, entrepreneurially-spirited dude. Currently, he is a software engineer at Signal, an amazing startup in downtown Chicago, where he gets to work with a dream team that’s changing the service model underlying the Internet.
