
In this article by Chandan Dutta Chowdhury and Omar Khedher, authors of the book Mastering OpenStack – Second Edition, we will cover deploying an OpenStack environment based on the profiled design. Although we created our design by taking care of several aspects related to scalability and performance, we still have to make it real. If you are still looking at OpenStack as a single block system, it is time to change that view.


In the introductory section of this article, we covered the role of OpenStack in the next generation of data centers. A large-scale infrastructure used by cloud providers, with a few thousand servers, needs a very different approach to set up.

In our case, deploying and operating the OpenStack cloud is not as simple as you might think. Thus, you need to make the operational task easier or, in other words, automated.

In this article, we will cover new topics about the ways to deploy OpenStack. The next part will cover the following points:

  • Learning what the DevOps movement is and how it can be adopted in the cloud
  • Knowing how to see your infrastructure as code and how to maintain it
  • Getting closer to the DevOps way by including configuration management aspects in your cloud
  • Making your OpenStack environment design deployable via automation
  • Starting your first OpenStack environment deployment using Ansible

DevOps in a nutshell

The term DevOps is a conjunction of development (software developers) and operations (managing and putting software into production). Many IT organizations have started to adopt such a concept, but the question is how and why? Is it a job? Is it a process or a practice?

DevOps is development and operations compounded, which basically defines a methodology of software development. It describes practices that streamline the software delivery process. It is about increasing communication and integration between developers, operators (including administrators), and quality assurance teams. The essence of the DevOps movement lies in leveraging the benefits of collaboration. Different disciplines can relate to DevOps in different ways and bring their experiences and skills together under the DevOps banner to gain shared values.

So, DevOps is a methodology that integrates the efforts of several disciplines, as shown in the following figure:


This new movement is intended to resolve the conflict between developers and operators. Delivering a new release affects the production systems and puts different teams in conflicting positions by setting them different goals; for example, the development team wants its latest code to go live, while the operations team wants more time to test and stage the changes before they go to production. DevOps fills the gap and streamlines the process of change through collaboration between developers and operators.

DevOps is neither a toolkit nor a job; it is the synergy that streamlines the process of change.

Let’s see how DevOps can incubate a cloud project.

DevOps and cloud – everything is code

Let’s look at the architecture of cloud computing. While discussing a cloud infrastructure, we must remember that we are talking about a large, scalable environment! The switch to bigger environments requires us to simplify everything as much as possible. System architecture and software design are becoming more and more complicated, and every new software release brings new features and new configuration options.

Administering and deploying a large infrastructure would not be possible without adopting a new philosophy: infrastructure as code.

When infrastructure is seen as code, the components of a given infrastructure are modeled as modules of code. What you need to do is to abstract the functionality of the infrastructure into discrete reusable components, design the services provided by the infrastructure as modules of code, and finally implement them as blocks of automation.

Furthermore, in such a paradigm, it will be essential to adhere to the same well-established discipline of software development as an infrastructure developer.

The essence of DevOps mandates that developers, network engineers, and operators must work alongside each other to deploy, operate, and maintain the cloud infrastructure that will power our next-generation data center.

DevOps and OpenStack

OpenStack is an open source project, and its code is extended, modified, and fixed in every release. It is composed of multiple projects and requires extensive skills to deploy and operate. Of course, it is not our mission to check the code and dive into its different modules and functions. So what can we do with DevOps, then?

Deploying complex software on a large-scale infrastructure requires adopting a new strategy. The ever-increasing complexity of software such as OpenStack and the deployment of huge cloud infrastructures must be simplified. Everything in a given infrastructure must be automated! This is where OpenStack meets DevOps.

Breaking down OpenStack into pieces

Let’s gather what we covered previously and outline a couple of steps towards our first OpenStack deployment:

  1. Break down the OpenStack infrastructure into independent and reusable services.
  2. Integrate the services in such a way that you can provide the expected functionalities in the OpenStack environment.

It is obvious that OpenStack includes many services. What we need to do is see these services as packages of code in our infrastructure as code experience. The next step will be to investigate how to integrate the services and deploy them via automation.

Deploying service as code is similar to writing a software application. Here are important points you should remember during the entire deployment process:

  • Simplify and modularize the OpenStack services
  • Develop OpenStack services as building blocks which integrate with other components to provide a complete system
  • Facilitate the customization and improvement of services without impacting the complete system
  • Use the right tool to build the services
  • Be sure that the services provide the same results with the same input
  • Switch your service vision from how to do it to what we want to do

Automation is the essence of DevOps. In fact, many system management tools are widely used nowadays because of how efficient they make deployment. In other words, there is a need for automation!

You have probably used some of the automation tools, such as Ansible, Chef, and Puppet, among many others. Before we go through them, we need to put a sound code management practice in place.

Working with the infrastructure deployment code

While dealing with infrastructure as code, the code that abstracts, models, and builds the OpenStack infrastructure must be committed to source code management. This is required for tracking changes in our automation code and reproducibility of results. Eventually, we must reach a point where we shift our OpenStack infrastructure from a code base to a deployed system while following the latest software development best practices.
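As a minimal sketch of this practice (the file name, variable, and commit message are invented for the example), a piece of infrastructure configuration can be put under Git version control so that every change is tracked and results stay reproducible:

```shell
# Track a hypothetical infrastructure configuration file in Git.
cd "$(mktemp -d)"                  # work in a scratch directory
git init -q infra && cd infra

# A hypothetical deployment variable file.
cat > controllers.yml <<'EOF'
controller_count: 3
EOF

git add controllers.yml
git -c user.name=deployer -c user.email=deployer@example.com \
    commit -q -m "Track initial controller configuration"
git log --oneline
```

From here, every change to the deployment code goes through a commit, giving you the history and rollback points the rest of this section relies on.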

At this stage, you should be aware of the quality of your OpenStack infrastructure deployment, which roughly depends on the quality of the code that describes it.

It is important to highlight a critical point that you should keep in mind during all deployment stages: automated systems are not able to understand human error. You’ll have to go through an ensemble of phases and cycles using agile methodologies to end up with a release that is largely bug-free and can be promoted to the production environment.

On the other hand, if mistakes cannot be totally eradicated, you should plan for the continuous development and testing of code. The code’s life cycle management is shown in the following figure:


Changes can be scary! To handle changes, it is recommended that you do the following:

  • Keep track of and monitor the changes at every stage
  • Build flexibility into the code and make it easy to change
  • Refactor the code when it becomes difficult to manage
  • Test, test, and retest your code

Keep checking every point described previously until you grow confident that your OpenStack infrastructure is being managed by code that won’t break.

Integrating OpenStack into infrastructure code

To keep the OpenStack environment working with a minimum rate of surprises and ensure that the code delivers the functionalities that are required, we must continuously track the development of our infrastructure code.

We will connect the OpenStack deployment code to a toolchain, where it will be constantly monitored and tested as we continue to develop and refine our code. This toolchain is composed of a pipeline of tracking, monitoring, testing, and reporting phases and is well known as a continuous integration and continuous delivery (CI/CD) process.

Continuous integration and delivery

Let’s see how continuous integration (CI) can be applied to OpenStack. The life cycle of our automation code will be managed by the following categories of tools:

  • A System Management Tool Artifact (SMTA), which can be any IT automation tool, such as Juju charms.
  • A Version Control System (VCS), which tracks changes to our infrastructure deployment code. Any version control system that you are familiar with, such as CVS, Subversion, or Bazaar, can be used for this purpose; Git is a good fit for our VCS.
  • Jenkins, a tool that monitors changes in the VCS, runs the continuous integration tests, and reports the results.

Take a look at the model in the following figure:


The proposed life cycle for infrastructure as code consists of infrastructure configuration files that are recorded in a version control system and built continuously by means of a CI server (Jenkins, in our case). The infrastructure configuration files can be used to set up a unit test environment (a virtual environment using Vagrant, for example) and make use of any system management tool to provision the infrastructure (Ansible, Chef, Puppet, and so on). The CI server keeps listening for changes in version control, automatically picks up any new version for testing, and then promotes the tested version to the target environments in production.

Vagrant allows you to build a virtual environment very easily; it relies on Oracle VirtualBox (https://www.virtualbox.org/) to run virtual machines, so you will need VirtualBox installed before moving on with the installation in your test environment.

The proposed life cycle for infrastructure code highlights the importance of a test environment before moving on to production. The testing stage deserves a great deal of care and attention, even though it can be a very time-consuming task.

This is especially true in our case: the infrastructure code for deploying OpenStack is complicated and has multiple dependencies on other systems, so the importance of testing cannot be overemphasized. It is therefore imperative to put effort into automated, consistent testing of the infrastructure code.

The best way to do this is to keep testing thoroughly in a repeated way till you gain confidence about your code.

Choosing the automation tool

At first sight, you may wonder which automation tool will be the most useful when we take our OpenStack environment to production. We have already chosen Git and Jenkins to handle our continuous integration and testing. It is time to choose the right tool for automation.

It might be difficult to select the right tool. Most likely, you’ll have to choose between several of them. Therefore, a few succinct hints on the different tools might be helpful in order to identify the best fit for a particular setup. Of course, we are still talking about large infrastructures, a lot of networking, and distributed services.

Selecting one or more of these system management tools can make our deployment effective and fast. We will use Ansible for the next deployment phase.

Introducing Ansible

We have chosen Ansible to automate our cloud infrastructure. Ansible is an infrastructure automation engine. It is simple to get started with and yet is flexible enough to handle complex interdependent systems.

The architecture of Ansible consists of the deployment system, where Ansible itself is installed, and the target systems, which are managed by Ansible. It uses an agentless architecture: changes are pushed to the target systems over the SSH protocol, which serves as its transport mechanism. This also means that no extra software needs to be installed on the target systems. The agentless architecture makes setting up Ansible very simple.

Ansible works by copying modules over SSH to the target systems. It then executes them to change the state of the target systems. Once executed, the Ansible modules are cleaned up, leaving no trail on the target system.

Although the default mechanism for making changes to the target systems is an SSH-based push model, if you feel that a push-based model is not scalable enough for your infrastructure, Ansible also supports a pull model through the ansible-pull command.

Ansible is developed in Python and comes with a huge collection of core automation modules.

The configuration files for Ansible are called playbooks, and they are written in YAML, a human-readable data serialization format that is well suited to writing configuration files. YAML is easy to understand, which makes learning Ansible automation much easier.

Ansible Galaxy is an online repository of reusable Ansible roles that can be used in your projects.

Modules

Ansible modules are constructs that encapsulate a system resource or action. A module models the resource and its attributes. Ansible comes packaged with a wide range of core modules to represent various system resources; for example, the file module encapsulates a file on the system and has attributes such as owner, group, mode, and so on. These attributes represent the state of a file in the system; by changing the attributes of the resources, we can describe the required final state of the system.
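As a sketch of how a module models a resource, the following task (the path is hypothetical) uses the file module to describe the desired state of a directory rather than the steps needed to create it:

```yaml
- name: ensure the application config directory exists
  file:
    path: /etc/myapp        # hypothetical path
    state: directory
    owner: root
    group: root
    mode: '0755'
```

When Ansible runs this task, it only changes what differs from the declared state, which is what makes module-based runs repeatable.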

Variables

While modules can represent the resources and actions on a system, the variables represent the dynamic part of the change. Variables can be used to modify the behavior of the modules.

Variables can be defined from the environment of the host, for example, the hostname, IP address, version of software and hardware installed on a host, and so on.

They can also be user-defined or provided as part of a module. User-defined variables can represent the classification of a host resource or its attribute.
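To illustrate both kinds (the port value is invented for the example), a user-defined variable and an environment-derived fact can be combined in a play like this:

```yaml
- hosts: webservers
  vars:
    http_port: 8080                      # user-defined variable
  tasks:
    - name: report the address and port the service will use
      debug:
        msg: "{{ ansible_default_ipv4.address }}:{{ http_port }}"
```

Here, ansible_default_ipv4 is a fact gathered from the host's environment, while http_port is supplied by the playbook author.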

Inventory

An inventory is a list of hosts that are managed by Ansible. The inventory supports classifying hosts into groups. In its simplest form, an inventory can be an INI file, with the groups represented as sections of the INI file. The classification can be based on the role of the hosts or any other system management need. It is possible for a host to appear in multiple groups in an inventory file. The following example shows a simple inventory of hosts:

logserver1.example.com  

[controllers] 
ctl1.example.com 
ctl2.example.com  

[computes] 
compute1.example.com 
compute2.example.com 
compute3.example.com 
compute[20:30].example.com

The inventory file supports special patterns to represent large groups of hosts.

Ansible expects to find the inventory file at /etc/ansible/hosts, but a custom location can be passed directly to the Ansible command line.

Ansible also supports dynamic inventory that can be generated by executing scripts or retrieved from another management system, such as a cloud platform.

Roles

Roles are the building blocks of an Ansible-based deployment. They represent a collection of tasks that must be performed to configure a service on a group of hosts. A role encapsulates the tasks, variables, handlers, and other related functions required to deploy a service on a host. For example, to deploy a multinode web server cluster, the hosts in the infrastructure can be assigned roles such as web server, database server, load balancer, and so on.
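By convention, a role's content is organized in a fixed directory layout that Ansible discovers automatically; a minimal sketch for a hypothetical webserver role might look like this:

```
roles/
  webserver/
    tasks/main.yml      # tasks executed by the role
    handlers/main.yml   # handlers notified by the role's tasks
    templates/          # Jinja2 templates used by the tasks
    vars/main.yml       # role-specific variables
```

A playbook then only needs to assign the webserver role to a group of hosts; the layout tells Ansible where to find each piece.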

Playbooks

Playbooks are the main configuration files in Ansible. They describe the complete system deployment plan. Playbooks are composed of a series of tasks and are executed from top to bottom. The tasks themselves map groups of hosts to the roles that must be deployed on them. Ansible playbooks are written in YAML.

The following is an example of a simple Ansible Playbook:

---
- hosts: webservers
  vars:
    http_port: 8080
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: name=httpd state=latest
  - name: write the apache config file
    template: src=/srv/httpd.j2 dest=/etc/httpd.conf
    notify:
    - restart apache
  handlers:
    - name: restart apache
      service: name=httpd state=restarted

Ansible for OpenStack

OpenStack Ansible (OSA) is an official OpenStack Big Tent project. It focuses on providing roles and playbooks for deploying a scalable, production-ready OpenStack setup. It has a very active community of developers and users collaborating to stabilize and bring new features to OpenStack deployment.

One of the unique features of the OSA project is the use of containers to isolate and manage OpenStack services. OSA installs OpenStack services in LXC containers to provide each service with an isolated environment.

LXC is an OS-level virtualization technology; an LXC container encompasses a complete OS environment, including a separate filesystem and networking stack, with resource isolation provided through cgroups.

OpenStack services are spawned in separate LXC containers and speak to each other using the REST APIs. The microservice-based architecture of OpenStack complements the use of containers to isolate services. It also decouples the services from the physical hardware and provides encapsulation of the service environment that forms the foundation for providing portability, high availability, and redundancy.

The OpenStack Ansible deployment is initiated from a deployment host. The deployment host is installed with Ansible and it runs the OSA playbooks to orchestrate the installation of OpenStack on the target hosts:


The Ansible target hosts are the ones that will run the OpenStack services. The target nodes must be installed with Ubuntu 14.04 LTS and configured with SSH-key-based authentication to allow login from the deployment host.

Summary

In this article, we covered several topics and terminologies on how to develop and maintain a code infrastructure using the DevOps style.

Viewing your OpenStack infrastructure deployment as code will not only simplify node configuration, but also improve the automation process.

You should keep in mind that DevOps is neither a project nor a goal; it is a methodology that makes your deployment successful through team synergy between different departments.

Despite the existence of numerous system management tools to bring our OpenStack up and running in an automated way, we have chosen Ansible for automation of our infrastructure.

Puppet, Chef, Salt, and others can do the job but in different ways. You should know that there isn’t one way to perform automation. Both Puppet and Chef have their own OpenStack deployment projects under the OpenStack Big Tent.
