In this article by Rishabh Sharma, the author of the book Learning OpenStack High Availability, we will see how, over the past couple of years, cloud computing has made a significant impact, transforming IT from a niche skill into a key element of enterprise production environments. From an Infrastructure as a Service (IaaS) point of view, cloud computing has become much more than mere virtualization; various industries and online businesses have started moving test, staging, and production workloads to IaaS, replacing traditional dedicated resources with an on-demand resource model.
OpenStack is one of the most popular and widely used open-source cloud computing platforms, and it is mainly used to deploy Infrastructure as a Service solutions. Enabling high availability in OpenStack is a required skill for cloud administrators and cloud engineers. This article introduces high availability concepts and ways to measure and achieve high availability through architectural design in OpenStack.
We will cover the basic and advanced topics of network load balancing, with detailed example configurations for HAProxy and Keepalived. We will learn about classical clustering methods, such as Pacemaker cluster resources and their agents, start-up order, failover and recovery, fencing mechanisms, and the load balancing of HTTP REST APIs, MySQL, and AMQP clusters.
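As a small taste of the HAProxy configuration covered in the book, the following is a minimal sketch of a front end load balancing an OpenStack API across two controllers. The IP addresses, ports, and server names are placeholders chosen for illustration; in a real deployment, a Keepalived-managed virtual IP would typically sit on the bind address so the load balancer itself is not a single point of failure.

```cfg
# Hypothetical HAProxy fragment: load balance the Keystone public API.
# 192.168.1.100 is assumed to be a virtual IP owned by Keepalived.
frontend keystone_public
    mode http
    bind 192.168.1.100:5000
    default_backend keystone_api

backend keystone_api
    mode http
    balance roundrobin
    # Probe each controller every 2s; mark it up after 2 successes,
    # down after 5 failures.
    option httpchk GET /v3
    server controller1 192.168.1.11:5000 check inter 2000 rise 2 fall 5
    server controller2 192.168.1.12:5000 check inter 2000 rise 2 fall 5
```

The same frontend/backend pattern repeats for each OpenStack API endpoint (Glance, Nova, and so on), each on its own port.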
We will build highly available OpenStack services, such as compute (Nova), image (Glance), object storage (Swift), and dashboard (Horizon). We will build highly available HAProxy load balancers, database servers, and all the other basic OpenStack services, implementing the core services of OpenStack in a systematic manner.
Having a highly available cloud might not be enough if the applications running on top of it do not take advantage of the principles of resilient design. This article also explains how correct application design can improve the reliability and uptime of end-user services; particular focus is dedicated to microservice architectures and distributed web applications.
In this article, we are going to cover the following topics:
- The principles of design features
- A sample application deployment
- The interaction of an application with OpenStack
The principles of design features
There are numerous design features and principles that need to be considered for applications deployed in an OpenStack cloud. The following sections illustrate these principles of distributed web application deployment through a sample application.
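One principle of resilient design worth sketching is fault tolerance toward transient failures: in a distributed deployment, a call to another microservice or cloud API can fail momentarily, and the caller should retry rather than crash. The helper below is an illustrative sketch (the names and parameters are our own, not from the book) of retrying with exponential backoff and jitter.

```python
import random
import time


def retry(operation, attempts=5, base_delay=0.5, jitter=0.1):
    """Call ``operation``; on failure, retry with exponential backoff.

    A fault-tolerant service assumes that individual calls (to other
    microservices, or to cloud APIs) can fail transiently.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            # Double the delay each attempt, plus random jitter so that
            # many retrying clients do not hammer the service in sync.
            delay = base_delay * (2 ** attempt) + random.uniform(0, jitter)
            time.sleep(delay)


# Example: a simulated operation that fails twice, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky, base_delay=0.05))  # prints "ok" after two retries
```

The same idea applies at every seam of a distributed application: database reconnects, message-queue publishes, and REST calls between services.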
A sample application deployment
To see how the preceding design principles apply in a real cloud deployment, we consider a sample fractal application. This cloud application generates fractal images from mathematical equations. The application uses a microservice architecture to decouple its logical functions, so that changes to independent functions can be handled easily.
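As an illustration of the kind of computation such a fractal service might perform, here is a minimal sketch assuming a Mandelbrot-style fractal (the function name and parameters are our own; the sample application's actual code is not shown here). A worker microservice would run this per pixel and a separate API service would accept requests and store results.

```python
def mandelbrot_iterations(c, max_iter=100):
    """Return how many iterations of z = z*z + c it takes for |z| to
    exceed 2, or ``max_iter`` if the point never escapes (i.e. the
    point is treated as inside the Mandelbrot set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter


# The origin is inside the set; a far-away point escapes immediately.
print(mandelbrot_iterations(0j))      # prints 100
print(mandelbrot_iterations(2 + 2j))  # prints 0
```

Because each pixel is computed independently, the work parallelizes naturally across worker instances, which is exactly why a fractal generator is a convenient demo for scale-out cloud design.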
The interaction of an application with OpenStack
We assume access to an OpenStack cloud; our application runs on instances provided through the cloud infrastructure.
In this article, we concentrate on recovering from different failure scenarios. These include network partitions (split brain), automatic failover, and geo-replication.
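The automatic-failover idea can be sketched from the client's side as a health-checked walk over an ordered list of API endpoints, skipping members that a probe reports as down. This is an illustrative simplification (the endpoint URLs and probe below are invented for the example); real deployments usually delegate this to HAProxy health checks instead.

```python
def first_healthy(endpoints, is_healthy):
    """Return the first endpoint whose health check passes.

    Crude client-side failover: walk an ordered endpoint list and
    skip members that the health probe reports as down.
    """
    for endpoint in endpoints:
        if is_healthy(endpoint):
            return endpoint
    raise RuntimeError("no healthy endpoint available")


# Simulated probe: pretend the first API node is unreachable.
down = {"http://api-1:5000"}
endpoints = ["http://api-1:5000", "http://api-2:5000", "http://api-3:5000"]

print(first_healthy(endpoints, lambda ep: ep not in down))
# prints http://api-2:5000
```

The same pattern underlies failover for database and message-queue clients: keep a list of peers, probe, and fail over to the next healthy member.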
In this section, we are going to learn from case studies of a variety of industries that are reaping the benefits of OpenStack high availability. After going through these case studies, we will have a clear picture of how OpenStack addresses the infrastructure challenges of various industries, and an approach for implementing OpenStack high availability in our own organizations.
The following are the categories of case studies covered in this article:
- A case study of Cisco WebEx
- A case study of Huawei
- A case study of Multiscale Health Networks
- A case study of eBay
A case study of Cisco WebEx
Cisco WebEx is a business-critical service that is expected to never go down. It is one of the most popular web conferencing services, adopted by many organizations across the globe.
A case study of Huawei
Huawei is a giant provider of cloud computing-based solutions, with in-house environments such as a virtual desktop infrastructure used by more than 45,000 internal Huawei employees. With this kind of infrastructure, Huawei was able to recognize the promising benefits of OpenStack for its internal infrastructure as well as for client-facing applications.
A case study of Multiscale Health Networks
Multiscale Health Networks is one of the most popular companies in the health services industry, targeting the specific requirements of life sciences and healthcare using high-performance computing, cloud computing, and virtualization-based solutions. Multiscale provides services in five western US states and has more than 65,000 employees.
A case study of eBay
Today, eBay's on-premise cloud offers features such as multitenancy, self-service, and multiregion support. All of eBay's critical applications and application development platforms run on this cloud; therefore, the company needs a cloud built for agility, scalability, and robustness.
In this article, we learned the fundamentals of high availability and what a highly available design is meant to achieve in a production environment.
We saw a variety of case studies from different industries, including Cisco, Huawei, Multiscale Health Networks, and eBay.
As we saw here, all types of industries face critical infrastructure challenges that can be overcome with OpenStack high availability solutions.