
Application server clustering using various cloud providers [Tutorial]

10 min read

In this tutorial, we will illustrate how applications are clustered using different cloud providers and frameworks. Many applications can be set up in a highly available way, with their workloads spread between on-premises and cloud environments. You can also set them up across different cloud environments if the technical requirements are met. We will describe solutions for doing this and learn how to implement them so that you remain independent of any single cloud vendor and avoid vendor lock-in.

This tutorial is an excerpt from a book written by Florian Klaffenbach, Markus Klein, and Suresh Sundaresan, titled Multi-Cloud for Architects. This book is your go-to guide for finding solutions for adapting to any cloud and its services, no matter the size of your infrastructure.

Technical requirements for cross-cloud application servers

To design a cross-cloud application server environment, you will need the following:

  • Network connectivity between the different clouds
  • A single identity management solution for all servers
  • Supported applications for georedundancy

Network connectivity between different clouds

No matter which clouds you need to connect, networking and network security are always key. You will need reliable and secure network connectivity, because depending on the software's high-availability architecture, not all of the traffic may be encrypted.

While a Windows cluster environment traditionally had all of its nodes in one physical network location, each newer release can also work with nodes in physically different locations. So, for example, some of the server nodes could run in cloud A and the others in cloud B; from the software side, the requirements for spanning different cloud vendors are therefore met.

As each cloud vendor runs different connection gateways, the most common solution is to run the same network virtual appliance as a VM instance (single, or even redundant) in each environment, and to design each cloud as an independent data center location. Whether you use direct connectivity (such as MPLS) or a remote VPN connection does not matter, but you will need to make sure that the network packet round-trip times are as short as possible.
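
As a concrete illustration of the remote connectivity option, the following Terraform (HCL) sketch shows only the AWS half of a site-to-site IPsec tunnel towards an Azure VPN gateway (Terraform itself is covered later in this tutorial). All IDs, IP addresses, and address ranges are placeholders, and the Azure-side gateway has to be configured separately:

# AWS side of a site-to-site tunnel towards the other cloud.
provider "aws" {
  region = "eu-central-1"
}

# Represents the Azure VPN gateway as seen from AWS (placeholder public IP).
resource "aws_customer_gateway" "azure" {
  bgp_asn    = 65000
  ip_address = "203.0.113.10"
  type       = "ipsec.1"
}

# VPN gateway attached to the AWS VPC that hosts the cloud B nodes.
resource "aws_vpn_gateway" "this" {
  vpc_id = "vpc-0123456789abcdef0" # placeholder VPC ID
}

# The IPsec connection itself; static routing keeps the sketch short.
resource "aws_vpn_connection" "to_azure" {
  vpn_gateway_id      = aws_vpn_gateway.this.id
  customer_gateway_id = aws_customer_gateway.azure.id
  type                = "ipsec.1"
  static_routes_only  = true
}

# Route the Azure VNet address space (placeholder) through the tunnel.
resource "aws_vpn_connection_route" "azure_vnet" {
  vpn_connection_id      = aws_vpn_connection.to_azure.id
  destination_cidr_block = "10.1.0.0/16"
}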

A single identity management solution for all servers

The second pillar of cross-cloud application design is the identity management (IDM) solution. As not every cloud vendor offers a managed IDM solution, there are two valid options: set one up yourself, for example by running Active Directory on VMs and joining all servers in the cloud to that domain, or use a managed IDM solution such as Azure AD, which is not limited to Azure but also works in other public cloud environments, so that all servers join the same Azure AD.

Supported applications for georedundancy

Finally, the application itself needs to support georedundancy in its node placement design. Either the application is designed to cope with the network latency between locations (as Microsoft SQL Server and Microsoft Exchange Server are), or it can be made georedundant quite easily by placing a georedundant load balancer in front of it (as with a web server).

A sample design of cross-cloud virtual machines running clustered application nodes is as follows:

The preceding diagram shows Azure on the left and AWS on the right, both connected to a single-network environment and using Azure AD as a single identity management solution. Whether you choose the left one or the right one for your preferred cloud vendor, the result will be the same, and the service will be reachable from both cloud environments.

Examples of clustered application servers

Let’s take a look at some sample application servers (based on virtual machines) in a multi-cloud design.

  • Microsoft SQL Server
  • Microsoft Exchange Server
  • Web servers

Microsoft SQL Server

Microsoft SQL Server is a robust relational database application server that can run on either Microsoft Windows Server or Linux as its operating system (starting with the 2017 release). While you could certainly set up a single SQL Server as a virtual machine in your preferred cloud, we need to look at the high-availability features of the application.

With Microsoft SQL Server, the basic keyword for HA is availability groups. Using this feature, you can set up your application server to keep one or more replicas of the database itself, organized in an availability group. You can design the availability groups based on your needs and split them between servers, no matter where each virtual machine really lives; then, you can configure database replication. One availability group supports one primary database and up to eight secondary databases.

Keep in mind that a secondary database is not equal to a database backup: the replica always contains the same information as the primary database, so accidental changes or deletions are replicated as well.

Since the 2017 release, there have been two options for availability groups, as follows:

  • Always On for redundancy: This means that if a database replica fails, one or more replicas are still available to the application requesting the data.
  • Always On for read optimization: This means that if an application needs to read data, it can use the nearest database server. For write operations, the primary database replica needs to be available.

The replica operation itself can be synchronous or asynchronous, depending on the requirements and design of the application working with the database(s).

The following chart illustrates the availability group design of Microsoft SQL Servers:

As you can see in the preceding chart, there are different nodes available. Each of them can reside either in the same cloud or in a different one. In terms of risk management, this means that a single cloud vendor can go offline without affecting the availability of your database servers themselves.

When you take a look at SQL Server Management Studio, you will see the availability groups and their health statuses, as follows:

Microsoft Exchange Server

Microsoft Exchange Server is a groupware and email solution that, within Microsoft's cloud technology, forms the technical basis of the Office 365 SaaS email functionality. If you decide to run Exchange Server on your own, the most recent release is Exchange Server 2019, and, where needed, running Exchange Server as virtual machines in your cloud environment is fully supported.

Of course, it is possible to run a single VM with all Exchange services on it, but, as with almost every company groupware solution that requires high availability, Exchange Server can be set up as a multi-server environment. Because Exchange Server is designed to cope with the network latency between its nodes, it supports running an Exchange Server environment across different network regions. This means that you can set up some Exchange Server nodes in cloud A and the others in cloud B. In Exchange, this feature is called a Database Availability Group (DAG). A typical design would look as follows:

As you can see, the preceding diagram shows two Active Directory sites with redundancy, using Database Availability Groups (DAGs).

The best practice for running Exchange Server on AWS is illustrated as follows:

There is also a best practice design for Azure in hybrid cloud environments; it looks as follows:

A primary database residing on one Exchange mailbox server could have up to 16 database copies. The placement of these copies is dependent on customer requirements.

High availability is built into Exchange Server, and it is quite easy to configure, too, from either the Exchange management console or from PowerShell.

Supporting cross-cloud implementations using geo load balancers

If an application that needs to be redesigned for a multi-cloud environment communicates over well-defined network ports, the redesign will be quite easy, as you will just need to set up a georedundant load balancer that supports the different cloud targets and routes the traffic accordingly.

A georedundant load balancer is a more complex solution than a default load balancer, which just routes traffic between different servers within one region or cloud environment. It generally works with the same technology and uses DNS name resolution for redundancy and traffic routing, but, in comparison to DNS round-robin technologies, it knows which targets are available for resolving requests and can work with technologies such as IP range mapping.

Azure Traffic Manager

Azure Traffic Manager is the Microsoft solution for georedundant traffic routing. It is available in each Azure region, and it provides transparent load balancing for services that coexist in different Azure regions, non-Microsoft clouds, or on premises. It provides the following features:

  • Flexible traffic routing
  • Reduced application downtime
  • Improved performance and content delivery
  • Traffic distribution over multiple locations
  • Support for all available cloud solutions (private and public clouds)

As you can see in the following diagram, Azure Traffic Manager is a flexible solution for traffic routing, and can point to each target that you need in your application design:

Incoming traffic is routed to the appropriate site using Traffic Manager metrics, and if a site is down or degraded, Traffic Manager routes the traffic to another available site. You can compare the Traffic Manager to an intelligent router that knows the origin of the traffic and reroutes the requests to the nearest available service.
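
As a rough sketch, a Traffic Manager profile with one external endpoint per cloud could be declared in Terraform as follows; the names, FQDNs, and locations are placeholders, and the exact endpoint resource names can differ between azurerm provider versions:

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "lb" {
  name     = "rg-geo-lb" # placeholder resource group
  location = "West Europe"
}

# Traffic Manager profile using performance-based routing.
resource "azurerm_traffic_manager_profile" "app" {
  name                   = "app-geo-lb"
  resource_group_name    = azurerm_resource_group.lb.name
  traffic_routing_method = "Performance"

  dns_config {
    relative_name = "app-geo-lb" # resolves as app-geo-lb.trafficmanager.net
    ttl           = 60
  }

  monitor_config {
    protocol = "HTTPS"
    port     = 443
    path     = "/"
  }
}

# One external endpoint per cloud; the targets are placeholder service FQDNs.
resource "azurerm_traffic_manager_external_endpoint" "azure_site" {
  name              = "azure-site"
  profile_id        = azurerm_traffic_manager_profile.app.id
  target            = "app-azure.example.com"
  endpoint_location = "West Europe" # Azure region of this deployment
  weight            = 100
}

resource "azurerm_traffic_manager_external_endpoint" "aws_site" {
  name              = "aws-site"
  profile_id        = azurerm_traffic_manager_profile.app.id
  target            = "app-aws.example.com"
  endpoint_location = "East US" # nearest Azure region to the AWS deployment
  weight            = 100
}

Clients then resolve app-geo-lb.trafficmanager.net and are directed to whichever endpoint is healthy and closest.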

AWS Route 53

In AWS, the Route 53 service provides an easy solution for routing traffic based on load and availability. It is a PaaS service, like Azure Traffic Manager, and also works based on DNS name resolution.

The technical design works as follows; it is fully integrated into the DNS service:

As you can see, the Route 53 design is quite comparable to Azure Traffic Manager.
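
For comparison, a simple Route 53 failover setup between the two sites could be sketched in Terraform like this; the hosted zone ID and FQDNs are placeholders:

provider "aws" {
  region = "eu-central-1"
}

# Health check against the primary site.
resource "aws_route53_health_check" "primary" {
  fqdn              = "app-aws.example.com" # placeholder
  port              = 443
  type              = "HTTPS"
  resource_path     = "/"
  failure_threshold = 3
  request_interval  = 30
}

# Primary record: answered as long as the health check passes.
resource "aws_route53_record" "primary" {
  zone_id         = "Z0123456789EXAMPLE" # placeholder hosted zone ID
  name            = "app.example.com"
  type            = "CNAME"
  ttl             = 60
  records         = ["app-aws.example.com"]
  set_identifier  = "primary"
  health_check_id = aws_route53_health_check.primary.id

  failover_routing_policy {
    type = "PRIMARY"
  }
}

# Secondary record: Route 53 fails over to the other cloud if the primary is down.
resource "aws_route53_record" "secondary" {
  zone_id        = "Z0123456789EXAMPLE" # placeholder hosted zone ID
  name           = "app.example.com"
  type           = "CNAME"
  ttl            = 60
  records        = ["app-azure.example.com"]
  set_identifier = "secondary"

  failover_routing_policy {
    type = "SECONDARY"
  }
}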

If you need to decide which service to use in your design, it is not really a technical decision, as the technology is nearly the same. Rather, the choice is based on other requirements, such as the rest of your environment and pricing.

Managing multi-cloud virtual machines for clustered application servers

If you decide to design your applications for a multi-cloud environment, automating and replaying configurations does not get any easier. Azure works with ARM templates and AWS with AWS CloudFormation; both languages are JSON based, but they are different.

If you plan to use cloud solutions to transform your on-premises solutions, you should think about automation and ways to replay configurations. If you need to deal with two (or even more) different dialects, you will need to set up a process to create and update the corresponding templates.

Therefore, implementing another layer of templating will be required if you do not want to rely on manual processes. Only a small number of vendors provide technology that avoids relying on the different dialects. A common one is Terraform, but Ansible and Puppet are other options.

Terraform works based on a language called HashiCorp Configuration Language (HCL). It is designed for human consumption, so users can quickly interpret and understand their infrastructure configurations. HCL also includes a full JSON parser for machine-generated configurations. Compared to plain JSON, an HCL configuration looks as follows:

# An AMI
variable "ami" {
  description = "the AMI to use"
}

/* A multi
   line comment. */
resource "aws_instance" "web" {
  ami               = "${var.ami}"
  count             = 2
  source_dest_check = false

  connection {
    user = "root"
  }
}

Terraform gives us providers to translate the deployments into the corresponding cloud vendor languages. There are a lot of providers available, as you can see in the following screenshot:

A provider works as follows:

If you decide to work with Terraform to make your cloud automation processes smooth and independent of your cloud vendor, you can install it from most cloud vendor marketplaces as a virtual machine.
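
To make this concrete, the following minimal sketch declares both providers and one small resource per cloud in a single configuration; the resource group name, AMI ID, and regions are placeholders:

# Both providers can live in the same configuration; Terraform translates each
# resource into the matching cloud vendor's API calls.
provider "azurerm" {
  features {}
}

provider "aws" {
  region = "eu-central-1"
}

# An Azure resource...
resource "azurerm_resource_group" "nodes" {
  name     = "rg-cluster-nodes" # placeholder
  location = "West Europe"
}

# ...and an AWS resource, deployed from the same configuration.
resource "aws_instance" "node_b" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.medium"
}

A single terraform plan and terraform apply run then evaluates both parts, so the cluster nodes in cloud A and cloud B can be described, versioned, and replayed together.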

Troubleshooting cross-cloud application servers

If you have decided on a multi-cloud design for your applications, you will need a plan for troubleshooting; network connectivity and identity management between the different cloud environments are the most likely causes of availability issues. Otherwise, the troubleshooting mechanisms are the ones you are already familiar with, and they are, in general, included in the application servers themselves.

Summary

In this tutorial, we learned that it is quite easy to design a multi-cloud environment. If there is ever a need to change components of the solution, you can even move individual services from one cloud vendor to another.

To learn how to architect a multi-cloud solution for your organization, check out our book  Multi-Cloud for Architects.

Read Next

Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1

The 10 best cloud and infrastructure conferences happening in 2019

VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform

Melisha Dsouza
