

Welcome to the world of virtualization. On the next pages we will explain in simple terms what virtualization is, where it comes from, and why this technology is amazing. So let’s start.

The concept of virtualization is not really new; as a matter of fact, it is in some ways an inheritance of the mainframe world. For those of you who don't know what a mainframe is, here is a short explanation: a mainframe is a huge computer that can have from several dozen up to hundreds of processors, tons of RAM, and enormous storage space. Think of the supercomputers used by international banks, car manufacturers, or even aerospace companies.

These monster computers have a "core" operating system (OS), which creates logical partitions of the resources and assigns them to smaller OSes. In other words, the full hardware power is divided into smaller chunks, each with a specific purpose. As you can imagine, not many companies can afford this kind of equipment, and this is one of the reasons why small servers became so popular. You can learn more about mainframes on the Wikipedia page at http://en.wikipedia.org/wiki/Mainframe_computer.

Starting in the 80s, small servers (mainly based on Intel© and/or AMD© processors) became quite popular, and almost anybody could buy a simple server, so even mid-sized companies began to accumulate large numbers of them. In later years, the power provided by new servers was enough to fulfill the most demanding applications and, guess what, even to support virtualization.

But you will be wondering, what is virtualization? Well, the concept, even if a bit bizarre, is this: a piece of software behaves like a normal application to the host OS, asking for CPU, memory, disk, and network (to name the four main subsystems), but what that application creates is hardware; virtualized hardware, of course, on which a brand new OS can be installed. In the diagram that follows, you can see a physical server, including CPU, RAM, disk, and network. This server needs an OS on top, and from there you can install and execute programs such as Internet browsers, databases, spreadsheets, and, of course, virtualization software. This virtualization software behaves the same way as any other application: it sends requests to the OS for a file stored on disk, access to a web page, or more CPU time; so for the host OS, it is just a standard application demanding resources. But within the virtualization application (also known as a hypervisor), some virtual hardware is created; in other words, some fake hardware is presented at the top end of the program.

At this point we can start the OS setup on this virtual hardware, and the OS can recognize the hardware and use it as if it were real.

So coming back to the original idea, virtualization is a technique, based on software, to execute several servers and their corresponding OSes on the same physical hardware. Virtualization can be implemented on many architectures, such as IBM© mainframes, many distributions of Unix© and Linux, Windows©, Apple©, and so on.

We already mentioned that virtualization is based on software, but there are two main kinds of software you can use to virtualize your servers. The first type behaves like any other application installed on the server, and is also known as workstation or software-based virtualization. The second one is part of the kernel of the host OS and is enabled as a service. This type is also called hardware virtualization, and it uses special CPU characteristics (such as Data Execution Prevention and hardware virtualization support, that is, Intel VT-x or AMD-V), which we will discuss in the installation section. The main difference between the two is performance. In software/workstation virtualization, every request for hardware resources has to travel from the application down through the OS into the kernel to get the resource. In the hardware solution, the virtualization software or hypervisor layer is built into the kernel and makes extensive use of the CPU's virtualization capabilities, so resource demands are served faster and more reliably, as in Microsoft© Hyper-V Server 2008 R2.
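If you are curious whether your CPU advertises these extensions, here is a minimal sketch. It reads /proc/cpuinfo, so it is written for a Linux host (the vmx flag indicates Intel VT-x, svm indicates AMD-V); on Windows Server, the Sysinternals Coreinfo tool (run as coreinfo -v) reports the same information. Treat it as an illustration of the idea, not as a Hyper-V prerequisite checker:

```python
# Illustrative sketch: check whether the CPU flags include hardware
# virtualization extensions. Linux exposes the flags in /proc/cpuinfo;
# "vmx" means Intel VT-x and "svm" means AMD-V.

def has_hw_virtualization(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU flags include vmx (VT-x) or svm (AMD-V)."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    return "vmx" in flags or "svm" in flags
    except OSError:
        pass  # not a Linux host, or /proc is unavailable
    return False

if __name__ == "__main__":
    if has_hw_virtualization():
        print("CPU supports hardware virtualization (VT-x or AMD-V).")
    else:
        print("No virtualization extensions detected (or not a Linux host).")
```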

Reliability and fault tolerance

By placing all the eggs in the same basket, we want to be sure that the basket is protected. Now imagine that instead of eggs we have virtual machines, and instead of the basket we have a Hyper-V server. We require this server to be up and running most of the time, which translates into reliable virtual machines that can run for a long time.

For that reason we need a fault-tolerant system, that is to say, a system that keeps running normally even if a fault or a failure arises. How can this be achieved? Well, just use more than one Hyper-V server. If a single Hyper-V server fails, all VMs running on it will fail, but if we have a couple of Hyper-V servers running hand in hand, then if the first one becomes unavailable, its twin brother will take care of the load. Simple, isn't it? It is, if it is correctly dimensioned and configured. This is called failover clustering, and it is also the foundation for Live Migration.

In a previous section we discussed how to migrate a VM from one Hyper-V server to another, but that import/export technique causes some downtime for our VMs. You can imagine how long it would take to move all our machines if a host server fails; even worse, if the host server is dead, you can't export your machines at all. This is one of the reasons why we should create a cluster.

As we already stated, a fault-tolerant solution basically means duplicating everything in the given solution. If a single hard disk may fail, then we configure additional disks (for example, RAID 1 or RAID 5); if a NIC is prone to failure, then teaming two NICs may solve the problem. Of course, if a single server may fail (dragging down all the VMs on it), then the solution is to add another server; but here we face a storage problem: each disk can be physically connected to only one data bus (consider this the cable, for simplicity), yet every server needs access to the same disks in order to take over the load. This is solved by using shared storage, which may be directly connected SCSI storage, a SAN (Storage Area Network, connected by optical fiber), or the very popular NAS (Network Attached Storage), connected through NICs.

As we can see in the preceding diagram, the red circle contains two servers; each is a node within the cluster. When you connect to this infrastructure, you don't even see how many servers there are, because a cluster exposes shared resources such as the cluster name, IP address, and so on.

So you connect to the first available physical server, and in the event of a failure, your session is automatically transferred to the next available physical server. Exactly the same happens at the server's backend. We can define certain resources as cluster resources, and the cluster then administers which physical server uses them. For example, in the preceding diagram there are several iSCSI targets (Internet SCSI targets) defined on the NAS, and the cluster accesses them through the active physical node of the cluster, thus making your service (in this case, your configured virtual machines) highly available. You can see the iSCSI FAQ on the Microsoft web site (http://go.microsoft.com/fwlink/?LinkId=61375).

In order to use a failover cluster solution, the hardware must be marked as Certified for Windows Server 2008 R2, and it has to be identical across nodes (in some cases the solution may work with dissimilar hardware, but maintenance, operation, and capacity planning, to name a few, will become harder, thus making the solution more expensive and more difficult to own). The full solution also has to successfully pass the Validate a Configuration wizard when creating the cluster. The storage solution must be certified as well, and it has to be Windows Cluster compliant (mainly supporting the SCSI-3 Persistent Reservations specification); it is strongly recommended that you implement an isolated LAN exclusively for storage purposes. Remember that to have a fault-tolerant solution, all infrastructure devices have to be duplicated, even networks. The configuration wizard will let us configure our cluster even if the network is not redundant, but it will display a warning notifying you of this point.

Ok, let's get to business. To configure a fault-tolerant Hyper-V cluster, we need to use Cluster Shared Volumes, which, in simple terms, let Hyper-V run as a clustered service. As we are using a NAS, we have to configure both ends: the iSCSI initiator (on the host server) and the iSCSI target (on the NAS). You can see the Microsoft TechNet video at http://technet.microsoft.com/en-us/video/how-to-setup-iscsi-on-windows-server-2008-11-mins.aspx or read the Microsoft article on how to configure iSCSI initiators at http://technet.microsoft.com/en-us/library/ee338480(v=ws.10).aspx. To configure the iSCSI target on the NAS, please refer to the NAS manufacturer's documentation. Apart from the iSCSI disks we have for our virtual machines, we need to provide a witness disk (known in the past as the quorum disk). This disk (1 GB will do the trick) is used to orchestrate and synchronize our cluster.
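If you prefer to script the initiator side instead of clicking through the GUI, the built-in iscsicli.exe tool can do it. The following Python sketch simply drives iscsicli from the host server; the portal address and the target IQN are placeholders that you must replace with your own NAS values:

```python
# A rough sketch of scripting the Windows iSCSI initiator with the
# built-in iscsicli.exe tool, as an alternative to the GUI steps in the
# articles linked above. Run on the Hyper-V host, as an administrator.
import subprocess

NAS_PORTAL = "192.168.1.50"                      # hypothetical NAS address
TARGET_IQN = "iqn.2011-01.local.nas:vm-storage"  # hypothetical target name

def run(cmd):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Register the NAS portal with the initiator, list the targets it
# exposes, and log in to the one holding our VM storage.
run(["iscsicli", "QAddTargetPortal", NAS_PORTAL])
run(["iscsicli", "ListTargets"])
run(["iscsicli", "QLoginTarget", TARGET_IQN])
```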

Once we have our iSCSI disk configured and visible on one of our servers (you can check this by opening the Computer Management console and selecting Disk Management), we can proceed to configure our cluster.

To install the Failover Clustering feature, we have to open the Server Manager console, select the Features node on the left, then select Add Features, and finally select the Failover Clustering feature (this is very similar to the procedure we used when we installed the Hyper-V role in the Requirements and Installation section). We have to repeat this step on every node participating in the cluster. At this point we should have both the Failover Clustering feature and the Hyper-V role set up on the servers, so we can open the Failover Cluster Manager console from Administrative Tools and validate our configuration. Check that Failover Cluster Manager is selected and, in the center pane, select Validate a Configuration (a right-click will do the trick as well). Follow all the instructions and run all of the tests until no errors are shown. When this step is completed, we can proceed to create our cluster.
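For the command-line inclined, the same two steps can be scripted through PowerShell (the ServerManager and FailoverClusters modules ship with Windows Server 2008 R2). This Python sketch just shells out to powershell.exe; the node names are placeholders, and both commands need an elevated session:

```python
# Sketch: install the Failover Clustering feature and validate the
# would-be cluster from the command line, via PowerShell.
import subprocess

NODES = ["HV-NODE1", "HV-NODE2"]  # hypothetical server names

def powershell(command):
    print("PS>", command)
    subprocess.run(["powershell.exe", "-Command", command], check=True)

# Install the Failover Clustering feature (repeat on each node).
powershell("Import-Module ServerManager; "
           "Add-WindowsFeature Failover-Clustering")

# Run the cluster validation tests (the equivalent of the Validate a
# Configuration wizard); this runs once, from any node.
powershell("Import-Module FailoverClusters; "
           "Test-Cluster -Node " + ",".join(NODES))
```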

In the same Failover Cluster Manager console, in the center pane, select Create a Cluster (a right-click will do the trick as well). This wizard will ask you for the following (a scripted equivalent is sketched after the list):

  • All the servers that will participate in the cluster (a maximum of 16 nodes and a minimum of one, which is useless for fault tolerance, so better go for at least two servers)
  • The name of the cluster (this name is how you will access the cluster, not the individual server names)
  • The IP configuration for the cluster (like the name, this address belongs to the cluster as a whole)
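And here is the promised scripted equivalent of the wizard, again shelling out to PowerShell's New-Cluster cmdlet. The cluster name, node names, and IP address are all placeholders for your own values:

```python
# Sketch: create the cluster from PowerShell with the same three pieces
# of information the wizard asks for.
import subprocess

def powershell(command):
    print("PS>", command)
    subprocess.run(["powershell.exe", "-Command", command], check=True)

powershell("Import-Module FailoverClusters; "
           "New-Cluster -Name HV-CLUSTER "   # cluster (not server) name
           "-Node HV-NODE1,HV-NODE2 "        # participating nodes
           "-StaticAddress 192.168.1.100")   # IP address for the cluster
```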

We still need to enable Cluster Shared Volumes. To do so, right-click the failover cluster, and then click Enable Cluster Shared Volumes. The Enable Cluster Shared Volumes dialog opens. Read and accept the terms and restrictions, and click OK. Then select Cluster Shared Volumes and, under Actions, select Add Storage and choose the disks (the iSCSI disks) we had previously configured.
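The same configuration can be sketched in PowerShell, assuming the Windows Server 2008 R2 FailoverClusters module, where (to the best of my knowledge) CSV is toggled through the cluster's EnableSharedVolumes property. The disk name below is a placeholder for whatever name your iSCSI disk received when it was added to the cluster:

```python
# Sketch: enable Cluster Shared Volumes and add a clustered disk to it.
# "Cluster Disk 1" is a placeholder; check the Storage node in the
# Failover Cluster Manager console for the real name.
import subprocess

def powershell(command):
    print("PS>", command)
    subprocess.run(["powershell.exe", "-Command", command], check=True)

powershell("Import-Module FailoverClusters; "
           "(Get-Cluster).EnableSharedVolumes = 'Enabled'; "
           "Add-ClusterSharedVolume -Name 'Cluster Disk 1'")
```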

Now the only thing left is to make a VM highly available: the one we created in the Quick start – creating a virtual machine in 8 steps section (or any other VM that you have created, or any new VM you want to create; be imaginative!). The OS in the virtual machine can then fail over to another node with almost no interruption. Note that the virtual machine must not be running when you make it highly available through the wizard. A scripted equivalent is sketched after these steps.

  1. In the Failover Cluster Manager console, expand the tree of the cluster we just created.
  2. Select Services and Applications.
  3. In the Action pane, select Configure a Service or Application.
  4. In the Select Service or Application page, click Virtual Machine and then click Next.
  5. In the Select Virtual Machine page, check the name of the virtual machine that you want to make highly available, and then click Next.
  6. Confirm your selection and then click Next again.
  7. The wizard will show a summary and the ability to check the report.
  8. And finally, under Services and Applications, right-click the virtual machine and then click Bring this service or application online. This action will bring the virtual machine online and start it.
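And here is the scripted equivalent promised above: a sketch that registers a (stopped) virtual machine as a clustered service and brings it online, assuming the Add-ClusterVirtualMachineRole and Start-ClusterGroup cmdlets from the 2008 R2 FailoverClusters module. The VM name is a placeholder:

```python
# Sketch: make an existing (stopped) virtual machine highly available
# and then bring it online, mirroring wizard steps 1-8 above.
import subprocess

VM_NAME = "MyFirstVM"  # hypothetical virtual machine name

def powershell(command):
    print("PS>", command)
    subprocess.run(["powershell.exe", "-Command", command], check=True)

powershell("Import-Module FailoverClusters; "
           "Add-ClusterVirtualMachineRole -VMName '%s'; "
           "Start-ClusterGroup '%s'" % (VM_NAME, VM_NAME))
```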
