The Server Message Block protocol
When an enterprise starts to build a modern datacenter, the first thing that should be done is to set up the storage. Windows Server 2012 introduces a new, improved version of the Server Message Block (SMB) file sharing protocol. This new version, 3.0, is designed for modern datacenters. It allows administrators to create file shares and deploy critical systems on them. This is a real improvement, because administrators now deal with file shares and security permissions instead of complex connections to storage arrays.

The idea is to set up one central SMB file sharing server and attach the underlying storage to it. The SMB server initiates the connection to the underlying storage, and the logical disks created on the storage are attached to it. Different file shares with different access permissions are then created on the server. These file shares can be used by different systems, such as Hyper-V storage space for virtual machine files, MS SQL Server database files, Exchange Server database files, and so on. This is an advantage because all of the data is stored in one location, which makes administration of data files easier.
It is important to note that this is a new concept, available only with Windows Server 2012. It comes with no performance degradation on critical systems, because SMB 3.0 was designed for this type of data traffic.
Setting up security permissions on SMB file shares
Because SMB file shares contain sensitive data files, whether virtual machine files or SQL Server database files, proper security permissions need to be applied to them to ensure that only authorized users and machines have access. For this reason, the SMB file sharing server has to be connected to the LAN part of the infrastructure as well, since security permissions are read from an Active Directory server. For example, if Hyper-V hosts have to read and write on a share, then only the computer accounts of those hosts need permissions on that share, and no one else. Similarly, if the share holds MS SQL Server database files, then only the SQL Server computer accounts and the SQL Server service account need permissions on that share.
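As a sketch of this approach, the share and its permissions can be created with the SmbShare module cmdlets that ship with Windows Server 2012. The domain, host, path, and share names below are placeholders for illustration:

```powershell
# Create a share for Hyper-V virtual machine files and grant full access
# only to the computer accounts of the Hyper-V hosts (note the trailing $
# that denotes a computer account).
New-SmbShare -Name "VMStore" -Path "D:\Shares\VMStore" `
    -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$"

# Grant access to an additional host later if needed.
Grant-SmbShareAccess -Name "VMStore" -AccountName "CONTOSO\HV03$" `
    -AccessRight Full -Force

# Verify who has access to the share.
Get-SmbShareAccess -Name "VMStore"
```

The NTFS permissions on the underlying folder should be restricted to the same accounts, so that the effective access matches the share permissions.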
Migration of virtual machines
Virtual Machine High Availability is the reason why failover clusters are deployed. High availability means that there is no system downtime, or only a minimal accepted system downtime. This is different from system uptime: a system can be up and running but still not be available. Hyper-V hosts in modern datacenters run many virtual machines, depending on the underlying hardware resources, and each of these systems is very important to the consumer. Let's say that a Hyper-V host malfunctions at a bank, and that this host runs several critical systems, one of which is the ATM system. If this happens, the users won't be able to use the ATMs. This is where Virtual Machine High Availability comes into the picture. It is achieved through the implementation of a failover cluster. A failover cluster ensures that when a node of the cluster becomes unavailable, all of the virtual machines on that node are safely migrated to another node of the same cluster. Users can even set rules to specify which host the virtual machines should fail over to. Migration is also useful when maintenance tasks have to be done on one of the nodes of the cluster: the node can safely be shut down, and all of the virtual machines, or at least the most critical ones, will be migrated to another host.
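For the maintenance scenario described above, the FailoverClusters module in Windows Server 2012 can move running virtual machines off a node before it is shut down. The virtual machine and node names below are placeholders:

```powershell
# Live migrate one clustered virtual machine to another node of the cluster.
Move-ClusterVirtualMachineRole -Name "ATM-System" -Node "HV02" -MigrationType Live

# Or drain all clustered roles from a node at once before maintenance;
# the -Drain switch was introduced in Windows Server 2012.
Suspend-ClusterNode -Name "HV01" -Drain
```

After the maintenance is done, `Resume-ClusterNode` brings the node back into the cluster.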
Configuring Hyper-V Replica
Enterprises tend to increase their system availability and deliver services to end users. There are various ways to do this, such as making your virtual machines highly available, disaster recovery methods, and backups of critical systems. In case of a system malfunction or a disaster, the IT department needs to react fast in order to minimize system downtime. Disaster recovery methods are valuable to the enterprise, which is why it is imperative that the IT department implements them. When these methods are built into the existing platform that the enterprise uses, and are easy to configure and maintain, you have a winning combination. This is a suitable scenario for Hyper-V Replica to step in. It is easy to configure and maintain, and it is integrated with Hyper-V 3.0, which comes with Windows Server 2012. This is why Hyper-V Replica is becoming more attractive to IT departments when it comes to disaster recovery. In this article, we will learn what the Hyper-V Replica prerequisites are and the configuration steps for Hyper-V Replica in different deployment scenarios. Because Hyper-V Replica can be used with failover clusters, we will learn how to configure a failover cluster with Windows Server 2012, and we will introduce a new concept for virtual machine file storage called SMB.
Hyper-V Replica requirements
Before we can start with the implementation of Hyper-V Replica, we have to be sure we have met all the prerequisites. In order to implement Hyper-V Replica, we have to install Windows Server 2012 on our physical machines. Windows Server 2012 is a must, because Hyper-V Replica is available only with that version of Windows Server. Next, you have to install Hyper-V on each of the physical machines; Hyper-V Replica is a built-in feature of Hyper-V 3.0, which comes with Windows Server 2012. If you plan to deploy Hyper-V on non-domain servers, you don't require an Active Directory domain. However, if you want to implement a failover cluster on your premises, then you must have an Active Directory domain.
In addition, if you want your replication traffic to be encrypted, you can use self-signed certificates from the local servers or import a certificate generated by a Certificate Authority (CA). A CA is a server running Active Directory Certificate Services, a Windows Server role that should be installed on a separate server. Certificates from such CAs are imported to Hyper-V Replica-enabled hosts and associated with Hyper-V Replica to encrypt the traffic generated from the primary site to the replica site. The primary site is the production site of your company, and the replica site is a site outside the production site where all the replication data is stored. Once we have checked all of these prerequisites, we are ready to start with the deployment of Hyper-V Replica.
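A minimal sketch of certificate-based replication, using the Hyper-V module cmdlets from Windows Server 2012; the server names, virtual machine name, and certificate thumbprint are placeholders:

```powershell
# On the replica server: enable it to receive replication traffic,
# encrypted with the imported CA-issued certificate.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Certificate `
    -CertificateAuthenticationPort 443 `
    -CertificateThumbprint "<thumbprint-of-imported-certificate>"

# On the primary server: enable replication of a virtual machine to the
# replica server, then kick off the initial replication.
Enable-VMReplication -VMName "SQL-VM" `
    -ReplicaServerName "replica.contoso.com" -ReplicaServerPort 443 `
    -AuthenticationType Certificate `
    -CertificateThumbprint "<thumbprint-of-imported-certificate>"
Start-VMInitialReplication -VMName "SQL-VM"
```

With Kerberos authentication instead of certificates, the traffic is sent over HTTP and is not encrypted, which is why certificates are the recommended choice across WAN links.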
Virtual machine replication in Failover Cluster environment
Hyper-V Replica can be used with Failover Clusters, whether they reside in the primary or in the replica site. You can have the following deployment scenarios:
- Hyper-V host to a Failover Cluster
- Failover Cluster to a Failover Cluster
- Failover Cluster to a Hyper-V node
Hyper-V Replica configuration when Failover Clusters are used is done with the Failover Cluster Management console. For replication to take place, the Hyper-V Replica Broker role must be installed on the Failover Clusters, whether they are in the primary or the replica site. The Hyper-V Replica Broker role is installed like any other Failover Cluster role.
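The broker can also be created from PowerShell with the FailoverClusters module. The sketch below follows the commonly documented sequence for Windows Server 2012; the role name "ReplicaBroker" is a placeholder:

```powershell
# Create a cluster group with a client access point for the broker.
Add-ClusterServerRole -Name "ReplicaBroker"

# Add the Virtual Machine Replication Broker resource to that group and
# make it depend on the client access point.
Add-ClusterResource -Name "Virtual Machine Replication Broker" `
    -Type "Virtual Machine Replication Broker" -Group "ReplicaBroker"
Add-ClusterResourceDependency "Virtual Machine Replication Broker" "ReplicaBroker"

# Bring the broker online.
Start-ClusterGroup "ReplicaBroker"
```

In the Failover Cluster Management console, the same result is achieved through the Configure Role wizard, which is usually the simpler route.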
In Hyper-V Replica there are three failover scenarios:
- Test failover
- Planned failover
- Unplanned failover
As the name says, test failover is used only for testing purposes, such as validating the health and functionality of Hyper-V Replica. When a test failover is performed, there is no downtime on the systems in the production environment. Test failover is done at the replica site.
When a test failover is in progress, a new virtual machine is created, which is a copy of the virtual machine for which you are performing the test failover. It is easily distinguished because the new virtual machine has Test added to its name. It is safe to start the test virtual machine because it has no network adapter, so no one can access it; it serves only for testing purposes. You can log in to it and check application consistency. When you have finished testing, right-click on the virtual machine and choose Stop Test Failover, and the test virtual machine is deleted.
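The same test can be driven from PowerShell on the replica host; the virtual machine name is a placeholder:

```powershell
# On the replica site: start a test failover. A copy of the VM with
# "Test" appended to its name is created and can be started safely,
# because it has no network adapter attached.
Start-VMFailover -VMName "SQL-VM" -AsTest

# When testing is complete, remove the test virtual machine.
Stop-VMFailover -VMName "SQL-VM"
```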
Planned failover is the safest type and the one that should normally be performed. It is usually done when Hyper-V hosts have to be shut down for various reasons, such as transport or maintenance. This is similar to Live Migration: you perform a planned failover so that you don't lose virtual machine availability. The first thing you have to do is check whether the replication process for the virtual machine is healthy. To do this, start the Hyper-V Management console at the primary site, choose the virtual machine, and then click on the Replication tab at the bottom. If the replication health status is Healthy, then it is fine to do the planned failover. If it doesn't show Healthy, then you need to do some maintenance until it does.
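The health check and the planned failover itself can also be performed with the Hyper-V module cmdlets. This is a sketch of the sequence documented for Windows Server 2012; the virtual machine name is a placeholder:

```powershell
# Check replication health on the primary host first; the Health column
# should report Normal before proceeding.
Measure-VMReplication -VMName "SQL-VM"

# On the primary host: shut the VM down, then prepare the planned failover,
# which sends any unreplicated changes to the replica site.
Stop-VM -VMName "SQL-VM"
Start-VMFailover -VMName "SQL-VM" -Prepare

# On the replica host: fail over, reverse the replication direction so the
# replica becomes the new primary, and start the VM.
Start-VMFailover -VMName "SQL-VM"
Set-VMReplication -VMName "SQL-VM" -Reverse
Start-VM -VMName "SQL-VM"
```

Because the virtual machine is shut down before the final changes are sent, a planned failover completes with no data loss.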
Unplanned failover is used only as a last resort. It always results in data loss, because any data that has not been replicated is lost during the failover. While planned failover is initiated at the primary site, unplanned failover is performed at the replica site. When an unplanned failover is performed, the replica virtual machine is started. At that moment, Hyper-V checks whether the primary virtual machine is on. If it is on, the failover process is stopped. If the primary virtual machine is off, the failover process continues and the replica virtual machine becomes the primary virtual machine.
What is virtualization?
Virtualization is a concept in IT that has its roots back in the 1960s, when mainframes were used. In recent years, virtualization has become more accessible because user-friendly tools, such as Microsoft Hyper-V, were introduced to customers. These tools allow the administrator to configure and administer a virtualized environment easily. Virtualization is a concept where a hypervisor, a software layer between the physical hardware and the operating systems running on it, is deployed on a physical device. The hypervisor allows the administrator to deploy many virtual servers that execute their workloads on that same physical machine. In other words, you get many virtual servers on one physical device. This concept gives better utilization of resources and is thus cost effective.
Hyper-V 3.0 features
With the introduction of Windows Server 2008 R2, two new concepts regarding virtual machine high availability were introduced. Virtual machine high availability is a concept that allows the virtual machine to execute its workload with minimal downtime. The idea is to have a mechanism that transfers the execution of the virtual machine to another physical server in case of node malfunction. In Windows Server 2008 R2, a virtual machine can be live migrated to another Hyper-V host. There is also quick migration, which allows multiple simultaneous migrations from one host to another.
In Windows Server 2012, there are new features regarding Virtual Machine Mobility. Not only can you live migrate a virtual machine, but you can also migrate all of its associated files, including the virtual machine disks, to another location. Both mechanisms improve high availability. Live migration is functionality that allows you to transfer the execution of a virtual machine to another server with no downtime.

Previous versions of Windows Server lacked disaster recovery mechanisms. A disaster recovery mechanism is any tool that allows the user to configure a policy that will minimize the downtime of systems in case of disaster. That is why, with the introduction of Windows Server 2012, Hyper-V Replica is installed together with Hyper-V and can be used in clustered and non-clustered environments. Windows Failover Clustering is a Windows feature that is installed from the Add Roles and Features Wizard in Server Manager; it makes the server ready to be joined to a failover cluster.

Hyper-V Replica gives enterprises great value, because it is an easy-to-implement and easy-to-configure Business Continuity and Disaster Recovery (BCDR) solution. It is suitable for Hyper-V virtualized environments because it is built into the Hyper-V role of Windows Server 2012. The outcome is that virtual machines running at one site, called the primary site, can easily be replicated to a backup site, called the replica site, in case of disasters. The replication between the sites is done over an IP network, so it can take place in LAN environments or across a WAN link. This BCDR solution provides efficient, periodic replication, and in case of disaster it allows the production servers to be failed over to a replica server. This is very important for critical systems because it reduces their downtime. It also allows the Hyper-V administrator to restore virtual machines to a specific point in time using the recovery history of a certain virtual machine.
Restricting access to Hyper-V is very important; you want only authorized users to have access to the Hyper-V management console. When Hyper-V is installed, a local security group named Hyper-V Administrators is created on the server. Every user that is a member of this group can access and configure Hyper-V settings. Another way to increase the security of Hyper-V is to change the default port numbers of Hyper-V Replica authentication. By default, Kerberos authentication uses port number 80, and certificate authentication uses port number 443. Certificates also encrypt the traffic generated from the primary to the replica site. Finally, you can create a list of authorized servers from which replication traffic will be accepted.
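The hardening steps above can be sketched with the Hyper-V module cmdlets; the port number, server names, storage path, trust group, and user account are placeholders:

```powershell
# Accept replication over Kerberos on a non-default port, and only from an
# explicit list of authorized primary servers instead of from any server.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -KerberosAuthenticationPort 8080 `
    -ReplicationAllowedFromAnyServer $false

# Authorize one specific primary server and define where its replication
# data will be stored.
New-VMReplicationAuthorizationEntry -AllowedPrimaryServer "hv01.contoso.com" `
    -ReplicaStorageLocation "D:\Replica" -TrustGroup "Production"

# Add an authorized user to the local Hyper-V Administrators group
# (the net command is used here because it works on Windows Server 2012).
net localgroup "Hyper-V Administrators" "CONTOSO\hvadmin" /add
```

Remember to open the chosen replication port in Windows Firewall on the replica server, or incoming replication traffic will be dropped.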
Windows Server 2012 brings new concepts and useful features that make IT administrators' lives easier. It is designed for enterprises that want to deploy modern datacenters with state-of-the-art capabilities. The new user interface, the simplified configuration, and all of the built-in features are what makes Windows Server 2012 appealing to IT administrators.