In this article by Iwan ‘e1’ Rahabok, the author of the book VMware Performance and Capacity Management, Second Edition, we will look at why a seemingly simple technology, a virtualized x86 machine, has huge ramifications for the IT industry. In fact, it is turning a lot of things upside down and breaking down silos that have existed for decades in large IT organizations. We will cover the following topics:
- Why virtualization is not what we think it is
- Virtualization versus partitioning
- A comparison between a physical server and a virtual machine
Our journey into the virtual world
A virtual machine, or simply VM – who doesn’t know what it is? Even a business user who has never seen one can tell you: it is just a physical server, virtualized. Nothing more.
Wise men say that small leaks sink the ship. This is a good way to explain why IT departments that manage physical servers well struggle when the same servers are virtualized.
We can also use the Pareto principle (the 80/20 rule): 80 percent of a VM is identical to a physical server, but it is the 20 percent that differs that hits you. We will highlight some of that 20 percent, focusing on the areas that impact data center management.
The change caused by virtualization is much larger than the changes brought about by previous technologies. In the past two or more decades, we transitioned from mainframes to the client/server model and then to the web-based model. These are commonly agreed upon as the main evolutions in IT architecture. However, all of these were just technological changes: they changed the architecture, but they did not change operations in a fundamental way. No one talked about a journey to client/server or to the web. With virtualization, however, we talk about the journey, because the changes are massive and involve a lot of people.
In 2007, Gartner correctly predicted the impact of virtualization (http://www.gartner.com/newsroom/id/505040). More than eight years later, we are still in the midst of the journey, which shows just how pervasive the change is.
Notably, Gartner talked about a change in culture. Virtualization has a cultural impact too. In fact, if your virtualization journey is not moving fast enough, look at your organization’s structure and culture. Have you broken down the silos? Do you empower your people to take risks and do things that have never been done before? Are you willing to flatten the organizational chart?
The silos that have served you well are likely your number one barrier to a hybrid cloud.
So why exactly is virtualization causing such a fundamental shift? To understand this, we need to go back to basics and ask what virtualization actually is. It is pretty common for chief information officers (CIOs) to have misconceptions about it.
Take a look at the following comments. Have you seen them in your organization?
- VM is just a virtualized physical machine. Even VMware says that the guest OS is not aware it’s virtualized and that it does not run differently.
- It is still about monitoring CPU, RAM, disk, network, and other resources. No difference.
- It is a technological change. Our management process does not have to change.
- All of these VMs must still feed into our main enterprise IT management system. This is how we have run our business for decades, and it works.
If only life were that simple, we would all be 100-percent virtualized and have no headaches! Virtualization has been around for years, and yet most organizations have not mastered it. The proof of mastery is completing the journey and reaching the highest level of the virtualization maturity model.
Not all virtualizations are equal
There are plenty of misconceptions about virtualization, especially among IT professionals who are not familiar with it. CIOs who have not felt the strategic impact of virtualization (be it a good or bad experience) tend to carry these misconceptions. Although a VM looks similar to a physical system from the outside, it is completely re-architected under the hood.
So, let’s take a look at the first misconception: what exactly is virtualization?
Because it is an industry trend, virtualization is often generalized to include other technologies that are not actually virtualization. This is a typical strategy of IT vendors with similar technologies. A popular technology often branded as virtualization is hardware partitioning; because it is parked under the umbrella of virtualization, the assumption is that both can be managed in the same way. Since the two are actually different, customers who try to manage both with a single piece of management software struggle to do either well.
Partitioning and virtualization are two different architectures in computer engineering, resulting in major differences in functionality. They are shown in the following screenshot:
With partitioning, there is no hypervisor that virtualizes the underlying hardware; no software layer separates the VM from the physical motherboard. There is, in fact, no VM. This is why some technical manuals for partitioning technology do not even use the term VM; they use the terms domain, partition, or container instead.
There are two variants of partitioning technology, hardware-level and OS-level partitioning, which are covered in the following bullet points:
- In hardware-level partitioning, each partition runs directly on the hardware. It is not virtualized, which is why it is more scalable and incurs less of a performance hit. Because it is not virtualized, it has to be aware of the underlying hardware, so it is not fully portable: you cannot move a partition from one hardware model to another. The hardware has to be purpose-built to support that specific version of the partition. The partitioned OS still needs all the hardware drivers and will not work on other hardware if the compatibility matrix does not match. As a result, even the version of the OS matters, just as with a physical server.
- In OS-level partitioning, a parent OS runs directly on the server motherboard. This OS then creates an “OS partition” in which another OS instance can run. We use double quotes because it is not exactly the full OS that runs inside that partition: the OS has to be modified and qualified to run as a zone or container. Because of this, application compatibility is affected. This differs from a VM, where there is no application compatibility issue because the hypervisor is transparent to the guest OS.
We have covered the difference between virtualization and partitioning from an engineering point of view. But does it translate into different data center architectures and operations? We will focus on hardware partitioning, since it differs fundamentally from software partitioning and the two have different use cases; software partitioning is typically used for cloud-native applications.
With that, let’s do a comparison between hardware partitioning and virtualization. We will start with availability.
With virtualization, all VMs are protected by vSphere High Availability (vSphere HA), which provides 100 percent protection without any awareness inside the VM. Nothing needs to be done at the VM layer: no shared or quorum disk and no heartbeat network are required to protect a VM with basic HA.
With hardware partitioning, the protection has to be configured manually, one by one for each logical partition (LPAR) or logical domain (LDOM). The underlying platform does not provide that.
With virtualization, you can even go beyond five nines (99.999 percent) and move to 100 percent with vSphere Fault Tolerance. This is not possible in the partitioning approach as there is no hypervisor that replays CPU instructions. Also, because it is virtualized and transparent to the VM, you can turn the Fault Tolerance capability on and off on demand. Fault Tolerance is completely defined in the software.
Another area where partitioning and virtualization differ is disaster recovery (DR). With partitioning technology, the DR site requires another instance to protect the production instance. It is a different instance, with its own OS image, hostname, and IP address. Yes, we can perform a Storage Area Network (SAN) boot, but that means another Logical Unit Number (LUN) to manage, zone, replicate, and so on. This approach to disaster recovery does not scale to thousands of servers; to make it scale, it has to be simpler.
Compared to partitioning, virtualization takes a different approach. The entire VM fits inside a folder; it becomes like a document, and we migrate the entire folder as if it were a single object. This is what vSphere Replication and Site Recovery Manager do: they replicate per VM, with no need to configure a SAN boot. The entire DR exercise, which can cover thousands of virtual servers, is completely automated, with audit logs generated automatically. Many large enterprises have automated their DR with virtualization. There is probably no company that has automated DR for its entire LPAR, LDOM, or container estate.
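The "VM as a folder" idea can be sketched with a toy example. The file names below (`web01.vmx`, `web01.vmdk`) are merely illustrative of a typical VM directory, and this sketch is in no way how vSphere Replication or Site Recovery Manager are implemented; it only illustrates why per-VM protection is simple when the unit of replication is a single folder.

```python
import shutil
import tempfile
from pathlib import Path

# Toy illustration: a VM's entire state lives in one directory
# (config file, virtual disks), so "moving the VM" means moving
# one object. File names are hypothetical; this is NOT how
# vSphere Replication actually works.

def replicate_vm(vm_dir: Path, dr_site: Path) -> Path:
    """Copy the whole VM folder to the DR site as a single unit."""
    target = dr_site / vm_dir.name
    shutil.copytree(vm_dir, target)
    return target

# Build a fake VM folder at the "production site".
prod = Path(tempfile.mkdtemp(prefix="prod-"))
dr = Path(tempfile.mkdtemp(prefix="dr-"))
vm = prod / "web01"
vm.mkdir()
(vm / "web01.vmx").write_text('guestOS = "other-64"\n')        # VM config
(vm / "web01.vmdk").write_text("# virtual disk descriptor\n")  # VM disk

copied = replicate_vm(vm, dr)
print(sorted(p.name for p in copied.iterdir()))
# → ['web01.vmdk', 'web01.vmx']
```

The contrast with the partitioning approach is that nothing inside the "VM" had to be configured for DR; the protection is applied entirely from outside, per folder.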
In the previous paragraphs, we are not implying that LUN-based or hardware-based replication is an inferior solution. We are merely driving home the point that virtualization enables you to do things differently.
We’re also not saying that hardware partitioning is an inferior technology. Every technology has its advantages and disadvantages and addresses different use cases. Before joining VMware, the author was a Sun Microsystems sales engineer for five years, so he is aware of the benefits of UNIX partitioning. This article is merely trying to dispel the misunderstanding that hardware partitioning equals virtualization.
We’ve covered the differences between hardware partitioning and virtualization. Let’s switch gears to software partitioning.
In 2016, the adoption of Linux containers will continue its rapid rise. You can actually use both containers and virtualization, and they complement each other in some use cases. There are two main approaches to deploying containers:
- Run them directly on bare metal
- Run them inside a virtual machine
As both technologies evolve, the gap gets wider. As a result, managing a software partition is different from managing a VM, and securing a container is different from securing a VM. Be careful when opting for a management solution that claims to manage both; you will probably end up with the lowest common denominator. This is one reason why VMware is working on vSphere Integrated Containers and the Photon platform. Now that’s a separate topic by itself!
We hope you enjoyed the comparison and found it useful. We covered, to a great extent, the impact caused by virtualization and the changes it introduces. We started by clarifying that virtualization is a different technology compared to partitioning. We then explained that once a physical server is converted to a virtual machine, it takes on a different form and has radically different properties.