Virtual Machine Concepts



The multiple instances of Windows or Linux systems running on an ESXi host are commonly referred to as virtual machines (VMs). Any reference to a guest operating system (OS) means an instance of Linux, Windows, or any other supported operating system installed on a VM.

vSphere virtual machines

At the heart of virtualization lies the virtual machine. A virtual machine is a set of virtual hardware whose characteristics are determined by a set of files; it is this virtual hardware that a guest operating system is installed on. A virtual machine runs an operating system and a set of applications just like a physical server. A virtual machine comprises a set of configuration files and is backed by the physical resources of an ESXi host. An ESXi host is the physical server that has the VMware hypervisor, known as ESXi, installed. Each virtual machine is equipped with virtual hardware and devices that provide the same functionality as having physical hardware.

Virtual machines are created within a virtualization layer, such as ESXi running on a physical server. This virtualization layer manages requests from the virtual machine for resources such as CPU or memory. It is the virtualization layer that is responsible for translating these requests to the underlying physical hardware.

Each virtual machine is granted a portion of the physical hardware. All VMs have their own virtual hardware; the four most important resources to note, often called the primary four, are CPU, memory, disk, and network. Each VM is isolated from the others, and each interacts with the underlying hardware through a thin software layer known as the hypervisor. This differs from a physical architecture, in which the installed operating system interacts directly with the installed hardware.

Virtualization brings many benefits in portability, security, and manageability that aren't available in an environment that uses a traditional physical infrastructure. However, once provisioned, virtual machines are managed using many of the same principles that apply to physical servers.

The preceding diagram demonstrates the differences between the traditional physical architecture (left) and a virtual architecture (right). Notice that the physical architecture typically has a single application and a single operating system using the physical resources. The virtual architecture has multiple virtual machines running on a single physical server, accessing the hardware through the thin hypervisor layer.

Virtual machine components

When a virtual machine is created, a default set of virtual hardware is assigned to it. VMware provides devices and resources that can be added and configured to the virtual machine. Not all virtual hardware devices will be available to every single virtual machine; both the physical hardware of the ESXi host and the VM’s guest OS must support these configurations. For example, a virtual machine will not be capable of being configured with more vCPUs than the ESXi host has logical CPU cores.
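The host-capability constraint described above can be sketched as a simple validation check. This is a toy model for illustration only, not a VMware API; the function name and parameters are invented here:

```python
# Illustrative sketch (not a VMware API): validating a requested VM
# configuration against an ESXi host's physical CPU capabilities.

def validate_vm_config(requested_vcpus: int, host_sockets: int,
                       host_cores_per_socket: int,
                       hyperthreading: bool = False) -> bool:
    """A VM cannot be given more vCPUs than the host has logical CPUs."""
    logical_cpus = host_sockets * host_cores_per_socket
    if hyperthreading:
        logical_cpus *= 2  # each core exposes two logical processors
    return requested_vcpus <= logical_cpus

# A 2-socket, 8-cores-per-socket host has 16 logical CPUs without
# hyperthreading, 32 with it:
print(validate_vm_config(16, 2, 8))        # True
print(validate_vm_config(24, 2, 8))        # False
print(validate_vm_config(24, 2, 8, True))  # True
```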

The virtual hardware available includes:

  • BIOS: Phoenix Technologies 6.00 that functions like a physical server BIOS. Virtual machine administrators are able to enable/disable I/O devices, configure boot order, and so on.
  • DVD/CD-ROM: NEC VMware IDE CDR10 that is installed by default in new virtual machines created in vSphere. The DVD/CD-ROM can be configured to connect to the client workstation DVD/CD-ROM, an ESXi host DVD/CD-ROM, or even an .iso file located on a datastore. DVD/CD-ROM devices can be added to or removed from a virtual machine.
  • Floppy drive: This is installed by default with new virtual machines created in vSphere. The floppy drive can be configured to connect to the client device’s floppy drive, a floppy device located on the ESXi host, or even a floppy image (.flp) located on a datastore. Floppy devices can be added to or removed from a virtual machine.
  • Hard disk: This stores the guest operating system, program files, and any other data associated with a virtual machine. The virtual disk is a large file, or potentially a set of files, that can be easily copied, moved, and backed up.
  • IDE controller: Intel 82371 AB/EB PCI Bus Master IDE Controller that presents two Integrated Drive Electronics (IDE) interfaces to the virtual machine by default. This IDE controller is a standard way for storage devices, such as floppy drives and CD-ROM drives, to connect to the virtual machine.
  • Keyboard: This mirrors the keyboard that is first connected to the virtual machine console upon initial console connection.
  • Memory: This is the virtual memory size configured for the virtual machine that determines the guest operating system’s memory size.
  • Motherboard/Chipset: The motherboard uses VMware proprietary devices that are based on the following chips:
    • Intel 440BX AGPset 82443BX Host Bridge/Controller
    • Intel 82093 AA I/O Advanced Programmable Interrupt Controller
    • Intel 82371 AB (PIIX4) PCI ISA IDE Xcelerator
    • National Semiconductor PC87338 ACPI 1.0 and PC98/99 Compliant Super I/O
  • Network adapter: ESXi networking features provide communication between virtual machines residing on the same ESXi host, between VMs residing on different ESXi hosts, and between VMs and physical machines. When configuring a VM, network adapters (NICs) can be added and the adapter type can be specified.
  • Parallel port: This is an interface for connecting peripherals to the virtual machine. Virtual parallel ports can be added to or removed from the virtual machine.
  • PCI controller: This is a bus located on the virtual machine motherboard, communicating with components such as a hard disk. A single PCI controller is presented to the virtual machine. This cannot be configured or removed.
  • PCI device: DirectPath devices can be added to a virtual machine. The devices must be reserved for PCI pass-through on the ESXi host that the virtual machine runs on. Keep in mind that snapshots are not supported when a DirectPath I/O pass-through device is configured.
  • Pointing device: This mirrors the pointing device that is first connected to the virtual machine console upon initial console connection.
  • Processor: This specifies the number of sockets and cores for the virtual processor. The processor will appear as AMD or Intel to the virtual machine guest operating system, depending on the physical hardware.
  • Serial port: This is an interface for connecting peripherals to the virtual machine. The virtual machine can be configured to connect to a physical serial port, a file on the host, or over the network. The serial port can also be used to establish a direct connection between two VMs. Virtual serial ports can be added to or removed from the virtual machine.
  • SCSI controller: This provides access to virtual disks. The virtual SCSI controller may appear as one of several different types of controllers to a virtual machine, depending on the guest operating system of the VM. By editing the VM configuration, the SCSI controller type can be changed, additional SCSI controllers can be added, and a controller can be configured for bus sharing.
  • SCSI device: A SCSI device interface is available to the virtual machine by default. This interface is a typical way to connect storage devices (hard drives, floppy drives, CD-ROMs, and so on) to a VM. SCSI devices can be added to or removed from a virtual machine.
  • SIO controller: The Super I/O controller provides serial and parallel ports, and floppy devices, and performs system management activities. A single SIO controller is presented to the virtual machine. This cannot be configured or removed.
  • USB controller: This provides USB functionality to the USB ports it manages. The virtual USB controller is a software virtualization of the USB host controller function in a VM.
  • USB device: Multiple USB devices may be added to a virtual machine. These can be mass storage devices or security dongles. The USB devices can be connected to a client workstation or to an ESXi host.
  • Video controller: This is a VMware Standard VGA II Graphics Adapter with 128 MB video memory.
  • VMCI: The Virtual Machine Communication Interface provides high-speed communication between the hypervisor and a virtual machine. VMCI can also be enabled for communication between VMs. VMCI devices cannot be added or removed.

Uses of virtual machines

In any infrastructure, there are many business processes that have applications supporting them. These applications typically have certain requirements, such as security or performance requirements, which may limit the application to being the only thing installed on a given machine. Without virtualization, there is typically a 1:1:1 ratio of server hardware to operating system to application. This type of architecture is inflexible and inefficient: many applications use only a small percentage of the physical resources dedicated to them, leaving the physical servers vastly underutilized. As hardware continues to improve, the gap between abundant resources and often modest application requirements widens. Also, consider the overhead needed to support the entire infrastructure, such as power, cooling, cabling, manpower, and provisioning time. A large server sprawl costs more money for the space and power to keep these systems housed and cooled.

Virtual infrastructures are able to do more with less—fewer physical servers are needed due to higher consolidation ratios. Virtualization provides a safe way of putting more than one operating system (or virtual machine) on a single piece of server hardware by isolating each VM running on the ESXi host from every other. Migrating physical servers to virtual machines and consolidating onto far fewer physical servers means lowering monthly power and cooling costs in the datacenter. Fewer physical servers can help reduce the datacenter footprint; fewer servers means less networking equipment, fewer server racks, and eventually less datacenter floor space required. Virtualization also changes the way a server is provisioned. Provisioning that once took hours of racking, cabling, and OS installation now takes only seconds when deploying a new virtual machine using templates and cloning.

VMware offers a number of advanced features that aren’t found in a strictly physical infrastructure. These features, such as High Availability, Fault Tolerance, and Distributed Resource Scheduler, help with increased uptime and overall availability. These technologies keep the VMs running or give the ability to quickly recover from unplanned outages. The ability to quickly and easily relocate a VM from one ESXi host to another is one of the greatest benefits of using vSphere virtual machines.

In the end, virtualizing the infrastructure and using virtual machines will help save time, space, and money. However, keep in mind that there are some upfront costs to be aware of. Server hardware may need to be upgraded or new hardware purchased to ensure compliance with the VMware Hardware Compatibility List (HCL). Another cost that should be taken into account is the licensing costs for VMware and the guest operating system; each tier of licensing allows for more features but drives up the price to license all of the server hardware.

The primary virtual machine resources

Virtualization decouples physical hardware from an operating system. Each virtual machine contains a set of its own virtual hardware and there are four primary resources that a virtual machine needs in order to correctly function. These are CPU, memory, network, and hard disk. These four resources look like physical hardware to the guest operating systems and applications. The virtual machine is granted access to a portion of the resources at creation and can be reconfigured at any time thereafter. If a virtual machine experiences constraint, one of the four primary resources is generally where a bottleneck will occur.

In a traditional architecture, the operating system interacts directly with the server’s physical hardware without virtualization. It is the operating system that allocates memory to applications, schedules processes to run, reads from and writes to attached storage, and sends and receives data on the network. This is not the case with a virtualized architecture. The virtual machine guest operating system still does the aforementioned tasks, but also interacts with virtual hardware presented by the hypervisor.

In a virtualized environment, a virtual machine interacts with the physical hardware through a thin layer of software known as the virtualization layer or the hypervisor; in this case the hypervisor is ESXi. This hypervisor allows the VM to function with a degree of independence from underlying physical hardware. This independence is what allows vMotion and Storage vMotion functionality. The following diagram demonstrates a virtual machine and its four primary resources:

This section will provide an overview of each of the “primary four” resources.


CPU

The virtualization layer runs CPU instructions so that virtual machines run as though they were accessing the physical processors on the ESXi host. Performance is paramount for CPU virtualization, so instructions are executed on the ESXi host's physical resources whenever possible. The following image displays a representation of a virtual machine's CPU:

A virtual machine can be configured with up to 64 virtual CPUs (vCPUs) as of vSphere 5.5. The maximum number of vCPUs that can be allocated depends on the number of logical cores the physical hardware has. Another factor is the vSphere licensing tier; only Enterprise Plus licensing allows for 64 vCPUs. The VMkernel includes a CPU scheduler that dynamically schedules vCPUs on the ESXi host's physical processors.

The VMkernel scheduler, when making scheduling decisions, considers socket-core-thread topology. A socket is a single, integrated circuit package that has one or more physical processor cores. Each core has one or more logical processors, also known as threads. If hyperthreading is enabled on the host, then ESXi is capable of executing two threads, or sets of instructions, simultaneously. Effectively, hyperthreading provides more logical CPUs to ESXi on which vCPUs can be scheduled, providing more scheduler throughput. However, keep in mind that hyperthreading does not double a core's power. During times of CPU contention, when VMs are competing for resources, the VMkernel timeslices the physical processors across all virtual machines to ensure that each VM runs as if it had the specified number of vCPUs.
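The timeslicing behavior under contention can be illustrated with a toy model. This is not the VMkernel scheduler (which uses a far more sophisticated proportional-share algorithm with reservations, limits, and shares); it simply shows equal division of physical CPU time across competing vCPUs:

```python
# Toy model (not the VMkernel) of timeslicing physical CPUs across
# vCPUs during contention: each runnable vCPU receives an equal share
# of physical CPU time per scheduling interval.

def timeslice(physical_cpus: int, vms: dict, interval_ms: int = 100) -> dict:
    """vms maps VM name -> vCPU count; returns CPU-ms granted per VM."""
    total_vcpus = sum(vms.values())
    capacity = physical_cpus * interval_ms  # total CPU-ms available
    return {name: capacity * vcpus / total_vcpus
            for name, vcpus in vms.items()}

# 4 physical cores shared by 3 VMs demanding 8 vCPUs in total:
shares = timeslice(4, {"web": 2, "db": 4, "app": 2})
print(shares)  # {'web': 100.0, 'db': 200.0, 'app': 100.0}
```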

VMware vSphere Virtual Symmetric Multiprocessing (SMP) is what allows the virtual machines to be configured with up to 64 virtual CPUs, which allows a larger CPU workload to run on an ESXi host. Though most supported guest operating systems are multiprocessor aware, many guest OSes and applications do not need and are not enhanced by having multiple vCPUs. Check vendor documentation for operating system and application requirements before configuring SMP virtual machines.


Memory

In a physical architecture, an operating system assumes that it owns all physical memory in the server, which is a correct assumption. A guest operating system in a virtual architecture also makes this assumption but it does not, in fact, own all of the physical memory. A guest operating system in a virtual machine uses a contiguous virtual address space that is created by ESXi as its configured memory. The following image displays a representation of a virtual machine’s memory:

Virtual memory is a well-known technique that creates this contiguous virtual address space, allowing the hardware and operating system to handle the address translation between the physical and virtual address spaces. Since each virtual machine has its own contiguous virtual address space, this allows ESXi to run more than one virtual machine at the same time. The virtual machine’s memory is protected against access from other virtual machines.

This effectively results in three layers of virtual memory in ESXi: physical memory, guest operating system physical memory, and guest operating system virtual memory. The VMkernel presents a portion of physical host memory to the virtual machine as its guest operating system physical memory. The guest operating system presents the virtual memory to the applications.
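The two translations behind these three layers can be sketched as follows. This is a simplified illustration, not how ESXi is implemented; real systems use hardware page tables (and hardware-assisted MMU virtualization), whereas plain dictionaries stand in for them here:

```python
# Simplified sketch of the two address translations behind the three
# memory layers: guest virtual -> guest "physical" -> host physical.
# Dicts map 4 KB page base addresses; real ESXi uses hardware page tables.

guest_page_table = {0x1000: 0x5000}  # guest virtual page -> guest physical page
host_page_table  = {0x5000: 0x9000}  # guest physical page -> host physical page

def translate(guest_virtual_addr: int) -> int:
    page, offset = guest_virtual_addr & ~0xFFF, guest_virtual_addr & 0xFFF
    guest_physical = guest_page_table[page] | offset      # first translation
    gp_page, gp_off = guest_physical & ~0xFFF, guest_physical & 0xFFF
    return host_page_table[gp_page] | gp_off              # second translation

print(hex(translate(0x1234)))  # 0x9234
```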

The virtual machine is configured with a memory size; this is the amount of memory the guest OS is told it has available. A virtual machine will not necessarily use the entire memory size; it only uses what is needed at the time by the guest OS and applications. However, a VM cannot access more memory than the configured memory size. A default memory size is provided by vSphere when creating the virtual machine. It is important to know the memory needs of the application and guest operating system being virtualized so that the virtual machine’s memory can be sized accordingly.


Network

There are two key components with virtual networking: the virtual switch and virtual Ethernet adapters. A virtual machine can be configured with up to ten virtual Ethernet adapters, called vNICs. The following image displays a representation of a virtual machine’s vNIC:

Virtual network switching happens in software at the vSwitch level until a frame reaches an uplink (a physical adapter) and exits the ESXi host onto the physical network. Virtual networks exist for virtual devices; all communication between virtual machines and the external world (the physical network) goes through vNetwork standard switches or vNetwork distributed switches.

Virtual networks operate on layer 2, data link, of the OSI model. A virtual switch is similar to a physical Ethernet switch in many ways. For example, virtual switches support the standard VLAN (802.1Q) implementation and have a forwarding table, like a physical switch. An ESXi host may contain more than one virtual switch. Each virtual switch is capable of binding multiple vmnics together in a network interface card (NIC) team, which offers greater availability to the virtual machines using the virtual switch.

There are two connection types available on a virtual switch: a port group and a VMkernel port. Virtual machines are connected to port groups on a virtual switch, allowing access to network resources. VMkernel ports provide a network service to the ESXi host to include IP storage, management, vMotion, and so on. Each VMkernel port must be configured with its own IP address and network mask. The port groups and VMkernel ports reside on a virtual switch and connect to the physical network through the physical Ethernet adapters known as vmnics. If uplinks (vmnics) are associated with a virtual switch, then the virtual machines connected to a port group on this virtual switch will be able to access the physical network.
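The layer-2 forwarding behavior mentioned above (a virtual switch has a forwarding table, like a physical switch) can be sketched with a toy MAC-learning switch. This is an illustration of the general technique, not VMware's implementation—a vSwitch notably does not need to learn MACs for its own VMs, since it knows their vNICs authoritatively:

```python
# Toy layer-2 forwarding table: a switch learns which port a source MAC
# lives on, forwards frames to the known destination port, and floods
# frames whose destination is unknown.

class ToyVSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port      # learn the source location
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}   # forward to the known port
        return self.ports - {in_port}          # flood unknown destination

sw = ToyVSwitch(["vm1", "vm2", "uplink"])
print(sorted(sw.receive("vm1", "aa:aa", "bb:bb")))  # ['uplink', 'vm2'] (flooded)
sw.receive("vm2", "bb:bb", "aa:aa")                 # switch learns bb:bb
print(sorted(sw.receive("vm1", "aa:aa", "bb:bb")))  # ['vm2']
```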


Storage

In a non-virtualized environment, physical servers connect directly to storage, either to an external storage array or to hard disk arrays internal to the server chassis. The issue with this configuration is that a single server expects total ownership of the physical device, tying an entire disk drive to one server. Sharing storage resources in non-virtualized environments can require complex filesystems and migration to file-based Network Attached Storage (NAS) or Storage Area Networks (SAN). The following image displays a representation of a virtual disk:

Shared storage is a foundational technology that allows many things to happen in a virtual environment (High Availability, Distributed Resource Scheduler, and so on). Virtual machines are encapsulated in a set of discrete files stored on a datastore. This encapsulation makes the VMs portable and easy to be cloned or backed up. For each virtual machine, there is a directory on the datastore that contains all of the VM’s files. A datastore is a generic term for a container that holds files as well as .iso images and floppy images. It can be formatted with VMware’s Virtual Machine File System (VMFS) or can use NFS. Both datastore types can be accessed across multiple ESXi hosts.
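The "set of discrete files" that encapsulates a VM can be illustrated as follows. The file names are typical examples, not taken from this text, and the VM name is invented:

```python
# Illustrative listing of the kinds of files found in a VM's datastore
# directory; "myvm" is a hypothetical virtual machine name.

vm_files = {
    "myvm.vmx":       "virtual machine configuration",
    "myvm.vmdk":      "virtual disk descriptor",
    "myvm-flat.vmdk": "virtual disk data",
    "myvm.nvram":     "BIOS settings",
    "myvm.vmsd":      "snapshot metadata",
    "vmware.log":     "virtual machine log",
}
for name, role in sorted(vm_files.items()):
    print(f"{name:16} {role}")
```

Because the whole directory travels with the VM, copying or backing it up captures the complete machine state.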

VMFS is a high-performance, clustered filesystem devised for virtual machines that allows multiple physical servers to read and write to the same storage simultaneously. VMFS is designed, constructed, and optimized for virtualization. The newest version, VMFS-5, exclusively uses a 1 MB block size, which is good for large files, while also having an 8 KB subblock allocation for writing small files such as logs. VMFS-5 can have datastores as large as 64 TB. ESXi hosts use an on-disk locking mechanism to prevent other hosts accessing the same storage from writing to a VM’s files, which helps prevent corruption.
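The effect of the 1 MB block and 8 KB subblock sizes can be shown with back-of-the-envelope math. This is a simplified sketch—actual VMFS-5 allocation has further refinements (for example, very small files can live in the file descriptor itself)—but it captures why subblocks matter for small files:

```python
# Back-of-the-envelope sketch of VMFS-5 allocation: large files consume
# 1 MB blocks; a small file can fit in a single 8 KB subblock instead
# of wasting a whole 1 MB block.

BLOCK = 1 * 1024 * 1024   # 1 MB file block
SUBBLOCK = 8 * 1024       # 8 KB subblock for small files

def allocated_bytes(file_size: int) -> int:
    if file_size <= SUBBLOCK:
        return SUBBLOCK                   # small file fits one subblock
    blocks = -(-file_size // BLOCK)       # ceiling division
    return blocks * BLOCK

print(allocated_bytes(2_000))      # 8192: a 2 KB log uses one subblock
print(allocated_bytes(5_500_000))  # 6291456: ~5.2 MB rounds up to 6 blocks
```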

Several storage protocols can be used to access and interface with VMFS datastores; these include Fibre Channel, Fibre Channel over Ethernet, iSCSI, and direct attached storage. NFS can also be used to create a datastore. VMFS datastores can be dynamically expanded, allowing the shared storage pool to grow with no downtime.

vSphere significantly simplifies accessing storage from the guest OS of the VM. The virtual hardware presented to the guest operating system includes a set of familiar SCSI and IDE controllers; this way the guest OS sees a simple physical disk attached via a common controller. Presenting a virtualized storage view to the virtual machine’s guest OS has advantages such as expanded support and access, improved efficiency, and easier storage management.

