
In this article by Victor Wu and Eagle Huang, authors of the book Mastering VMware vSphere Storage, we will learn that SAN storage is a key component of a VMware vSphere environment. We can choose different vendors and types of SAN storage to deploy in a VMware vSphere environment, and the advanced settings of each type of storage can affect virtual machine performance. For example, FC and iSCSI SAN storage are configured differently in a VMware vSphere environment: host connectivity to Fibre Channel storage is provided by a Host Bus Adapter (HBA), while host connectivity to iSCSI storage uses the TCP/IP networking protocol. We first need to understand these storage concepts; then we can optimize storage performance in a VMware vSphere environment.

In this article, you will learn these topics:

  • What the vSphere storage APIs for Array Integration (VAAI) and Storage Awareness (VASA) are
  • The virtual machine storage profile
  • VMware vSphere Storage DRS and VMware vSphere Storage I/O Control


vSphere storage APIs for array integration and storage awareness

VMware vMotion is a key feature of vSphere hosts, and an ESXi host cannot provide the vMotion feature without shared SAN storage. SAN storage is therefore a key component of a VMware vSphere environment. In large-scale virtualization environments, many virtual machines are stored on SAN storage. When a VMware administrator clones a virtual machine or migrates a virtual machine to another ESXi host with vMotion, the operation consumes resources on both the ESXi host and the SAN storage.

vSphere 4.1 and later versions support VAAI. The vSphere Storage APIs are used by storage vendors to provide hardware acceleration and offload vSphere I/O operations to the storage array. These APIs can reduce the resource overhead on ESXi hosts and improve the performance of host operations such as vMotion, virtual machine cloning, and virtual machine creation. VAAI has two APIs: the hardware acceleration API and the array thin provisioning API.

The hardware acceleration API integrates with VMware vSphere to offload storage operations to the array and reduce the CPU overhead on the ESXi host.

The following table lists the features of the hardware acceleration API for block and NAS:

| Array integration | Feature | Description |
| --- | --- | --- |
| Block | Full copy | This offloads block clone or copy operations to the array. |
| Block | Block zeroing | This is also called write same. When you provision an eagerzeroedthick VMDK, the SCSI command is issued to write zeroes to disk. |
| Block | Atomic Test & Set (ATS) | This is a lock mechanism that prevents other ESXi hosts from updating the same VMFS metadata. |
| NAS | Full file clone | This is similar to Extended Copy (XCOPY) hardware acceleration. |
| NAS | Extended statistics | This reports accurate space usage for virtual disks on the NAS data store. |
| NAS | Reserve space | This allocates space for a virtual disk in thick format. |
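A quick way to see which of these primitives a given device supports is to query the VAAI status from the ESXi shell. The following is a minimal sketch using esxcli; the naa identifier is a placeholder that you would replace with a device listed on your own host:

```
# List the storage devices on this host to find the device identifier (naa.*)
esxcli storage core device list

# Show the VAAI primitive support status for one device
# (the naa identifier below is a placeholder)
esxcli storage core device vaai status get --device=naa.60060160455025001234567890abcdef
```

The output reports the ATS, Clone, Zero, and Delete status for the device, which correspond to the block primitives in the preceding table and to the UNMAP capability used by the array thin provisioning API.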

The array thin provisioning API is used to monitor the space that ESXi data stores consume on thin-provisioned LUNs. It helps prevent the LUN from running out of physical space and allows disk space to be reclaimed. For example, a 3 TB LUN may be presented to the ESXi host even though the array has only 2 TB of physical capacity behind it; the host still sees 3 TB. The API streamlines monitoring of the LUN's space usage so that you can avoid running out of physical space.

When vSphere administrators delete or remove files from a data store on a thin-provisioned LUN, the storage array can reclaim the freed space at the block level.

VAAI features are supported in vSphere 4.1 and later.

In vSphere 5.5, you can reclaim dead space on a thin-provisioned LUN using esxcli.
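A minimal sketch of that reclamation from the ESXi shell is shown below; the data store name MyDatastore and the naa identifier are placeholders:

```
# Check whether the device behind the data store is thin provisioned
# (look for the Thin Provisioning Status field in the output)
esxcli storage core device list --device=naa.60060160455025001234567890abcdef

# Reclaim dead space on a thin-provisioned VMFS data store (vSphere 5.5)
# --reclaim-unit is the number of VMFS blocks to unmap per iteration
esxcli storage vmfs unmap --volume-label=MyDatastore --reclaim-unit=200
```

The unmap operation runs in iterations and generates additional I/O on the array, so it is usually scheduled outside peak hours.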

VMware VASA is a set of APIs that allows a storage vendor to publish information about its storage array to VMware vCenter Server. This information includes storage capabilities, the state of physical storage devices, and so on. vCenter Server collects the information from the storage array through a software component called the VASA provider (or storage provider), which is supplied by the storage array vendor. A VMware administrator can then view the information in VMware vSphere Client or VMware vSphere Web Client. The following diagram shows the architecture of VASA with vCenter Server, using the example of a VMware administrator requesting the creation of a data store on a VMware ESXi server.

The architecture has three main components: the storage array, the storage provider, and VMware vCenter Server.

The following is the procedure to add the storage provider to vCenter Server:

  1. Log in to vCenter Server using vSphere Client.
  2. Go to Home | Storage Providers.
  3. Click on the Add button.
  4. Enter the storage provider name, URL, and credentials.

Virtual machine storage profile

The storage provider helps the vSphere administrator see the state of the physical storage devices and the capabilities of the storage on which virtual machines are located. It also helps in choosing the right storage, in terms of performance and space, by using virtual machine storage policies. A virtual machine storage policy helps ensure that a virtual machine is placed on storage that guarantees a specified level of performance or capacity, for example, an SSD/SAS/NL-SAS data store, spindle I/O, or redundancy. Before you define a storage policy, you need to specify the storage requirements of the applications that run on the virtual machine.

There are two types of storage capability: storage-vendor-specific storage capability and user-defined storage capability. Storage-vendor-specific storage capability comes from the storage array; the storage vendor provider informs vCenter Server that it can guarantee the use of certain storage features, and vCenter Server assigns this vendor-specific storage capability to each ESXi data store. User-defined storage capability is capability that you define yourself and assign to each ESXi data store through a storage profile.

In vSphere 5.1 and 5.5, a storage policy is called a VM storage profile.

A virtual machine storage policy can include one or more storage capabilities and can be assigned to one or more virtual machines. A virtual machine can then be checked for storage compliance to verify that it is placed on compliant storage. When you migrate, create, or clone a virtual machine, you can select the storage policy and apply it to that machine. The following procedure shows how to create a storage policy and apply it to a virtual machine in vSphere 5.1 using a user-defined storage capability.

The vSphere ESXi host requires the Enterprise Plus license edition to enable the VM storage profile feature.

The following procedure adds a storage profile to vCenter Server:

  1. Log in to vCenter Server using vSphere Client.
  2. Click on the Home button in the top bar, and choose the VM Storage Profiles button under Management.
  3. Click on the Manage Storage Capabilities button to create user-defined storage capability.
  4. Click on the Add button to create the name of the storage capability, for example, SSD Storage, SAS Storage, or NL-SAS Storage. Then click on the Close button.
  5. Click on the Create VM Storage Profile button to create the storage policy.
  6. Input the name of the VM storage profile, as shown in the following screenshot, and then click on the Next button to select the user-defined storage capability, which is defined in step 4. Click on the Finish button.
  7. Assign the user-defined storage capability to your specified ESXi data store. Right-click on the data store that you plan to assign the user-defined storage capability to. This capability is defined in step 4.
  8. After creating the VM storage profile, click on the Enable VM Storage Profiles button. Then click on the Enable button to enable the profiles. The following screenshot shows Enable VM Storage Profiles:
  9. After enabling the VM storage profile, you can see VM Storage Profile Status as Enabled and Licensing Status as Licensed, as shown in this screenshot:
  10. We have successfully created the VM storage profile. Now we have to associate the VM storage profile with a virtual machine. Right-click on a virtual machine that you plan to apply to the VM storage profile, choose VM Storage Profile, and then choose Manage Profiles.
  11. From the VM Storage Profile drop-down menu, select your profile. You can then click on the Propagate to disks button to associate all virtual disks with the profile, or decide manually which virtual disks to associate with it. Click on OK.
  12. Finally, you need to check the compliance of VM Storage Profile on this virtual machine. Click on the Home button in the top bar. Then choose the VM Storage Profiles button under Management. Go to Virtual Machines and click on the Check Compliance Now button. The Compliance Status will display Compliant after compliance checking, as follows:

Pluggable Storage Architecture (PSA) sits in the SCSI middle layer of the VMkernel storage stack. PSA allows third-party storage vendors to use their own failover and load balancing techniques for their specific storage arrays. A VMware ESXi host uses a multipathing plugin to control the ownership of device paths and LUNs. The VMware default Multipathing Plugin (MPP) is the VMware Native Multipathing Plugin (NMP), which includes two subplugins as components: the Storage Array Type Plugin (SATP) and the Path Selection Plugin (PSP). The SATP handles path failover for a given storage array, and the PSP selects the physical path used to issue an I/O request to the storage array. The following diagram shows the architecture of PSA:
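You can see how these plugins are assigned on a host from the ESXi shell. The following is a minimal sketch using esxcli:

```
# List the multipathing plugins (for example, NMP) loaded on this host
esxcli storage core plugin list

# Show the SATP and PSP that the NMP has assigned to each storage device
esxcli storage nmp device list

# List the SATPs and PSPs available on this host
esxcli storage nmp satp list
esxcli storage nmp psp list
```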

This table lists the operation tasks of PSA and NMP in the ESXi host:

|  | PSA | NMP |
| --- | --- | --- |
| Operation tasks | Discovers the physical paths | Manages the physical paths |
|  | Handles I/O requests to the physical HBA adapters and logical devices | Creates, registers, and deregisters logical devices |
|  | Uses predefined claim rules to control storage devices | Selects an optimal physical path for the request |

The following is an example of operation of PSA in a VMkernel storage stack:

  1. The virtual machine sends out an I/O request to a logical device that is managed by the VMware NMP.
  2. The NMP calls the PSP that is assigned to this logical device.
  3. The PSP selects a suitable physical path to send the I/O request.
  4. When the I/O operation is completed successfully, the NMP reports that the I/O operation is complete. If the I/O operation reports an error, the NMP calls the SATP.
  5. The SATP fails over to the new active path.
  6. The PSP selects a new active path from all available paths and continues the I/O operation.

The following diagram shows the operation of PSA:
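You can also observe the paths and their states directly from the ESXi shell, for example before and after a path failover. A minimal sketch (the naa identifier is a placeholder):

```
# List all physical paths on the host and their states (active, standby, dead)
esxcli storage core path list

# List only the paths for a single device
esxcli storage core path list --device=naa.60060160455025001234567890abcdef
```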

VMware vSphere provides three options for the path selection policy: Most Recently Used (MRU), Fixed, and Round Robin (RR). The following table lists the advantages and disadvantages of each policy:

| Path selection | Description | Advantage | Disadvantage |
| --- | --- | --- | --- |
| MRU | The ESXi host selects the first working path found at system boot time. If this path becomes unavailable, the ESXi host changes to another active path. | You can select your preferred path manually in the ESXi host. | The ESXi host does not revert to the original path when that path becomes available again. |
| Fixed | You can select the preferred path manually. | The ESXi host can revert to the original path when the preferred path becomes available again. | If the ESXi host cannot use the preferred path, it selects an available alternative path at random. |
| RR | The ESXi host uses automatic path selection, rotating I/O across all available paths. | Storage I/O is spread across all available paths, enabling load balancing across paths. | The storage array is required to support ALUA mode. You cannot designate a preferred path because storage I/O rotates across all available paths. |

The following is the procedure for changing the path selection policy on an ESXi host (a command-line alternative using esxcli follows the steps):

  1. Log in to vCenter Server using vSphere Client.
  2. Go to the configuration of your selected ESXi host, choose the data store that you want to configure, and click on the Properties… button.
  3. Click on the Manage Paths… button.
  4. Select the desired policy from the path selection drop-down menu and click on the Change button.
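The same change can be made from the ESXi shell. The following is a minimal sketch; the naa identifier is a placeholder for a device on your host:

```
# Show the current path selection policy assigned to a device
esxcli storage nmp device list --device=naa.60060160455025001234567890abcdef

# Change the path selection policy for that device to Round Robin
esxcli storage nmp device set --device=naa.60060160455025001234567890abcdef --psp=VMW_PSP_RR

# Optionally, change the default PSP that a given SATP assigns to newly claimed devices
esxcli storage nmp satp set --satp=VMW_SATP_DEFAULT_AA --default-psp=VMW_PSP_RR
```

Check your storage vendor's best practices before changing the default PSP for a SATP, because that setting applies to every device claimed by the SATP.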

If you plan to deploy a third-party MPP on your ESXi host, you need to follow the storage vendor's installation instructions. For example, EMC PowerPath/VE for VMware is path management software for VMware vSphere and Microsoft Hyper-V servers that provides load balancing and path failover features.

VMware vSphere Storage DRS

VMware vSphere Storage DRS (SDRS) manages the placement of virtual machines across the data stores in a data store cluster. Based on storage capacity and I/O latency, it uses VMware Storage vMotion to migrate virtual machines so that the data stores stay balanced; it aggregates storage resources and enables both the initial placement of virtual machine disks (VMDKs) and load balancing of existing workloads. What is a data store cluster? It is a collection of ESXi data stores grouped together, and it is the object on which vSphere SDRS is enabled. SDRS can work in two modes: manual mode and fully automated mode. If you enable SDRS in your environment, then when a vSphere administrator creates or migrates a virtual machine, SDRS places all the files (VMDKs) of that virtual machine on the same data store or on different data stores in the cluster, according to the SDRS affinity or anti-affinity rules.

The VMware ESXi host cluster has two key features: VMware vSphere High Availability (HA) and VMware vSphere Distributed Resource Scheduler (DRS). SDRS is different from host cluster DRS: the latter balances virtual machines across ESXi hosts based on memory and CPU usage, whereas SDRS balances virtual machines across SAN storage (ESXi data stores) based on storage capacity and IOPS. The following table lists the differences between the SDRS affinity and anti-affinity rules:

| Name of SDRS rule | Description |
| --- | --- |
| VMDK affinity rules | This is the default SDRS rule for all virtual machines. It keeps each virtual machine's VMDKs together on the same ESXi data store. |
| VMDK anti-affinity rules | This keeps a virtual machine's VMDKs on different ESXi data stores. You can apply this rule to all of a virtual machine's VMDKs or only to specific ones. |
| VM anti-affinity rules | This keeps virtual machines on different ESXi data stores. This rule is similar to the ESXi host DRS anti-affinity rules. |

The following is the procedure to create a data store cluster with Storage DRS in vSphere 5:

  1. Log in to vCenter Server using vSphere Client.
  2. Go to home and click on the Datastores and Datastore Clusters button. Right-click on the data center and choose New Datastore Cluster.
  3. Input the name of the data store cluster and then click on the Next button.
  4. Choose the Storage DRS mode: Manual Mode or Fully Automated Mode.

    Manual Mode: vCenter Server makes placement and migration recommendations, which the user applies manually.
    Fully Automated Mode: Based on the runtime rules, placement and migration of virtual machines are executed automatically.

  5. Set up SDRS Runtime Rules. Then click on the Next button.

    Enable I/O metric for SDRS recommendations is used to enable I/O load balancing.

    Utilized Space is the percentage of consumed space allowed before Storage DRS executes an action.

    I/O Latency is the latency threshold allowed before Storage DRS executes an action. This setting takes effect only if the Enable I/O metric for SDRS recommendations checkbox is selected.

    The No recommendations until utilization difference between source and destination is setting configures the minimum space utilization difference between the source and destination data stores before a recommendation is made.

    I/O imbalance threshold defines how aggressively IOPS load balancing is performed. This setting takes effect only if the Enable I/O metric for SDRS recommendations checkbox is selected.

  6. Select the ESXi hosts that will use this data store cluster. Then click on the Next button.
  7. Select the data store that is required to join the data store cluster, and click on the Next button to complete.
  8. After creating SDRS, go to the vSphere Storage DRS panel on the Summary tab of the data store cluster. You can see that Storage DRS is Enabled.
  9. On the Storage DRS tab on the data store cluster, it displays the recommendation, placement, and reasons. Click on the Apply Recommendations button if you want to apply the recommendations.

    Click on the Run Storage DRS button if you want to refresh the recommendations.

VMware vSphere Storage I/O Control

What is VMware vSphere Storage I/O Control? It is used to share and limit storage I/O resources, for example, IOPS. You can control the amount of storage IOPS allocated to each virtual machine. If a certain virtual machine needs more storage I/O resources, vSphere Storage I/O Control can ensure that this virtual machine gets more storage I/O than other virtual machines. The following example shows the difference between an environment with vSphere Storage I/O Control enabled and one without it:

In the diagram without vSphere Storage I/O Control, the VMware ESXi host cluster leaves I/O allocation unmanaged. VM 2 and VM 5 need more IOPS, but they can obtain only a small share of the I/O resources. In contrast, VM 1 and VM 3 are allocated a large share of the I/O resources even though they actually need only a small amount. In this case, the storage resources are wasted and overprovisioned.
In the diagram with vSphere Storage I/O Control enabled on the ESXi host cluster, VM 2 and VM 5 need more IOPS and can now obtain a large share of the I/O resources. VM 1, VM 3, and VM 4 need only a small amount of I/O resources, and these three VMs are now allocated a correspondingly small share of IOPS. Enabling Storage I/O Control therefore helps reduce waste and overprovisioning of storage resources.

When you enable VMware vSphere Storage DRS, vSphere Storage I/O Control is automatically enabled on the data stores in the data store cluster.

The following is the procedure to enable vSphere Storage I/O Control on an ESXi data store and to set up storage I/O shares and limits using vSphere Client 5:

  1. Log in to vCenter Server using vSphere Client.
  2. Go to the Configuration tab of the ESXi host, select the data store, and then click on the Properties… button.
  3. Select Enabled under Storage I/O Control, and click on the Close button.
  4. After Storage I/O Control is enabled, you can set up the storage I/O shares and limits on the virtual machine. Right-click on the virtual machine and select Edit Settings.
  5. Click on the Resources tab in the virtual machine properties box, and select Disk. You can individually set each virtual disk’s Shares and Limit field.

By default, all virtual machine disk shares are set to Normal with unlimited IOPS.

Summary

In this article, you learned what VAAI and VASA are, how a vSphere administrator configures a storage profile in vCenter Server and assigns it to an ESXi data store, and the benefits of vSphere Storage I/O Control and vSphere Storage DRS.

This knowledge also helps you troubleshoot a storage performance problem on a vSphere host and find its root cause.
