
View Storage Accelerator

View Storage Accelerator enables View to use the vSphere CBRC feature first introduced with vSphere 5.0. CBRC uses up to 2 GB of RAM on the vSphere host as a read-only cache for View desktop data. CBRC can be enabled for both full clone and linked clone desktop pools, with linked clone desktops having the additional option of caching only the operating system (OS) disk or both the OS disk and the persistent data disk.

When View desktops are deployed, and at configured intervals after that, CBRC analyzes each View desktop VMDK and generates a digest file containing a hash value for each block. When a View desktop performs a read operation, the CBRC filter on the vSphere host consults the digest hash table and requests the smallest block required to complete the read. This block and its associated hash key chunk are then placed in the CBRC cache on the vSphere host. Since the desktop VMDK contents are hashed at the block level, and View desktops are typically based on similar master images, CBRC can reuse cached blocks to satisfy subsequent read requests for data with the same hash value. In effect, CBRC is a deduplicated cache.
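
To illustrate the deduplication effect, here is a minimal Python sketch (not VMware code; the block size, hash algorithm, and data structures are simplified assumptions) that builds a per-disk digest of block hashes and serves reads from a single hash-keyed cache, so identical blocks from different desktops consume only one cache entry:

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size, for illustration only


def build_digest(vmdk_blocks):
    """Hash every block of a (simulated) VMDK, mimicking the digest file."""
    return [hashlib.sha1(block).hexdigest() for block in vmdk_blocks]


class DedupReadCache:
    """Toy hash-keyed read cache: one entry serves every block with that hash."""

    def __init__(self):
        self.cache = {}   # hash -> block data
        self.hits = 0
        self.misses = 0

    def read(self, block, digest, index):
        key = digest[index]
        if key in self.cache:
            self.hits += 1
        else:
            self.misses += 1          # fetch from "storage" on a miss
            self.cache[key] = block
        return self.cache[key]


# Two desktops cloned from the same master image share most blocks.
master = [bytes([i]) * BLOCK_SIZE for i in range(8)]
desktop_a = list(master)
desktop_b = list(master)
desktop_b[7] = b"\xff" * BLOCK_SIZE   # one block has diverged after use

cache = DedupReadCache()
for disk in (desktop_a, desktop_b):
    digest = build_digest(disk)
    for i, block in enumerate(disk):
        cache.read(block, digest, i)

print(f"hits={cache.hits} misses={cache.misses} cached_blocks={len(cache.cache)}")
# Only 9 unique blocks are cached even though 16 block reads were issued.
```

Because both simulated desktops derive from the same master image, the second desktop's reads are served almost entirely from blocks cached while reading the first.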

The following figure shows the vSphere CBRC filter as it sits between the host CBRC cache and the View desktop digest and VMDK files.

Since desktop VMDK contents are subject to change over time, View generates a new digest file of each desktop on a regular schedule. By default, this schedule is every 7 days, but that value can be changed as needed using the View Manager Admin console. Digest generation can be I/O intensive, so this operation should not be performed during periods of heavy desktop use.

View Storage Accelerator provides the most benefit during storm scenarios, such as desktop boot storms, user logon storms, or any other read-heavy desktop I/O operation initiated by a large number of desktops. As such, it is unlikely that View Storage Accelerator will actually reduce primary storage needs, but instead will ensure that desktop performance is maintained during these I/O intensive events.
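
As a rough back-of-the-envelope illustration of why storm scenarios benefit most, consider how many boot-time reads hit OS blocks common to every desktop; every figure in the snippet below is an assumption, not a measured value:

```python
# Illustrative boot-storm estimate; all figures here are assumptions.
desktops = 500
reads_per_boot = 4000          # assumed read operations issued by one desktop boot
shared_read_fraction = 0.80    # assumed share of reads hitting common OS blocks

total_reads = desktops * reads_per_boot
# Reads against common blocks are served from the host-side cache once the
# first desktops on each host warm it; the remainder still hit the array.
reads_served_from_cache = int(total_reads * shared_read_fraction)
reads_hitting_storage = total_reads - reads_served_from_cache

print(f"total boot reads:        {total_reads:,}")
print(f"served from host cache:  {reads_served_from_cache:,} (approx.)")
print(f"reaching shared storage: {reads_hitting_storage:,}")
```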

Additional information about View Storage Accelerator is available in the VMware document View Storage Accelerator in VMware View 5.1 (http://www.vmware.com/files/pdf/techpaper/vmware-view-storage-accelerator-host-cachingcontent-based-read-cache.pdf). The information in the referenced document is still current, even if the version of View it references is not.

Tiered storage for View linked clones

To enable a more granular control over the storage architecture of linked clone desktops, View allows us to specify dedicated datastores for each of the following disks:

  • User persistent data disk
  • OS disk (which includes the disposable data disk, if configured)
  • Replica disk

It is not necessary to separate each of these disks, but in the following two sections we will outline why we might consider doing so.
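
Before looking at the individual disk types, a quick capacity sketch shows why the tiers have very different storage profiles. The snippet below is purely illustrative; the pool size, master image size, and per-desktop growth figures are assumptions rather than sizing guidance:

```python
# Hypothetical linked clone pool; all sizes are illustrative assumptions (GB).
desktops = 500
master_image_size = 40           # a replica is a full copy of the master image
os_delta_per_desktop = 5         # assumed linked clone OS disk growth
persistent_disk_per_desktop = 2  # assumed user persistent data disk size

tiers = {
    "replica datastore": master_image_size,  # one replica if a dedicated datastore is used
    "OS disk datastores": desktops * os_delta_per_desktop,
    "persistent data disk datastores": desktops * persistent_disk_per_desktop,
}

for tier, gb in tiers.items():
    print(f"{tier:32} ~{gb:,} GB")
```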

User persistent data disk

The optional linked clone persistent data disk contains user personal data, and its contents are maintained even if the desktop is refreshed or recomposed. Additionally, the disk is associated with an individual user within View, and can be attached to a new View desktop if required. As such, an organization that does not back up its linked clone desktops may at the very least consider backing up the user persistent data disks.

Due to the potential importance of the persistent data disks, organizations may wish to apply more protections to them than they would to the rest of the View desktop. View storage tiering is one way we can accomplish this, as we could place these disks on storage that has additional protections on it, such as replication to a secondary location, or even regular storage snapshots. These are just a sampling of the reasons an organization may want to separate the user persistent data disks.

Data replication or snapshots are typically not required for linked clone OS disks or replica disks as View does not support the manual recreation of linked clone desktops in the event of a disaster. Only the user persistent data disks can be reused if the desktop needs to be recreated from scratch.

Replica disks

One of the primary reasons an organization would want to separate linked clone replica disks onto dedicated datastores has to do with the architecture of View itself. When deploying a linked clone desktop pool, if we do not specify a dedicated datastore for the replica disk, View will create a replica disk on every linked clone datastore in the pool.

The reason we may not want a replica disk on every linked clone datastore has to do with the storage architecture. Since replica disks are shared between each desktop, their contents are often among the first to be promoted into any cache tiers that exist, particularly those within the storage infrastructure. If we had specified a single datastore for the replica, meaning that only one replica would be created, the storage platform would only need to cache data from that disk. If our storage array cache was not capable of deduplication, and we had multiple replica disks, that same array would now be required to cache the content from several View replica disks. Given that the amount of cache on most storage arrays is limited, the requirement to cache more replica disk data than is necessary due to the View linked clone tiering feature may exhaust the cache and thus decrease the array’s performance.

Using View linked clone tiering, we can reduce the number of replica disks we need, which may reduce the overall utilization of the storage array cache, freeing it up to cache other critical View desktop data.
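
A simple comparison makes the cache argument concrete. The sketch below contrasts the cache footprint of a single dedicated replica with one replica per linked clone datastore, assuming the array cache cannot deduplicate; the working set size, datastore count, and cache size are assumptions:

```python
# Illustrative array-cache footprint; the figures are assumptions.
linked_clone_datastores = 8
replica_working_set_gb = 10     # assumed hot portion of one replica disk
array_cache_gb = 64             # assumed usable array read cache

# Without a dedicated replica datastore, View places a replica on every
# linked clone datastore; a non-deduplicating cache holds each copy separately.
footprint_default = linked_clone_datastores * replica_working_set_gb
footprint_dedicated = replica_working_set_gb   # single dedicated replica datastore

for label, gb in (("replica per datastore", footprint_default),
                  ("dedicated replica datastore", footprint_dedicated)):
    share = gb / array_cache_gb * 100
    print(f"{label:28} ~{gb} GB cached ({share:.0f}% of array cache)")
```

With these example numbers, caching every replica copy would demand more space than the array cache provides, which is exactly the exhaustion scenario described above.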

As each storage array architecture is different, we should consult vendor resources to determine if this is the optimal configuration for the environment. As mentioned previously, if the array cache is capable of deduplication, this change may not be necessary.

VMware currently supports up to 1,000 desktops per replica disk, although View does not enforce this limit when creating desktop pools.
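
Given that support limit, a quick calculation shows the minimum number of replica disks a large deployment would need to stay within it; the pool size below is only an example:

```python
import math

SUPPORTED_DESKTOPS_PER_REPLICA = 1000   # VMware support limit cited above

pool_size = 2500                        # example deployment size
replicas_needed = math.ceil(pool_size / SUPPORTED_DESKTOPS_PER_REPLICA)
print(f"{pool_size} desktops -> at least {replicas_needed} replica disks to stay within the limit")
```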

Summary

In this article, we have discussed the native features of View that impact the storage design, and how they are typically used.
