
This article is written by Peter von Oven, the author of VMware Horizon View Essentials. In this article, we will take a closer look at the design process, then turn to the reference architecture and how to start putting together a design, building out the infrastructure for a production deployment.

Proving the technology – from PoC to production

In this section, we are going to discuss how to approach a VDI project. This is a key piece of work that needs to be completed in the very early stages, and it is somewhat different from how you would typically approach an IT project.

Our starting point is to focus on the end users rather than the IT department. After all, these are the people who will be using the solution on a daily basis and who know what tools they need to get their jobs done. Rather than giving them what you think they need, let’s ask them what they actually need and then, within reason, deliver it. It’s the old saying about not trying to fit a square peg into a round hole: no matter how hard you try, it’s never going to fit. First and foremost, we need to design the technology around the user requirements rather than building a backend infrastructure only to find that it doesn’t deliver what the users require.

Assessment

Once you have built your business case and validated that against your EUC strategy and there is a requirement for delivering a VDI solution, the next stage is to run an assessment. It’s quite fitting that this book is entitled “Essentials”, as this stage of the project is exactly that, and is essential for a successful outcome.

We need to build up a picture of what the current environment looks like, ranging from what applications are being used to the types of access devices. This goes back to the earlier point about giving the users what they need, and the only way to find that out is to conduct an assessment. By doing this, we are creating a baseline. Then, as we move into defining the success criteria and proving the technology, we have the baseline as a reference point to demonstrate how we have improved on current ways of working and delivered on the business case and strategy.

There are a number of tools that can be used in the assessment phase to gather the information required, for example, Liquidware Labs Stratusphere FIT or SysTrack from Lakeside Software.

Don’t forget to actually talk to the users as well, so you are armed with the hard-and-fast facts from an assessment as well as the users’ perspective.

Defining the success criteria

The key objective in defining the success criteria is to document what a “good” solution should look like for the project to succeed and become production-ready.

We need to clearly define the elements that need to function correctly in order to move from proof of concept to proof of technology, and then into a pilot phase before deploying into production. You need to fully document what these elements are and get the end users or other project stakeholders to sign up to them. It’s almost like creating a statement of work with a clearly defined list of tasks.

Another important factor is to ensure that, during this phase of the project, the criteria don’t grow beyond the original scope. By that, we mean additional items should not get added to the success criteria, or at least not without discussion first. It may well transpire that something key was missed; however, if you have conducted your assessment thoroughly, this shouldn’t happen.

Another thing that works well at this stage is to involve the end users. Set up a steering committee or advisory panel by selecting people from different departments to act as sponsors within their area of business. Actively involve them in the testing phases, but get them on board early as well to get their input in shaping the solution.

Too many projects fail when an end user tries something that doesn’t work, even though what they tried is not actually a relevant use case or a critical line-of-business application, and therefore shouldn’t derail the project.

If we have a set of success criteria defined up front that the end users have signed up to, anything outside that criteria is not in scope. If it’s not defined in the document, it should be disregarded as not being part of what success should look like.

Proving the technology

Once the previous steps have been discussed and documented, we should be able to build a picture of what’s driving the project. We will understand what we are trying to achieve and deliver and, based upon hard-and-fast facts from the assessment phase, be able to work out what success should look like. From there, we can move into testing some form of the technology, should that be a requirement.

There are three key stages within the testing cycle to consider, and it might be the case that you don’t need all of them. The three stages we are talking about are as follows:

  • Proof of concept (PoC)
  • Proof of technology (PoT)
  • Pilot

In the next sections, we are briefly going to cover what each of these stages means and why you might or might not need them.

Proof of concept

A proof of concept typically refers to a partial solution, often built on whatever spare hardware is kicking about, that involves a relatively small number of users, usually within the confines of the IT department acting in business roles, to establish whether the system satisfies some aspect of the purpose it was designed for.

Once proven, one of two things happens. The first is that nothing happens, as it was just the IT department playing with technology and there wasn’t a real business driver in the first place. This is usually down to the previous steps not having been defined. In a similar way, a PoC without any success criteria will also fail, as you don’t know exactly what you are setting out to prove.

The second outcome is that the project moves into a pilot phase that we will discuss in a later section. You could consider moving directly into this phase and bypassing the PoC altogether. Maybe a demonstration of the technology would suffice, and using a demo environment over a longer period would show you how the technology works.

Proof of technology

In contrast to the PoC, the objective of a proof of technology is to determine whether or not the proposed solution or technology will integrate into your existing environment and therefore demonstrate compatibility. The objective is to highlight any technical problems specific to your environment, such as how your bespoke systems might integrate.

As with the PoC, a PoT is typically run by the IT department and no business users would be involved. A PoT is purely a technical validation exercise.

Pilot

A pilot refers to what is almost a small-scale rollout of the solution in a production-style environment, targeting a limited scope of the intended final solution. The scope may be limited by the number of users who can access the pilot system, the business processes affected, or the business partners involved.

The purpose of a pilot is to test, often in a production-like environment, whether the system works as it was designed to, while limiting business exposure and risk. It will also touch real users so as to gauge feedback on what would ultimately become a live, production solution. This is a critical step in achieving success, as the users are the ones who have to interact with the system on a daily basis, and the reason why you should set up some form of working group to gather their feedback.

This also mitigates the risk of the project failing: the solution may deliver everything the IT department could ever wish for, but if the first user to log on after go-live reports a bad experience or poor performance, you may as well not have bothered.

The pilot should be carefully scoped, sized, and implemented. We will discuss this in the next section.

The pilot phase

In this section, we are going to discuss the pilot phase in a bit more detail and break it down into four distinct phases. These are important, as the output from the pilot will ultimately shape the design of your production environment.

The following diagram shows the workflow we will follow in defining our project:

[Diagram: the workflow for defining the project]

Phase 1 – pilot design

The pilot infrastructure should be designed on the same hardware platforms on which the production solution is going to be deployed, for example, the same servers and storage. This takes into account any anomalies between platforms and configuration differences that could affect things such as scalability or, more importantly, performance.

Even at the pilot stage, the design is absolutely key, and you should make sure you take the production design into account. Why? Because many pilot solutions end up going straight into production, and more and more users get added above and beyond those scoped for the pilot.

It’s great going live with the solution and not having to go back and rebuild it, but when you start to scale by adding more users and applications, you might hit issues due to the pilot sizing. It may sound obvious, but often with a successful pilot, the users just keep on using it and additional users get added. If it’s only ever going to be a pilot, that’s fine, but keep this in mind and ask the question: if you are planning on taking the pilot straight into production, design it for production.

It is always useful to work from a prerequisite document to understand the different elements that need consideration in the design. Key design elements include:

  • Hardware sizing (servers – CPU, memory, and consolidation ratios)
  • Pool design (based on user segmentation)
  • Storage design (local SSD, SAN, and acceleration technologies)
  • Image creation (rebuild from scratch and optimize for VDI)
  • Network design (load balancing and external access)
  • Antivirus considerations
  • Application delivery (delivering virtually versus installing in core image)
  • User profile management
  • Floating or dedicated desktop assignments
  • Persistent or non-persistent desktop builds (linked clone or full clone)

Once you have all this information, you can start to deploy the pilot.
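As a rough illustration of the hardware-sizing element in the list above, the following Python sketch estimates how many ESXi hosts a pilot might need from a consolidation ratio. The ratio of 80 desktops per host and the single spare host are purely illustrative assumptions, not figures from this article:

```python
import math

def hosts_required(total_users, desktops_per_host, spare_hosts=1):
    """Estimate the ESXi host count for a pilot: divide the user count
    by the consolidation ratio, then add spare capacity (N+1) so a
    host failure doesn't take desktops offline."""
    active_hosts = math.ceil(total_users / desktops_per_host)
    return active_hosts + spare_hosts

# Example: a 200-user pilot at an assumed ratio of 80 desktops per host
print(hosts_required(200, 80))  # 3 active hosts + 1 spare = 4
```

Treat the output as a starting point only; the real consolidation ratio depends on the CPU, memory, and workload profile found during your assessment.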

Phase 2 – pilot deployment

In the deployment phase of the pilot, we are going to start building out the infrastructure, deploying the test users, building the OS images, and then start testing.

Phase 3 – pilot test

The key thing during the testing phase is to work closely with the end users and your sponsors, showing them the solution and how it works, closely monitoring the users, and assessing the solution as it’s being used. This allows you to keep in contact with the users and give them the opportunity to continually provide real-time feedback. It also allows you to answer questions and make adjustments and enhancements on the fly, rather than waiting until the end of the project only to be told it didn’t work or that they simply didn’t understand something.

This then leads us onto the last section, the review.

Phase 4 – pilot review

This final stage sometimes tends to get forgotten. We have deployed the solution, the users have been testing it, and then it ends there for whatever reason. However, there is one very important last thing to do to enable the customer to move to production.

We need to measure the user experience and the IT department’s experience against the success criteria we set out at the start of this process. We need to get customer sign-off and agreement that we have successfully met all the objectives and requirements. If this is not the case, we need to understand the reasons why. Have we missed something in the use case, have the user requirements changed, or is it simply a perception issue?

Whatever the case, we need to cycle round the process again. Go back to the use case, understand and reevaluate the user requirements (what it is that is seemingly failing or not behaving as expected), and then tweak the design or make the required changes and get the users to test the solution again. We need to continue this process until we get acceptance and sign-off; otherwise, we will not get to the final solution deployment phase.

When the project has been signed off after a successful pilot test, there is no reason why you cannot deploy the technology in production.

Now that we have talked about how to prove the technology and successfully demonstrated that it delivers against both our business case and user requirements, in the next sections, we are going to start looking at the design for our production environment.

Designing a Horizon 6.0 architecture

We are going to start this section by looking at the VMware reference architecture for Horizon View 6.0 before we go into more detail around the design considerations, best practice, and then sizing guidelines.

The pod and block reference architecture

VMware has produced a reference architecture model for deploying Horizon View, with the approach being to make it easy to scale the environment by adding set component pieces of infrastructure, known as View blocks. To scale the number of users, you add View blocks up to the maximum configuration of five blocks. This maximum configuration of five View blocks is called a View pod.

The important numbers to remember are that each View block supports a maximum of 2,000 users, and a View pod is made up of up to five View blocks, therefore supporting a maximum of 10,000 users. The View block contains all the infrastructure required to host the virtual desktop machines: appropriately sized ESXi hosts, a vCenter Server, and the associated networking and storage. We will cover the sizing aspects later on in this article. The following diagram shows an individual View block:
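These limits lend themselves to a simple capacity calculation. The following Python sketch (the function name is illustrative) works out how many View blocks and pods a given user count needs, using the 2,000-users-per-block and five-blocks-per-pod maximums quoted above:

```python
import math

USERS_PER_BLOCK = 2000   # maximum users per View block
BLOCKS_PER_POD = 5       # maximum View blocks per View pod

def pod_layout(total_users):
    """Return (pods, blocks) needed for a user count, applying the
    pod and block limits from the reference architecture."""
    blocks = math.ceil(total_users / USERS_PER_BLOCK)
    pods = math.ceil(blocks / BLOCKS_PER_POD)
    return pods, blocks

print(pod_layout(7500))   # (1, 4): four blocks in a single pod
print(pod_layout(12000))  # (2, 6): six blocks spread over two pods
```

Note that each block also implies its own vCenter Server and host cluster, so the block count drives more than just licensing.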

[Diagram: an individual View block]

Apart from having a View block that supports the virtual desktop machines, there is also a management block for the supporting infrastructure components. The management block contains the management elements of Horizon View, such as the connection servers and security servers.

These will also be virtual machines hosted on the vSphere platform but using separate ESXi hosts and vCenter servers from those being used to host the desktops. The following diagram shows a typical View management block:

[Diagram: a typical View management block]

The management block contains the key Horizon View components to support the maximum configuration of 10,000 users or a View pod.

In terms of connection servers, the management block consists of a maximum of seven connection servers. This is often written as 5 + 2, which can be misleading; what it means is that you can have five connection servers plus two that serve as backups to replace a failed server. Each connection server supports one of the five blocks, with the two spares held in reserve in the event of a failure.

As we discussed previously, the View Security Servers are paired with connection servers in order to provide external access to the users. In our example diagram, we have drawn three security servers, meaning that three of the connection servers are configured for external access, while the others serve the internal users only.

In this scenario, the View Connection Servers and View Security Servers are deployed as virtual machines, and are therefore controlled and managed by vCenter. The vCenter Server can run on a virtual machine, or you can use the vCenter Server Appliance. It can also run on a physical Windows server, as it’s just a Windows application.

The entire management infrastructure is hosted on a vSphere cluster that’s separate from the one used to host the virtual desktop machines.

There are a couple of other components that are not shown in the diagram: the databases required for View, such as the events database and the View Composer database.

If we now look at the entire Horizon View pod and block architecture for up to 10,000 users, the architecture design would look something like the following diagram:

[Diagram: the complete View pod and block architecture for up to 10,000 users]

One thing to note is that although a pod is limited to 10,000 users, you can deploy more than one pod should you need an environment that exceeds the 10,000 users. Bear in mind though that the pods do not communicate with each other and will effectively be completely separate deployments.

This is potentially a limitation, not so much in scalability but more for disaster recovery purposes, where you need to have two pods across two sites. To address this, there is a feature in Horizon View 6.0 that allows you to deploy pods across sites. This is called the Cloud Pod Architecture (CPA), and we will cover it in the next section.

The Cloud Pod Architecture

The Cloud Pod Architecture, also referred to as linked-mode View (LMV) or multidatacenter View (MDCV), allows you to link up to four View pods together across two sites, supporting up to 20,000 users in total.
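A quick sanity check against these limits can be sketched in a few lines of Python. The function below (the name and structure are illustrative) flags any part of a proposed design that exceeds the four-pod, two-site, or 20,000-user maximums quoted above:

```python
MAX_PODS = 4      # pods that can be linked in a Cloud Pod Architecture
MAX_SITES = 2     # sites the linked pods can span
MAX_USERS = 20000 # total supported users across the linked pods

def validate_cpa(pods, sites, users):
    """Return a list of limit violations for a proposed CPA design;
    an empty list means the design is within the documented limits."""
    errors = []
    if pods > MAX_PODS:
        errors.append(f"{pods} pods exceeds the {MAX_PODS}-pod limit")
    if sites > MAX_SITES:
        errors.append(f"{sites} sites exceeds the {MAX_SITES}-site limit")
    if users > MAX_USERS:
        errors.append(f"{users} users exceeds the {MAX_USERS}-user limit")
    return errors

print(validate_cpa(2, 2, 18000))  # [] -> within limits
print(validate_cpa(5, 3, 25000))  # three violations reported
```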

There are four key features available by deploying Horizon View using this architecture:

  • Scalability: This hosts more than 10,000 users on a single site
  • Multidatacenter support: This supports View across more than one data center
  • Geo roaming: This supports roaming desktops for users moving across sites
  • DR: This delivers resilience in the event of a data center failure

Let’s take a look at the Cloud Pod Architecture in the following diagram to explain the features and how it builds on the pod and block architecture we discussed previously:

[Diagram: the Cloud Pod Architecture]

With the Cloud Pod Architecture, user information is replicated globally and the pods are linked using the View interpod API (VIPA)—the setup for which is command-line-based.

For scalability, with the Cloud Pod Architecture model, you have the ability to entitle users across pools on both different pods and sites. This means that, if you have already scaled beyond a single pod, you can link the pods together to allow you to go beyond the 10,000 user limit and also administer your users from a single location.

The pods can, apart from being located on the same site, also be on two different sites to deliver a multidatacenter configuration running as active/active. This also introduces DR capabilities: in the event of one of the data centers failing or losing connectivity, users will still be able to connect to a virtual desktop machine.

Users don’t need to worry about what View Connection Server they need to use to connect to their virtual desktop machine. The Cloud Pod Architecture supports a single namespace with access via a global URL. As users can now connect from anywhere, there are some configuration options that you need to consider as to how they access their virtual desktop machine and from where it gets delivered. There are three options that form part of the global user entitlement feature:

  • Any: This is delivered from any pod as part of the global entitlement
  • Site: This is delivered from any pod from the same site the user is connecting from
  • Local: This is delivered only from the local pod that the user is connected to
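The three scopes can be thought of as progressively narrowing the set of pods a desktop may be delivered from. The following Python sketch illustrates that filtering; the pod names and site labels are hypothetical, not Horizon objects:

```python
# Hypothetical pod inventory (names and sites are illustrative only)
PODS = [
    {"name": "pod-a", "site": "London"},
    {"name": "pod-b", "site": "London"},
    {"name": "pod-c", "site": "New York"},
]

def candidate_pods(scope, user_site, user_pod):
    """Return the pods a desktop may be delivered from under each
    global entitlement scope: ANY, SITE, or LOCAL."""
    if scope == "ANY":
        return [p["name"] for p in PODS]
    if scope == "SITE":
        return [p["name"] for p in PODS if p["site"] == user_site]
    if scope == "LOCAL":
        return [user_pod]
    raise ValueError(f"unknown scope: {scope}")

# A London user connected to pod-a, with a SITE-scoped entitlement
print(candidate_pods("SITE", "London", "pod-a"))  # ['pod-a', 'pod-b']
```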

It’s not just the users that get the global experience; the administrators can also be segregated in this way so that you can deliver delegated management.

Administration of pods could be delegated to the local IT teams on a per-region basis, with operations such as provisioning and patching performed locally on the local pods, or perhaps so that local language support can be delivered. Only global policy is managed globally, typically from an organization’s global HQ.

Now that we have covered some of the high-level architecture options, you should be able to start looking at your overall design, factoring in locations and the number of users.

In the next section, we will start to look at how to size some of these components.

Sizing the infrastructure

In this section, we are going to discuss the sizing of the components previously described in the architecture section. We will start by looking at the management blocks containing the connection servers, security servers, and then the servers that host the desktops before finishing off with the desktops themselves.

The management block and the block hosting the virtual desktop machines should run on separate infrastructure (ESXi hosts and vCenter Servers). The reasons are the different workload patterns between servers and desktops and the need to avoid performance issues. It’s also easier to manage, as you can determine what is a desktop and what is a server, but more importantly it’s also the way in which the products are licensed: the vSphere Desktop edition that comes with Horizon View only entitles you to run workloads that host and manage the virtual desktop infrastructure.

Summary

In this article, you learned how to design a Horizon 6.0 architecture.
