
Introduction

In this article by Anuj Modi, the author of the book Implementing Cisco UCS Solutions – Second Edition, we get some insight into Cisco UCS products and their innovative architecture, which abstracts the underlying physical hardware and provides unified management for all devices. We will walk through the installation of some of the UCS hardware components and the deployment of the VSM.


UCS storage servers

To provide a solution that meets storage requirements, Cisco introduced the new C3000 family of storage-optimized UCS rack servers in 2014. A standalone server with a capacity of 360 TB can be used for any kind of data-intensive application, such as big data, Hadoop, object storage, OpenStack, and other enterprise applications requiring higher throughput and more efficient transfer rates. This server family can be integrated with any existing B-Series, C-Series, and M-Series servers to provide them with all the required storage and backup capacity. It is an ideal solution for customers who don’t want to invest in high-end storage arrays but still need to meet their business requirements. Cisco C3000 storage servers are an optimal solution for medium and large scale-out storage deployments.

Cisco UCS C3260 is the newer model, with better density, throughput, and dual-server-node support, whereas the earlier UCS C3160 provides the same density with a single server node. These servers can be managed through CIMC, like other C-Series rack servers.

UCS C3260

Cisco UCS C3260 is a 4U rack server with Intel® Xeon® E5-2600 v2 processor family CPUs and up to 320 TB of local storage with dual server nodes and 4×40 Gig I/O throughput using the Cisco VIC 1300. The storage can be shared across the two compute nodes, providing an HA solution inside the box.

UCS C3160

Cisco UCS C3160 is a 4U rack server with Intel® Xeon® E5-2600 v2 processor family CPUs and up to 320 TB of local storage with a single server node and 4×10 Gig I/O throughput.

UCS M-Series modular servers

Earlier in 2014, Cisco launched the M-Series line of modular servers to meet the high-density, low-power demands of massively parallelized and cloud-scale applications. These modular servers separate infrastructure components, such as network, storage, power, and cooling, from the compute nodes, delivering compute and memory for scale-out applications. The compute nodes disaggregate resources such as processor and memory from the I/O subsystem through Cisco UCS System Link Technology, providing a dense, power-efficient platform designed to increase compute capacity, with unified management through UCS Manager. A modular chassis and compute cartridges are the building blocks of the M-Series servers. The M4308 chassis can hold up to eight compute cartridges. Each cartridge has a single or dual CPU with two memory channels and provides two nodes per cartridge, supporting up to 16 compute nodes in a single chassis. The cartridges are hot-pluggable and can be added to or removed from the system. The Cisco VIC provides connectivity to the UCS Fabric Interconnects for network and management.

The UCS M-Series modular server includes the following components:

  • Modular chassis M4308
  • Modular compute cartridges—M2814, M1414, and M142

M4308

The M4308 modular chassis is a 2U chassis that can accommodate eight compute cartridges with two servers per cartridge, and this makes for 16 servers in a single chassis. The chassis can be connected to a pair of Cisco Fabric Interconnects, providing network, storage, and management capabilities.

M2814

The M2814 cartridge has dual-socket Intel® Xeon® E5-2600 v3 and v4 processor family CPUs with 512 GB of memory. This cartridge can be used for web-scale applications and small in-memory databases.

M1414

The M1414 cartridge has a single-socket Intel® Xeon® E3-1200 v3 and v4 processor family CPU with 64 GB of memory. This cartridge can be used for electronic design automation and simulation.

M142

The M142 cartridge has a single-socket Intel® Xeon® E3-1200 v3 and v4 processor family CPU with 128 GB of memory. This cartridge can be used for content delivery, dedicated hosting, financial modeling, and business analytics.

Cisco UCS Mini

Cisco brings the power of UCS in a smaller form factor to small businesses and remote and branch offices with the UCS Mini solution, a combination of blade and rack servers with unified management provided by Cisco UCS Manager. UCS Mini is a compact version of UCS, with the specialized 6324 Fabric Interconnect model embedded into the Cisco UCS blade chassis for a smaller server footprint. It provides server, storage, network, and management capabilities similar to classic UCS. The 6324 Fabric Interconnect combines the Fabric Extender and Fabric Interconnect functions into one plug-in module that provides direct connections to upstream switches. A pair of 6324 Fabric Interconnects is inserted directly into the chassis, called the primary chassis, and provides internal connectivity to the UCS blades. A Fabric Extender is not required for the primary chassis. The primary chassis can be connected to another chassis, called the secondary chassis, through a pair of 2208XP or 2204XP Fabric Extenders.

Cisco UCS Mini supports a wide variety of UCS B-Series servers, such as the B200 M3, B200 M4, B420 M3, and B420 M4, and blades of mixed form factors can be inserted into the same chassis. UCS Mini also supports connectivity to the UCS C-Series servers C220 M3, C220 M4, C240 M3, and C240 M4. A maximum of seven C-Series servers can be connected to a single chassis. UCS Mini can support a maximum of 20 servers with a combination of two chassis with eight half-width blades per chassis and four C-Series servers.


6324

Cisco 6324 is embedded into the Cisco UCS 5108 blade server chassis. The 6324 Fabric Interconnect integrates the functions of a Fabric Interconnect and a Fabric Extender and provides LAN, SAN, and management connectivity to blade servers and rack servers, with UCS Manager embedded. The 1G/10G SFP+ ports are used for external network and SAN connectivity, while the 40 Gig QSFP port can only be used for connecting another chassis, a rack server, or storage. One or two Cisco UCS 6324 FIs can be installed in UCS Mini.

The following are the features of the 6324:

  • A maximum of 20 blade/rack servers per UCS domain
  • Four 1 Gig/10 Gig SFP+ unified ports
  • One 40 Gig QSFP
  • Fabric throughput of 500 Gbps

Installing UCS hardware components

Now that we have a better understanding of the various components of the Cisco UCS platform, we can dive into the physical installation of UCS Fabric Interconnects and chassis, including blade servers, IOM modules, fan units, power supplies, SFP+ modules, and physical cabling.

Before the physical installation of the UCS solution, it is also imperative to consider other data center design factors, including:

  • Building floor load-bearing capacity
  • Rack requirements for UCS chassis and Fabric Interconnects
  • Rack airflow, heating, and ventilation (HVAC)

Installing racks for UCS components

Any standard 19-inch rack with a minimum depth of 29 inches to a maximum of 35 inches from front to rear rails with 42U height can be an ideal rack solution for Cisco UCS equipment. However, rack size can vary based on different requirements and the data center’s landscape. The rack should provide sufficient space for power, cooling, and cabling for all the devices. It should be designed according to data center best practices to optimize its resources. The front and rear doors should provide sufficient space to rack and stack the equipment and adequate space for all types of cabling. Suitable cable managers can be used to manage the cables coming from all the devices. It is always recommended that heavier devices be placed at the bottom and lighter devices at the top. For example, a Cisco UCS B-Series chassis can be placed at the bottom and Fabric Interconnect at the top of the rack. The number of UCS or Nexus devices in a rack can vary based on the total power available for it.

The Cisco R-Series R42610 rack is certified for Cisco UCS devices and Nexus switches and provides better reliability, space, and structural integrity for data centers.

Installing UCS chassis components

Care must be taken during the installation of all components as failure to follow installation procedures may result in component malfunction and bodily injury.

UCS chassis don’ts:

  • Do not try to lift even an empty chassis alone. At least two people are required to handle a UCS chassis.
  • Do not handle internal components such as CPUs, RAM, and mezzanine cards without an ESD field kit.

Physical installation is divided into three sections:

  • Blade server component (CPU, memory, hard drives, and mezzanine card) installation
  • Chassis component (blade servers, IOMs, fan units, and power supplies) installation
  • Fabric Interconnect installation and physical cabling

Installing blade servers

Cisco UCS blade servers are designed with industry-standard components with some enhancements. Anyone with prior server installation experience should be comfortable installing the internal components using the guidelines provided in the blade server’s manual and following standard safety procedures. Transient ESD can carry charges of thousands of volts, which can degrade or permanently damage electronic components.

The Cisco ESD training course can be found at http://www.cisco.com/web/learning/le31/esd/WelcomeP.html.

All Cisco UCS blade servers have a similar cover design, with a button at the front top of the blade that should be pushed down. There are slight variations among models in the way the cover is then slid off, which can be toward the rear and up or toward yourself and up.

Installing and removing CPUs

The following is the procedure to mount a CPU into a UCS B-Series blade server:

  1. Make sure you are wearing an ESD wrist wrap grounded to the blade server cover.
  2. To release the CPU clasp, slide it down and to the side.
  3. Move the lever up and remove the CPU socket’s blank cover. Keep the blank in a safe place in case you need to remove the CPU later.
  4. Lift the CPU by its plastic edges and align it with the socket. The CPU can only fit one way.
  5. Lower the mounting bracket with the side lever, and secure the CPU into the socket.
  6. Align the heat sink with its fins in a position allowing unobstructed airflow from front to back.
  7. Gently tighten the heat sink screws to the motherboard.

CPU removal is the reverse of the installation process. It is critical to place the blank socket cover back over the CPU socket. Damage could occur to the socket without the blank cover.

Installing and removing RAM

The following is the procedure to install RAM modules into a UCS B-Series blade server:

  1. Make sure you are wearing an ESD wrist wrap grounded to the blade server cover.
  2. Undo the clips on the side of the memory slot.
  3. Hold the memory module from both edges in an upright position and firmly push it straight down, matching the notch of the module to the socket.
  4. Close the side clips to hold the memory module.

Memory removal is the reverse of the preceding process.

Memory modules must be inserted in pairs and split equally between the CPUs if not all the memory slots are populated. Refer to the server manual to identify memory slot pairs, the slot-to-CPU relationship, and optimal memory configurations.

Installing and removing internal hard disks

UCS blade servers support small form factor (SFF) serial attached SCSI (SAS), SATA, and solid-state disk (SSD) drives. B200 M4 blade servers also support non-volatile memory express (NVMe) SFF 2.5-inch drives via PCI Express. The B200 M4 also provides the option of an SD card to deploy small-footprint hypervisors such as ESXi.

To insert a hard disk into B200, B260, B420, and B460 blade servers, follow these steps:

  1. Make sure you are wearing an ESD wrist wrap grounded to the blade server cover.
  2. Remove the blank cover.
  3. Press the button on the catch lever on the ejector arm.
  4. Slide the hard disk completely into the slot.
  5. Push the ejector lever until it clicks to lock the hard disk.
  6. To remove a hard disk, press the button, release the catch lever, and slide the hard disk out.

Do not leave a hard disk slot empty. If you do not intend to replace the hard disk, cover it with a blank plate to ensure proper airflow.

Installing mezzanine cards

The UCS B200 supports a single mezzanine card, B260/B420 support two cards, and B460 supports four cards. The procedure for installing these cards is the same for all servers:

  1. Make sure you are wearing an ESD wrist wrap grounded to the blade server cover.
  2. Open the server’s top cover.
  3. Grab the card by its edges and align the card’s male molex connector with the female connector on the motherboard.
  4. Press the card gently into the slot.
  5. Once the card is properly seated, secure it by tightening the screw on the top.

Removing a mezzanine card is the reverse of the preceding process.

Installing blade servers into a chassis

The installation and removal of half-width and full-width blade servers is almost identical; the only difference is that a half-width blade server has one ejector arm, whereas a full-width blade server has two. Only the B460 M4 blade server requires two full-width slots in the chassis along with an additional Scalability Connector, which connects two B260 M4 blade servers and allows them to function as a single server. The bottom blade in this pair serves as the master and the top blade as the slave. The process is as follows:

  1. Make sure you are wearing an ESD wrist wrap grounded to the chassis.
  2. Open the ejector arm for a half-width blade or both ejector arms for a full-width blade.
  3. Push the blade into the slot. Once it’s firmly in, close the ejector arm on the face of the server and hand-tighten the screw.

The removal of a blade server is the reverse of the preceding process.

In order to install a full-width blade, it is necessary to remove the central divider. This can be done by using a Phillips-head screwdriver to push the two clips, one downward and the other upward, and then sliding the divider out of the chassis.

Installing rack servers

Cisco UCS rack servers are designed as standalone servers; however, they can be either managed through UCS Manager or left unmanaged. Unmanaged servers can be connected to any standard upstream switch or Nexus switch to provide network access, whereas managed servers need to connect to the Fabric Interconnects, either directly or indirectly through Fabric Extenders, for LAN, SAN, and management network functionality. Based on the application requirements, the rack server model can be selected from the various options discussed in the previous chapter. All the rack servers are designed in a similar way; however, they vary in capacity, size, weight, and dimensions. These rack servers fit into any standard 19-inch rack with adequate air space for servicing the server. Servers should be installed on the slide rails provided with them, and at least two people or a mechanical lift should be used to place them in the rack. Always place heavier rack servers at the bottom of the rack; lighter servers can be placed at the top.

Cisco UCS rack servers provide front-to-rear cooling, and the data center should maintain adequate air conditioning to dissipate the heat from these servers. A Cisco rack server’s power requirement depends on the model and can be checked in the server data sheet.

Installing Fabric Interconnects

The Cisco UCS Fabric Interconnect is a top-of-rack switch that connects the LAN, SAN, and management networks to the underlying physical compute servers for Ethernet and Fibre Channel access. Normally, a pair of Fabric Interconnects is installed either in a single rack or distributed across two racks to provide rack and power redundancy. Fabric Interconnects can also be installed at the bottom of the rack in case the network cabling is planned under the floor. All the internal components of a Fabric Interconnect come installed from the factory; only external components such as Ethernet modules, power supplies, and fans need to be connected.

The Fabric Interconnect includes two redundant hot-swappable power supply units and requires a maximum of 750 watts of power. It is recommended that the power supplies in the rack be fed from two different energy sources and connected through redundant power distribution units (PDUs). Although the Fabric Interconnect can be powered from a single power source, redundant power sources are recommended.

It includes four redundant hot-swappable fans, which provide an efficient front-to-rear cooling design, and it is designed to work in hot aisle/cold aisle environments. It dissipates approximately 2,500 BTU/hour, and adequate cooling should be available in the data center to ensure optimal performance.

For detailed information on power, cooling, physical, and environmental specifications, check the data sheet.

UCS FI 6300 series data sheet:

http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/6332-specsheet.pdf

UCS FI 6200 series data sheet:

http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-6200-series-fabric-interconnects/data_sheet_c78-675245.pdf

Fabric Interconnects provide various options to connect with networks and servers. The 6300 Fabric Interconnect series can provide 10/40 Gig connectivity, while the FI 6200 series can provide 1/10 Gig to the upstream network.

The third-generation FI 6332 has 32×40 Gig fixed ports. Ports 1-12 and 15-26 can be configured as 40 Gig QSFP+ ports or as 4×10 Gig SFP+ breakout ports, while ports 13 and 14 can be configured as 40 Gig or 1/10 Gig with QSA adapters but can’t be configured with 10 Gig SFP+ breakout cables. The last six ports, 27-32, are dedicated for 40 Gig upstream switch connectivity.

The third-generation FI 6332-16UP can be deployed where native Fibre Channel connectivity is essential. The first 16 unified ports, 1-16, can be configured as 1/10 Gig SFP+ Ethernet ports or 4/8/16 Gig Fibre Channel ports. Ports 17-34 can be configured as 40 Gig QSFP+ ports or 4×10 Gig SFP+ breakout ports. The last six ports, 35-40, are dedicated for 40 Gig upstream switch connectivity.

The second-generation FI 6248UP has 32×10 Gig fixed unified ports and one expansion module to provide an additional 16 unified ports. All unified ports can be configured as either 1/10 Gig Ethernet or 1/2/4/8 Gig Fibre Channel.

The second-generation FI 6296UP has 48×10 Gig fixed unified ports and three expansion modules to provide an additional 48 unified ports. All unified ports can be configured as either 1/10 Gig Ethernet or 1/2/4/8 Gig Fibre Channel.

Deploying VSM

Cisco has simplified VSM deployment with the Virtual Switch Update Manager (VSUM). It is a virtual appliance that can be installed on a VMware ESXi hypervisor and registered as a plugin with VMware vCenter. VSUM enables the server administrator to install, monitor, migrate, and upgrade the Cisco Nexus 1000V in high-availability or standalone mode. With VSUM, you can deploy multiple instances of the Nexus 1000V and manage them from a single appliance. VSUM only provides layer 3 connectivity with ESXi hosts; for layer 2 connectivity, you still have to deploy the Nexus 1000V VSM manually and configure L2 mode in the basic configuration.

VSUM architecture

VSUM has two components: a backend virtual appliance and a frontend GUI integrated into VMware vCenter Server. The virtual appliance is configured with an IP address, typically in the same subnet as the VMware vCenter management IP address. Once the virtual appliance is deployed on ESXi, it establishes connectivity with VMware vCenter Server to gain access to vCenter and the hosts.

VSUM and VSM installation

VSM deployment is divided into two parts. First, we will install VSUM and register it as a plugin with VMware vCenter. The second part is importing the Nexus 1000V package into VSUM and deploying the Nexus 1000V through the VSUM GUI.

To install and configure Cisco VSUM, you need one management port group, which will communicate with vCenter. The Nexus 1000V VSM, however, requires two port groups, for control and management. You can use just one VLAN for all of these, but it is preferable to configure them with separate VLANs.
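
If these port groups do not already exist on the host, they can be created on a standard vSwitch before starting the deployment. The following is a minimal sketch using esxcli on the ESXi host; the port group names, the vSwitch name, and the VLAN IDs are illustrative assumptions, not values from this procedure:

    # Create management and control port groups on a standard vSwitch
    # (port group names, vSwitch name, and VLAN IDs below are examples only)
    esxcli network vswitch standard portgroup add --portgroup-name=VSM-Management --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup set --portgroup-name=VSM-Management --vlan-id=10
    esxcli network vswitch standard portgroup add --portgroup-name=VSM-Control --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup set --portgroup-name=VSM-Control --vlan-id=20

The same port groups can, of course, be created through the vSphere Client instead.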

Download the Nexus 1000v package and VSUM from https://software.cisco.com/download/type.html?mdfid=282646785&flowid=42790, and then use the following procedure to deploy VSUM and VSM:

  1. Go to vCenter, click on the File menu, and select Deploy OVF Template.
  2. Click on Browse…, navigate to the location of the VSUM OVA file, select it, and click on Next.

  3. Click on Next twice.
  4. Click on Accept for the End User License Agreement page, and then click on Next.
  5. Specify a Name for the VSUM appliance, and select the data center where you would like it to be installed. Then, click on Next.

  6. Select the Datastore object to install the VSUM.
  7. Choose Thin Provision for the virtual disk of VSUM and click on Next.
  8. On the Destination network page, select the network that you created earlier for management. Then, click on Next.

  9. Enter the management IP address, DNS, vCenter IP address, and the username and password values.

  10. A summary of all your settings is then presented. If you are happy with these settings, click on Finish.
  11. Once VSUM is deployed in vCenter, power on the VM. Installation and registration as a vCenter plugin takes about 5 minutes. To view the VSUM GUI, you need to log back in to the vCenter server to activate the VSUM plugin.

  12. Click on Cisco Virtual Switch Update Manager, and select the Upload option under Image Tasks. Under Upload switch image, upload the Nexus 1000v package downloaded earlier. This opens the Virtual Switch Image File Uploader, where you can select the appropriate package.

  13. Once the image is uploaded, it will show up under Manage Uploaded switch images.
  14. Click on the Nexus 1000V tab under Basic Tasks, click on Install, and select the desired data center.

  15. On the Nexus 1000V installer page, choose to install a new control plane VSM, and select the appropriate options for Nexus1000V Switch Deployment Type, VSM Version, and Choose a Port Group. Select the host, VSM Domain ID, Switch Name, IP Address, Username, and Password. Then, click on Finish. A window will appear to show the progress.

  16. Once it’s deployed, select the Nexus 1000v VM, and click on Power On in the Summary screen.
  17. Go to the VSUM GUI, and click on Dashboard under Nexus 1000V Basic Tasks. Verify that the VC Connection Status is green for your installed VSM.

  18. You need to create a port profile of the Ethernet type for the system uplink. To do this, log in to the Nexus 1000v switch using SSH and enter the following commands (a sketch of a matching vEthernet port profile for VM traffic follows this procedure):
    port-profile type ethernet system-uplink
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan 1-1000
      mtu 9000
      no shutdown
      system vlan 1-1000
      state enabled
  19. Move over to vCenter, and add your ESXi host to the Nexus environment.
  20. Open vCenter, click on Inventory, and select Networking.
  21. Expand Datacenter and select Nexus Switch. Right-click on it and select Add Host.
  22. Select one of the unused VMNICs on the host, then select the uplink port group created earlier (the one carrying the system VLAN data), and click on Next.
  23. On the Network Connectivity page, click on Next.
  24. On the Virtual Machine Networking page, click on Next. Do not select the checkbox to migrate virtual machine networking.
  25. On the Ready to complete page, click on Finish.
  26. Finally, on the ESXi host, run the vemcmd show card command to confirm that the opaque data is now being received by the VEM.
  27. Log in to your Nexus 1000v switch and type show module to check whether your host has been added as a VEM module.
  28. Log in to your ESXi server and run vem status to confirm that the VEM is loaded on the VMNIC.
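
The system-uplink profile created in step 18 only defines the uplink; virtual machines attach to port profiles of the vethernet type, which the VSM pushes to vCenter as port groups. The following is a minimal sketch of such a profile; the profile name vm-data and VLAN 100 are placeholder values rather than part of the original procedure:

    port-profile type vethernet vm-data
      vmware port-group
      switchport mode access
      switchport access vlan 100
      no shutdown
      state enabled

Once the profile is enabled, it appears as a port group on the Nexus 1000v distributed switch in vCenter and can be assigned to VM network adapters.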

Summary

In this article, we saw the various available options for UCS components, such as storage servers, modular servers, and UCS Mini. We also covered rack server installation and the installation of UCS hardware components such as the chassis, blades, I/O modules, and Fabric Interconnects. Finally, we learned about the VSUM architecture and the Nexus 1000V components and installed them.
