
In this article by Ranjit Singh Thakurratan, the author of the book Learning VMware NSX, we discuss the core components of NSX. The article begins with a brief introduction to the NSX core components, followed by a detailed discussion of each of them. We will go over the three different planes of the NSX architecture and see where each of the core components fits. Next, we will cover the VXLAN architecture and the transport zones that allow us to create and extend overlay networks across multiple clusters. We will also look at NSX Edge and the distributed firewall in greater detail, and examine the newest NSX feature: multi-vCenter, or cross-vCenter, NSX deployment. By the end of this article, you will have a thorough understanding of the NSX core components and their functional interdependencies.

In this article, we will cover the following topics:

  • An introduction to the NSX core components
  • NSX Manager
  • NSX Controller clusters
  • VXLAN architecture overview
  • Transport zones
  • NSX Edge
  • Distributed firewall
  • Cross-vCenter NSX


An introduction to the NSX core components

The foundational core components of NSX are divided across three different planes. The core components of an NSX deployment consist of NSX Manager, the Controller clusters, and the hypervisor kernel modules. Each of these is crucial for your NSX deployment; however, they are decoupled to an extent that allows resiliency during the failure of multiple components. For example, if your controller cluster fails, your virtual machines will still be able to communicate with each other without any network disruption. You should ensure that the NSX components are always deployed in a clustered environment so that they are protected by vSphere HA.

The high-level architecture of NSX describes three different planes in which each of the core components fits: the Management plane, the Control plane, and the Data plane. The following figure represents how the three planes are interlinked with each other. The management plane is the centralized access point through which an end user interacts with NSX, while the data plane carries the actual north-south and east-west traffic.

Let’s look at some of the important components in the preceding figure:

  • Management plane: The management plane consists primarily of NSX Manager. NSX Manager is a centralized network management component that provides a single point of management and exposes the REST API through which a user can perform all NSX functions and actions. The management plane is established during the deployment phase, when the NSX Manager appliance is deployed and configured, and it interacts directly with both the control plane and the data plane. NSX Manager is managed via the vSphere web client and the CLI. It is configured to interact with vSphere and ESXi, and once configured, all of the NSX components are configured and managed via the vSphere web GUI.
  • Control plane: The control plane consists of the NSX Controller that manages the state of virtual networks. NSX Controllers also enable overlay networks (VXLAN) that are multicast-free and make it easier to create new VXLAN networks without having to enable multicast functionality on physical switches. The controllers also keep track of all the information about the virtual machines, hosts, and VXLAN networks and can perform ARP suppression as well. No data passes through the control plane, and a loss of controllers does not affect network functionality between virtual machines.

The terms overlay network and VXLAN are used interchangeably in this article. Both refer to an L2-over-L3 virtual network.

  • Data plane: The NSX data plane consists primarily of the NSX logical switch. The NSX logical switch is a part of the vSphere distributed switch and is created when a VXLAN network is created. The logical switch and other NSX services, such as logical routing and the logical firewall, are enabled at the hypervisor kernel level after the installation of the hypervisor kernel modules (VIBs). The logical switch is the key to enabling overlay networks, which encapsulate traffic and send it over the existing physical network. The data plane also supports gateway devices that provide L2 bridging between virtual and physical workloads.
    The data plane receives its updates from the control plane, and the hypervisors also maintain local virtual machine-to-VXLAN (logical switch) mapping tables. A loss of the data plane causes a loss of the overlay (VXLAN) network, as virtual machines that are part of an NSX logical switch will not be able to send and receive data.

NSX Manager

NSX Manager, once deployed and configured, can deploy the Controller cluster appliances and prepare the ESXi hosts, which involves installing the vSphere installation bundles (VIBs) that enable network virtualization features such as VXLAN, logical switching, the logical firewall, and logical routing. NSX Manager can also deploy and configure the Edge gateway appliances and their services.

The NSX version as of this writing is 6.2, which supports only 1:1 connectivity between NSX Manager and vCenter.

NSX Manager is deployed as a single virtual machine and relies on VMware’s HA functionality to ensure its availability. There is no NSX Manager clustering available as of this writing. It is important to note that a loss of NSX Manager will lead to a loss of management and API access, but does not disrupt virtual machine connectivity.

Finally, the NSX Manager’s configuration UI allows an administrator to collect log bundles and also to back up the NSX configuration.
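Since the management plane is consumed through the NSX Manager REST API, the following sketch shows how an automation script might compose a call against it. The hostname and credentials are hypothetical; the `/api/2.0/vdn/controller` path is the NSX-v endpoint for listing controller nodes, and the request is only built here, not sent.

```python
import base64

# Hypothetical NSX Manager address and credentials for illustration.
NSX_MANAGER = "nsxmgr.corp.local"
USER, PASSWORD = "admin", "secret"

def build_request(path):
    """Return the URL and headers for a GET against NSX Manager."""
    token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    url = f"https://{NSX_MANAGER}{path}"
    headers = {
        "Authorization": f"Basic {token}",  # NSX Manager uses basic auth
        "Accept": "application/xml",        # the NSX-v API speaks XML
    }
    return url, headers

url, headers = build_request("/api/2.0/vdn/controller")
```

In a real script, the URL and headers would be handed to an HTTP client and the XML response parsed for the controller inventory.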

NSX Controller clusters

NSX Controllers provide the control plane functionality that distributes logical routing and VXLAN network information to the underlying hypervisors. Controllers are deployed as virtual appliances, and they should be deployed in the same vCenter to which NSX Manager is connected. In a production environment, it is recommended to deploy a minimum of three controllers. For better availability and scalability, ensure that DRS anti-affinity rules are configured so that the controllers are deployed on separate ESXi hosts. Traffic between the control plane and the management and data planes is secured by certificate-based authentication.

It is important to note that controller nodes employ a scale-out mechanism: a slicing mechanism divides the workload equally across all the nodes, which renders every controller node active at all times. If one controller node fails, the slices it owned are reassigned to the surviving nodes to maintain operational status. The NSX Controller cluster uses a Paxos-based algorithm for consistency. The Controller removes the dependency on multicast routing/PIM in the physical network, and it also suppresses broadcast traffic in VXLAN networks.

NSX version 6.2 supports only three controller nodes.
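The slicing idea above can be modeled in a few lines. This is only an illustration of the even-distribution and failover behaviour, not the Controller's actual Paxos-based protocol; the node names and the modulo-based assignment are assumptions made for the sketch.

```python
# Toy model of controller slicing: workload units (here, VNIs) are
# spread across all active controller nodes, so every node is active.
def assign_slices(vnis, nodes):
    """Map each VNI to one of the active controller nodes."""
    return {vni: nodes[vni % len(nodes)] for vni in vnis}

vnis = range(5000, 5012)

# All three nodes share the workload equally.
assignment = assign_slices(vnis, ["ctrl-1", "ctrl-2", "ctrl-3"])

# If ctrl-2 fails, its slices are simply remapped to the survivors.
assignment_after = assign_slices(vnis, ["ctrl-1", "ctrl-3"])
```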

VXLAN architecture overview

One of the most important functions of NSX is enabling virtual networks. These virtual, or overlay, networks have become very popular because they leverage the existing network infrastructure without requiring any modification to it, and decoupling the logical networks from the physical infrastructure allows users to scale rapidly. The VXLAN standard was developed jointly by a host of vendors, including VMware, Arista, Cisco, Citrix, Red Hat, and Broadcom, and this joint effort allows the standard to be implemented consistently by multiple vendors.

VXLAN is a layer 2 over layer 3 tunneling protocol that allows logical network segments to extend over routable networks. This is achieved by encapsulating the Ethernet frame with additional UDP, IP, and VXLAN headers, which increases the size of the packet by 50 bytes. Hence, VMware recommends increasing the MTU to a minimum of 1600 bytes for all the interfaces in the physical infrastructure and any associated vSwitches.
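The 50-byte figure and the 1600-byte MTU recommendation follow directly from the encapsulation headers, as this back-of-the-envelope check shows (the outer Ethernet size assumes no optional 802.1Q tag):

```python
# Per-frame overhead added by VXLAN encapsulation, in bytes.
OUTER_ETHERNET = 14  # outer Ethernet header, untagged
OUTER_IP = 20        # outer IPv4 header
OUTER_UDP = 8        # outer UDP header
VXLAN_HEADER = 8     # VXLAN header itself

OVERHEAD = OUTER_ETHERNET + OUTER_IP + OUTER_UDP + VXLAN_HEADER

# A standard 1500-byte frame therefore needs at least 1550 bytes of
# MTU end to end, which is why VMware recommends 1600.
REQUIRED_MTU = 1500 + OVERHEAD
```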

When a virtual machine generates traffic meant for another virtual machine on the same virtual network, the hosts on which the source and destination virtual machines run act as VXLAN Tunnel End Points (VTEPs). VTEPs are configured as separate VMkernel interfaces on the hosts. The outer IP header of the VXLAN frame carries the IP addresses of the source and destination VTEPs. When a packet leaves the source virtual machine, it is encapsulated at the source hypervisor and sent to the target hypervisor. The target hypervisor, upon receiving this packet, decapsulates the Ethernet frame and forwards it to the destination virtual machine.

Once the ESXi hosts are prepared from NSX Manager, the VTEPs need to be configured. NSX supports multiple VXLAN vmknics per host for uplink load balancing. In addition, guest VLAN tagging is also supported.

A sample packet flow

A challenging situation arises when a virtual machine generates Broadcast, Unknown Unicast, or Multicast (BUM) traffic meant for another virtual machine on the same virtual network (VNI) on a different host. The control plane mode selected for the logical switch or transport zone plays a crucial role in how this VXLAN traffic is replicated:

  • Unicast
  • Hybrid
  • Multicast

By default, a logical switch inherits its replication mode from the transport zone; however, this can be overridden on a per-logical-switch basis. A multicast address range is required for the Multicast and Hybrid modes.

The following is a representation of the VXLAN-encapsulated packet showing the VXLAN headers:

As indicated in the preceding figure, the outer IP header identifies the source and destination VTEPs. The VXLAN header also carries the Virtual Network Identifier (VNI), a 24-bit unique network identifier. This allows virtual networks to scale well beyond the 4094-VLAN limitation imposed by physical switches. Two virtual machines that are part of the same virtual network have the same VNI, similar to how two machines on the same VLAN share the same VLAN ID.
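The 24-bit VNI and its place in the 8-byte VXLAN header (per the RFC 7348 layout: one flags byte with the VNI-present bit set, three reserved bytes, the 24-bit VNI, one reserved byte) can be sketched as follows; the VNI value 5001 is illustrative:

```python
import struct

def pack_vxlan_header(vni):
    """Pack the 8-byte VXLAN header for a given 24-bit VNI."""
    assert 0 <= vni < 2 ** 24            # the VNI must fit in 24 bits
    # First 32-bit word: flags byte 0x08 (VNI present) + 24 reserved bits.
    # Second 32-bit word: 24-bit VNI + 8 reserved bits.
    return struct.pack("!II", 0x08 << 24, vni << 8)

header = pack_vxlan_header(5001)

# 24 bits give roughly 16.7 million segments versus 4094 usable VLAN IDs.
VNI_SPACE = 2 ** 24
```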

Transport zones

A group of ESXi hosts that are able to communicate with one another over the physical network by means of their VTEPs are said to be in the same transport zone. A transport zone defines the extension of a logical switch across multiple ESXi clusters, which may span multiple virtual distributed switches.

A typical environment has more than one virtual distributed switch spanning multiple hosts. A transport zone enables a logical switch to extend across multiple virtual distributed switches, and any ESXi host that is a part of the transport zone can have virtual machines participate in that logical network. A logical switch is always created as part of a transport zone, and the ESXi hosts in that transport zone can participate in it.
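To make the relationship concrete, the following hedged sketch builds the XML body used to create a logical switch inside a transport zone (a POST to `/api/2.0/vdn/scopes/{scope-id}/virtualwires` in the NSX-v API); the switch name and tenant ID are hypothetical, and the payload is only constructed, not sent.

```python
import xml.etree.ElementTree as ET

def logical_switch_xml(name, control_plane_mode="UNICAST_MODE"):
    """Build the virtualWireCreateSpec payload for a new logical switch."""
    spec = ET.Element("virtualWireCreateSpec")
    ET.SubElement(spec, "name").text = name
    ET.SubElement(spec, "tenantId").text = "tenant-1"  # illustrative
    # Replication mode; inherited from the transport zone unless overridden.
    ET.SubElement(spec, "controlPlaneMode").text = control_plane_mode
    return ET.tostring(spec, encoding="unicode")

payload = logical_switch_xml("web-tier")
```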

The following is a figure that shows a transport zone that defines the extension of a logical switch across multiple virtual distributed switches:

NSX Edge Services Gateway

The NSX Edge Services Gateway (ESG) offers a feature-rich set of services that includes NAT, routing, firewall, load balancing, L2/L3 VPN, and DHCP/DNS relay. The NSX API allows each of these services to be deployed, configured, and consumed on demand.

The ESG is deployed as a virtual machine from NSX Manager that is accessed using the vSphere web client. Four different form factors are offered for differently-sized environments. It is important that you factor in enough resources for the appropriate ESG when building your environment.

The ESG can be deployed in different sizes. The following are the available size options for an ESG appliance:

  • X-Large: The X-Large form factor is suitable for high-performance firewall, load balancer, and routing services, or a combination of multiple services. An X-Large ESG is deployed with six vCPUs and 8GB of RAM.
  • Quad-Large: The Quad-Large form factor is ideal for a high-performance firewall. It is deployed with four vCPUs and 2GB of RAM.
  • Large: The Large form factor is suitable for medium-performance routing and firewall. It is recommended that, in production, you start with the Large form factor. A Large ESG is deployed with two vCPUs and 1GB of RAM.
  • Compact: The Compact form factor is suitable for DHCP and DNS relay functions. It is deployed with one vCPU and 512MB of RAM.
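The sizing options above can be captured as a small lookup table for automation, with a rough selection helper following the guidance in this section. The RAM figures follow VMware's documented appliance sizes, and the selection rule is a simplification of the advice above, not an official algorithm.

```python
# ESG form factors and their resources (vCPU count, RAM in MB).
ESG_SIZES = {
    "compact":    {"vcpus": 1, "ram_mb": 512},
    "large":      {"vcpus": 2, "ram_mb": 1024},
    "quad-large": {"vcpus": 4, "ram_mb": 2048},
    "x-large":    {"vcpus": 6, "ram_mb": 8192},
}

def pick_form_factor(high_perf_firewall=False, multi_service=False):
    """Very rough form-factor choice based on the guidance above."""
    if multi_service:
        return "x-large"      # high performance combination of services
    if high_perf_firewall:
        return "quad-large"   # ideal for a high-performance firewall
    return "large"            # recommended production starting point
```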

Once deployed, a form factor can be upgraded by using the API or the UI. The upgrade action will incur an outage. Edge gateway services can also be deployed in an Active/Standby mode to ensure high availability and resiliency. A heartbeat network between the Edge appliances ensures state replication and uptime. If the active gateway goes down and the “declared dead time” passes, the standby Edge appliance takes over. The default declared dead time is 15 seconds and can be reduced to 6 seconds.

Let’s look at some of the Edge services as follows:

  • Network Address Translation: The NSX Edge supports both source and destination NAT, and NAT is applied to all traffic flowing through the Edge appliance. If the Edge appliance serves more than 100 virtual machines, it is recommended that a Quad-Large instance be deployed to allow high-performance translation.
  • Routing: The NSX Edge allows centralized routing that allows the logical networks deployed in the NSX domain to be routed to the external physical network. The Edge supports multiple routing protocols including OSPF, iBGP, and eBGP. The Edge also supports static routing.
  • Load balancing: The NSX Edge also offers a load balancing functionality that allows the load balancing of traffic between the virtual machines. The load balancer supports different balancing mechanisms including IP Hash, least connections, URI-based, and round robin.
  • Firewall: NSX Edge provides a stateful firewall functionality that is ideal for north-south traffic flowing between the physical and the virtual workloads behind the Edge gateway. The Edge firewall can be deployed alongside the hypervisor kernel-based distributed firewall that is primarily used to enforce security policies between workloads in the same logical network.
  • L2/L3 VPN: The Edge also provides L2 and L3 VPNs, which make it possible to extend L2 domains between two sites. IPSEC site-to-site connectivity can also be set up between two NSX Edges, or between an NSX Edge and another VPN termination device.
  • DHCP/DNS relay: NSX Edge also offers DHCP and DNS relay functions that allow you to offload these services to the Edge gateway. For DNS, the Edge supports only relay functionality, forwarding DNS requests to a DNS server. The Edge gateway can be configured as a DHCP server to provide and manage IP addresses, the default gateway, DNS servers, and search domain information for workloads connected to the logical networks.
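As one concrete example of configuring these services programmatically, the sketch below builds the XML body a script might POST to the NSX-v Edge NAT endpoint (`/api/4.0/edges/{edge-id}/nat/config/rules`). The endpoint path, element names, addresses, and vNIC index are stated here as assumptions drawn from the NSX-v API; the payload is only constructed, not sent.

```python
import xml.etree.ElementTree as ET

def snat_rule_xml(original, translated, vnic="0"):
    """Build a single SNAT rule payload for the Edge NAT service."""
    rules = ET.Element("natRules")
    rule = ET.SubElement(rules, "natRule")
    ET.SubElement(rule, "action").text = "snat"
    ET.SubElement(rule, "vnic").text = vnic              # uplink vNIC index
    ET.SubElement(rule, "originalAddress").text = original
    ET.SubElement(rule, "translatedAddress").text = translated
    ET.SubElement(rule, "enabled").text = "true"
    return ET.tostring(rules, encoding="unicode")

# Hypothetical internal subnet translated to a hypothetical external IP.
body = snat_rule_xml("172.16.10.0/24", "192.168.100.10")
```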

Distributed firewall

NSX provides L2-L4 stateful firewall services by means of a distributed firewall that runs in the ESXi hypervisor kernel. Because the firewall is a function of the ESXi kernel, it provides massive throughput and performs at near line rate. When an ESXi host is initially prepared by NSX, the distributed firewall service is installed in the kernel by deploying the VSIP kernel VIB (VMware Internetworking Service Insertion Platform). VSIP is responsible for monitoring and enforcing security policies on all the traffic flowing through the data plane. The distributed firewall (DFW) throughput and performance scale horizontally as more ESXi hosts are added.

A DFW instance is associated with each vNIC: a virtual machine with two vNICs has two DFW instances, each monitoring its own vNIC and applying security policies to it. The DFW is ideally deployed to protect virtual-to-virtual or virtual-to-physical traffic, which makes it very effective at protecting east-west traffic between workloads that are part of the same logical network. Because the DFW is applied at the vNIC of the virtual machine, its policies can also be used to restrict traffic between virtual machines and external networks. Any virtual machine that does not require firewall protection can be added to the exclusion list. A diagrammatic representation is shown as follows:

DFW fully supports vMotion and the rules applied to a virtual machine always follow the virtual machine. This means any manual or automated vMotion triggered by DRS does not cause any disruption in its protection status.
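The per-vNIC, first-match evaluation described above can be modeled with a toy filter. The rule fields, security-group names, and default action are illustrative and do not reflect the actual NSX rule schema; the point is that each vNIC's instance evaluates rules top-down and falls through to a default.

```python
# Illustrative rule table for one vNIC's DFW instance: first match wins.
RULES = [
    {"src": "web", "dst": "app", "port": 8443, "action": "allow"},
    {"src": "any", "dst": "app", "port": None, "action": "block"},
]

def dfw_filter(rules, src, dst, port, default="block"):
    """Return the action of the first rule matching this flow."""
    for r in rules:
        if r["src"] in (src, "any") and r["dst"] in (dst, "any") \
                and r["port"] in (port, None):
            return r["action"]
    return default  # no rule matched: apply the default action

# Web tier may reach the app tier on 8443; anything else to app is blocked.
assert dfw_filter(RULES, "web", "app", 8443) == "allow"
assert dfw_filter(RULES, "db", "app", 3306) == "block"
```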

The VSIP kernel module also adds SpoofGuard and traffic redirection functionality. The SpoofGuard function maintains a VM name-to-IP address mapping table and protects against IP spoofing. SpoofGuard is disabled by default and needs to be manually enabled per logical switch or virtual distributed switch port group. Traffic redirection allows traffic to be redirected to a third-party appliance for enhanced monitoring, if needed, which allows third-party vendors to interface with the DFW directly and offer custom services.

Cross-vCenter NSX

With NSX 6.2, VMware introduced an interesting feature that allows you to manage multiple vCenter NSX environments using a primary NSX Manager. This simplifies management and enables a lot of new functionality, including extending networks and other features, such as distributed logical routing, across vCenters. A cross-vCenter NSX deployment also allows centralized management and eases disaster recovery architectures.

In a cross-vCenter deployment, each vCenter is paired with its own NSX Manager. One NSX Manager is assigned as the primary, while the other NSX Managers become secondary. The primary NSX Manager deploys a universal controller cluster that provides the control plane; unlike in a standalone vCenter-NSX deployment, the secondary NSX Managers do not deploy their own controller clusters.

The primary NSX Manager also creates objects whose scope is universal. This means that these objects extend to all the secondary NSX Managers. These universal objects are synchronized across all the secondary NSX Managers and can be edited and changed by the primary NSX Manager only. This does not prevent you from creating local objects on each of the NSX Managers.

Similar to local NSX objects, a primary NSX Manager can create global objects such as universal transport zones, universal logical switches, universal distributed routers, universal firewall rules, and universal security objects. There can be only one universal transport zone in a cross-vCenter NSX environment; after it is created, it is synchronized across all the secondary NSX Managers. When a logical switch is created inside the universal transport zone, it becomes a universal logical switch that spans a layer 2 network across all the vCenters. Traffic between universal logical switches is routed using the universal logical router, while any traffic that needs to be routed between a universal logical switch and a local-scope logical switch requires an ESG.

Summary

We began the article with a brief introduction to the NSX core components and looked at the management, control, and data planes. We then discussed NSX Manager and the NSX Controller clusters. This was followed by a VXLAN architecture overview, where we looked at the VXLAN packet. We then discussed transport zones and the NSX Edge gateway services. We ended the article with the NSX distributed firewall services and an overview of cross-vCenter NSX deployment.
