Category: NSX-T

NSX-T Logical Switching 1, N-VDS

After a quick overview of NSX-T Architecture and Components, we will take a deeper look at how logical switching is done in NSX-T.

N-VDS

The main component of NSX-T Logical Switching is the NSX Virtual Distributed Switch or N-VDS. An N-VDS is created on every Transport Node* (TN) when it is prepared for NSX. On ESXi hypervisors the N-VDS is based on the VDS, and on KVM hypervisors it is based on OVS. Like any other switch, an N-VDS has downlink or server ports which connect VMs, and Uplink ports which connect the N-VDS to physical Network Interface Cards (pNICs). Note that N-VDS Uplinks are not the same as the pNICs of a host; instead, these uplinks are assigned to pNICs so that the N-VDS can communicate with the outside world (other hosts, etc.).

Different types of virtual switches (VSS, VDS, N-VDS) can co-exist on a single host, but a pNIC can be assigned to only one virtual switch. We can also bundle the pNICs of a host into a LAG (Link Aggregation Group) and assign an N-VDS Uplink to the LAG instead of to a single interface.

* With the exception of BareMetal servers.
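
To make the pNIC-to-uplink relationship concrete, here is a minimal Python sketch that reads the transport nodes from the NSX-T Manager REST API and prints each node's N-VDS together with its pNIC-to-uplink mapping. The manager address and credentials are placeholders, and the endpoint and field names (/api/v1/transport-nodes, host_switch_spec, pnics) reflect the NSX-T Manager API as I understand it; verify them against the API guide for your NSX-T version.

```python
# Sketch: list each transport node's N-VDS and its pNIC-to-uplink mapping
# via the NSX-T Manager REST API (endpoint and field names assumed; verify
# against your NSX-T version's API documentation).
import requests

NSX_MANAGER = "nsx-manager.lab.local"   # placeholder FQDN
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials

resp = requests.get(
    f"https://{NSX_MANAGER}/api/v1/transport-nodes",
    auth=AUTH,
    verify=False,  # lab only: skip certificate validation
)
resp.raise_for_status()

for node in resp.json().get("results", []):
    print(f"Transport Node: {node.get('display_name')}")
    # host_switch_spec describes the N-VDS instances created on this node
    for hs in node.get("host_switch_spec", {}).get("host_switches", []):
        print(f"  N-VDS: {hs.get('host_switch_name')}")
        # pnics maps physical NICs (e.g. vmnic0) to N-VDS uplink names
        for pnic in hs.get("pnics", []):
            print(f"    {pnic.get('device_name')} -> {pnic.get('uplink_name')}")
```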

Teaming Policy

The teaming policy defines an N-VDS’s uplink failover and redundancy mechanism, as well as how traffic is balanced between the uplinks. The following teaming policies are available for N-VDS Uplinks:

  • Failover Order (ESXi, KVM): in this mode, one uplink is active and the others are in standby mode. If the active uplink fails, the next standby uplink becomes active and forwards the traffic. We can define one uplink as active and put several uplinks in a standby list.

Failover Order

  • Load Balance Source (ESXi only): in this mode, a 1:1 mapping is made between the virtual interface of a VM and an uplink port. Traffic sent from that VM interface will always leave the N-VDS via the mapped uplink.

Load Balance Source

  • Load Balance Source MAC (ESXi only): in this mode, the mapping is not made between a VM interface and an uplink but between a VM MAC address and an uplink, so if a VM has more than one MAC address, each MAC address can use a separate uplink to send traffic.

Note that the teaming policy has nothing to do with the redundancy and load balancing among the pNICs within a LAG. For instance, if we use the Failover Order teaming policy for the Uplinks of an N-VDS, we can still assign the active and standby uplinks to LAG interfaces. In this case the active Uplink only fails when all interfaces in the LAG are down. We are also totally free to assign one Uplink interface to a LAG and another one to a single pNIC, as illustrated below.

LAG and Single pNIC combination
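
As an illustration of the LAG-plus-single-pNIC combination, below is a sketch of the "teaming" block that could be part of an uplink profile sent to the NSX-T API. The policy values FAILOVER_ORDER, LOADBALANCE_SRCID and LOADBALANCE_SRC_MAC correspond to the three teaming policies described above; the uplink and LAG names are purely illustrative and the field names are assumptions based on the NSX-T Manager API, so check them against your version.

```python
# Sketch of an uplink profile "teaming" block (field names assumed):
# the active uplink is backed by a LAG, the standby by a single pNIC.
teaming = {
    "policy": "FAILOVER_ORDER",
    "active_list": [
        # backed by a LAG, so this uplink only fails when
        # all member pNICs of the LAG are down
        {"uplink_name": "lag-1", "uplink_type": "LAG"},
    ],
    "standby_list": [
        # backed by a single physical NIC
        {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
    ],
}
```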

Named Teaming Policy

The teaming policies we’ve discussed so far are default policies. On ESXi hypervisors we can override the default teaming policy with a so-called “named teaming policy”. With a named policy we can steer a specific traffic type (management, for instance) differently from the default traffic flow. For example, we configure Failover Order as our default teaming policy and specify U1 as the active uplink. We can then define a named policy for our management traffic and specify U2 as the active uplink interface for this type of traffic.

  • Named policies are only supported on ESXi hypervisors.
  • Named policies can only be applied to VLAN-backed segments (read more about segments).
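
The following sketch shows how the U1/U2 example above might look as a "named_teamings" entry inside an uplink profile. The structure and field names are assumptions based on the NSX-T Manager API (verify them for your version); the idea is simply that the named policy pins management traffic to uplink-2 while the default policy keeps uplink-1 active, and a VLAN-backed segment then references the policy by its name.

```python
# Sketch of a "named_teamings" list for an uplink profile (field names assumed).
# The default teaming keeps uplink-1 active; this named policy steers
# management traffic out of uplink-2 instead.
named_teamings = [
    {
        "name": "mgmt-teaming",          # referenced by VLAN-backed segments
        "policy": "FAILOVER_ORDER",
        "active_list": [
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    }
]
```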

Uplink Profiles

An Uplink Profile is a way to ensure consistency of N-VDS uplink configuration. We create an uplink profile (a template), define the following parameters in it, and then apply it to the uplink ports of N-VDS switches on different hosts.

  • The format of the uplinks of an N-VDS
  • The default teaming policy applied to those uplinks
  • The transport VLAN used for overlay traffic*
  • The MTU of the uplinks
  • The Network IO Control profile (read more about NIOC)

* A transport VLAN is the VLAN which carries the overlay (encapsulated) traffic of an N-VDS across the physical network.
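
Putting the parameters above together, here is a minimal sketch that creates an uplink profile through the NSX-T Manager API. The endpoint (/api/v1/host-switch-profiles), the UplinkHostSwitchProfile resource type and the field names reflect the NSX-T Manager API as I understand it, and the manager address, credentials, VLAN and MTU values are placeholders; verify everything against the API guide for your NSX-T version.

```python
# Sketch: create an uplink profile combining teaming policy, transport VLAN
# and MTU (endpoint, resource type and field names assumed; values are
# placeholders for a lab environment).
import requests

NSX_MANAGER = "nsx-manager.lab.local"   # placeholder FQDN
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials

uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "esxi-uplink-profile",
    "mtu": 1700,            # overlay encapsulation needs at least 1600 bytes
    "transport_vlan": 120,  # VLAN carrying the encapsulated overlay traffic
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
}

resp = requests.post(
    f"https://{NSX_MANAGER}/api/v1/host-switch-profiles",
    json=uplink_profile,
    auth=AUTH,
    verify=False,  # lab only: skip certificate validation
)
resp.raise_for_status()
print("Created uplink profile with id:", resp.json().get("id"))
```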

NSX-T Components and Architecture

NSX-T architecture is based on three building blocks: the Management Plane, the Control Plane, and the Data Plane.

The Management Plane holds the desired configuration. As its name implies, the whole NSX-T environment can be managed via the management plane, which provides an entry point for users and APIs. The main component of the Management Plane is the NSX-T Manager.

The Control Plane holds the runtime state of the NSX-T environment. Like any other network environment with a control plane, the NSX-T Control Plane is responsible for maintaining peering adjacencies, populating control plane tables (the routing table, for instance), and learning and re-populating information from the data plane (MAC addresses, etc.). The main component of the NSX-T Control Plane is the Controller Cluster, which we will discuss later in this post. NSX-T splits the control plane into two parts:

1- Central Control Plane (CCP). The CCP is implemented as a cluster of CCP virtual machines or CCP Nodes. These nodes are logically separated from the data plane, so a failure of these nodes does not have an impact on the traffic flow.

2- Local Control Plane (LCP). The LCP is the control plane part which is implemented in Transport Nodes. The LCP is responsible for programming distributed modules inside these nodes based on the information it gets from the CCP.

The Data Plane is where the data actually flows. Packets are forwarded in the Data Plane based on the tables populated by the Control Plane. Data flows inside, from, to, or through Transport Nodes, so Transport Nodes are the main components of the NSX-T Data Plane. LCP modules (control plane daemons and the forwarding engine) are instantiated and run in Transport Nodes. Transport Nodes run an instance of the NSX Virtual Distributed Switch or N-VDS. In general, there are the following types of Transport Nodes:

  • Hypervisor Transport Nodes: these are the hypervisors on which workloads run. VMware ESXi and KVM are currently supported by NSX-T as hypervisors. Note that the N-VDS implementation on KVM hypervisors is based on OVS.
  • Edge Nodes: these are service appliances which run central services. Central services are the ones which cannot be distributed to hypervisors; think of NAT, VPN, etc. Edge nodes can be bare metal servers or virtual machines.
  • BareMetal Transport Nodes: these are mostly Linux-based machines. An NSX agent is installed on bare metal servers instead of an N-VDS.

NSX-T Manager Appliance

Starting with NSX-T 2.4, the NSX Manager appliance is deployed as a cluster of 3 nodes; instances of the controller, the policy manager and of course the NSX manager coexist in one virtual machine, the NSX Manager Appliance.

NSX Manager Appliance
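
Since the three appliance nodes carry both the manager and controller roles, a quick health check of the cluster can be done via the API. This is a sketch only: the /api/v1/cluster/status endpoint and the mgmt_cluster_status / control_cluster_status fields are based on my reading of the NSX-T Manager API and should be verified for your version; the manager address and credentials are placeholders.

```python
# Sketch: check the health of the 3-node NSX Manager cluster
# (endpoint and field names assumed; verify against your NSX-T version).
import requests

NSX_MANAGER = "nsx-manager.lab.local"   # placeholder FQDN
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials

resp = requests.get(
    f"https://{NSX_MANAGER}/api/v1/cluster/status",
    auth=AUTH,
    verify=False,  # lab only: skip certificate validation
)
resp.raise_for_status()
status = resp.json()

# The appliance cluster reports a management and a control cluster status
print("Management cluster:", status.get("mgmt_cluster_status", {}).get("status"))
print("Control cluster:   ", status.get("control_cluster_status", {}).get("status"))
```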

Differences in architecture and components between NSX-v and NSX-T

| Plane/Platform | NSX-v | NSX-T |
| --- | --- | --- |
| Management Plane | One NSX Manager. Management and operations via vCenter | 3 Manager Appliances, combined with Controller and Policy roles. Management and operations through NSX Manager |
| Control Plane | Controller VMs with controller function only | Controller and Manager functions combined in one VM (3 VMs in a cluster) |
| Data Plane | Supports ESXi hypervisors. Based on vSphere Distributed Switch. | Supports ESXi and KVM. Based on N-VDS |

