This blog series will start with an architectural overview, then move on to deploying NSX-T Manager and configuring the controllers, followed by basic configuration of the NSX-T environment. Once we have an NSX-T overlay network working between two VMs, we will continue with exploring routers and edges.
At a high level, the architecture consists of the same three planes as NSX-V:
- management plane - the single API entry point to the system (see the sketch after this list); it persists user configuration, handles user queries, and performs operational tasks on all management, control, and data plane nodes in the system
- control plane - computes all ephemeral runtime state based on configuration from the management plane, disseminates topology information reported by the data plane elements, and pushes stateless configuration to forwarding engines
- data plane - performs stateless forwarding/transformation of packets based on tables populated by the control plane, and reports topology information back to the control plane
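To make the "single API entry point" concrete, here is a minimal sketch of talking to the management plane over its REST API. It assumes an NSX-T 2.x style manager at a hypothetical address with lab credentials; GET /api/v1/cluster/status is used to read the overall management and control cluster health.

```python
# Minimal sketch: querying NSX-T Manager, the management plane API entry point.
# The hostname and credentials below are assumptions for a lab environment.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")            # hypothetical lab credentials

resp = requests.get(
    f"{NSX_MANAGER}/api/v1/cluster/status",
    auth=AUTH,
    verify=False,  # lab only: self-signed certificate (urllib3 will warn)
)
resp.raise_for_status()
status = resp.json()
print(status.get("mgmt_cluster_status"))        # management cluster health
print(status.get("control_cluster_status"))     # control cluster health
```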
Management Plane Agent (MP Agent) is a component that runs on controllers and transport nodes (hypervisors) and is responsible for:
- persisting the desired state of the system
- message communication to and from the management plane (configuration, statistics, status, real-time data)
The control plane is split into the Central Control Plane (CCP), which runs on the Controller cluster, and the Local Control Plane (LCP), which runs on the transport nodes. The CCP provides configuration to other controllers and to the LCP, and it is decoupled from the data plane (a failure in the CCP does not impact the data plane). The LCP is responsible for most of the ephemeral runtime state computation and for pushing stateless configuration to the forwarding engine in the data plane. The LCP's fate is tied to the transport node that hosts it.
The data plane's forwarding engine performs stateless forwarding of packets based on tables populated by the control plane. The data plane does maintain state for features like TCP termination, but that state concerns payload manipulation and is different from the state maintained at the control plane level, which is about how to forward packets (for example, MAC:IP tables).
N-VDS (NSX Managed Virtual Distributed Switch or KVM Open vSwitch)
The N-VDS provides traffic forwarding. It is presented as a hidden distributed switch (it will not appear in vCenter Server, for example). In vSphere environments it is always created and configured by NSX Manager; in KVM it can be predefined. This is a major change from NSX-V, and it also means you will need at least one unused vmnic on the ESXi host to dedicate to the N-VDS.
Another change is that the encapsulation protocol is now GENEVE (see the latest IETF draft).
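For illustration, here is a small sketch of the 8-byte GENEVE base header described in the IETF specification; the VNI value is an arbitrary example (NSX-T assigns VNIs itself and carries GENEVE over UDP port 6081).

```python
# Sketch of the 8-byte GENEVE base header, just to show what the tunnel
# endpoints put on the wire in front of the encapsulated Ethernet frame.
import struct

def geneve_base_header(vni: int, opt_len_words: int = 0,
                       proto: int = 0x6558) -> bytes:
    """Build a GENEVE base header.

    vni            -- 24-bit Virtual Network Identifier
    opt_len_words  -- length of the variable options, in 4-byte words
    proto          -- inner protocol (0x6558 = Transparent Ethernet Bridging)
    """
    ver = 0                                  # current GENEVE version is 0
    byte0 = (ver << 6) | (opt_len_words & 0x3F)
    byte1 = 0                                # O and C flags cleared, reserved bits zero
    vni_and_rsvd = (vni & 0xFFFFFF) << 8     # 24-bit VNI followed by 8 reserved bits
    return struct.pack("!BBHI", byte0, byte1, proto, vni_and_rsvd)

print(geneve_base_header(vni=5001).hex())    # '0000655800138900' for VNI 5001
```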
Virtual Tunnel Endpoint (VTEP)
The VTEP is the connection point at which encapsulation and decapsulation of overlay traffic take place.
Transport Node
A node capable of participating in NSX-T Data Center overlay or VLAN networking.
Transport Zone
Collection of transport nodes that defines the maximum span for logical switches.
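As a hedged sketch, a transport zone can also be created directly against the NSX-T 2.x REST API; the manager address, credentials, and names below are lab assumptions.

```python
# Sketch: creating an overlay transport zone via the NSX-T Manager REST API.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")            # hypothetical lab credentials

tz_payload = {
    "display_name": "tz-overlay",
    "host_switch_name": "nvds-overlay",   # the N-VDS this zone will use
    "transport_type": "OVERLAY",          # or "VLAN" for a VLAN transport zone
}
resp = requests.post(
    f"{NSX_MANAGER}/api/v1/transport-zones",
    json=tz_payload, auth=AUTH, verify=False,
)
resp.raise_for_status()
print("Transport zone id:", resp.json()["id"])
```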
Uplink Profile
Defines settings for the links from hypervisor hosts to NSX-T Data Center logical switches or from NSX Edge nodes to top-of-rack switches (active/standby links, VLAN ID, MTU).
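A rough sketch of the same idea through the API: an uplink profile is an UplinkHostSwitchProfile object in the NSX-T 2.x REST API. The uplink names, VLAN, and MTU below are purely illustrative.

```python
# Sketch: defining an uplink profile (active/standby teaming, transport VLAN, MTU).
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")            # hypothetical lab credentials

uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "uplink-profile-esxi",
    "mtu": 1600,                  # GENEVE needs headroom above the default 1500
    "transport_vlan": 120,        # VLAN ID for overlay (TEP) traffic
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
}
resp = requests.post(
    f"{NSX_MANAGER}/api/v1/host-switch-profiles",
    json=uplink_profile, auth=AUTH, verify=False,
)
resp.raise_for_status()
print("Uplink profile id:", resp.json()["id"])
```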
Logical switches - provide Layer 2 connectivity across multiple hosts
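Sketching this through the NSX-T 2.x REST API again (same hypothetical manager and credentials as the earlier sketches), a logical switch is created inside a transport zone; the transport zone id below is a placeholder.

```python
# Sketch: creating a logical switch in an overlay transport zone.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")            # hypothetical lab credentials

ls_payload = {
    "display_name": "ls-web",
    "transport_zone_id": "<overlay-tz-uuid>",   # placeholder: id returned when the zone was created
    "admin_state": "UP",
    "replication_mode": "MTEP",                 # hierarchical two-tier replication
}
resp = requests.post(
    f"{NSX_MANAGER}/api/v1/logical-switches",
    json=ls_payload, auth=AUTH, verify=False,
)
resp.raise_for_status()
print("Logical switch id:", resp.json()["id"])
```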
Logical router - provides north-south connectivity, enabling access to external networks, and east-west connectivity for networks within the same tenant. A logical router consists of two components:
- a distributed router (DR) - one-hop distributed routing between logical switches and/or logical routers connected to it
- one or more service routers (SR) - deliver services that are not currently implemented in a distributed fashion (such as NAT)
A logical router always has a DR, and it has an SR if any of the following is true (see the API sketch after this list):
- it is a Tier-0 router, even if stateful services are not configured
- it is a Tier-1 router linked to a Tier-0 router and has stateful services configured (such as NAT)
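As a sketch of how the tiers map to the API (NSX-T 2.x style, with a hypothetical manager and placeholder ids), Tier-0 and Tier-1 logical routers are created as LogicalRouter objects; the actual Tier-1-to-Tier-0 link is configured separately through router ports and is not shown here.

```python
# Sketch: creating a Tier-0 and a Tier-1 logical router.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")            # hypothetical lab credentials

tier0 = {
    "display_name": "t0-provider",
    "router_type": "TIER0",
    "edge_cluster_id": "<edge-cluster-uuid>",   # placeholder: the SR runs on these Edges
    "high_availability_mode": "ACTIVE_STANDBY",
}
tier1 = {
    "display_name": "t1-tenant",
    "router_type": "TIER1",                     # DR only until stateful services are added
}

for payload in (tier0, tier1):
    resp = requests.post(
        f"{NSX_MANAGER}/api/v1/logical-routers",
        json=payload, auth=AUTH, verify=False,
    )
    resp.raise_for_status()
    print(payload["display_name"], "->", resp.json()["id"])
```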
NSX Edge - provides routing services and connectivity to external networks as well as non-distributed services.
In the next post we will see how to install NSX-T Manager and how all the concepts mentioned above come together when configuring it.