Wednesday, June 19, 2019

NSX-T Part 2 - Going Through Changes

The pop reference in the title came, in my mind, from the metal ballad world, but as I found out it can just as easily be hip-hop. This is where any reference to the musical world stops, however, as the following article focuses on some of the major changes brought in by NSX-T.

NSX-T is definitely a game changer. The latest release (version 2.4) brings in a lot of new functionality, reaching parity with its vSphere counterpart. Coming from an NSX-v background, I am interested in the main differentiators that NSX-T brings into the game.

1. Cross-platform support and independence from vCenter Server

The two come together, since supporting multiple platforms required decoupling NSX Manager from vCenter Server. Besides vSphere, NSX-T also supports non-ESXi hypervisors such as KVM, and I expect to see more added in the near future. Lastly, NSX-T supports bare metal workloads and native public clouds (see the YouTube video on integrating AWS EC2 instances here).

I will also add here the NSX-T Container Plugin (NCP), which provides integration with container orchestration platforms (Kubernetes) and container-based platform-as-a-service offerings (OpenShift, Pivotal Cloud Foundry).

That's a big step beyond vSphere-only environments.

2. Encapsulation protocol

At the core of any network virtualization technology is an encapsulation protocol that allows L2 frames to be carried over L3. NSX-v uses VXLAN; NSX-T uses Geneve. At the time of writing, Generic Network Virtualization Encapsulation (Geneve) is still an IETF draft, although on the standards track. According to people much wiser than me who were involved in designing the Geneve specification, the new protocol brings together the best of other protocols such as VXLAN, NVGRE and STT.

One of the main advantages of Geneve is that it uses a variable-length header, which allows it to carry as many options as necessary without being limited to the fixed header and 24-bit identifier of VXLAN and NVGRE, or paying all the time for a 64-bit STT-style context.
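If you want to see the new encapsulation on the wire, a quick way is to capture the overlay traffic on its registered UDP port: Geneve uses UDP 6081, whereas VXLAN uses UDP 4789. A minimal sketch for a KVM transport node (the interface name eth0 is just a placeholder for your overlay uplink):

# capture Geneve-encapsulated overlay traffic (UDP 6081); recent tcpdump versions can decode the Geneve header
sudo tcpdump -i eth0 -nn udp port 6081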

3. N-VDS

NSX-T virtual distributed switches are called N-VDS and they are independent of vCenter Server. For this reason they come in two flavors (actually three), depending on the platform:
- ESXi - NSX's own version of the virtual switch, implemented as an opaque switch in vSphere
- KVM - VMware's build of Open vSwitch
- cloud and bare metal - the NSX agent

An opaque switch is a network created and managed by a separate entity outside of vSphere - in our case, the logical networks created and managed by NSX-T. They appear in vCenter Server as opaque networks and can be used as backing for VMs. Although not a new thing, they are different from the NSX-v virtual switches. This means that installing NSX-T even in a vSphere-only environment will still bring in opaque networks instead of the NSX-v logical switches.

4. Routing 

Differences are also introduced at the routing level, where NSX-T brings in a two-tiered routing model. A very interesting blog article is here, and I will briefly use a picture from it:

In a very short explanation:

  • Tier-0 logical router provides a gateway service to the physical world 
  • Tier-1 logical router provides services for tenants (multi-tenancy) and cloud management platforms 
Both tiers are not always required - depending on the services needed, a single tier can be implemented.


In a way, NSX-v could provide the same tiered model using Edge Services Gateways (ESGs), but in certain designs the routing paths were not optimal. NSX-T delivers optimized routing, simplicity and more architectural flexibility. Also, with the new model, the Distributed Logical Router (DLR) control VM has been removed.


I will let you decide which of the changes briefly presented above are more important, and I will just list them below:
  • standalone (vCenter independent) 
  • multi-hypervisor support
  • container integration
  • Geneve
  • N-VDS and Open vSwitch
  • multi-tiered, optimized routing
However, if you are moving around the network virtualization world and haven't picked up on NSX-T yet, maybe it's time to start.

Tuesday, June 11, 2019

Docker Containers and Backup - A Practical Example Using vSphere Storage for Docker

A few months ago I started looking into containers, trying to understand both the technology and how it relates to a question I have started hearing recently: "Can you back up containers?"

I do not want to discourage anyone from reading the post (it is interesting), but going further a basic understanding of containers and Docker technology is required. This post will not explain what containers are. It focuses on one aspect of Docker containers - persistent storage. Contrary to popular belief, containers can have, and may need, persistent storage. Docker volumes and volume plugins are the technologies for it.

Docker volumes are used to persist data outside the container's writable layer - by default on the file system of the Docker host. Volume plugins extend Docker volumes across different hosts and different environments: for example, instead of writing container data to the host's file system, the container writes data to an AWS EBS volume, Azure Blob storage or a vSphere datastore.
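As a minimal sketch of the difference (the volume names are placeholders, and the vsphere alias assumes the plugin installed later in this post):

# local driver - data is persisted on the Docker host's own file system
sudo docker volume create localVol

# vSphere volume plugin - data is persisted as a VMDK on a vSphere datastore
sudo docker volume create --driver=vsphere --name=vsphereVol -o size=1gb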

Let's take a step down from the abstract world. We have a dockerized application: a voting application. It uses a PostgreSQL database to keep the results of the votes. The PostgreSQL DB needs a place to keep its data. We want that place to be outside the Docker host's storage, and since we are running in a vSphere environment we'll use vSphere Storage for Docker. Putting it all in a picture looks like this (for simplicity, only the PostgreSQL container is represented):


We'll start with the Docker host (the VM running on top of vSphere). Docker Engine is installed on the VM, and it runs containers and creates volumes. The DB runs in the container and needs some storage. Let's take a two-step approach:

First, create the volume. Docker Engine, using the vSphere Storage for Docker plugin (the vDVS plugin on the Docker host and the vDVS VIB on ESXi), creates a virtual disk (VMDK) on the ESXi host's datastore and maps it back to the Docker volume. Now we have a permanent storage space that we can use. 

Second, the same Docker Engine presents the volume to the container and mounts it at a file system mount point inside the container. 

This makes it possible for the DB running inside the container to write to the VMDK on the vSphere datastore (without knowing it does so, of course). Pretty cool.
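Put as commands, the two steps might look like the sketch below. This is only an illustration: the volume name pgdata, the size, the postgres:11 image and the password are my own choices, and it assumes the plugin is already installed under the alias vsphere (installation is covered later in this post).

# step 1: the vSphere plugin creates a VMDK on the datastore and maps it to a Docker volume
sudo docker volume create --driver=vsphere --name=pgdata -o size=10gb

# step 2: the volume is mounted into the container at PostgreSQL's data directory
sudo docker container run -d --name db -e POSTGRES_PASSWORD=example --mount source=pgdata,target=/var/lib/postgresql/data postgres:11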

The VMDK is automatically attached to the Docker host (the VM). Moreover, when the VMDK is created from the Docker command line, it can be given attributes that apply to any VMDK. This means it can be created as:
  • independent persistent or dependent (very important, since this affects whether the VMDK can be snapshotted or not)
  • thick (eager or lazy zeroed) or thin
  • read-only 
It can also be assigned a vSAN policy. The VMDK persists data for the container and beyond the container's lifetime: the container can be destroyed, but the VMDK keeps existing on the datastore. 
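As a sketch, these attributes map to -o options on docker volume create. The option names below are taken from the vSphere Storage for Docker documentation as I remember them - treat them as an assumption and check the docs for the version you install (the volume and policy names are placeholders):

# dependent (snapshot-friendly), thin-provisioned VMDK
sudo docker volume create --driver=vsphere --name=volThin -o size=5gb -o diskformat=thin -o attach-as=persistent

# eager zeroed thick VMDK, mounted read-only in containers
sudo docker volume create --driver=vsphere --name=volThick -o size=5gb -o diskformat=eagerzeroedthick -o access=read-only

# VMDK with an existing vSAN storage policy applied
sudo docker volume create --driver=vsphere --name=volVsan -o size=5gb -o vsan-policy-name=myPolicy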

Let's recap: we are using a Docker volume plugin to present vSphere datastore storage space to an application running within a Docker container. Or, shorter: the PostgreSQL DB running within the container writes data to a VMDK. 

Going back to the question - can we back up the container? Since the container itself is actually a runtime instance of a Docker image (a template), it does not contain any persistent data. The only data we need is actually written to the VMDK. In this case the answer is yes: we can back it up the same way we back up any vSphere VM. We will actually back up the VMDK attached to the Docker host itself. 

Probably the hardest question to answer when talking about containers is what data to protect, as the container itself is just a runtime instance. By design, containers are ephemeral and immutable. The writable space of a container can be either directly in memory (tmpfs) or on a Docker volume; if we need data to persist across container lifecycles, we need to use volumes. Volumes can be implemented by a multitude of storage technologies, and this complicates the backup process. Container images represent the template from which containers are launched; they are also persistent data that could be a source for backup. 
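A small sketch of the two kinds of writable space (the image and mount paths are arbitrary examples):

# in-memory writable space - gone as soon as the container stops
sudo docker container run --rm -it --tmpfs /scratch alpine sh

# volume-backed writable space - survives the container
sudo docker container run --rm -it --mount source=appData,target=/data alpine sh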


Steps for installing the vSphere Docker Volume Service and testing volumes
  • prerequisites: 
    • a Docker host already exists and has access to Docker Hub
    • download the VIB file (got mine from here)
  • log on to the ESXi host, transfer and install the VIB

esxcli software vib install -v /tmp/VMWare_bootbank_esx-vmdkops-service_0.21.2.8b7dc30-0.0.1.vib


  • restart hostd and check that the module has been loaded
/etc/init.d/hostd restart
ps -c | grep vmdk


  • log on to the Docker host and install the plugin 
sudo docker plugin install --grant-all-permissions --alias vsphere vmware/vsphere-storage-for-docker:latest
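Optionally, verify that the plugin shows up as installed and enabled:

sudo docker plugin ls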

  • create a volume and inspect it - by default the VMDK is created as independent persistent, which does not allow snapshots to be taken; add the option -o attach-as=persistent to create a dependent VMDK

sudo docker volume create --driver=vsphere --name=dockerVol -o size=1gb
sudo docker volume inspect dockerVol
sudo docker volume create --driver=vsphere --name=dockerVolPersistent -o size=1gb -o attach-as=persistent



  • go in the vSphere Client to the datastore where the Docker host is configured and check for a new folder called dockvols and for the VMDK of the volume created earlier


  • since the volumes are not used by any container, they are not attached to the Docker host VM. Create a container and attach the dependent volume to it

sudo docker container run --rm -d --name devtest --mount source=dockerVolPersistent,target=/vmdk alpine sleep 1d
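Before running the backup, it can be checked that the volume is mounted inside the container and in use (the /vmdk mount point comes from the command above):

# the volume should show up as a mounted file system inside the container
sudo docker exec devtest df -h /vmdk

# inspect the volume again now that a container is using it
sudo docker volume inspect dockerVolPersistent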

Lastly, create a backup job with the Docker host as source, exclude other disks and run it.