Thursday, May 13, 2021

vSphere with Tanzu - Enable Supervisor Cluster using PowerCLI

In the previous post we looked at how to manually enable the Supervisor Cluster on a vSphere cluster. Now we'll reproduce the same steps from the GUI in a small PowerCLI script.

PowerCLI 12.1.0 brought new cmdlets in the VMware.VimAutomation.WorkloadManagement module, and one of them is Enable-WMCluster. We will be using this cmdlet to enable the Tanzu Supervisor Cluster. The following example uses NSX-T, but the cmdlet can also be used with vSphere Distributed Switch networking.
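
To see what else the module offers, you can list its cmdlets first:

# List the workload management cmdlets shipped with PowerCLI 12.1.0
Get-Command -Module VMware.VimAutomation.WorkloadManagement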

The following script is very simple. First, we need to connect to vCenter Server and NSX-T Manager:

Connect-VIServer -Server vc11.my.lab
Connect-NsxtServer -Server nsxt11.my.lab

Next we define the variables (all the values that were asked for in the UI wizard).

The cluster where we enable Tanzu, the content library and the storage policies:

$vsphereCluster = Get-Cluster "MYCLUSTER"
$contentLibrary = "Tanzu subscribed"
$ephemeralStoragePolicy = "Tanzu gold"
$imageStoragePolicy = "Tanzu silver"
$masterStoragePolicy = "Tanzu gold"

Management network info for the Supervisor Cluster VMs:

$mgmtNetwork = Get-VirtualNetwork "Mgmt-Network"
$mgmtNetworkMode = "StaticRange"
$mgmtNetworkStartIPAddress = "192.168.100.160"
$mgmtNetworkRangeSize = "5"
$mgmtNetworkGateway = "192.168.100.1"
$mgmtNetworkSubnet = "255.255.255.0"
$distributedSwitch = Get-VDSwitch -Name "Distributed-Switch"

DNS and NTP servers:

$masterDnsSearchDomain = "my.lab"
$masterDnsServer = "192.168.100.2"
$masterNtpServer = "192.168.100.5"
$workerDnsServer = "192.168.100.2"

Tanzu details - control plane size and the external and internal IP subnets:

$size = "Tiny" 
$egressCIDR = "10.10.100.0/24"
$ingressCIDR = "10.10.200.0/24"
$serviceCIDR = "10.96.0.0/23"
$podCIDR = "10.244.0.0/23"

One more parameter needs to be provided: the Edge cluster ID. For this we use the NSX-T Manager connection and query the edge clusters service:

$edgeClusterSvc = Get-NsxtService -Name com.vmware.nsx.edge_clusters
$results = $edgeClusterSvc.list().results
$edgeClusterId = ($results | Where {$_.display_name -eq "tanzu-edge-cluster"}).id

The last thing is to put all the parameters together in the cmdlet and run it against the vSphere cluster object:

$vsphereCluster | Enable-WMCluster `
-SizeHint $size `
-ManagementVirtualNetwork $mgmtNetwork `
-ManagementNetworkMode $mgmtNetworkMode `
-ManagementNetworkStartIPAddress $mgmtNetworkStartIPAddress `
-ManagementNetworkAddressRangeSize $mgmtNetworkRangeSize `
-ManagementNetworkGateway $mgmtNetworkGateway `
-ManagementNetworkSubnetMask $mgmtNetworkSubnet `
-MasterDnsServerIPAddress $masterDnsServer `
-MasterNtpServer $masterNtpServer `
-MasterDnsSearchDomain $masterDnsSearchDomain `
-DistributedSwitch $distributedSwitch `
-NsxEdgeClusterId $edgeClusterId `
-ExternalEgressCIDRs $egressCIDR `
-ExternalIngressCIDRs $ingressCIDR `
-ServiceCIDR $serviceCIDR `
-PodCIDRs $podCIDR `
-WorkerDnsServer $workerDnsServer `
-EphemeralStoragePolicy $ephemeralStoragePolicy `
-ImageStoragePolicy $imageStoragePolicy `
-MasterStoragePolicy $masterStoragePolicy `
-ContentLibrary $contentLibrary

And as simple as that, the cluster will be enabled (in a scripted and repeatable way). 
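
Once the cmdlet returns, you can check the result with Get-WMCluster, which comes in the same module (a minimal sketch; the properties returned may differ between PowerCLI versions):

# Retrieve the Supervisor Cluster object and inspect its configuration
Get-WMCluster | Format-List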

Tuesday, May 4, 2021

vSphere with Tanzu - Enable Supervisor Cluster

Before diving head first into how to enable the Supervisor Cluster, it's important to clarify a few aspects. There are several great posts (here and here) on how to deploy Tanzu on vSphere automatically. The reason I chose to present a step-by-step guide is that going through the manual steps helped me clarify some aspects. I will not be covering the networking part. There are two ways of enabling Tanzu on vSphere - using NSX-T, or using vSphere networking and a load balancer.

The Supervisor Cluster is a vSphere cluster enabled for vSphere with Tanzu. There is a one-to-one mapping between the Supervisor Cluster and the vSphere cluster. This matters because some features are defined at Supervisor Cluster level only and inherited at Namespace level. A vSphere Namespace represents a set of resources where vSphere Pods, Tanzu Kubernetes clusters and VMs can run. It is similar to a resource pool in the sense that it brings together the compute and storage resources that can be consumed. A Supervisor Cluster can have many Namespaces; however, at the time of writing there is a limit of 500 namespaces per vCenter Server. Depending on how you map namespaces to internal organizational units, this can also be important. The high-level architecture and components of a Supervisor Cluster can be seen here.

Requirements

  • Configure NSX-T. Tanzu workloads need a T0 router configured on an edge cluster. All other objects (T1s, load balancers, segments) are configured automatically during pod deployment. The recommended edge size is Large, but Medium works for lab deployments. Also, for labs only, the edge cluster can run with a single edge node. Deploying and configuring NSX-T is not in the scope of this article
  • vCenter Server level
    • vSphere cluster with DRS and HA enabled (see the PowerCLI sketch after this list)
    • content library for Tanzu Kubernetes cluster images, subscribed to https://wp-content.vmware.com/v2/latest/lib.json. In case you don't have Internet connectivity from vCenter Server, you will need to download the images offline and upload them to the library. If you can reach the Internet via a proxy, you can add the proxy in the vCenter Server VAMI interface (https://vcs_fqdn:5480)
    • storage policies - for lab purposes, a single policy can be created and used for all types of storage. Go to Policies and Profiles and create a new VM Storage Policy - enable host based rules and select Storage I/O Control

  • IPs - for ingress and egress traffic (routed), and for pod and service networks (internal traffic)
  • latest version of vCenter Server - 7.0 U2a (required for some of the new functionality - VM operator and namespace self-service)
  • NTP working and configured for vCenter Server and NSX-T Manager (and the rest of the components)
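
The vCenter Server prerequisites above can also be put in place with PowerCLI. A minimal sketch, assuming the content library cmdlets available since PowerCLI 12 and a datastore named "datastore1" backing the library (names are placeholders, adjust to your environment):

# Enable DRS and HA on the target cluster
Get-Cluster "MYCLUSTER" | Set-Cluster -DrsEnabled:$true -HAEnabled:$true -Confirm:$false

# Create the subscribed content library for Tanzu Kubernetes cluster images
New-ContentLibrary -Name "Tanzu subscribed" -Datastore (Get-Datastore "datastore1") `
    -SubscriptionUrl "https://wp-content.vmware.com/v2/latest/lib.json" -AutomaticSync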

Enabling the Supervisor Cluster is pretty straightforward - go to Workload Management > Clusters > Add Cluster. The wizard will take you through the following steps.

First, select the vCenter Server and the type of networking. If you don't have NSX-T configured, you can use a vSphere Distributed Switch, but a load balancer (HAProxy or Avi) needs to be installed first.


Then select the vSphere cluster on which to enable the Supervisor Cluster.

Choose the size of the control plane VMs - the smaller they are, the smaller the Kubernetes environment they can support.


Map storage policies to the types of storage in the Supervisor Cluster.


Add the management network details. It is important to clarify that the Supervisor VMs have 2 NICs - one connected to a vSphere distributed port group with access to vCenter Server and NSX-T Manager, and another connected to the Kubernetes service network. Check "View Network Topology" in this step to get a clear picture of the Supervisor VM configuration. The Supervisor VMs also need a free range of 5 IPs - in my case I am selecting a range from the management network.


Next, add the network details for the ingress and egress networks, and also for the internal cluster networks (service and pod). Ingress and egress networks are used to access services inside the Kubernetes cluster via DNAT (ingress) and by internal services to reach the outside world via SNAT (egress).


In case you use the same DNS server for the management and service networks, the server must be reachable over both interfaces. The service network will use an IP from the egress network to reach DNS.

Lastly, add the content library, review the configuration and give it a run. 

Once the cluster is deployed successfully, you will see it in Ready state:


You can now create namespaces and Tanzu Kubernetes guest clusters. To access the cluster, you will need to connect to https://cluster_ip and download the kubectl vSphere plugin.
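
With the plugin installed, logging in looks roughly like this (a sketch: cluster_ip is the Supervisor Cluster IP from above, my-namespace is a hypothetical namespace you have access to, and flags may differ between plugin versions):

# Log in to the Supervisor Cluster using the vSphere plugin for kubectl
kubectl vsphere login --server=https://cluster_ip --insecure-skip-tls-verify
# Switch to the context of a namespace you have permissions on (hypothetical name)
kubectl config use-context my-namespace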

Now that we have gone through all the manual steps, we can look at automating the configuration using PowerCLI in the next post.