Before diving head first into how to enable the Supervisor Cluster, it's important to clarify a few aspects. There are several great posts (here and here) on how to deploy Tanzu on vSphere automatically. The reason I chose to write a step by step guide is that going through the manual steps helped me clarify some aspects. I will not be covering the networking part. There are two ways of enabling Tanzu on vSphere: using NSX-T, or using vSphere networking and a load balancer.
The Supervisor Cluster is a cluster enabled for vSphere with Tanzu. There is a one to one mapping between the Supervisor Cluster and the vSphere cluster. This matters because there are features that are defined only at Supervisor Cluster level and inherited at Namespace level. A vSphere Namespace represents a set of resources where vSphere Pods, Tanzu Kubernetes clusters and VMs can run. It is similar to a resource pool in the sense that it brings together the compute and storage resources that can be consumed. A Supervisor Cluster can have many Namespaces; however, at the time of writing there is a limit of 500 namespaces per vCenter Server. Depending on how you map namespaces to internal organizational units, this can also be important. The high level architecture and components of a Supervisor Cluster can be seen here.
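To make the one-to-one mapping and the Namespace concept a bit more concrete, here is a minimal sketch that queries vCenter Server for its Supervisor Clusters and the Namespaces defined on them. It assumes the vCenter REST API endpoints available in 7.0 U2; the host name and credentials are placeholders, and disabling certificate verification is only acceptable in a lab with self-signed certificates.

```python
# Minimal sketch: list Supervisor Clusters and their vSphere Namespaces
# via the vCenter REST API. VC_HOST, USER and PASSWORD are placeholders.
import requests

VC_HOST = "vcs_fqdn"   # placeholder vCenter Server FQDN
USER, PASSWORD = "administrator@vsphere.local", "changeme"

base = f"https://{VC_HOST}"
session = requests.Session()
session.verify = False  # lab only (self-signed certificates)

# Create an API session token and attach it to subsequent requests
token = session.post(f"{base}/api/session", auth=(USER, PASSWORD)).json()
session.headers["vmware-api-session-id"] = token

# Supervisor Cluster enablement status per vSphere cluster
clusters = session.get(f"{base}/api/vcenter/namespace-management/clusters").json()
for c in clusters:
    print(c.get("cluster"), c.get("config_status"), c.get("kubernetes_status"))

# vSphere Namespaces created on those Supervisor Clusters
namespaces = session.get(f"{base}/api/vcenter/namespaces/instances").json()
for ns in namespaces:
    print(ns.get("namespace"), ns.get("cluster"))
```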
Requirements
- Configure NSX-T. Tanzu workloads need a T0 router configured on an edge cluster. All other objects (T1s, load balancers, segments) are created automatically during deployment. The recommended edge size is large, but medium works for lab deployments. Also, for lab use only, the edge cluster can run with a single edge node. Deploying and configuring NSX-T is not in the scope of this article
- vCenter Server level
- vSphere cluster with DRS and HA enabled
- content library for Tanzu Kubernetes cluster images, subscribed to https://wp-content.vmware.com/v2/latest/lib.json. If you don't have Internet connectivity from vCenter Server, you will need to download the images offline and upload them to the library. If Internet access is available via a proxy, you can add the proxy in the vCenter Server VAMI interface (https://vcs_fqdn:5480). A sketch of creating the subscribed library via the API is included after this list
- storage policies - for lab purposes, one policy can be created and used for all types of storage. Go to Policies and Profiles, create a new VM Storage Policy, enable host based rules and select Storage I/O Control
- IP ranges - for ingress and egress traffic (routed), and for pods and services (internal traffic); an example addressing plan is sketched after this list
- latest version of vCenter Server - 7.0 U2a (required for some of the new functionality, such as the VM operator and namespace self-service)
- NTP configured and working for vCenter Server and NSX-T Manager (and the rest of the components)
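As mentioned in the content library requirement above, the subscribed library can also be created through the API instead of the UI. The sketch below assumes the content library REST endpoint available in vSphere 7 and reuses the authenticated `session` and `base` from the earlier snippet; the library name, datastore ID and exact spec fields are assumptions you should check against your environment.

```python
# Hedged sketch: create the subscribed content library for Tanzu Kubernetes
# cluster images. Reuses `session`/`base` from the previous snippet;
# "datastore-123" is a placeholder managed object ID.
library_spec = {
    "name": "tkg-content-library",
    "type": "SUBSCRIBED",
    "storage_backings": [
        {"type": "DATASTORE", "datastore_id": "datastore-123"}
    ],
    "subscription_info": {
        "subscription_url": "https://wp-content.vmware.com/v2/latest/lib.json",
        "authentication_method": "NONE",
        "automatic_sync_enabled": True,
        "on_demand": True,  # pull images only when they are actually requested
    },
}

resp = session.post(f"{base}/api/content/subscribed-library", json=library_spec)
resp.raise_for_status()
print("Created library with ID:", resp.json())
```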
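To make the IP requirement from the list concrete, here is an example addressing plan for a lab; the values are invented for illustration. Ingress and egress ranges must be routable in your environment, while the pod and service ranges stay internal to the Supervisor Cluster.

```python
# Example lab addressing plan (illustrative values only, adjust to your network).
workload_networks = {
    "ingress_cidr": "10.10.100.0/24",  # load balancer / ingress VIPs (routed)
    "egress_cidr":  "10.10.101.0/24",  # SNAT addresses for outbound traffic (routed)
    "pod_cidr":     "10.244.0.0/20",   # internal pod networking
    "service_cidr": "10.96.0.0/22",    # internal Kubernetes services
}
```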