Tuesday, October 19, 2021

Certifications during pandemics - Pearson VUE online proctoring

Recently I had the opportunity to take (and pass) three certification exams, and I did it using Pearson's OnVUE online proctoring. Talking to colleagues of mine, I found there are mixed feelings about the experience. For me it was overall a good one, so I've decided to put together a few thoughts about how it went.


The good 

You can schedule the exam whenever you want, even from one day to the next. You take it at home in your own office, so it's a familiar space, and there is no commute to a test center and back. For me these are the biggest advantages.

The not so good

You have to clean up your desk and disconnect everything. If you have a docking station, multiple monitors and other equipment, that means a bit of work. If you keep other things around your desk (like my old film cameras that I keep as decorations), you will need to move those too. Be prepared to use your webcam to show that the cables are unplugged.

Another thing to take care of: no one is allowed to enter the room, be it a kid, a partner or a pet. This may prove inconvenient.

The app delivering the exam is not optimized for wide monitors, which stretches the questions across the screen and places the buttons in strange positions. You get used to it, or better, use the laptop screen.

The weird

The proctor experience can vary a bit. Using an external monitor was fine for two exams, but not during another one. The weirdest part: during one exam I was told not to look up because it is not allowed, and that doing it again would fail my exam (!?!). Small issue here: when I try to remember things, I involuntarily look up. Luckily I managed to pass the exam without remembering too many things.

Connectivity issues

One morning it took longer to connect and to get a proctor online with me; it was more than half an hour before I could start the process. After that all went well. No big deal, just start on time.


Once the exam starts, the experience is the same as in any test center. I am not sure I would want to go back to a test center unless absolutely necessary (like being failed for looking up during the exam).

 

  



Wednesday, October 6, 2021

What's new in vRealize Automation 8.5.x and 8.6

The latest releases of vRealize Automation bring in a series of interesting features. 


Cloud Resource 

The Cloud Resource view was introduced back in May 2021 for vRA Cloud and allows you to manage resources directly instead of managing them through resource groups (deployments). It now lets you manage all discovered, onboarded and provisioned deployments, trigger power day 2 actions on discovered resources, and bulk manage multiple resources at the same time.


ABX enabled deployment for custom resources

Provisioning a custom resource allows you to track and manage the custom resource and its properties during its whole lifecycle. No dynamic types are needed for full lifecycle management. 
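
As a rough illustration (the resource type name and its properties below are hypothetical, not taken from the release notes), an ABX-backed custom resource is consumed in a cloud template like any other resource type:

resources:
  Custom_DNS_Record_1:
    type: Custom.DNSRecord        # hypothetical custom resource type backed by ABX create/update/delete actions
    properties:
      hostname: app01             # illustrative properties defined in the custom resource schema
      ipAddress: 192.168.10.25

The idea, as described above, is that the provisioned object and its properties remain trackable and manageable for the rest of its lifecycle without needing vRO dynamic types.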



Cloud Templates Dynamic Input 

Use vRO actions as dynamic external value sources to define different types of input values directly in the Cloud Template, and bind local inputs to the dynamic inputs as action parameters.
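
As a sketch only (the action path is hypothetical, and the exact syntax for binding local inputs as action parameters may differ between versions, so treat this as an assumption rather than a reference), a dynamic input backed by a vRO action could look roughly like this in the inputs section:

inputs:
  environment:
    type: string
    title: Environment
    # hypothetical vRO action used as the dynamic external value source
    $dynamicEnum: /data/vro-actions/com.mylab.inputs/getEnvironments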







Kubernetes support in Code Stream Workspace

The Code Stream pipeline workspace now supports Docker and Kubernetes for continuous integration tasks. The Kubernetes platform manages the entire lifecycle of the container, similar to Docker. In the pipeline workspace you can choose Docker (the default selection) or Kubernetes, and then select the appropriate endpoint.


The Kubernetes workspace provides:

  • the builder image to use
  • image registry
  • namespace
  • node port
  • persistent volume claim
  • working directory
  • environment variables
  • CPU limit
  • memory limit

You can also choose to create a clone of the Git repository.
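
To make the list above more concrete, here is a rough sketch of how such a workspace section might look in a pipeline definition; the field names are only illustrative of the settings listed above, not an exact copy of the exported pipeline YAML schema:

workspace:
  type: KUBERNETES                    # instead of the default Docker workspace
  endpoint: my-k8s-endpoint           # hypothetical Kubernetes endpoint name
  image: mylab/cs-builder:latest      # builder image used to run the CI tasks
  registry: my-registry-endpoint
  namespace: codestream-ci
  nodePort: 30010
  persistentVolumeClaim: cs-workspace-pvc
  workingDirectory: /workspace
  environmentVariables:
    BUILD_ENV: dev
  limits:
    cpu: 1.0
    memory: 512
  autoCloneForTrigger: true           # clone the Git repository into the workspace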


Multi-cloud

vRA extends its Azure provisioning capabilities, including the ability to enable/disable boot diagnostics for Azure VMs as day 0/day 2 operations, and the ability to configure the names of the Azure NIC interfaces.


Other updates and new features 
  • Native SaltStack Config automation via modules for vSphere, VMC, and NSX
  • Leverage third-party integrations with Puppet Enterprise support for machines without a Public IP address
  • Deploy a VCD adapter for vRA
  • Onboard vSphere networks to support an additional resource type in the onboarding workflow

Thursday, August 26, 2021

VMworld 2021 - Sessions to watch


This year will be the second in a row when I don't get to do my favorite autumn activity: going to VMworld in Barcelona. But I do get to do part of it - attend the virtual VMworld 2021. And to make it as close as possible to the real experience, I will most probably add some Spanish red wine and jamon on the side.

As for the sessions I am looking forward to attending, here are a few of my choices:

VMware vSAN – Dynamic Volumes for Traditional and Modern Applications [MCL1084]

I've been involved recently in projects with Tanzu and vSAN, and this session with Duncan Epping and Cormac Hogan is the place to go to see how vSAN continues to evolve, to learn about new features and integration with Tanzu, and to hear some best practices.

The Future of VM Provisioning – Enabling VM Lifecycle Through Kubernetes [APP1564]

A session about what I think is one of the game changers introduced by VMware this year: including VM-based workloads in modern applications and using Kubernetes APIs to deploy, configure and manage them. I've been working with the VM Service since its official release in May and also wrote a small blog post about it earlier this month.

What's New in vSphere [APP1205]

This is one of the sessions I never miss. vSphere is still one of the fundamental technologies underpinning all the other transformations. I am interested in finding out about the latest capabilities, customer challenges and real-world customer successes.

Automation Showdown: Imperative vs Declarative [CODE2786]

There is no way I would miss Luc Dekens and Kyle Ruddy taking on the hot topic of imperative versus declarative infrastructure, explaining when and how you can and should use each approach, with practical examples.

Achieving Happiness: The Quest for Something New [IC1484]

I had the honor of meeting Amanda Blevins at the VMUG Leaders Summit right before the world decided to close. Her presentation wowed the crowd and was one of the highest rated. So this is something that shouldn't be missed, especially since the pandemic has been with us for 18 months and we could all use some happiness.

There are hundreds of sessions covering areas so diverse that you can find your pick whether your interests lie in AI, application modernization, Kubernetes, security, networking, personal development or plain old virtualization. See you at VMworld 2021!

Friday, August 20, 2021

vSphere with Tanzu - Create custom VM template for VM Operator

We've seen in the previous post how to enable and use VM Operator. We've also noticed that currently only two VM images are supported for deployment through VM Operator. What if we need to create our own image?

There is a way, but it is not supported by VMware, so if you go down this path you have to understand the risks.

What is so special about a VM image deployed using VM Operator? It uses cloud-init and OVF environment variables to initialize the VM.

Let's start with a new Linux VM template. We install VMware Tools, then cloud-init. Once cloud-init is installed, update its configuration as follows:

  • in /etc/cloud/cloud.cfg check the following value: disable_vmware_customization: true
    • setting it to true invokes the traditional script-based Guest Operating System Customization workflow (GOSC); if it is set to false, cloud-init customization will be used
  • create a new file /etc/cloud/cloud.cfg.d/99_vmservice.cfg and add the following line to it: network: {config: disabled}
    • this prevents cloud-init from configuring the network; you guessed it, VMware Tools will be used to configure the network instead
Before exporting the VM as an OVF template, clean up the cloud-init state (for example with cloud-init clean) so the template behaves like a fresh instance on first boot. This should be repeated on subsequent template updates too.

Next we'll customize the OVF file itself. We need to enable OVF environment variables so they can be used to transport data to cloud-init. For this to work, I copied the configuration from the VMware CentOS VM Service image OVF file and updated several sections:

In <VirtualSystem ovf:id="vm">, add the following OVF properties. Please note that you could (and should) change the labels and descriptions to match your template:

<ProductSection ovf:required="false">
  <Info>Cloud-Init customization</Info>
  <Product>Linux distribution for VMware VM Service</Product>
  <Property ovf:key="instance-id" ovf:type="string" ovf:userConfigurable="true" ovf:value="id-ovf">
      <Label>A Unique Instance ID for this instance</Label>
      <Description>Specifies the instance id.  This is required and used to determine if the machine should take "first boot" actions</Description>
  </Property>
  <Property ovf:key="hostname" ovf:type="string" ovf:userConfigurable="true" ovf:value="centosguest">
      <Description>Specifies the hostname for the appliance</Description>
  </Property>
  <Property ovf:key="seedfrom" ovf:type="string" ovf:userConfigurable="true">
      <Label>Url to seed instance data from</Label>
      <Description>This field is optional, but indicates that the instance should 'seed' user-data and meta-data from the given url.  If set to 'http://tinyurl.com/sm-' is given, meta-data will be pulled from http://tinyurl.com/sm-meta-data and user-data from http://tinyurl.com/sm-user-data.  Leave this empty if you do not want to seed from a url.</Description>
  </Property>
  <Property ovf:key="public-keys" ovf:type="string" ovf:userConfigurable="true" ovf:value="">
      <Label>ssh public keys</Label>
      <Description>This field is optional, but indicates that the instance should populate the default user's 'authorized_keys' with this value</Description>
  </Property>
  <Property ovf:key="user-data" ovf:type="string" ovf:userConfigurable="true" ovf:value="">
      <Label>Encoded user-data</Label>
      <Description>In order to fit into a xml attribute, this value is base64 encoded . It will be decoded, and then processed normally as user-data.</Description>
  </Property>
  <Property ovf:key="password" ovf:type="string" ovf:userConfigurable="true" ovf:value="">
      <Label>Default User's password</Label>
      <Description>If set, the default user's password will be set to this value to allow password based login.  The password will be good for only a single login.  If set to the string 'RANDOM' then a random password will be generated, and written to the console.</Description>
  </Property>
</ProductSection>


In <VirtualHardwareSection ovf:transport="iso">, add the following:

<vmw:ExtraConfig ovf:required="false" vmw:key="guestinfo.vmservice.defer-cloud-init" vmw:value="ready"/>

Save the OVF file and export it to the content library. The name must be DNS compliant and must not contain any capital letters.

Lastly, in the VM's YAML manifest, add an annotation to disable the image compatibility check done by VM Operator (only the metadata section is shown below):

metadata:
  name: my-vm-name
  labels:
    app: db-server
  annotations:
    vmoperator.vmware.com/image-supported-check: disable


Tuesday, August 10, 2021

vSphere with Tanzu - VM Operator

VM Operator is an extension to Kubernetes that implements VM management through Kubernetes. It was released officially at the end of April 2021 with vCenter Server 7.0 U2a. It is a small feature pushed through a vCenter Server patch, yet it brings a huge shift in the paradigm of VM management. It changes the way we look at VMs and the way we use virtualization. One could argue that Kubernetes already did that; I would say that unifying resource consumption across VMs and pods is a huge step forward. VM Operator brings into play not only Infrastructure as Code (IaC), it also enables GitOps for VMs.

Let's look briefly at the two concepts. IaC is the capability to define your infrastructure in a human-readable language. A lot of tools enable IaC: Puppet, Chef, Ansible, Terraform and so on. They are complex and powerful tools, some of them used in conjunction with others, and they share a particularity: each has its own language (Ruby, Python, HCL). GitOps expands the IaC concept: a Git repository becomes the only source of truth. Manifests (configuration files that describe the resources to be provisioned) are pushed to a Git repository monitored by a continuous deployment (CD) tool, which ensures that changes in the repository are applied in the real world. Kubernetes enables GitOps, and Kubernetes manifests are written in YAML. With the introduction of VM Operator the two concepts can be used together. For example, you could have a GitOps pipeline that deploys VMs from Kubernetes manifests, and then configuration management tools could make sure the VMs are customized to suit their purpose: deploying an application server, monitoring agents and so on.


In this post we will only look at the basics of deploying a VM through VM Operator. Once these concepts are clear, you can add other tools such as Git repositories, CD tools and configuration management.

So, what do we need to be able to provision a VM through VM Operator? 

We need vCenter Server updated to 7.0 U2a and a running Supervisor cluster.

At namespace level a storage policy needs to be configured. It is needed for both VM deployment and persistent volumes.



We need a content library with a supported VMware template uploaded (we will follow up soon with a post on how to create unsupported templates for VM Operator). At the time of writing, CentOS 8 and Ubuntu images are distributed through the VMware Marketplace (https://marketplace.cloud.vmware.com/, search for "VM Service Image").


The images come with cloud-init installed and are configured to transport user data through OVF environment variables to the cloud-init process, which in turn customizes the VM operating system.

In Workload Management, the VM Service allows the configuration of additional VM classes and content libraries. The VM classes and the content library are then assigned to the namespace so that VMs can be provisioned.


VM classes selected for a particular namespace:


Content library selected for a particular namespace:


Once all the prerequisites are in place, connect to the Supervisor cluster and select the namespace where you want to deploy the VM:

kubectl vsphere login --verbose 5 --server=https://192.168.2.1 --insecure-skip-tls-verify  -u cloudadmin@my.lab

kubectl config use-context my-app-namespace


Check that the VM images in the content library are available 

kubectl get virtualmachineimages


Create the VM manifest file - centos-db-2.yaml 

apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
 name: centos-db-2
 labels:
  app: my-app-db
spec:
 imageName: centos-stream-8-vmservice-v1alpha1.20210222.8
 className: best-effort-xsmall
 powerState: poweredOn
 storageClass: tanzu-gold
 networkInterfaces:
  - networkType: nsx-t
 vmMetadata:
  configMapName: my-app-db-config
  transport: OvfEnv
---
apiVersion: v1
kind: ConfigMap
metadata:
 name: my-app-db-config
data:
 user-data: |
  I2Nsb3VkL6CiAgICBlbnMxNjA6CiAgICAgIGRoY3A0OiB0cnVlCg==
 hostname: centos-db-2

In the manifest file we've added two resources:
- VirtualMachine: specifies the VM image to use, the VM class, storage policy and network type, and also how to send variables to cloud-init inside the VM (using a ConfigMap resource to keep the data in Kubernetes and OVF environment variables to transport it to the VM)
- ConfigMap: contains, in our case, the Base64-encoded user data (an SSH key) and the hostname of the VM; the Base64 output in this post is truncated (see the cloud-config sketch below for the kind of content that gets encoded)
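
For reference, the user-data value is just a cloud-config document that gets Base64 encoded before being placed in the ConfigMap. A minimal sketch of such a document, assuming you only want to inject an SSH public key (the key below is a placeholder, not the decoded content of the example above):

#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAAB3Nza... cloud-user@my.lab

Encode it (for example with base64 -w0 on Linux) and paste the resulting string into the user-data field of the ConfigMap.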

 To create the VM, apply the manifest. Then check its state.
kubectl apply -f centos-db-2.yaml 

kubectl get virtualmachine



Once the VM has been provisioned, it is assigned an IP from the POD CIDR.



POD CIDRs are private subnets used for inter-pod communication. To access the VM, it needs an Ingress CIDR IP. This is a routable IP and it is implemented in NSX-T as a VIP on the load balancer. The Egress CIDR is used for communication from the VM to the outside world and it is implemented as an SNAT rule. To define an ingress IP, we need to create a VirtualMachineService resource of type LoadBalancer:

Create the manifest file - service-ssh-centos-db-2.yaml 

apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineService
metadata:
 name: lb-centos-db-2
spec:
 selector:
  app: my-app-db
 type: LoadBalancer
 ports:
  - name: ssh
    port: 22
    protocol: TCP
    targetPort: 22

We are using the selector app: my-app-db to match the VM resource for this service. The service will be assigned an IP from the ingress network and will forward all requests coming to that IP on the SSH port to the VM's IP on the SSH port.

 To create the service, apply the manifest. Then check its state.
kubectl apply -f service-ssh-centos-db-2.yaml 

kubectl get service lb-centos-db-2


The external IP displayed in the listing above is the ingress IP that you can now use to SSH to the VM:
ssh cloud-user@external_ip

Please note the user used for SSH (cloud-user). From there, you can sudo and gain root privileges.

A VM provisioned via VM Operator can only be managed through the Supervisor cluster API (Kubernetes API). In this regard, the VM can no longer be managed directly from the UI or other management tools. Looking at the picture below you will notice that the VM is marked in the UI as "Developer Managed" and that there are no actions available on the VM.



If you've made it this far, well done, you've just provisioned your first VM using the Kubernetes API. Now put those manifests in a Git repo, install and configure a CD tool (such as Argo CD) to monitor the repo and apply the manifests on the Supervisor cluster, and you don't even need to touch the kubectl command line or vCenter Server :-)





Thursday, May 13, 2021

vSphere with Tanzu - Enable Supervisor Cluster using PowerCLI

In the previous post we looked at how to manually enable the Supervisor cluster on a vSphere cluster. Now we'll reproduce the same GUI steps in a small script using PowerCLI.

PowerCLI 12.1.0 brought new cmdlets to the VMware.VimAutomation.WorkloadManagement module, one of which is Enable-WMCluster. We will be using this cmdlet to enable the Tanzu Supervisor cluster. In the following example we'll be using NSX-T, but the cmdlet can also be used with vSphere distributed switches.

The following script is very simple. First we need to connect to vCenter Server and NSX Manager:

Connect-VIServer -Server vc11.my.lab
Connect-NsxtServer -Server nsxt11.my.lab

Next we define the variables (all the values that were asked for in the UI wizard).

The cluster where we enable Tanzu, the content library and the storage policies:

$vsphereCluster = Get-Cluster "MYCLUSTER"
$contentLibrary = "Tanzu subscribed"
$ephemeralStoragePolicy = "Tanzu gold"
$imageStoragePolicy = "Tanzu silver"
$masterStoragePolicy = "Tanzu gold"

Management network info for Supervisor Cluster VMs

$mgmtNetwork = Get-VirtualNetwork "Mgmt-Network"
$mgmtNetworkMode = "StaticRange"
$mgtmNetworkStartIPAddress = "192.168.100.160"
$mgtmNetworkRangeSize = "5"
$mgtmNetworkGateway = "192.168.100.1"
$mgtmNetworkSubnet = "255.255.255.0"
$distributedSwitch = Get-VDSwitch -Name "Distributed-Switch"

DNS and NTP servers

$masterDnsSearchDomain = "my.lab"
$masterDnsServer = "192.168.100.2"
$masterNtpServer = "192.168.100.5"
$workerDnsServer = "192.168.100.2"

Tanzu details - size and external and internal IP subnets

$size = "Tiny" 
$egressCIDR = "10.10.100.0/24"
$ingressCIDR = "10.10.200.0/24"
$serviceCIDR = "10.244.0.0/23"
$podCIDR = "10.96.0.0/23"

One more parameter needs to be provided: the edge cluster ID. For this we use the NSX-T Manager connection and look up the edge cluster by its display name:

$edgeClusterSvc = Get-NsxtService -Name com.vmware.nsx.edge_clusters
$results = $edgeClusterSvc.list().results
$edgeClusterId = ($results | Where {$_.display_name -eq "tanzu-edge-cluster"}).id

The last step is to put all the parameters together in the cmdlet and run it against the vSphere cluster object:

$vsphereCluster | Enable-WMCluster `
-SizeHint $size `
-ManagementVirtualNetwork $mgmtNetwork `
-ManagementNetworkMode $mgmtNetworkMode `
-ManagementNetworkStartIPAddress $mgtmNetworkStartIPAddress `
-ManagementNetworkAddressRangeSize $mgtmNetworkRangeSize `
-ManagementNetworkGateway $mgtmNetworkGateway `
-ManagementNetworkSubnetMask $mgtmNetworkSubnet `
-MasterDnsServerIPAddress $masterDnsServer `
-MasterNtpServer $masterNtpServer `
-MasterDnsSearchDomain $masterDnsSearchDomain `
-DistributedSwitch $distributedSwitch `
-NsxEdgeClusterId $edgeClusterId `
-ExternalEgressCIDRs $egressCIDR `
-ExternalIngressCIDRs $ingressCIDR `
-ServiceCIDR $serviceCIDR `
-PodCIDRs $podCIDR `
-WorkerDnsServer $workerDnsServer `
-EphemeralStoragePolicy $ephemeralStoragePolicy `
-ImageStoragePolicy $imageStoragePolicy `
-MasterStoragePolicy $masterStoragePolicy `
-ContentLibrary $contentLibrary

And just like that, the cluster gets enabled, in a scripted and repeatable way.

Tuesday, May 4, 2021

vSphere with Tanzu - Enable Supervisor Cluster

Before diving head first into how to enable the Supervisor cluster, it's important to clarify a few aspects. There are several great posts (here and here) on how to deploy Tanzu on vSphere automatically. The reason I chose to present a step-by-step guide is that going through the manual steps helped me clarify some aspects. I will not be covering the networking part. There are two ways of enabling Tanzu on vSphere: using NSX-T, or using vSphere networking and a load balancer.

The Supervisor Cluster is a cluster enabled for vSphere with Tanzu. There is a one-to-one mapping between the Supervisor Cluster and the vSphere cluster. This matters because some features are defined only at Supervisor Cluster level and are inherited at Namespace level. A vSphere Namespace represents a set of resources where vSphere Pods, Tanzu Kubernetes clusters and VMs can run. It is similar to a resource pool in the sense that it brings together the compute and storage resources that can be consumed. A Supervisor Cluster can have many Namespaces; however, at the time of writing there is a limit of 500 namespaces per vCenter Server. Depending on how you map namespaces to internal organizational units, this can also be important. The high-level architecture and components of a Supervisor cluster can be seen here.

Requirements

  • Configure NSX-T. Tanzu workloads need a T0 router configured on an edge cluster. All other objects (T1s, load balancers, segments) are configured automatically during pod deployment. The recommended edge size is large, but medium works for lab deployments. Also, for labs only, the edge cluster can run with a single edge node. Deploying and configuring NSX-T is not in the scope of this article.
  • vCenter Server level
    • vSphere cluster with DRS and HA enabled
    • a content library for Tanzu Kubernetes cluster images, subscribed to https://wp-content.vmware.com/v2/latest/lib.json. If you don't have Internet connectivity from vCenter Server, you will need to download the images offline and upload them to the library. If you have Internet access via a proxy, you can add the proxy in the vCenter Server VAMI interface (https://vcs_fqdn:5480)
    • storage policies - for lab purposes one policy can be created and used for all types of storage. Go to Policies and Profiles, create a new VM storage policy, enable host based rules and select Storage I/O Control

  • IPs - for ingress and egress traffic (routed) and for pods and services (internal traffic)
  • the latest version of vCenter Server - 7.0 U2a (required for some of the new functionalities such as VM Operator and namespace self-service)
  • NTP working and configured for vCenter Server and NSX-T Manager (and the rest of the components)

Enabling the Supervisor Cluster is pretty straightforward: go to Workload Management > Clusters and add a cluster. The wizard takes you through the following steps.

First select the vCenter Server and the type of networking. If you don't have NSX-T configured, you can use a vSphere Distributed Switch, but a load balancer needs to be installed first (HAProxy or Avi).


Then select the vSphere cluster on which to enable the Supervisor cluster.

Choose the size of the control plane VMs; the smaller they are, the smaller the Kubernetes environment they can serve.


Map storage policies to types of storage in the Supervisor cluster


Add the management network details. It is important to clarify that the Supervisor VMs have two NICs: one connected to a vSphere distributed port group with access to vCenter Server and NSX-T Manager, and another connected to the Kubernetes service network. Check "View Network Topology" in this step to get a clear picture of the Supervisor VM configuration. The Supervisor VMs also need a free range of 5 IPs; in my case I am selecting a range from the management network.


Next, add the network details for the ingress and egress networks and for the internal cluster networks (service and pod). The ingress network is used to access services inside the Kubernetes cluster via DNAT, and the egress network is used by internal services to reach the outside world via SNAT.


If you use the same DNS server for the management and service networks, the server must be reachable over both interfaces. The service network will use the egress network IP to reach DNS.

Lastly, add the content library, review the configuration and give it a run. 

 Once the cluster is deployed successfully you will see it in the ready state:


You can now create namespaces and Kubernetes guest clusters. To access the cluster you will need to connect to https://cluster_ip and download the kubectl vSphere plugin.

Now that we have gone through all the manual steps, we can look at automating the configuration using PowerCLI in the next post.


Monday, April 19, 2021

vRealize Automation 8.4 Disk Management

vRealize Automation 8.4 brings some enhancements to storage management at cloud template level. Since this is a topic I am particularly interested in, I've decided to take a closer look. I've focused on two cases:

  • cloud template with predefined number of disks
  • cloud template with dynamic number of disks 


Cloud template with predefined number of disks

First I've created a template with two additional disks attached to the VM. Both disks are attached to SCSI controller 1, their sizes are given as inputs, and both are thin provisioned. The template looks as follows:


Let's see the code behind the template. There are two main sections:

  • inputs: where the input parameters are defined
  • resources: where template resources are defined. 
The inputs section contains parameters for the VM image flavor (defaults to micro) and the disk sizes (default to 5 GB each).

The resources section defines three resources: the VM (Cloud_Machine_1) and its two additional disks (Cloud_Volume_1 and Cloud_Volume_2). Each resource is defined by a type and properties.

The disks are mapped to the VM resource using the attachedDisks property. The input parameters can be seen under each resource, for example ${input.flavor} for the flavor and ${input.disk1Capacity} and ${input.disk2Capacity} for the disk capacities. Please note that in this case the SCSI controller and the unit number are hardcoded in the template.

formatVersion: 1
inputs:
  flavor:
    type: string
    title: Flavor
    default: micro
  disk1Capacity:
    type: integer
    title: App Disk Capacity GB
    default: 5
  disk2Capacity:
    type: integer
    title: Log Disk Capacity GB
    default: 5
resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: CentOS7
      flavor: '${input.flavor}'
      constraints:
        - tag: 'vmw:az1'
      attachedDisks:
        - source: '${resource.Cloud_Volume_1.id}'
        - source: '${resource.Cloud_Volume_2.id}'
  Cloud_Volume_1:
    type: Cloud.Volume
    properties:
      SCSIController: SCSI_Controller_1
      provisioningType: thin
      capacityGb: '${input.disk1Capacity}'
      unitNumber: 0
  Cloud_Volume_2:
    type: Cloud.Volume
    properties:
      SCSIController: SCSI_Controller_1
      provisioningType: thin
      capacityGb: '${input.disk2Capacity}'
      unitNumber: 1



Once the template is created, you can run a test to see if all constraints are met and if the code will run as expected. This is a useful feature, similar to the unit tests used in development processes.


If the tests are successful, you can deploy the template. After the resources are provisioned, you can select any resource in the topology view and check its details and the available day 2 actions in the right pane.



For each disk we can find the resource name, its capacity, its state (attached or not), whether it is encrypted and the machine it is associated with.



More details are displayed under custom properties, such as the controller name, the datastore on which the disk is placed and so on.

We can resize the disks and also remove (delete) them from the machine. Below you can see a resize action where the existing value is displayed and the new value is typed in:



Cloud template with dynamic number of disks 

The first example uses a predefined number of disks in the template, even though the disk size is given as an input parameter. Another use case is to let the consumer specify how many disks they need attached to the VM (within some limits, obviously).


In this case the code looks a bit different. We define an array as the input for the disk sizes. The array is dynamic, but in our case limited to a maximum of 6 values (6 disks). This array is then used to define the Cloud.Volume resource.

formatVersion: 1
inputs:
  flavor:
    type: string
    title: Flavor
    default: micro
  disks:
    type: array
    minItems: 0
    maxItems: 6
    items:
      type: object
      properties:
        size:
          type: integer
          title: Size (GB)
          minSize: 1
          maxSize: 50
resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: CentOS7
      flavor: '${input.flavor}'
      constraints:
        - tag: 'vmw:az1'
      attachedDisks: '${map_to_object(resource.disk[*].id, "source")}'
  disk:
    type: Cloud.Volume
    allocatePerInstance: true
    properties:
      provisioningType: thin
      capacityGb: '${input.disks[count.index].size}'
      count: '${length(input.disks)}'



When requesting the deployment, a user can keep just the default disk from the VM image or add up to 6 more disks.



Details about the disks and controllers can be seen directly from vRA. In the example below all disks are placed on the same controller:




Caveats

When adding disks of the same size, an error is displayed about "data provided already entered". It is not clear at this time whether this is caused by my code or is a limitation.


The controller type is automatically taken from the VM template (image). Being able to specify the controller type, or even change it as a day 2 operation, would also be helpful.




Sunday, April 18, 2021

What's new in vRealize Automation 8.4

 Last Friday vRealize Automation 8.4 was released and we are going to take a look at some of the new features. 

vRA vRO Plugin

The vRO plugin for vRA is back and it seems it is here to stay for good. This is one of the long-awaited comebacks. There are several development phases planned for the plugin, and what we get now are the phase 1 functionalities:

  • management of vRA on-premises and vRA Cloud hosts
  • preserved authentication to the hosts and dynamic host creation
  • REST client available allowing requests to vRA





The plugin is also supported in vRA 8.3, but there it has to be downloaded and installed manually. There seems to be a plan for vRO, especially if we look back at the support added for other languages such as Node.js, Python and PowerShell.


Storage Enhancements

At storage level there are new features that improve visibility and management:
  • specify the order in which the disks are created
  • choose the SCSI controller to which a disk is connected
  • day 2 actions on the disks that are part of the image template

Deploying a blueprint with multiple disks:





A more detailed article about disk management can be found here 

Azure Provisioning Enhancements

A series of new features is available for Azure integration:
  • support for Azure shared images 
  • Azure disk encryption set - encrypt VMs and attached disks and support 3rd party KMS 
  • Azure disk snapshot - create and manage disk snapshots with Azure deployments

ITSM Integration with ServiceNow Enhancements 

For those of you using ServiceNow as a portal, new enhancements are brought to the integration with vRA:
  • Support for catalog items that have a custom resource (except for vRO objects)
  • Support for Catalog Items with Custom Day 2 actions
  • Ability to customize the vRA catalog by adding edit boxes and drop-downs in ServiceNow
  • Ability to attach a script to these fields
  • Deployment details available in the Service Portal
If you are using on-premises ServiceNow, the integration is not yet validated (it seems to be on the way though).

Enhancements to Configuration Management Tools

The configuration management ecosystem supported by vRA (Puppet, SaltStack, Ansible) also got its share of enhancements.

This was just a short overview of the new features brought in by vRA 8.4. The full list can be read in the release notes.

Monday, March 1, 2021

Deploy VCSA Appliance with Terraform

I am back to an older project involving VMware products and Terraform. For those of you new to the subject, Terraform is an open source infrastructure-as-code tool developed by HashiCorp. It lets you define your entire infrastructure in a language called HashiCorp Configuration Language (HCL), with JSON files where HCL is not enough.

The appeal of Terraform is its ability to easily deliver infrastructure across different platforms: public cloud, private cloud, Kubernetes. You write your configuration files, test them (with terraform plan) and then apply them to get your resources deployed. Other tools can be used alongside it, such as HashiCorp Vault, a secrets management solution that can be consumed programmatically. In my example I will be using Vault to store the passwords required for setting up the VCSA.

In this example we will use Terraform to update the VCSA JSON template with values provided in a variables file and then run the VCSA CLI installer. So we are not using the vSphere provider, but rather the local provider to generate the configuration file and the null provider to run a local command. I chose this example because it is something I struggled to get working.

I've used the following simple project structure:

The templates folder contains the modified VCSA template. Although all the .tf files could be merged into one (main.tf), I prefer this layout because it makes the code more readable (and yes, variables.tf holds the variables and vault.tf holds the Vault provider definition and the paths to the secrets).

main.tf defines two resources: one that renders the template file and one that executes a local command.

resource "local_file" "vcsa_json" {
    content = templatefile (
            var.template_file_path, 
            { 
              vc_fqdn = var.vcenterserver,
              vc_user = var.vcenterserver_user
              vc_user_pass = data.vault_generic_secret.vcenter_auth.data["value"],
              vm_network = var.pg_mgmt,
              vdc = var.vdc,
              datastore = var.datastore,
              host = var.host,
              cluster = var.cluster,
              vcsa_name = element(split(".", var.vcsa_fqdn),0),
              vcsa_fqdn = var.vcsa_fqdn,
              vcsa_ip = var.vcsa_ip,
              prefix = var.prefix,
              gateway = var.gateway,
              dns = var.dns,
              vcsa_root_pass = data.vault_generic_secret.vcsa_root.data["value"],
              ntp_servers = var.ntp,
              sso_password = data.vault_generic_secret.vcsa_admin.data["value"]
            }
            )
    filename = var.config_file_path
}

resource "null_resource" "vcsa_install" {
  provisioner "local-exec" {
    command = "${var.installcmd_file_path}/vcsa-deploy install --accept-eula 
            --acknowledge-ceip --no-esx-ssl-verify ${var.config_file_path}"
  }
}


The local_file resource takes the template given by the template_file_path variable and creates a configuration file at the path given in the config_file_path variable. The null_resource executes a local command, in this case the vcsa-deploy command, to which we pass the updated configuration file.

Within the template file you can see references to variables from variables.tf (var.something) and also to data from vault.tf (data.vault_generic_secret.some_path). Let's look at the two files.

variables.tf 

variable "template_file_path" {
  description = "JSON template file path"
  type = string
  default = "templates/vcsa70_embedded_vCSA_on_VC.json"
}

variable "config_file_path" {
  description = "vcsa configuration JSON file path"
  type = string
  default = "/data/build/vcsa01_embedded_vCSA_on_VC.json"
}

variable "installcmd_file_path" {
  description = "command line file path"
  type = string
  default = "/data/VMware-VCSA-all-7.0.1-17491101/vcsa-cli-installer/lin64"
}

variable "vcsa_fqdn" {
  description = "vcsa hostname"
  default = "vcsa01.mylab.local"
}

variable "vcsa_ip" {
  description = "vcsa ip address"
  default = "192.168.1.10"
}

variable "prefix" {
  description = "IP prefix"
  default = "24"
}

Each variable is defined by a name and a value, and it can also have a description and a type. (Please note that not all variables are shown in this listing.)

vault.tf

provider "vault" {
    address = "https://192.168.1.2:8200"
    token = "ABCD"
    skip_tls_verify = true
}

# vcsa deploy
data "vault_generic_secret" "vcsa_admin" {
    path = "kv-vmware-stgdev/administrator@vsphere.local"
}

data "vault_generic_secret" "vcsa_root" {
    path = "kv-vmware-stgdev/root"
}

The file contains the Vault provider definition and two data sources that read the VCSA administrator and root passwords from their secret paths.


template file (vcsa70_embedded_vCSA_on_VC.json) 

The values from variables.tf and vault.tf are injected into the template. To be able to update the default template, you first need to modify it by adding placeholders that can be interpolated by Terraform's templatefile function. In my case I took the VCSA 7.0 embedded template and changed it as follows:

{
    "__version": "2.13.0",
    "__comments": "Sample template to deploy a vCenter Server Appliance with an embedded Platform Services Controller on a vCenter Server instance.",
    "new_vcsa": {
        "vc": {
            "__comments": [
                "'datacenter' must end with a datacenter name, and only with a datacenter name. ",
                "'target' must end with an ESXi hostname, a cluster name, or a resource pool name. ",
                "The item 'Resources' must precede the resource pool name. ",
                "All names are case-sensitive. ",
                "For details and examples, refer to template help, i.e. vcsa-deploy {install|upgrade|migrate} --template-help"
            ],
            "hostname": "${vc_fqdn}",
            "username": "${vc_user}",
            "password": "${vc_user_pass}",
            "deployment_network": "${vm_network}",
            "datacenter": [
                "${vdc}"
            ],
            "datastore": "${datastore}",
            "target": [
                "${cluster}",
                "${host}"
            ]
        },
        "appliance": {
            "__comments": [
                "You must provide the 'deployment_option' key with a value, which will affect the vCenter Server Appliance's configuration parameters, such as the vCenter Server Appliance's number of vCPUs, the memory size, the storage size, and the maximum numbers of ESXi hosts and VMs which can be managed. For a list of acceptable values, run the supported deployment sizes help, i.e. vcsa-deploy --supported-deployment-sizes"
            ],
            "thin_disk_mode": true,
            "deployment_option": "small",
            "name": "${vcsa_name}"
        },
        "network": {
            "ip_family": "ipv4",
            "mode": "static",
            "system_name": "${vcsa_fqdn}",
            "ip": "${vcsa_ip}",
            "prefix": "${prefix}",
            "gateway": "${gateway}",
            "dns_servers": [
                "${dns}"
            ]
        },
        "os": {
            "password": "${vcsa_root_pass}",
            "ntp_servers": "${ntp_servers}",
            "ssh_enable": false
        },
        "sso": {
            "password": "${sso_password}",
            "domain_name": "vsphere.local"
        }
    },
    "ceip": {
        "description": {
            "__comments": [
                "++++VMware Customer Experience Improvement Program (CEIP)++++",
                "VMware's Customer Experience Improvement Program (CEIP) ",
                "provides VMware with information that enables VMware to ",
                "improve its products and services, to fix problems, ",
                "and to advise you on how best to deploy and use our ",
                "products. As part of CEIP, VMware collects technical ",
                "information about your organization's use of VMware ",
                "products and services on a regular basis in association ",
                "with your organization's VMware license key(s). This ",
                "information does not personally identify any individual. ",
                "",
                "Additional information regarding the data collected ",
                "through CEIP and the purposes for which it is used by ",
                "VMware is set forth in the Trust & Assurance Center at ",
                "http://www.vmware.com/trustvmware/ceip.html . If you ",
                "prefer not to participate in VMware's CEIP for this ",
                "product, you should disable CEIP by setting ",
                "'ceip_enabled': false. You may join or leave VMware's ",
                "CEIP for this product at any time. Please confirm your ",
                "acknowledgement by passing in the parameter ",
                "--acknowledge-ceip in the command line.",
                "++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
            ]
        },
        "settings": {
            "ceip_enabled": false
        }
    }
}

If you look at the main.tf resource definition, you will see the same keys that appear in the JSON file between ${ }.

Now all the code is written down and it's a simple matter of running terraform plan and terraform apply.