
Wednesday, October 6, 2021

What's new in vRealize Automation 8.5.x and 8.6

The latest releases of vRealize Automation bring in a series of interesting features. 


Cloud Resource 

Cloud Resource view was introduced back in May 2021 for vRA Cloud and allows you to manage resources directly instead of managing them by resource groups (deployments). It now lets you manage all discovered, onboarded and provisioned deployments, trigger power day 2 actions on discovered resources, and bulk manage multiple resources at the same time.


ABX enabled deployment for custom resources

Provisioning a custom resource allows you to track and manage the custom resource and its properties during its whole lifecycle. No dynamic types are needed for full lifecycle management. 



Cloud Templates Dynamic Input 

Use vRO actions as dynamic external value sources to define different types of input values directly in the Cloud Template, and bind local inputs to the dynamic inputs as action parameters.
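As an illustration only, such an input could look like the sketch below. The module and action names (com.example.lab/getEnvironments) and the parameter binding are hypothetical, and the exact schema keys used here (including $dynamicEnum) are assumptions that should be checked against the documentation for your version.

inputs:
  region:
    type: string
    title: Region
  environment:
    type: string
    title: Environment
    # values come from a vRO action; the 'region' input is passed as an action parameter (hypothetical names)
    $dynamicEnum: /data/vro-actions/com.example.lab/getEnvironments?region={{region}}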







Kubernetes support in Code Stream Workspace

The Code Stream pipeline workspace now supports Docker and Kubernetes for continuous integration tasks. The Kubernetes platform manages the entire lifecycle of the container, similar to Docker. In the pipeline workspace you choose either Docker (the default selection) or Kubernetes, and then select the appropriate endpoint.


The Kubernetes workspace provides:

  • the builder image to use
  • the image registry
  • the namespace
  • the node port
  • the persistent volume claim
  • the working directory
  • environment variables
  • the CPU limit
  • the memory limit

You can also choose to create a clone of the Git repository.


Multi-cloud

vRA extends its Azure provisioning capabilities, including the ability to enable/disable boot diagnostics for Azure VMs (day 0/day 2) and the ability to configure the name of the Azure NIC interfaces.


Other updates and new features 
  • Native SaltStack Config automation via modules for vSphere, VMC, and NSX
  • Leverage third-party integrations with Puppet Enterprise support for machines without a Public IP address
  • Deploy a VCD adapter for vRA
  • Onboard vSphere networks to support an additional resource type in the onboarding workflow

Monday, April 19, 2021

vRealize Automation 8.4 Disk Management

vRealize Automation 8.4 brings some enhancements to storage management at cloud template level. Since this is a topic that I am particularly interested in, I've decided to take a closer look. I've focused on two cases:

  • cloud template with predefined number of disks
  • cloud template with dynamic number of disks 


Cloud template with predefined number of disks

First I've created a template with 2 additional disks attached to it. Both disks are attached to SCSI controller 1 and their size is given as input. Both disks are thin provisioned. The template looks as follows:


Let's see the code behind the template. There are 2 main sections:

  • inputs: where the input parameters are defined
  • resources: where template resources are defined. 
The inputs section contains parameters for the VM image flavor (defaults to micro) and the disk sizes (default 5 GB each).

The resources section has 3 resources: the VM (Cloud_Machine_1) and its 2 additional disks (Cloud_Volume_1 and Cloud_Volume_2). Each resource is defined by a type and properties.

The disks are mapped to the VM resource using the attachedDisks property. The input parameters are referenced under each resource, for example ${input.flavor} for the machine flavor and ${input.disk1Capacity} and ${input.disk2Capacity} for the disk capacities. Please note that in this case the SCSI controller and the unit number are given in the template.

formatVersion: 1
inputs:
  flavor:
    type: string
    title: Flavor
    default: micro
  disk1Capacity:
    type: integer
    title: App Disk Capacity GB
    default: 5
  disk2Capacity:
    type: integer
    title: Log Disk Capacity GB
    default: 5
resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: CentOS7
      flavor: '${input.flavor}'
      constraints:
        - tag: 'vmw:az1'
      attachedDisks:
        - source: '${resource.Cloud_Volume_1.id}'
        - source: '${resource.Cloud_Volume_2.id}'
  Cloud_Volume_1:
    type: Cloud.Volume
    properties:
      SCSIController: SCSI_Controller_1
      provisioningType: thin
      capacityGb: '${input.disk1Capacity}'
      unitNumber: 0
  Cloud_Volume_2:
    type: Cloud.Volume
    properties:
      SCSIController: SCSI_Controller_1
      provisioningType: thin
      capacityGb: '${input.disk2Capacity}'
      unitNumber: 1



Once the template is created, you can run a test to check that all constraints are met and that the code will run as expected. This is a useful feature, similar to unit tests used in development processes.


If the tests are successful, you can deploy the template. After the resources are provisioned, you can select any of the resources in the topology view and check its details and the available day 2 actions in the right pane.



For each disk we can find out the resource name, its capacity, its state (attached or not), whether it is encrypted and to which machine it is associated.



More details are displayed under custom properties, such as the controller name, the datastore on which the disk is placed and so on.

We can resize the disks and also remove (delete) them from the machine. Below you can see a resize action where the existing value is displayed and the new value is typed:



Cloud template with dynamic number of disks 

The first example uses a predefined number of disks in the template, even though the disk size is given as an input parameter. Another use case is to let consumers specify how many disks they need attached to the VM (obviously with some limitations).


In this case the code looks a bit different. We define an array as the input for the disk sizes. The array is dynamic, but in our case limited to a maximum of 6 values (6 disks). This array is then used to define the Cloud.Volume resource.

formatVersion: 1
inputs:
  flavor:
    type: string
    title: Flavor
    default: micro
  disks:
    type: array
    minItems: 0
    maxItems: 6
    items:
      type: object
      properties:
        size:
          type: integer
          title: Size (GB)
          minSize: 1
          maxSize: 50
resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: CentOS7
      flavor: '${input.flavor}'
      constraints:
        - tag: 'vmw:az1'
      attachedDisks: '${map_to_object(resource.disk[*].id, "source")}'
  disk:
    type: Cloud.Volume
    allocatePerInstance: true
    properties:
      provisioningType: thin
      capacityGb: '${input.disks[count.index].size}'
      count: '${length(input.disks)}'
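
To illustrate how the expressions expand, assume a request where the consumer adds two disks (the values below are only for illustration):

# example input:  disks = [ {size: 10}, {size: 20} ]
#
# count: '${length(input.disks)}'                 -> evaluates to 2, so two Cloud.Volume instances (disk[0], disk[1]) are created
# capacityGb: '${input.disks[count.index].size}'  -> 10 GB for disk[0], 20 GB for disk[1]
# attachedDisks: '${map_to_object(resource.disk[*].id, "source")}'
#                                                 -> builds [ {source: <id of disk[0]>}, {source: <id of disk[1]>} ] and attaches both disks to the VM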



When requesting the deployment, a user can keep only the default disk in the VM image or add up to 6 more disks.



Details about the disks and controllers can be seen directly from vRA. In the example below all disks are placed on the same controller:




Caveats

When adding disks of the same size, an error is displayed about "data provided already entered". It is not clear at this time whether it is my code or a product limitation.


The controller type is automatically taken from the VM template (image). Being able to actually specify the controller type, or even change it as a day 2 operation, would also be helpful.




Sunday, April 18, 2021

What's new in vRealize Automation 8.4

 Last Friday vRealize Automation 8.4 was released and we are going to take a look at some of the new features. 

vRA vRO Plugin

The vRO plugin for vRA is back and it seems it is here to stay for good. This is one of the long-awaited comebacks. There are several phases of development planned for the plugin, and what we get now is the phase 1 functionality:

  • management of vRA on-premises and vRA Cloud hosts
  • preserved authentication to the hosts and dynamic host creation
  • a REST client allowing requests to vRA





The plugin is also supported with vRA 8.3, but there it has to be downloaded and installed manually. There seems to be a longer-term plan for vRO, especially if we look back at the support added for other languages such as Node.js, Python and PowerShell.


Storage Enhancements

At storage level there are new features that improve visibility and management:
  • specify the order in which the disks are created
  • choose the SCSI controller to which a disk is connected
  • day 2 actions on disks that are part of the image template

Deploy multiple disks blueprint:





A more detailed article about disk management can be found here 

Azure Provisioning Enhancements

A series of new features is available for the Azure integration:
  • support for Azure shared images
  • Azure disk encryption set - encrypt VMs and attached disks and support third-party KMS
  • Azure disk snapshot - create and manage disk snapshots within Azure deployments

ITSM Integration with ServiceNow Enhancements 

For those of you using ServiceNow as a portal, new enhancements are brought to the integration with vRA:
  • Support for Catalog Items which have a Custom Resource (except for vRO objects)
  • Support for Catalog Items with custom day 2 actions
  • Ability to customize the vRA catalog by adding an edit box and a drop-down in ServiceNow
  • Ability to attach a script to these fields
  • Deployment details available in the Service Portal
If you are using on-premises ServiceNow, the integration is not yet validated (it seems to be on the way though).

Enhancements to Configuration Management Tools

The configuration management ecosystem supported with vRA (Puppet, SaltStack, Ansible) also got its enhancements.

This was just a short overview of the new features brought in by vRA 8.4. The full list can be read in the release notes.

Monday, April 20, 2020

Tips & Tricks to Install and Upgrade vRA 8.x in a Small Lab

I started building my vRA 8 environment in the home lab and even if the process was a pretty smooth one, working in an environment with limited resources presented challenges that I hope this article will help you overcome easily.

For me it was a two-step process: install vRA 8.0.1 and upgrade to 8.1 within a week, so I will treat each step independently. Some of the challenges of installing 8.0.1 will certainly apply to a direct installation of 8.1.

My home lab is made of ESXi hosts running some minimal hardware (4 cores and 32 GB of RAM per host). Requirements for vRA 8 are as follows:
- VMware Identity Manager (vIDM) - minimum 2 vCPU / 6 GB RAM
- vRealize Lifecycle Manager (vRLCM) - minimum 2 vCPU / 6 GB RAM
- vRealize Automation (vRA) 8.0.1 - 8 vCPU / 32 GB RAM
- vRealize Automation (vRA) 8.1 - 12 vCPU / 40 GB RAM

As you can easily see, vRA 8 shouldn't really be installed on a 32 GB ESXi host. If I hadn't started with 8.0.1, I don't think I would have even tried to install 8.1. However, I did start with 8.0.1, and I also hadn't read the system requirements at the beginning.


Installation of vRA 8.0.1
  • vRA certificate 

If you have a green field like I did, then the first thing to install is vRLCM using the easy installer. At the "Identity Manager Configuration" step make sure to select "Install New VMware Identity Manager". This will deploy both appliances. Now you can log in to vRLCM and create a new environment with vRA 8.

vRA 8 needs a certificate, even if it is self-signed. First go to Locker - Certificate and generate a new certificate.

Next, fill in all required information about hostname, IP addresses, passwords, the usual stuff. The precheck should also execute successfully. Before launching the deployment, you can save the configuration as a JSON file. I recommend doing it as it may come in handy if you ever want to automate this install.

  • vRA VM configuration downgrade
The deployment is a multi-staged process. In the first stage it deploys the actual VM from OVF and tries to power it on. In my case it failed, as it tries to start an 8 vCPU / 32 GB VM.

Open vSphere Client and change the settings of the VM - I used 4 vCPU and 30 GB of RAM. I did try with 24 GB, but that ended up with containers not being scheduled due to lack of resources:

In this case I think 30 GB is a decent compromise. Once you have modified the VM, go back to vRLCM and restart the task (Retry request). Take care not to delete the already existing VM. At this point you only have to wait until all 8 stages are finished.
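
If you prefer PowerCLI over the vSphere Client for this change, a one-off snippet like the following would do it; the vCenter address and VM name are placeholders for your own environment:

# downgrade the vRA appliance before retrying the vRLCM request (the VM must be powered off)
Connect-VIServer "vcenter.lab.local" -Credential (Get-Credential)
Get-VM -Name "vra-va" | Set-VM -NumCpu 4 -MemoryGB 30 -Confirm:$false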
  • vRO expired password
Unfortunately, two hours later I had another error, this time during the vravainitializecluster step. This is something related to 8.0.1 and it does not happen all the time, so you may not see it.


To confirm it, SSH to the vRA VM, look at the log file indicated in the error (/var/log/deploy.log) and search for a database connection error. Also run the following command to check the status of the vRO containers: kubectl -n prelude get pods. If the vRO container is in CrashLoopBackOff, a quick search for "vro container CrashLoopBackOff" will get you to the KB on new installs of vRealize Orchestrator 8.x failing to install due to a POD STATUS of 'CrashLoopBackOff' (76870). The error is caused by an expired password. Apply the steps in the KB and restart the deployment. It picks up where it left off, and soon vRA is installed and running.
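
The checks above, condensed into the commands to run over SSH on the vRA appliance:

# look for database connection errors in the deployment log
grep -i "database" /var/log/deploy.log

# check the vRO pod status; CrashLoopBackOff points to KB 76870
kubectl -n prelude get pods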

I am curious how a direct install of vRA 8.1 would actually work, keeping in mind the limited resources. But even if it doesn't, there is a way to get there.

A few days later 8.1 was released so it was time to upgrade.


Upgrade to vRA 8.1

For a step by step article on upgrading to 8.1 you may look here. As stated above, I will only focus on the small hiccups.


  • Binary mapping

You need to first upgrade vRLCM and vIDM. Once these two steps are done (again, pretty straightforward thanks to Lifecycle Manager) you can upgrade vRA. I've downloaded the updaterepo ISO file from the VMware site and uploaded it to vRLCM under /data (using WinSCP). Then I created a new binary mapping by going to Settings - Binary Mappings and adding the binary:

  • Precheck ignore
You can start the upgrade using the vRLCM repository. The precheck will fail because this time there is an existing VM, and it looks at its configuration and does not like seeing 4 vCPU and 30 GB of RAM:

Do like I did: ignore the errors and start the upgrade. One hour later you should see something similar to the following, which means the upgrade was successful.

  • Snapshot 
Do not forget that the upgrade process takes a snapshot of the VM, which you need to delete afterwards.
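
The snapshot can be removed from the vSphere Client, or with a PowerCLI one-liner similar to the following (the VM name is a placeholder):

# remove the snapshot left behind by the vRA upgrade
Get-VM -Name "vra-va" | Get-Snapshot | Remove-Snapshot -Confirm:$false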




Now I am running vRA 8.1 in my home lab and it works decently, at least the Cloud Assembly part that I've been playing with so far. I do understand that the resources are required for good reasons, but a 12 core / 64 GB host is not easy to get. In this case, running it with reduced performance is better than not running it at all. There are obvious impacts on the services, for example the time it takes to boot up, and it is impressive that it does come up in the end. The following snippet is proof of the struggle behind it:





Saturday, March 21, 2020

vROps Custom Dashboard for Monitoring vRealize Automation Reservations

It's been a while since I last tried to create a custom dashboard in vRealize Operations Manager (vROps). I think it was called vCenter Operations Manager at that time and the version was 5.8. Fear not, in today's post we are talking about vROps 7.5.

The use case is pretty simple: I need a way of monitoring the capacity of the reservations in terms of memory and storage. The management pack for vRA is tenant and business group focused, which doesn't really apply in my case where I have only one tenant and multiple business groups using the same reservation.

The dashboard is organized as follows:


The top level is an Object List widget that gets automatically updated by any selection in the other widgets. The main information is displayed in Top-N widgets that show the top 15 most utilized reservations in terms of storage and memory. On the right side I've added 2 heatmap widgets for the same metrics - allocated % storage and memory per reservation. However, the heatmaps present all reservations, and their size is relative to the size of the reserved resource: the bigger the drawn box, the bigger the reserved value. Any interaction with the Top-N or heatmap widgets will populate more details in the Object List. The interactions view was added to vROps somewhere in 2018 and it is a great and simple way to create interactions between widgets.

How the dashboard works: let's say we have a reservation that is 90% memory utilized, displayed in a Top-N widget. When selected, the Object List on top will get populated with the reservation details: which vSphere cluster is mapped to the reservation, how much memory is actually allocated for that vSphere cluster in vRA and how much physical memory the cluster has in total. It is kind of a drill-down into the situation. Of course, being in vROps, you can further drill down on the vSphere cluster.


In this case the selected reservation is at 81% memory usage. The top widget displays the real value, which is less than 400 GB. The heatmap on the right can be used to analyse the overall situation. Don't forget: the bigger the reservation, the bigger its box in the heatmap, while in the Top-N list we are actually filtering the data and selecting only the reservations that are critical.

Let's take a deeper look into how each widget type is configured:


  • Reservation Usage - Object List widget
In Configuration, Self Provider is set to off since the widget receives data from other widgets. We add additional columns to display in the widget, such as mapped cluster and free memory.

To add columns, press the green plus and filter by adapter type and object type

I've also removed the widget's default columns from the view since I am not interested in the collection state and collection status.





  • VRA Reservation Memory Allocated % - Top-N widget
For this widget, select Self Provider On in the configuration section. Also set the Top-N Options to Metric analysis and Top Highest Utilization. Enable auto refresh if you want the metric data to be updated.

Once self provider is set to on, the Input Data section becomes active; here add all your reservations. Data will be analysed from all reservations and only the first 15 will be displayed, based on the criteria selected in the Output Data section.


In Output Data, select the object type as reservation and then the metric to be processed as Memory Allocated %.



Lastly, we can add Output Filters. Let's say we don't want to see a top 15 of all the reservations' memory usage, but only the ones that are above a certain threshold, like 75%. We also do not want to see reservations that are above the set threshold but, because they are very big, still have sufficient resources (more than 1 TB of RAM, for example). In this case we would add a filter on the output data that limits the displayed information:


  • Memory Allocated % - Heatmap widget
For the heatmap we use the same input data as for the Top-N: all reservations. What changes is the Output Data. We'll group the information by reservation (but it can be reservation policy, tenant or whatever grouping suits you).


Next we select Reservation as the object type. The metrics used for Size by and Color by are different, since I wanted a representation of how big the vRA reservation is and also of its usage. The bigger the reserved memory size, the bigger the box will be drawn. The more used the reservation is, the darker the color will be.

An output filter can be used here as well, for example if you are not interested in very small reservations or want to filter out some of them based on naming (reservations for a test environment, for instance). Putting in a little extra time to tweak the widgets to your requirements and environment will prove beneficial, since the visualized data makes sense to different users based on their needs.


Sunday, March 15, 2020

Distributed vRealize Automation 7.x Orchestrated Shutdown, Snapshot and Startup using PowerCLI

I will take a look at performing scheduled operations on vRealize Automation 7.6 (although the article can apply to other versions as well). In a distributed architecture, vRA 7.6 can become a pretty big beast. Depending on the requirements and the actual vSphere implementation (virtual datacenters and vCenter Servers), such a deployment can easily grow to 12-16 VMs. Scheduling operations that require a restart of the environment takes careful preparation because of the dependencies between the different components, such as the vRA server, the IaaS components and the MSSQL database. One of the most common and repetitive tasks is Windows patching, which requires regular reboots of the IaaS components. But there are other activities that need a shutdown of the whole environment and a cold snapshot, for example applying a hot fix.

VMware documentation defines the proper way of restarting components in a distributed vRA environment. What I've done is to take those steps and put them in a PowerCLI script, making the procedure reusable and predictable. A particular case is detecting whether a VM is a placeholder VM (a replica VM). Before going to the script itself, let's look at the whole workflow.



The first part is just a sequential shutdown that waits until the VMs power off before going to the next step. Then a cold snapshot is taken of all VMs. Lastly, the VMs are powered on in an orchestrated sequence, with wait times implemented to allow the services to come back up.

Getting to the code part: first we define the list of vRA components, in this case proxies, DEM workers, DEM orchestrators, IaaS web, IaaS managers and vRA appliances.

# vCenter Servers
$vCSNames = ("vcssrv1", "vcssrv2", "vcssrv3","vcssrv4")

# vRA Components
$workers = @("vradem1", "vradem2","vraprx1","vraprx2", "vraprx3", "vraprx4", "vraprx5", "vraprx6")
$managerPrimary = @("vramgr1")
$managerSecondary = @("vramgr2")
$webPrimary = @("vraweb1")
$webSecondary = @("vraweb2")
$vraPrimary = @("vraapp1")
$vraSecondary = @("vraapp2")

# Snapshots
$snapName = "vra upgrade"
$snapDescription = "before 7.6 upgrade"

# Log file
$log = "coldSnapshotVra.log"

Next we define the three functions that shut down, snapshot and start the VMs. Since in our environment we use SRM, I had to check for placeholder VMs when powering off and snapshotting the VMs. We'll take them one by one. First, shut down the VMs and wait for them to stop:


function shutdownVMandWait($vms,$log) {
    # Issue a guest OS shutdown for each VM, skipping SRM placeholder replicas
    foreach ($vmName in $vms) {
        try {
            $vm = Get-VM -Name $vmName -ErrorAction Stop
            foreach ($o in $vm) {
                if($o.ExtensionData.Summary.Config.ManagedBy.Type -eq "placeholderVm") {
                    Write-Host "VM: '$($vmName)' is placeholderVm. Skipping."
                } else {
                    if (($o.PowerState) -eq "PoweredOn") {
                        $v = Shutdown-VMGuest -VM $o -Confirm:$false
                        Write-Host "Shutdown VM: '$($v.VM)' was issued"
                        Add-Content -Path $log -Value "$($v)"
                    } else {
                        Write-Host "VM '$($vmName)' is not powered on!"
                    }
                }
            }
        } catch {
            Write-Host "VM '$($vmName)' not found!"
        }
    }
    # Poll every 5 seconds until all VMs in the list report PoweredOff
    foreach ($vmName in $vms) {
        try {
            $vm = Get-VM -Name $vmName -ErrorAction Stop
            while($vm.PowerState -eq 'PoweredOn') {
                Start-Sleep -Seconds 5
                Write-Host "VM '$($vmName)' is still on..."
                $vm = Get-VM -Name $vmName
            }
            Write-Host "VM '$($vmName)' is off!"
        } catch {
            Write-Host "VM '$($vmName)' not found!"
        }
    }
}

Next, take snapshots of the VMs


function snapshotVM($vms,$snapName,$snapDescription,$log) {
    foreach ($vmName in $vms) {
        try {
            $vm = Get-VM -Name $vmName -ErrorAction Stop
        } catch {
            Write-Host "VM '$($vmName)' not found!"
            Add-Content -Path $log -Value "VM '$($vmName)' not found!"
            # skip to the next VM so we do not snapshot the previously retrieved one again
            continue
        }
        try {
            foreach ($o in $vm) {
                if($o.ExtensionData.Summary.Config.ManagedBy.Type -eq "placeholderVm") {
                    Write-Host "VM: '$($vmName)' is placeholderVm. Skipping."
                    Add-Content -Path $log -Value "VM: '$($vmName)' is placeholderVm. Skipping."
                } else {
                    # cold snapshot - the VM is already powered off at this point
                    New-Snapshot -VM $o -Name $snapName -Description $snapDescription -ErrorAction Stop
                }
            }
        } catch {
            Write-Host "Could not snapshot '$($vmName)' !"
            Add-Content -Path $log -Value "Could not snapshot '$($vmName)' !"
        }
    }
}

And finally, power on the VMs:


function startupVM($vms,$log) {
    foreach ($vmName in $vms) {
        try {
            $vm = Get-VM -Name $vmName -ErrorAction Stop
            foreach ($o in $vm) {
                if($o.ExtensionData.Summary.Config.ManagedBy.Type -eq "placeholderVm") {
                    Write-Host "VM: '$($vmName)' is placeholderVm. Skipping."
                } else {
                    if (($o.PowerState) -eq "PoweredOff") {
                        Start-VM -VM $o -Confirm:$false -RunAsync
                    } else {
                        Write-Host "VM '$($vmName)' is not powered off!"
                    }
                }   
            }
        } catch {
            Write-Host "VM '$($vmName)' not found!"
        }
    } 
}

The last part of the script puts all the logic together: connect to the vCenter Servers, shut down the VMs in order, take the cold snapshots and bring the whole environment back up.


# MAIN
# Connect vCenter Server
$creds = Get-Credential
try {
    Connect-VIServer $vCSNames -Credential $creds
} catch {
    Write-Host $_.Exception.Message
}

# Stop VRA VMs
Write-Host "### Stopping DEM Workers an Proxies"
shutdownVMandWait -vms $workers -log $log
Write-Host "### Stopping Secondary Managers and Orchestrators"
shutdownVMandWait -vms $managerSecondary -log $log
Write-Host "### Stopping Primary Managers and Orchestrators"
shutdownVMandWait -vms $managerPrimary -log $log
Write-Host "### Stopping secondary Web"
shutdownVMandWait -vms $webSecondary -log $log
Write-Host "### Stopping primary Web"
shutdownVMandWait -vms $webPrimary -log $log
Write-Host "### Stopping secondary VRA"
shutdownVMandWait -vms $vraSecondary -log $log
Write-Host "### Stopping primary VRA"
shutdownVMandWait -vms $vraPrimary -log $log

# Snapshot VRA VMs
Write-Host "### Snapshotting DEM Workers an Proxies"
snapshotVM -vms $workers -snapName $snapName -snapDescription $snapDescription -log $log
Write-Host "### Snapshotting Secondary Managers and Orchestrators"
snapshotVM -vms $managerSecondary -snapName $snapName -snapDescription $snapDescription -log $log
Write-Host "### Snapshotting Primary Managers and Orchestrators"
snapshotVM -vms $managerPrimary -snapName $snapName -snapDescription $snapDescription -log $log
Write-Host "### Snapshotting secondary Web"
snapshotVM -vms $webSecondary -snapName $snapName -snapDescription $snapDescription -log $log
Write-Host "### Snapshotting primary Web"
snapshotVM -vms $webPrimary -snapName $snapName -snapDescription $snapDescription -log $log
Write-Host "### Snapshotting secondary VRA"
snapshotVM -vms $vraSecondary -snapName $snapName -snapDescription $snapDescription -log $log
Write-Host "### Snapshotting primary VRA"
snapshotVM -vms $vraPrimary -snapName $snapName -snapDescription $snapDescription -log $log

# Start VRA VMs
Write-Host "### Starting primary VRA"
startupVM -vms $vraPrimary -log $log
Write-Host  " Sleeping 5 minutes until Licensing service is registered"
Start-Sleep -s 300

Write-Host "### Starting secondary VRA"
startupVM -vms $vraSecondary -log $log
Write-Host  " Sleeping 15 minutes until ALL services are registered"
Start-Sleep -s 900

Write-Host "### Starting Web"
startupVM -vms $webPrimary -log $log
startupVM -vms $webSecondary -log $log
Write-Host  " Sleeping 5 minutes until services are up"
Start-Sleep -s 300

Write-Host "### Starting Primary manager"
startupVM -vms $managerPrimary -log $log
Write-Host  " Sleeping 3 minutes until manager is up"
Start-Sleep -s 180

Write-Host "### Starting Secondary manager"
startupVM -vms $managerSecondary -log $log
Write-Host  " Sleeping 3 minutes until manager is up"
Start-Sleep -s 180

Write-Host "### Starting DEM Workers an Proxies"
startupVM -vms $workers -log $log

Write-Host "### All components have been started"

# Disconnect vCenter 
Disconnect-VIServer * -Confirm:$false

You will notice that the orchestration logic is actually implemented here. This means you can easily add, remove or modify the VMs that the script targets. Let's say you only want to snapshot some proxies, for which you don't need to bring everything down, or you want to add external vRealize Orchestrator appliances. All changes take place in the main part, by simply commenting out or adding steps, as in the short example below.
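For example, a night where you only need cold snapshots of the DEM workers and proxies, without touching the rest of the environment, could reuse the same functions and variables defined above:

# snapshot only the DEM workers and proxies
shutdownVMandWait -vms $workers -log $log
snapshotVM -vms $workers -snapName $snapName -snapDescription $snapDescription -log $log
startupVM -vms $workers -log $log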

This script helped a lot with the nightly operations we had to do across our whole environment, and I hope it will do the same for you.

Tuesday, May 15, 2018

NSX integration with vRealize Automation 7.4 - part 2

In part 1 of this post we presented the configuration at the vRA level. In this post we'll see how to create a service in the service catalog for programmatic NSX consumption.

First let's remember the main concepts of vRealize Automation service catalog:

  • catalog items are published in the service catalog for user consumption, e.g. a Linux VM or a 3-tier web app
  • catalog items can be grouped under different services: QA, Test&Dev, Web Apps, Linux Servers
  • a user is allowed to request a catalog item based on their entitlements; entitlements define who has access to catalog items and what actions they can perform

To start, we'll create a service called Linux VMs and a new entitlement called Allow Linux VMs. We'll entitle all users of the business group to the Linux VMs service. By using services in the entitlement instead of individual items, we make sure that every new item mapped to this service will be automatically accessible to the users.

Administration > Catalog Management > Services

Administration > Catalog Management > Entitlements




Next we'll create a blueprint that deploys vSphere VMs. There are several ways to provision vSphere VMs; we will use linked clones because they are very fast and use delta disks to keep the changes (which is good in labs). To use linked clones we need to create a golden image: a VM configured to the desired state.

First create the VM: deploy it from an existing template or create it from scratch. VM hostname and networking details will be configured at deployment during guest OS customization. For this to work we need VMware tools installed in the VM and a customization specification created in vCenter Server. 


No other special configuration is needed for the VM.

Optional step (vRA agent installation): if you don't plan to run scripts inside the guest OS of the vRA managed VM, you can skip this step. The installation should be pretty easy since VMware already provides a script that can handle it. Go to the vRA appliance URL and download the script on your Linux VM:

 wget  https://vra_app_fqdn/software/download/prepare_vra_template_linux.tar.gz --no-check-certificate

Then extract the script from the archive and run it:

tar -xvzf prepare_vra_template_linux.tar.gz
cd prepare_vra_template_linux
./prepare_vra_template.sh  

Choose the default agent type (vSphere), add the addresses of the vRealize Automation appliance and the manager service, accept the key fingerprints for the certificates, set the download timeout, and install the JRE (if not already in the VM).




Now we have a VM with all the utilities inside (VMware Tools and optionally the vRA agent), and we create the snapshot that will be the base for the linked clones.

At this point we log in to the vRA portal and start working on our service creation. Go to Design > Blueprints and start creating a new blueprint. Type the name of the blueprint, assign a unique ID or leave the automatically generated one, and limit the number of deployments per request (if you want). Add lease days to control the sprawl of deployed VMs (especially for temporary environments) and add the period of time you want the item to be archived before deletion (when the lease expires).

Since this is for a demo, I've added a default lease of 1 day and no archival (automatic deletion after the lease expires). On the NSX Settings tab, choose the NSX transport zone and whether you want to isolate the VMs deployed from this blueprint (allow only internal traffic between the VMs).







Pressing the OK button will take you to the canvas. From the Machine Types category, drag and drop a vSphere (vCenter) Machine.


From Network&Security category, drag and drop On-Demand Routed Network


Select the vSphere__vCenter__Machine_1 component from the canvas and fill in the configuration details. Add the number of instances that can be deployed in a request.


Add build information: how the VM will be created (linked clone), where to clone it from, what customization specification to use:


Type the resources the VM consumes: number of CPUs, memory, storage. Take care when configuring these values: if you allow 10 instances in a deployment, each with a maximum of 8 vCPU and 32 GB of RAM, you may end up with a deployment using 80 vCPU and 320 GB of RAM. This is where approval workflows come into play.



Finally we need to connect the VM to the network, but first we'll configure the network component. On the canvas select the On-Demand_Routed_Network_1 component and choose the parent network profile (the profile that was created in part 1).


Go back to the vSphere component, go to the Network tab and click New. From the drop-down box select the network name.

Lastly, add a custom property for the VM to define the operating system that is being used.


At this moment we've configured how to create the VM, how to create the network, and we've linked the VM to the network. Press Finish and then Publish the blueprint:


Once the blueprint has been published, it will appear under Administration > Catalog Management > Catalog Items. Select the new catalog item, press Configure and map it to the service created at the beginning of the post.


The service will appear in the Catalog tab and you can press Request to deploy a new instance of it. To see what is happening, go to the Requests tab, select the request, press View Details and, when the request details open, press Execution Information.


Here you will see that the VXLAN has been created on demand and the DLR reconfigured. The VM has also been created and connected to the new VXLAN. The process can also be monitored in vCenter Server.


After the provisioning finished successfully, the components will be displayed in Items tab from where they can be managed using day 2 operations.