Friday, May 4, 2018

NSX integration with vRealize Automation 7.4 - part 1

From time to time I find myself setting up a new configuration of vRealize Automation, and since I don't do it very often, I also find that I have forgotten some of the steps. For this reason I will write down my typical integration of NSX with vRA.

The post has two parts:

  • part 1 - describes the vRA configuration at infrastructure/tenant level
  • part 2 - focuses on creation of the service that consumes NSX


Requirements:

  • NSX (6.3.x or later) is installed and configured (VXLAN, distributed logical router, edge services gateways)
  • vRA 7.4 is deployed, tenant created, user directory integrated (optional) 
  • familiarity with vRA 


Goal: 

  • all vRA workloads are deployed to on-demand created networks, and we do not worry about routing or virtual network creation
  • security for workloads is ensured using distributed firewall and security tags (but more on this in another post). 


First things first, let's draw a small diagram (sometimes my creative side kicks in :-)) of the desired state:


To solve L2/L3 requirements, we need the following:

  • vRA uses on-demand created VXLANs that use the Distributed Logical Router (DLR) as their default gateway
  • DLR is connected to Edge Services Gateway (ESG) via a transit network
  • dynamic routing protocol is running between DLR and ESG
  • the ESG connects vRA workloads to the rest of the world via the External portgroup (which is a distributed portgroup)
  • ESG may also run dynamic routing protocols
(please don't judge the usage of /24 in the diagram, in real life I am subnetting)

As stated earlier, NSX is already configured, DLR and ESG deployed. Let's see how to configure vRA. 

Log on to the tenant as IaaS Administrator. Ideally, you would have both the IaaS Administrator and Tenant Administrator roles assigned to your account so you don't need to switch between roles. We need to create a vSphere endpoint, a fabric group, a business group, an NSX endpoint and a reservation.



Create vSphere (vCenter) Endpoint:

  • go to Infrastructure > Endpoints > Endpoints
  • New > Virtual > vSphere (vCenter)
  • on General tab: give it a name (vcenter1), add the URL of the vCenter Server API (https://vcenter1.mydomain.local/sdk), add the credentials and press Test Connection
  • if all is good, press OK and you have created your endpoint

Create NSX Endpoint:

  • go to Infrastructure > Endpoints > Endpoints
  • New > Network and Security > NSX
  • on General tab: type the name of the endpoint, add the URL of the NSX Manager and the credentials
  • on Associations tab: map NSX endpoint with the previously created vSphere endpoint (this is a step that appeared in vRA 7.3 due to changes on how NSX is integrated) 

  • press Test Connection, and then OK


Create Fabric Group:
  • go to Infrastructure > Endpoints > Fabric Groups > New
  • give it a name, add the Fabric administrators (users, user groups) and select the compute resources available (the list of compute resources is based on the permissions the user has in vCenter Server)


Now the compute resources are available (Infrastructure > Compute Resources > Compute Resources). Check that data collection has run successfully: hover over the compute resource and choose Data Collection from the menu.

Create Business Group:

  • go to Infrastructure > Users & Groups > Business Groups > New
  • on General tab: type in the name, add an e-mail address for alerts and test it
  • on Members tab: add the users/user groups for the following roles - Group manager role, Support role, Shared access role, User role - and press OK
Starting with vRA 7.3 there is a new role for business groups - the Shared access role - whose members can use and run actions on resources deployed by other users in the business group. It is a good addition, since I remember a client asking for this back in 2015.

Create Network Profiles
  • go to Infrastructure > Reservations > Network Profiles
  • New > External 
  • on General tab (for the transit VXLAN): add the name, subnet mask and gateway IP address
  • on DNS tab: add the DNS details
  • on Network Ranges tab: add the usable IP range (do not forget that two IPs are already used by the ESG and DLR)
  • New > Routed 
  • on General tab: use the external network profile created previously, and add some subnetting details:
    • subnet mask (could be /24 - 254 IPs) - the whole range given to vRA for its workloads
    • range subnet mask (can be /29 - 6 IPs ) - for each application/group of application deployed
  • on Network Ranges tab: press Generate Ranges
At least the following two network profiles will be displayed:
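To get a feel for what Generate Ranges does behind the scenes, here is a small sketch in plain PowerShell. The base network 192.168.10.0/24 is a made-up example value; the arithmetic simply carves the /24 into /29 ranges of 6 usable IPs each:

```powershell
# Sketch of the "Generate Ranges" arithmetic: carve a /24 base network
# into /29 ranges (6 usable IPs each). 192.168.10.0/24 is a made-up example.
$octets      = '192.168.10.0'.Split('.')
$rangePrefix = 29
$rangeSize   = [int][math]::Pow(2, 32 - $rangePrefix)   # 8 addresses per /29

$ranges = for ($i = 0; $i -lt 256; $i += $rangeSize) {
    # skip the network (.0) and broadcast (.7) address of each /29
    $first = '{0}.{1}.{2}.{3}' -f $octets[0], $octets[1], $octets[2], ($i + 1)
    $last  = '{0}.{1}.{2}.{3}' -f $octets[0], $octets[1], $octets[2], ($i + $rangeSize - 2)
    "$first - $last"
}

$ranges.Count   # 32 ranges fit in the /24
$ranges[0]      # 192.168.10.1 - 192.168.10.6
```

Roughly speaking, vRA allocates one such range for each routed network it creates on demand, which is why the range subnet mask determines how many machines can share one deployment's network.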




Create Reservation:
  • go to Infrastructure > Reservations 
  • New >  vSphere (vCenter)
  • on General tab: type in the Name of the reservation, tenant name (if multiple tenants exist), business group for which the reservation is created, priority (in case multiple reservations exist for the same business group)
  • on Resources tab: select the compute resource, put in a machine quota (if needed), select the size of the memory, select the datastores and the storage quota and select the Resource pool (if one has been defined in vCenter Server)
  • on Network tab: select network adapter (transit VXLAN), transport zone, DLR and network profile
  • finalize the task by pressing OK
And we are set: we have compute, storage and network resources available for consumption. In the next post we will create a service and see how NSX is consumed on demand.

Saturday, April 28, 2018

PowerCLI - Batch migrate VM network adapter

Back to basic VMware operational tasks: I had to migrate VMs from one network to another, which roughly translates to changing portgroups. Pretty simple and straightforward. Since the task involved several VMs, I immediately ruled out clicking through the UI. That sent me to PowerCLI, and now the simple task became a bit more complicated because, instead of relying in real time on my hand-eye coordination to change a portgroup, I would have to rely on an input file.

The input file is in CSV format and has four columns: VM name, source portgroup, destination portgroup and reboot (a boolean value indicating whether to reboot the VM or not). You may ask why the source portgroup is used as input - I am using it to check that the VM actually has one network adapter connected to that portgroup, so that I am not randomly migrating everything I find on the VM.

The CSV file looks like this:
vmName,srcPg,dstPg,reboot
vm-1abc*,pg-prod-101,pg-prod-110,false
vm-2def*,pg-prod-101,pg-prod-110,false

I am also using wildcards in the VM names. The reason is that the VMs have very long, randomly generated names which differ from their hostnames. For example, the actual VM name is vm-1abc-yetcg-93763-andbv-34781, while the hostname is vm-1abc. How I check that a wildcard does not match multiple VMs is described below.
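The uniqueness check can be illustrated with plain PowerShell - the inventory names below are made up for the example:

```powershell
# Sketch of the wildcard uniqueness check used by the script: only proceed
# when exactly one inventory name matches the pattern. Names are made up.
$inventory = 'vm-1abc-yetcg-93763-andbv-34781',
             'vm-2def-kqwrt-11209-pojxc-55610',
             'vm-2def-clone-99100-zzqal-00042'

# @(...) forces an array so .Count is reliable even for a single match
$hits1 = @($inventory | Where-Object { $_ -like 'vm-1abc*' })
$hits2 = @($inventory | Where-Object { $_ -like 'vm-2def*' })

$hits1.Count   # 1 -> unique match, safe to migrate
$hits2.Count   # 2 -> ambiguous, the VM is skipped
```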

The migration script does the following:
  1. takes the CSV file as input and tries to load and read it; if this fails, it exits
  2. for each line in the CSV file it searches the VM in vCenter server inventory
  3. once the VM has been uniquely identified, it searches for a network adapter connected to the source portgroup defined in the input file; if it finds none, or more than one, it does not process the VM
  4. once the source adapter is identified, it changes the network adapter mapping to the destination portgroup (as defined in the input file)
  5. if reboot is required, it will issue a soft reboot to the VM
  6. and it goes to the next VM in the list

And now the code: as usual, please use it carefully, as it has not been tested in all situations. Also, the PowerCLI session from which the script is run needs to be connected to a vCenter Server, as the script does not handle the connection.



param(
 [Parameter(Mandatory=$true)][string]$csvVmList
)

function VmChangeNetwork($vmName,$srcPg,$dstPg,$reboot){
  Write-Host "processing:" $vmName
  Write-Host "  src PG:" $srcPg "dst PG:" $dstPg "reboot required:" $reboot

  Try {
      $v = Get-VM | Where {$_.Name -like "$vmName"}
  }
  Catch {
    Write-Host $_.Exception.Message $_.Exception.ItemName
  }

  if ($v.Count -eq 1){
    $srcPgExist = $v  | Get-NetworkAdapter | Where {$_.NetworkName -eq $srcPg}
    if ($srcPgExist.Count -eq 1) {
      $srcPgExist | Set-NetworkAdapter -NetworkName $dstPg -Confirm:$false
      if (($reboot.ToLower() -match "true") -and ($v.PowerState -match "PoweredOn")){
        Write-Host " rebooting VM"
        Restart-VM -VM $v -RunAsync -Confirm:$false
      }
    } elseif ($srcPgExist.Count -eq 0) {
      Write-Host " no adapters connected to" $srcPg "found"
    } else {
      Write-Host " multiple adapters connected to" $srcPg "found"
    }


  } elseif ($v.Count -eq 0) {
    Write-Host " "$vmName "was not found"
  } else {
    Write-Host " "$v.Count "VMs found with name" $vmName
  }
  Write-Host ""
}

# load CSV file
Try {
  $vmList = Import-Csv $csvVmList
}
Catch {
  Write-Host -ForegroundColor Red " File is not accessible"
  exit
}

# process VMs
foreach ($vm in $vmList){
  VmChangeNetwork -vmName $vm.vmName -srcPg $vm.srcPg -dstPg $vm.dstPg -reboot $vm.reboot
}

Wednesday, April 11, 2018

Veeam Backup and Replication - Infinidat Integration

Starting with Update 3 (U3), Veeam Backup & Replication (VBR) offers a built-in integration framework for storage systems called Universal Storage Integration API. Storage vendors can use the API to develop plugins and integrate their storage systems with VBR. This is a huge step into extending the ecosystem of storage vendors that offer advanced functionality with VBR.

As of the writing of this article, the following storage systems are already supported:
  • IBM Spectrum Virtualize (since December)
  • INFINIDAT InfiniBox (since March)
  • Pure Storage FlashArray (since April)
(Later edit: IBM was the first storage vendor supported via the API, and its integration shipped with the U3 release)
The following article presents the integration with Infinidat InfiniBox.

I will not go into the details of installing and configuring InfiniBox. This post covers only the Veeam part.

First, download the Infinidat plugin from the Veeam site. Log in to the VBR server, close the VBR console, make sure no processes/jobs are running, extract the zipped file and run the installer. If you connect remotely to VBR, run the installer on the machines from which you connect, too. It's a next, next, next process (as seen in the following series of pictures):





Now, getting to the fun part. Open VBR console, go to Storage Infrastructure > Add Storage - there it is, Infinidat Infinibox is available:



Add storage hostname (or IP address):

Add storage credentials:

Select the protocol to use (in lab I am using only iSCSI), the volumes to scan and the proxies to use (I have left everything on automatic):

Review the summary, press Finish and wait for the successful installation:


It is time to test the newly configured storage. First, let's create a job that uses Backup from Storage Snapshots. Since storage snapshot integration is enabled by default, there is no need to configure anything special for this job. Just select the VMs from the InfiniBox datastore and run the job. Looking at the logs, we see that "storage snapshot" is being used for backup:

Another type of job uses only snapshots, although this is not a proper backup solution (since both the source VM and the backup reside on the same storage). To make a backup job snapshot-only, select the InfiniBox storage (not a VBR repository) as the destination repository:

This time when the job runs, it will create a snapshot directly on the storage:

The snapshots will appear under Home - Backups:

As well as under Storage Infrastructure:

Now, it's time for you to test the recovery :-)

Wednesday, April 4, 2018

Veeam ONE Custom Reports

I was recently asked if Veeam ONE could also create custom reports, such as an inventory list of the VMs and their configuration.

Veeam ONE does come pre-loaded with a lot of reports. But if none of those reports are satisfying, then you can create your own.

To do this, we first log in to Veeam ONE Reporter and go to the Workspace tab. In the left pane, under My Reports, we create a folder (not mandatory, but it is good practice to keep things separated).


Next we see a preexisting folder, Custom Reports. We open the folder and select the "Custom Infrastructure" report, which allows us to define our own parameters.


The selected scope is Virtual Infrastructure, which allows us to select from all the objects. We are interested in vSphere Virtual Machine, but it could be any vSphere (or Hyper-V) inventory object.


Next we select the parameters to display - in our case: VM name, number of vCPUs, memory size, disk size and IP address. The window allows real-time filtering of the available parameters (which makes life easier than scrolling through a long list):


In case we are looking for something specific (let's say VMs that have the letters "vbr" in their name and less than 32 GB of RAM), we can create a custom filter so that only those VMs are displayed.
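Conceptually, that filter is just a predicate over the VM properties. A plain-PowerShell equivalent looks like this (the sample VM data is invented for illustration):

```powershell
# Equivalent of the report filter: name contains "vbr" AND memory < 32 GB.
# The sample data below is invented for illustration.
$vms = @(
    [pscustomobject]@{ Name = 'vbr-01'; MemoryGB = 16 },
    [pscustomobject]@{ Name = 'vbr-02'; MemoryGB = 64 },
    [pscustomobject]@{ Name = 'sql-01'; MemoryGB = 16 }
)

$filtered = @($vms | Where-Object { $_.Name -like '*vbr*' -and $_.MemoryGB -lt 32 })
$filtered.Name   # vbr-01
```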


Cool, right? To make it even more flexible, we can group the VMs based on one of the properties (for example, memory size) and we can also choose to sort the lists. After we finish tweaking the report, it's time to save it (to the folder created earlier).


After the report is saved, we go to its location and run it (of course, it can also be edited, copied, deleted, or scheduled to run periodically and sent as an attachment to an e-mail address).


And the result of running the report is:


This is just an example of how to use custom reports. The power of Veeam ONE comes from letting you choose any parameter from the monitored infrastructure (virtual and backup) and use it in your own custom report.

Thursday, August 17, 2017

vRealize Business update checking hangs

While trying to update vRealize Business 7.2, I noticed that it hung during the update check with the message "Checking for available updates...".

Nothing I tried - changing the update options, rebooting the appliance - helped. Searching communities.vmware.com I found post 477322, which described the same behavior, only for vCenter Server Appliance, and it was from 2014. So I tried (a bit skeptically) to start the update from the CLI as suggested in the post:

  • ssh to the vRB appliance
  • /opt/vmware/bin/vamicli update --check
  • /opt/vmware/bin/vamicli update --install latest --accepteula
And it worked:

I suppose this workaround can be applied to most VMware appliances.

Monday, August 14, 2017

New and cool features in vRealize Automation 7.3 - parameterized blueprints

vRealize Automation 7.3 has been out for a few months, but only last week did I find time to update the lab and take a closer look at it. And I really liked what I saw.

One of the first features I saw made me smile, because I remembered the times when clients requested it and I went through the process of explaining that it was possible, but would need some customization and workflow development. Now, in 7.3, there are parameterized blueprints, which allow you to define t-shirt sizes for VMs. They also provide image parameters - how the image is built. This way the VM's configuration can be hidden from the service consumer, and organization policies can be implemented using image parameters without the need to create new blueprints. You could have a single Windows template that offers 2012, 2012 R2 and 2016, as simple as selecting from a drop-down menu.

In the end, the user gets to select whatever OS version they are entitled to and the t-shirt size of the deployment.

Let's see how we can get to this nice item request screen.

First we define the parameters. In vRA portal, go to Administration -> Property Dictionary -> Component Profiles. There are two component profiles already defined: Image and Size. 

We need to edit each one. By default, the component profiles have no values defined. Edit the Size profile, where we will configure the CPU, memory and storage values for vSphere virtual machines to be used in blueprints. Go to the Value Sets tab and press New:

Type in a display name (what the service consumer will see in the request form), a description (optional), configure the values for CPU, memory and storage, and select the status (active by default, meaning it can be used in blueprints). Press Save if you want to add more value sets, or Finish to save and exit. Once a value set has been defined, it can be edited, deactivated or deleted.
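Conceptually, the Size value sets form a lookup from the display name the consumer picks to the hardware values applied to the machine. A small sketch (the size names and numbers are illustrative examples, not vRA defaults):

```powershell
# Illustration of what Size value sets represent: display name -> hardware
# values. The names and numbers are examples, not vRA defaults.
$sizeValueSets = @{
    'Large'  = @{ CPU = 4; MemoryMB = 8192;  StorageGB = 80  }
    'XLarge' = @{ CPU = 8; MemoryMB = 16384; StorageGB = 160 }
}

# At request time, the consumer's selection resolves to concrete values:
$selected = $sizeValueSets['Large']
'{0} vCPU / {1} MB / {2} GB' -f $selected.CPU, $selected.MemoryMB, $selected.StorageGB
# 4 vCPU / 8192 MB / 80 GB
```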

Now let's define value sets for the Image component profile and configure the build information for vSphere VMs. It is the same build information that is traditionally configured at each blueprint's level, but this time it is defined as a series of parameters. Edit the Image component profile, go to the Value Sets tab and press New:

Type in the display name and a description, and then select how to build the VM. In my case, I've selected Linked Clone and filled in the necessary parameters: the VMware template to clone from, the snapshot to use and the customization specification name. All other options existing in a blueprint are still available: create, clone, NetApp FlexClone.

Once we have defined the value sets and made them active, we can use them in blueprints. 

Go to Design > Blueprints, where you can either modify an existing blueprint or create a new one. I will modify an existing blueprint, since I want to reduce blueprint sprawl :-)
Go to Design > Blueprints > Edit, select the vSphere machine component in the blueprint and go to the Profiles tab.

By default, no component profiles are selected. Press Add and select which component profiles to use: Size, Image or both.

Press OK and select from each component profile the value sets to use for this particular blueprint. I've used only two of the t-shirt sizes (large and xlarge) and selected large to be the default one:

Press Finish to save and exit. Since the blueprint was already published, we can go directly to the Catalog and request the item using the new t-shirt sizes. For a new item, you first need to publish it, map it to a service and ensure users are entitled to request it.

Thursday, May 4, 2017

Virtual Machine Encryption

A new security feature introduced in vSphere 6.5 is virtual machine encryption. The encryption is guest-agnostic, as it takes place at the hypervisor level before the I/O is stored to disk. It uses the vSphere APIs for I/O Filtering framework, which allows interception of VM I/Os in the virtual SCSI emulation (vSCSI) layer. It encrypts virtual machine files (NVRAM, vswp), virtual disk files and core dump files. However, it does not encrypt log files, VM configuration files or virtual disk descriptor files, since these are considered to contain non-sensitive data.

How it works

There are several components necessary to implement VM encryption. The process uses two different sets of keys - key encryption keys (KEKs) and data encryption keys (DEKs). The components are:
  • external key management server (KMS) - generates and stores key encryption keys (KEKs)
  • vCenter Server - requests KEKs from the KMS and distributes them to ESXi hosts; Key Management Interoperability Protocol (KMIP) v1.1 is supported
  • ESXi hosts - generate data encryption keys (DEKs) and encrypt them with the KEK; encrypted DEKs are stored in configuration files. DEKs are used to encrypt/decrypt virtual machine files. A KEK must be in ESXi memory for its VM to be powered on.

Since KEKs are stored only in the KMS and are used to encrypt/decrypt DEKs, the KMS should be made highly available. Losing the KMS-generated keys means the DEKs cannot be decrypted and access to the VM data is gone.
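The KEK/DEK relationship is classic envelope encryption. Here is a conceptual sketch in plain PowerShell - this only illustrates the model, it is not VMware's actual implementation, and the AES settings are arbitrary:

```powershell
# Envelope encryption sketch: the KEK wraps the DEK; only the DEK touches data.
# This illustrates the model - it is NOT VMware's implementation.
$kek = [System.Security.Cryptography.Aes]::Create()   # lives in the KMS
$dek = [System.Security.Cryptography.Aes]::Create()   # generated by the ESXi host

# Wrap (encrypt) the DEK with the KEK; the wrapped DEK is what gets stored
# in the VM's configuration files.
$wrappedDek = $kek.CreateEncryptor().TransformFinalBlock($dek.Key, 0, $dek.Key.Length)

# Unwrapping needs the KEK: lose the KMS and this step becomes impossible,
# making everything encrypted with the DEK unrecoverable.
$recovered = $kek.CreateDecryptor().TransformFinalBlock($wrappedDek, 0, $wrappedDek.Length)

[Convert]::ToBase64String($recovered) -eq [Convert]::ToBase64String($dek.Key)   # True
```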

Another important aspect: VM encryption uses the data block's address when encrypting, so identical data blocks produce different ciphertext, which protects against snooping. However, it does not provide protection against data corruption.

How to configure

First we need to configure the KMS solution. For demo purposes I've used the Docker container created by William Lam. Please note that the keys for this KMS are held in memory and will be lost on restart. To configure it, log on to your Docker host and start the KMS image by running the following commands:
docker pull lamw/vmwkmip
docker run -d -p 5696:5696 lamw/vmwkmip

Check that the container is running by executing the following command on the Docker host: docker ps.

Next, configure vCenter Server. Log in to the Web Client, select the vCenter Server in the Hosts view, go to the Configure tab, then Key Management Servers, and press Add KMS. In the window, add the KMS cluster name, server alias, server address, TCP port and, optionally, proxy details:

Press Yes to set the KMS as your default KMS cluster:

Trust the certificate presented by KMS:

vCenter Server is now configured to use KMS and the details are displayed in the web client:

Encrypting a VM becomes a matter of applying the correct storage policy to it. Before applying the encryption policy, make sure the VM is powered off, otherwise you will get the following error:


To change the storage policy, in the Web Client right-click the VM you want to encrypt and go to VM Policies -> Edit VM Storage Policies. Replace the default policy with the VM Encryption Policy (the default encryption policy) and press OK:

The encryption process will take some time. You can monitor it in events log:


Once the process is finished you can power on the VM. On the summary tab of the VM you can also check that the VM is encrypted:

Access control
Since cryptography is used when one needs to restrict access to certain data, the question is whether all admins need access to the cryptographic functions in vCenter Server. To restrict access, a new role has been introduced - No cryptography administrator. It lacks the following privileges:

  • Cryptographic Operations
  • Global.Diagnostics
  • Host.Inventory.Add host to cluster
  • Host.Inventory.Add standalone host
  • Host.Local operations.Manage user groups

To further restrict the access, the role can be cloned and modified accordingly.

Interoperability
There are restrictions and limitations when VM encryption is used. One of the most important is that backup solutions using the VMware vSphere Storage APIs - Data Protection are restricted to hot-add backup and the NBD-SSL network transport mode. SAN backup is not supported.

VMs with existing snapshots cannot be encrypted; all snapshots must first be consolidated. Guest memory cannot be captured when snapshotting encrypted VMs.

Performance
If you are looking for details on the performance impact of VM encryption, there is a performance study from VMware. Dedicated encryption hardware is not necessary, but using a processor that supports the AES-NI instruction set will speed up encryption and decryption.