Monday, December 31, 2018

Role Based Access for VMware in Veeam Backup & Replication 9.5 Update 4

One of the cool features that Veeam Backup & Replication 9.5 Update 4 comes with is integration with vCenter Server role-based access. What does it mean? It allows you to delegate permissions to users and groups in Veeam Enterprise Manager based on their permissions in vCenter Server.

A user or group of users is now able to monitor and control the backup and restore of their own VMs in a vSphere environment based on a predefined policy. A policy can be defined through vSphere tags, a role in vCenter Server, or as granularly as a single privilege. Delegation is done through the self service portal in Enterprise Manager.

The cool thing is that the integration actually extends vCenter Server access control by adding vSphere tags as a control mechanism. For example, if DBAs want to do their own backup and restore, just assign a tag to their VMs and create the policy in Enterprise Manager. It's that simple.

Since my environment uses tags, we will test the following scenario: all developers will have access to development VMs which are tagged with "SLA3" vSphere tags.

First, make sure tags exist in vCenter Server and are assigned to the VMs in scope.

Next, you need to install and configure VBR and Enterprise Manager (EM). This is not in the scope of the current article.

Once EM is installed and configured, log in to the EM portal (https://em-address:9443/) and go to Configuration. Check that VBR and vCenter Server are available and reachable.

On the Self Service tab you will see the default configuration for the Domain Users group (my lab is AD integrated).

For the test we will create a new configuration.

Let's look at the Delegation Mode - the mechanism used to define access:

By default, it uses a VM privilege, and the selected privilege is VirtualMachine.Interact.Backup, but you can choose any privilege available in vCenter Server. If you need more flexibility, you can define roles in vCenter Server and base the delegation on those roles (a set of privileges applied to an object). Finally, you can use vSphere tags and allow access based on tags. Once the preferred method of delegation is chosen, it applies to all self service configurations, so be careful when changing the method.

Now open the default self service configuration and let's take a look at it. It assigns a repository and a quota on that repository. The quota can be global (for all users in the group) or individual (per user).

You also define the defaults for advanced job settings such as compression, deduplication, scheduling of full backups and so on. The settings can be copied from the Veeam defaults or from an existing job.

There are 4 job scheduling options available, ranging from full access to the scheduler to no access at all. We will use the default one (full access to job scheduling). Choose wisely what you want your users to be able to do.

The vSphere tags drop-down list appeared because I chose vSphere tags as the delegation method, but it's left empty.

Let's create a new self service configuration for developers group. Press Add and then:
1. select the Type - user or group and search it in AD
2. select the repository and define the quota
3. select job scheduling options
4. select the vSphere tag
5. configure the advanced settings

Press Save and open the self service portal (https://em-address:9443/backup/). Log in with one of the users from the group (Developers in my case). Since the user is a member of 2 configurations, select which configuration to log on to:

Once logged in, the portal displays 5 tabs: Dashboards, Jobs, VMs, Files and Items

Go to Jobs and create a new backup job. A process similar to VBR console job creation will start. First give the job a name, description and decide how many restore points to keep:

Then add the VMs you want to back up. Only VMs with the SLA3 tag will be displayed.

If required, enable application aware processing and add credentials for guest processing:

Schedule the job (in this configuration you are allowed to):

Lastly, enable notifications (if you want to be alerted about the status of the job)

Now that the job has been created, you can run it:

Meanwhile, we can take a look at how things look in the VBR console.

You'll notice the running job named with the following convention: Domain\GroupName_Domain\UserName_JobName. So, all is good and running smoothly.

Back in the self service portal, pressing on the job statistics we can see what happened:

There you go - we just had a user define a backup job and back up their own VM using a simple vSphere tag, with no other settings at either the vCenter Server or VBR level. Next time we'll take a look at restore options.

Tuesday, December 4, 2018

Configure Veeam Backup for Microsoft Office 365

In the previous post, we've gone through the steps to install VBO365 on a Windows Core VM. Now we'll look at configuration.

After installation, a default backup proxy and a default backup repository are created on the server. Proxies are responsible for handling backup traffic from Office 365 to the repository and restore traffic from the repository to the Veeam explorers. For small deployments the same server (the management server) can act as proxy and repository server; however, it is recommended to use external proxies and repositories.

One very important aspect of the repository is that it defines the retention policy. This means that all backup jobs pointing to one repository will get the retention defined at that repository level.

More importantly, in VBO365 the retention policy is defined as the number of years/days since the object was last modified. With a 3-year retention policy, only e-mails that have been touched in the past 3 years will be backed up. If all e-mails are required for a certain mailbox, the keep-forever policy can be selected.

Next we will add the O365 organization that we want to backup

Once the organization has been added, we can create backup jobs. Although it is possible to back up everything in one job, it is recommended to create separate backup jobs for Exchange, SharePoint, Archive and OneDrive and point them to different repositories.

Next we'll create a backup job for emails of only 4 users in the organization. Start the wizard and give the job a name:

Select the users you want to backup:

Select the items to backup for the users (by default All, we'll select only e-mails):

Add exclusions (in our case not required):

Choose the proxy and repository (remember the repo sets the retention):

Confirm the job execution schedule:

The job will be created in a stopped state. Right click it and select Start to run the backup.

Once the backup finishes, you can right click on the job and open Exchange explorer

Tuesday, November 27, 2018

Enable RDP in SLES 12

Haven't been working with SuSE Linux for a while and when this new project came along I got my hands on SLES 12 SP3.

I've created an account, downloaded the ISO, created the VM and installed from scratch. It went pretty smoothly. The SUSE account gives a 60-day trial activation key which allows updating the server. After the 60 days, no more updates, but it's just a lab setup.

The one thing I need from the server is to provide terminal services, so I installed xrdp using the Software Manager in YaST2.

Next, enable xrdp service and start it:
sudo systemctl enable xrdp
sudo systemctl start xrdp

The last thing to do is to configure the firewall. It is enabled by default and will not allow connections to the server. Check that the xrdp service definition exists; if not, create a file that looks like this:
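If you need to create it, a minimal service definition would do; the path and format below follow the standard SuSEfirewall2 layout, and the port assumes the default xrdp listener (TCP 3389) - treat it as a sketch, not the exact file from my lab:

```
## /etc/sysconfig/SuSEfirewall2.d/services/xrdp
## Name: xrdp
## Description: opens TCP port 3389 for xrdp remote desktop sessions
TCP="3389"
```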

Configure the firewall to accept xrdp (and while you are at it, add ssh too). Open the file /etc/sysconfig/SuSEfirewall2 and change the following lines to include your services:
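The relevant line could end up looking like this (assuming an xrdp service definition file exists under /etc/sysconfig/SuSEfirewall2.d/services/; the sshd definition ships with the distribution):

```
# /etc/sysconfig/SuSEfirewall2 - allow the listed services on the external zone
FW_CONFIGURATIONS_EXT="sshd xrdp"
```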

Save the changes and restart the firewall service:
sudo systemctl restart SuSEfirewall2

Now, if all went well, open your preferred RDP client and give it a try:

Friday, September 21, 2018

VCP 6.5 Delta Exam experience

The dreadful reminder of imminent VCP expiration came again. I had long decided to take the VCP delta exam, 2V0-622D, without actually doing it. But this time I was prepared. Having happily paid the exam fee (using my VMUG Advantage discount), I was waiting for the day to come. It came and went pretty fast, and I did pass the exam.

I must admit I was a bit scared, but it is a delta exam, and if you are using the technology on a daily basis there can't be a lot of surprises. It didn't seem a difficult exam to take. Maybe having delivered the ICM 6.5 course a few times helped. However, I would like to point out several aspects of how I prepared for the exam:

  • never forget the configuration maximums - I failed one question because I did not read them properly
  • it is a delta exam - so what's new is important, especially since 6.5 was more like a major release, bringing in VMFS6, security features, NFS 4.1 and other goodies. If you don't use these features in your production environment, stand up a nested ESXi host and play with them a bit (of course, always read the manual)
  • for features you don't use in production on a day-to-day basis (say, vSAN), read the manual
  • use a practice mock exam to find your weaknesses and study more (this one from Simon Long is pretty cool)
  • lastly, if you don't know an answer, use your logic and what you do know to eliminate the wrong answers. This method may lead you to the right answer
For all of you thinking about taking the exam, good luck!

Saturday, September 15, 2018

Secure Boot and Acceptance Levels of Hosts and VIBs

Acceptance levels of VIBs provide information on the amount of certification a software package has undergone. There are four acceptance levels for VIBs:

  • VMwareCertified - the most stringent requirements, equivalent to VMware in-house Quality Assurance testing. Only I/O Vendor Program (IOVP) drivers are published at this level. VMware takes support calls for VIBs with this acceptance level.
  • VMwareAccepted - partner runs the tests and VMware verifies the result. VMware takes support calls for VIBs with this acceptance level.
  • PartnerSupported - The partner performs all testing. VMware does not verify the results. VMware directs support calls for VIBs with this acceptance level to the partner's support organization.
  • CommunitySupported - for VIBs created by individuals or companies outside of VMware partner programs. They are not supported by VMware or its partners.

Why is this interesting for us? Mostly because of the relationship between secure boot and the acceptance level. Secure boot does not allow setting the acceptance level to CommunitySupported. This makes perfect sense: why would you want to install a VIB created by someone outside the trusted partner programs? Two answers come to mind: home labs and testing.

With secure boot enabled (the default for VMs created with UEFI firmware in vSphere 6.5 U1) you will notice the following behavior when trying to set the acceptance level:
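The same behavior can be seen from the ESXi shell; esxcli refuses to lower the acceptance level while secure boot is active (a sketch only, the exact error text is omitted):

```
# check the current host acceptance level
esxcli software acceptance get
# attempt to lower it - this is rejected on a host booted with secure boot
esxcli software acceptance set --level=CommunitySupported
```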

In order to set the desired acceptance level, you must disable secure boot. On a physical server, this is done in the UEFI settings. For VMs it can be done at the VM level, but it still requires a power off: select the VM, edit its settings, and on the VM Options tab, under Boot Options, you will find the Secure Boot setting.

If you are connecting to vSphere 6.5, use the web client, since the HTML5 one does not show the option.

Wednesday, September 5, 2018

Installing Veeam Backup for Microsoft Office 365 on Windows Server 2016 Core

Windows Server 2016 Core is a minimal installation with a smaller footprint, which translates to a smaller attack surface. With security in mind, it makes perfect sense to use Core for deploying server components.

Veeam Backup for Microsoft Office 365 (VBO365) allows you to back up and recover Microsoft Office 365 and on-premises Exchange and SharePoint organization data, including Microsoft Exchange items, Microsoft SharePoint items and OneDrive documents.

VBO365 is made of several components:
  • VBO365 server 
  • console
  • SharePoint explorer
  • Exchange explorer
  • PowerShell extensions
Since VBO365 is modular and needs Internet access, its components can be installed on separate machines. In our case, we'll install the server and the PowerShell extension on a Windows Server 2016 Core machine situated in the DMZ, and the console and explorers on the admin workstation in the management area. Repository space will be provisioned on the local disk of the VBO365 server.

First, we start by deploying a virtual machine with Windows Server 2016 Core installed on it. The VM is deployed from a template created previously. To create the template, do the following:
  • configure a VM with 1 vCPU, 512 MB of RAM and 32 GB of disk
  • upload the Windows iso file on a datastore
  • attach the VM CDROM to the iso file
  • start the VM and follow the installation steps - basic Windows install, just choose the Windows install without Desktop Experience
  • enter the license key and wait for the install to finish
For a step-by-step guide, you may look at this blog post

Remember that a Core installation is minimal and restrictive, so you will need to enable remote administration and file sharing. This can be done either with netsh commands or with PowerShell.

Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
Enable-NetFirewallRule -DisplayGroup "File and Printer Sharing"

netsh advfirewall firewall set rule group="Remote Desktop" new enable=Yes
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes

Once the VM has been installed and configured, convert it to a template. Create a customization specification in vCenter Server and deploy a VM from the template. Before powering on the new VM, adjust its resources as demanded by VBO365; in a very strict lab environment you could start with 2 vCPUs and 4 GB of RAM.
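The resource adjustment can also be done with PowerCLI; a minimal sketch, assuming the clone is named vbo365-srv (a hypothetical name) and a connection to vCenter Server already exists:

```
# resize the powered-off clone before its first boot (VM name is an example)
Get-VM -Name "vbo365-srv" | Set-VM -NumCpu 2 -MemoryGB 4 -Confirm:$false
```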

It's now time to install VBO365. The distribution package is made of 3 msi files:
  • Veeam Backup for Microsoft Office 365
  • Veeam Explorer for Microsoft Exchange
  • Veeam Explorer for Microsoft SharePoint
Since we will install the server and the explorers separately, we only need to transfer the server msi file. Copy the file to a share on the virtual machine, then log on to the server and, in a cmd prompt, change to the folder where the installer was copied. To install the VBO365 server from the command line, run the following command:

msiexec /i "Veeam.Backup365_2.0.0.567.msi" /qn ADDLOCAL=BR_OFFICE365,PS_OFFICE365 /L*V "vbo365.log"

This installs only the server (BR_OFFICE365) and the PowerShell extension (PS_OFFICE365). The install will not show any prompts (/qn) and it logs everything to the vbo365.log file in the same folder as the installer.

During installation, monitor the log file. Once the install has completed successfully, you will see the following lines at the end of the file:

MSI (c) (E8:C0) [22:10:14:349]: MainEngineThread is returning 0
=== Verbose logging stopped: 8/29/2018  22:10:14 ===

We need to do two more tasks before moving to the installation on the admin workstation:
  • check that the services are running
  • open firewall ports
To check the services are running, type the following command in PowerShell:

Get-Service "Veeam*" | Format-List

You should see the following output:

If any service is not running, you may enable and start it:

Set-Service -Name "Veeam.Archiver.RestFul.Service" -StartupType Automatic 
Start-Service -Name "Veeam.Archiver.RestFul.Service"

To enable firewall ports for the 3 services, run the following:

New-NetFirewallRule -DisplayName "Open Port 9191" -Direction Inbound -LocalPort 9191 -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "Open Port 9194" -Direction Inbound -LocalPort 9194 -Protocol TCP -Action Allow
New-NetFirewallRule -DisplayName "Open Port 4443" -Direction Inbound -LocalPort 4443 -Protocol TCP -Action Allow
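With the rules in place, you can verify from the admin workstation that the ports answer; the server name below is an example:

```
Test-NetConnection -ComputerName "vbo365-srv" -Port 9191
Test-NetConnection -ComputerName "vbo365-srv" -Port 4443
```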

It's time to log on to the admin workstation where the console and explorers will be installed. Transfer all 3 msi files to the workstation, open a command prompt, change to the msi folder and run the following commands one by one (not all at the same time):

msiexec /i "Veeam.Backup365_2.0.0.567.msi" /qn ADDLOCAL=CONSOLE_OFFICE365,PS_OFFICE365 /L*V "vbo365-console.log"
msiexec /i "VeeamExplorerForExchange_9.6.3.567.msi" /qn ADDLOCAL=BR_EXCHANGEEXPLORER,PS_EXCHANGEEXPLORER /L*V "vbo365-vex.log"
msiexec /i "VeeamExplorerForSharePoint_9.6.3.568.msi" /qn ADDLOCAL=BR_SHAREPOINTEXPLORER,PS_SHAREPOINTEXPLORER /L*V "vbo365-vsp.log"

We are ready to connect to VBO365 and start configuring it. But that's for the next post.

UPDATE 2018/12/4

Updating VBO365 from the command line: the latest VBO patch comes in the form of an msp file. To update a VBO installation, simply copy the file to the server and run the following command:

msiexec /update VBO2.0-KB2765.msp /qb /log patch.log

You can view the log after the install. Remote proxies can be upgraded from the console. 

Saturday, August 18, 2018

Adding Realtek 8111 driver to vSphere 6.7 image

While reinstalling my home lab with vSphere 6.7, I was reminded (the hard way) that my on-board NIC is based on the Realtek 8111 chipset, which is not included in the default vSphere installation media.

So I accepted the challenge of finding the drivers and creating a new bootable vSphere ISO. Nothing I haven't done before, but since it's not something I do often, I decided to make it a blog post.

First I needed to find the drivers. With a bit of Google foo I found the blog of a long-time vExpert (thank you) which also hosts a collection of drivers. I downloaded the net55-r8168 offline bundle. From the VMware site I downloaded the offline bundle for vSphere 6.7. I placed both in the same folder and opened a PowerCLI prompt.

First, create a new software depot using the two bundles:

Add-EsxSoftwareDepot "C:\7_KIT\VMW\", "C:\7_KIT\VMW\"

Next, create a new image profile: see what profiles exist, clone one, and change its acceptance level to CommunitySupported (because the driver I am about to load is community signed):
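To see which profiles the depot provides before picking one to clone:

```
Get-EsxImageProfile | Select-Object Name, AcceptanceLevel
```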

New-EsxImageProfile -CloneProfile ESXi-6.7.0-8169922-standard -name ESXi-6.7.0-8169922-standard-RTL8111 -Vendor Razz 
Set-EsxImageProfile -ImageProfile ESXi-6.7.0-8169922-standard-RTL8111 -AcceptanceLevel CommunitySupported

Add the driver to image profile:

Get-EsxSoftwarePackage | Where {$_.Vendor -eq "Realtek"}
Add-EsxSoftwarePackage -ImageProfile ESXi-6.7.0-8169922-standard-RTL8111 -SoftwarePackage net55-r8168

Lastly, generate the vSphere 6.7 ISO containing the driver:

Export-EsxImageProfile -ImageProfile ESXi-6.7.0-8169922-standard-RTL8111 -ExportToIso -filepath C:\7_KIT\VMW\VMware-ESXi-6.7.0-8169922-RTL8111.iso

In one picture, it looks like this:

One more step is needed. Since we have the ISO, we just need to write it to a bootable USB stick. To do this, I downloaded Rufus (portable version), ran it, selected the USB stick as the destination (it will be overwritten, so better not have any useful data on it), selected my new ISO as the source and pressed Start.

If during the creation of the bootable stick you are asked to update menu.c32, press Yes. After it finished, I plugged the stick into my physical box and happily installed ESXi.

A file download test reached 800 Mbps, a normal value given the connectivity between my laptop and the ESXi host.

Friday, August 3, 2018

vExpert Program "You're in!"

I have the bad habit of waking up and immediately reading e-mails on my phone. This time I was in for a great surprise that put a smile on my face. I got accepted to vExpert program.

I consider it to be both an honor and a responsibility. I know my blogging is not as frequent as I wish it to be. Together with my VMUG co-leaders, we are striving to make VMUG meetings fun and interesting. The only thing I can do is keep giving back to the community, make it better each time, and hope to see more and more people at VMUG Romania meetings.

So, thank you for this amazing ride (so far). And special thanks to Constantin Ghioc (Titi) and Jorge de la Cruz for encouraging me to apply to the program.

My profile on the vExpert Directory is here.

Saturday, June 30, 2018

VMware BIOS UUID, vCloud Director and Veeam Agent

Looking at the title, it may seem a long and complicated story; actually, it's a very simple one. While working in a lab hosted by a vCloud Director (vCD) instance, I was testing a SQL Always On cluster and the Veeam Agent for Microsoft Windows. I created my 3-node cluster, installed the database and configured availability groups. All went well until I tried to install the Veeam agent, which stopped with an error that the same UUID was being used by all 3 nodes.

Well, all 3 VMs were deployed from the same vCD catalog VM. According to this VMware KB article, by default vCD keeps the same UUID value for cloned VMs. Since I did not have access to vCD to change that setting, the only choice was to actually modify the UUID of the VMs.

There are several ways of doing it, from manual to programmatic (as can be seen in this KB article).

I've chosen the PowerCLI variant. The steps are pretty straightforward:
  • shutdown the VM
  • get the current UUID
  • change the UUID
  • power on the VM

And the code to do it is below. Do not forget to change $newUuid (I modified the last few digits of the current one).

$vmName = "myClonedVm"
$vm = Get-VM -Name $vmName

$newUuid = "00112233-4455-6677-8899-aabbccddeeff"

$vm | Shutdown-VMGuest -Confirm:$false
While ((Get-VM -Name $vmName).PowerState -ne "PoweredOff") {
  Write-Host -foreground yellow "... waiting for" $vmName "to power off"
  sleep 5
}

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.uuid = $newUuid
$vm.ExtensionData.ReconfigVM($spec) # apply the new UUID to the powered-off VM

Start-VM -VM $vmName -RunAsync

Thursday, June 14, 2018

Automating NSX with PowerShell and RESTful API

Often, there are situations when one needs to do the same actions over and over again. System admins solved these repetitive tasks by scripting them. Scripting languages were, however, limited to the operating system or, in some cases, to a middleware application. Modern applications provide RESTful APIs that extend the power of scripting languages from the OS level to virtually every application in the infrastructure. Using RESTful APIs, one can create complex workflows that execute orchestrated actions across different applications. This brings a new word into the vocabulary: automation.

Let's take a simple example: a new developer is hired and needs a dedicated dev and test environment. We are using VMware vSphere and NSX to provide that environment. Because we don't want to interfere with other environments, we want to isolate it. For isolating environments at the network level we can use an Edge Services Gateway (ESG) and a dedicated VXLAN. At the vSphere level we need to control resource consumption. If there is vRealize Automation or vCloud Director, these tasks can be easily automated from the GUI. But what if there is no cloud management portal, or the one that exists is not integrated with NSX and vSphere? Then we need to go back to a scripting language.

In this post we'll take a look at how to call the NSX RESTful API using PowerShell. There are other languages that can be used (ruby, perl, python, JavaScript), depending on your preference. There is also a PowerShell extension for NSX called PowerNSX. The scope of this post is to see how we can use PowerShell to consume RESTful APIs (the principles apply to any API), using NSX as the example.

Before we begin, a few words about the environment. I am using PowerShell 6.0.1, the latest stable version downloaded from GitHub. The reason is that versions prior to 6.x have issues getting PowerShell methods to trust self-signed SSL certificates. In 6.x you can add the -SkipCertificateCheck parameter to both Invoke-RestMethod and Invoke-WebRequest to accept self-signed certificates.

Any RESTful API connection needs to be authenticated. Let's create the authentication header that will be sent with the request. We need to get the username and password and encode them to a Base64 string. The password could simply be put as a string in the code, but I would like to complicate things and request the password from the user as a secure string (characters masked with star symbols):

$username = "nsxAdmin"
$securedValue = Read-Host "Enter password" -AsSecureString

After we get the password, we need to decrypt the secure string from unmanaged memory and write it in a string variable. We use the following methods: PtrToStringAuto and SecureStringToBSTR.

$password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($securedValue))
$userpass  = $username + ":" + $password

We encode the username and password to Base64 and create the header.

$bytes = [System.Text.Encoding]::UTF8.GetBytes($userpass)
$encodedlogin = [Convert]::ToBase64String($bytes) # Base64 encode the credentials
$authheader = "Basic " + $encodedlogin
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("Authorization", $authheader)

Lastly, we create the URI (which will get the existing edges) and run the query. The output will be sent to an XML file.

$uri = "https://nsx-manager/api/4.0/edges"
$outXml = $baseFilePath + 'allEsg.xml'
Invoke-RestMethod -Uri $uri -Headers $headers -Method 'GET' -OutFile $outXml -SkipCertificateCheck

We have the XML file with all existing edges. Let's get vnic configuration for each edge. First we load the XML file and look for edge child nodes - pagedEdgeList.edgePage.edgeSummary. For each edge we find, we use the objectId to query the API for its vnics and put that result into a file.

$esgObjectArray = @()
[xml]$config = Get-Content $outXml
foreach ($esg in $config.pagedEdgeList.edgePage.edgeSummary) {
  $esgObject = New-Object PSObject -property @{Id=$esg.objectId;Name=$esg.name}

  $uri = 'https://nsx.eudemo.veeam.local/api/4.0/edges/' + $esg.objectId + '/vnics'
  $outXml = $baseFilePath + $esg.objectId + '-vnics.xml'
  try {
    Invoke-RestMethod -Uri $uri -Headers $headers -Method 'GET' -OutFile $outXml -ea stop -SkipCertificateCheck
  } catch {
    Write-Host $_ -foreground green
  }

  $esgObjectArray += $esgObject
}

Within the foreach loop we build an array ($esgObjectArray) containing the id and name of each edge. The $esgObject object holds these parameters for each iteration of the loop before being added to the array. The array will be used later to get the IP addresses for each edge and export everything to a CSV file.

The variable $baseFilePath can be any destination on your local system where you want to place output files (e.g "D:\scripts\myEsgConfig\").

Finally we'll export edge ID, edge name and its IP addresses to a csv file. We'll loop through $esgObjectArray and get the information from the saved xml configuration files for each edge (-vnics.xml files):

$esgVnics = @()
$csvFile = $baseFilePath + 'esgIpList.csv' # output CSV file (name is an example)
foreach ($esg in $esgObjectArray) {
  $outXml = $baseFilePath + $esg.Id + '-vnics.xml'
  if (Test-Path -Path $outXml) {
    $vnics = ""
    $item = ""
    [xml]$config = Get-Content $outXml
    foreach ($interface in $config.vnics.vnic) {
      $vnics += "," + $interface.addressGroups.addressGroup.primaryAddress
    }
    $item = $esg.Id + "," + $esg.Name + $vnics
    Write-Host $item
  } else { continue }
  $esgVnics += $item
}

$esgVnics | foreach { Add-Content -Path $csvFile -Value $_ }

In the end we'll get a CSV file with the following format:

As you've seen, we used Invoke-RestMethod to query the API and saved the results in XML files for later use. You can use other REST methods like PUT, POST or DELETE to manage the NSX environment. In the end, it is all about creating and manipulating XML/JSON content to do the operations from the command line.
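For example, a POST follows the same pattern as the GET calls above, just with a request body; the payload here is an illustrative placeholder, not a real NSX object definition:

```
$body = '<object><name>example</name></object>'
Invoke-RestMethod -Uri $uri -Headers $headers -Method 'POST' -Body $body -ContentType 'application/xml' -SkipCertificateCheck
```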

Friday, June 1, 2018

PowerCLI - Get the sizing of virtual machines

Any backup project needs an answer to at least the following questions: what is the total number of VMs, what is the total backup size, and how many virtual disks each VM has. That doesn't mean it is easy - there are more questions that need answers - but these three are the base. Sometimes there is a fast answer to them, but there are situations when the information is unknown or difficult to find. To make life easier, I've put together a small script based on the PowerCLI cmdlets Get-VM and Get-HardDisk.

The script runs against vCenter Server, retrieves each VM, parses the data and sends the needed information to a CSV file. The file has the following structure: VM name, number of disks attached to the VM, used space without swap file (in GB), total used space (in GB), provisioned space (in GB).

A short explanation on the 3 different values for space:

  • provisioned space is the space requested by the VM on the datastore. For thick provisioned VMs the allocated (used) space is equal to the requested space. Thin provisioned VMs do not receive the space unless they consume it, hence the difference between the provisioned space and used space (allocated) columns
  • the disk space reported in the used column includes files other than the VM disks (vmdk). One of these files is the VM swap file, whose size equals the difference between the VM memory and the amount of reserved VM memory. This means that if there is no reservation on the VM memory, each VM will have a swap file equal to the size of its memory. This file is not part of the backup and can be excluded. If we do a quick math exercise: 150 VMs with 8 GB each means more than 1 TB of data that we can ignore in the calculations
  • finally, looking at vm10 in the image above, we see the 2 columns (used and used without swap) are equal. Since vm10 is powered on, this means it has a memory reservation equal to its whole memory. For powered off VMs, the 2 columns will always be equal since the swap file is created only for running VMs.
Let's take a look at the code. First we define a function that takes a VM object as input and returns a custom PowerShell object with the required properties of the VM:

function GetVMData($v) {
 $vmResMemory = [math]::Round($v.ExtensionData.ResourceConfig.MemoryAllocation.Reservation/1024,2)
 $vmMem = [math]::Round($v.MemoryMB/1024,2)
 $vmUsedSpace = [math]::Round($v.UsedSpaceGB,2)
 if ($v.PowerState -match "PoweredOn") {
  $vmUsedSpaceNoSwap = $vmUsedSpace - $vmMem + $vmResMemory # removing swap space from calculations
 } else {
  $vmUsedSpaceNoSwap = $vmUsedSpace
 }
 $vmProvSpace = [math]::Round($v.ProvisionedSpaceGB,2) # swap space included
 $vmName = $v.Name
 $vmNoDisks = ($v | Get-HardDisk).count

 $hash = New-Object PSObject -property @{Vm=$vmName;NoDisks=$vmNoDisks;UsedSpaceNoSwap=$vmUsedSpaceNoSwap;UsedSpace=$vmUsedSpace;ProvSpace=$vmProvSpace}
 return $hash
}

We look at several parameters of the VM: memory reservation, allocated memory, used space, number of disks. We take the parameters and create a new PowerShell object that the function returns as a result.

The main body of the script is pretty simple. First we define the format of the output CSV file, then we take every VM from vCenter Server (you need to be connected to vCenter Server before running this script) and process it:

$vmData = @('"Name","NoDisks","UsedSpaceGB(noSwap)","UsedSpaceGB","ProvisionedSpaceGB"')
$csvFile = ($MyInvocation.MyCommand.Path | Split-Path -Parent)+"\vmData.csv"

foreach ($v in Get-VM) {
 $hash = GetVMData -v $v
 $item = $hash.Vm + "," + $hash.NoDisks + "," + $hash.UsedSpaceNoSwap + "," + $hash.UsedSpace + "," + $hash.ProvSpace
 $vmData += $item
}
$vmData | foreach { Add-Content -Path $csvFile -Value $_ }

If you want to look only at VMs that are powered on, add an IF clause that checks the VM power state inside the foreach loop, before processing each VM:

if ($v.PowerState -match "PoweredOn") {
# process $v
}

Now it's time to look at the output CSV file and start sizing that backup solution.

Monday, May 21, 2018

Automate Veeam protection groups using PowerShell

Veeam Backup & Replication 9.5 U3 adds management for Veeam Agent for Linux and Veeam Agent for Microsoft Windows. The provided capabilities include automated deployment of agents, centralized configuration and management of backup jobs for protected computers, and centralized management of backups created by the agents.

Management of computers in the VBR inventory is handled through protection groups. Protection groups are containers in the inventory used to manage computers of the same type, for example Windows laptops or CentOS servers. They automate the deployment and management of agents by allowing tasks to be performed at the group level rather than on each individual computer. At protection group level you define the scheduling options for protected computer discovery, the distribution server from which agent binaries are downloaded, and the agent installation options.
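Existing protection groups can also be inspected from PowerShell. A quick check, assuming a session opened with Connect-VBRServer:

```powershell
# Sketch: list the protection groups defined on the connected VBR server
Get-VBRProtectionGroup | Select-Object Name
```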

In the current post we'll explore an automated way of creating protection groups and adding computers to them using Veeam PowerShell extension.

The script creates a new protection group and adds a list of computers to it. It takes as input the following parameters:
  • protection group name
  • computer list
  • rescan policy type: daily or periodically
  • rescan hour for daily rescans
  • rescan period in hours for periodically rescans
  • automatically reboot computers if necessary
Before running the script, make sure you have connected to the VBR server using the Connect-VBRServer cmdlet. During the run, the script will prompt for the credentials of the user that will install the agent.
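The connection step might look like this (the host name is hypothetical):

```powershell
# Connect to the backup server first (hypothetical host name)
Connect-VBRServer -Server "vbr01.lab.local" -Credential (Get-Credential)
```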

If the protection group already exists, the following message is displayed and execution stops:
"Protection group: Protection_Group_Name already exists. Use another name for protection group."

After successful execution, the newly created protection group is displayed in the VBR console, Inventory view, under Physical & Cloud Infrastructure. Right click it and select Properties. On the first tab you'll see that the group has been created by PowerShell.

On the Computers tab, selecting a computer and pressing Set User will display the credentials entered during the script run. The credentials' description also notes that they were added by PowerShell:

Finally, on the Options tab you can see that the parameters configured at the start of the script have been applied, in this case a periodic rescan every 6 hours and automatic reboot:

The script configures automatic agent installation; if the computers are reachable and the credentials entered are valid and have the appropriate rights, the status of the computers displayed in the VBR console is "Installed".
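The same check can be made from PowerShell instead of the console. A sketch, assuming an open Connect-VBRServer session (the columns shown vary by product version):

```powershell
# Sketch: review the computers discovered through protection groups
Get-VBRDiscoveredComputer | Format-Table -AutoSize
```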

Finally, the code listing:

# parameters
$protectionGroupName = "All Linux Servers"
$newComputers = @("","","")
$rescanPolicyType = "periodically" # other value: "daily"; any other value defaults to "daily"
$rescanTime = "16:30"
$rescanPeriod = 6 # rescan period in hours for "periodically" - can be 1, 2, 3, 4, 6, 8, 12, 24
$rebootComputer = "true" # any other value will not set -RebootIfRequired flag

# function definition
function NewProtectionGroup($protectionGroupName, $newComputers, $rescanTime, $rescanPolicyType, $rescanPeriod, $rebootComputer) {
  Write-Host -foreground yellow "Enter credentials for computers in protection group " $protectionGroupName
  $creds = Get-Credential
  $newCreds = Add-VBRCredentials -Credential $creds -Description "powershell added creds for $protectionGroupName" -Type Linux
  $newComputersCreds = $newComputers | ForEach { New-VBRIndividualComputerCustomCredentials -HostName $_ -Credentials $newCreds}
  $newContainer = New-VBRIndividualComputerContainer -CustomCredentials $newComputersCreds
 if ($rescanPolicyType -eq "daily") {
  $dailyOptions = New-VBRDailyOptions -Type Everyday -Period $rescanTime
  $scanSchedule = New-VBRProtectionGroupScheduleOptions -PolicyType Daily -DailyOptions $dailyOptions
 } elseif ($rescanPolicyType -eq "periodically") {
  $periodicallyOptions = New-VBRPeriodicallyOptions -PeriodicallyKind Hours -FullPeriod $rescanPeriod
  $scanSchedule = New-VBRProtectionGroupScheduleOptions -PolicyType Periodically -PeriodicallyOptions $periodicallyOptions
 } else {
  Write-Host -foreground red "Unknown rescan policy type" $rescanPolicyType
  Write-Host -foreground red "Using daily"
  $dailyOptions = New-VBRDailyOptions -Type Everyday -Period $rescanTime
  $scanSchedule = New-VBRProtectionGroupScheduleOptions -PolicyType Daily -DailyOptions $dailyOptions
 }
 if ($rebootComputer -eq "true") {
  $deployment = New-VBRProtectionGroupDeploymentOptions -InstallAgent -UpgradeAutomatically -RebootIfRequired
 } else {
  $deployment = New-VBRProtectionGroupDeploymentOptions -InstallAgent -UpgradeAutomatically
 }
 $protectionGroup = Add-VBRProtectionGroup -Name $protectionGroupName -Container $newContainer -ScheduleOptions $scanSchedule -DeploymentOptions $deployment
 # rescan and install
 Rescan-VBREntity -Entity $protectionGroup -Wait
}

# Script body
if (Get-VBRProtectionGroup -Name $protectionGroupName -ErrorAction SilentlyContinue) {
 Write-Host -foreground red "Protection group:" $protectionGroupName "already exists. Use another name for protection group."
} else {
 NewProtectionGroup -protectionGroupName $protectionGroupName -newComputers $newComputers -rescanTime $rescanTime -rescanPolicyType $rescanPolicyType -rescanPeriod $rescanPeriod -rebootComputer $rebootComputer
}

Tuesday, May 15, 2018

NSX integration with vRealize Automation 7.4 - part 2

In part 1 of this post we presented the configuration at vRA level. In this post we'll see how to create a service in the Service Catalog for programmatic NSX consumption.

First let's remember the main concepts of vRealize Automation service catalog:

  • catalog items are published in the service catalog for user consumption, e.g. a Linux VM or a 3-tier web app
  • catalog items can be grouped under different services: QA, Test&Dev, Web Apps, Linux Servers
  • a user is allowed to request a catalog item based on their entitlements; entitlements define who has access to catalog items and what actions they can perform

To start, we'll create a service called Linux VMs and a new entitlement called Allow Linux VMs. We'll entitle all users of the business group to the Linux VMs service. By using services in the entitlement instead of individual items, we make sure that every new item mapped to this service automatically becomes accessible to the users.

Administration > Catalog Management > Services

Administration > Catalog Management > Entitlements

Next we'll create a blueprint that deploys vSphere VMs. There are several ways to provision vSphere VMs; we will use linked clones because they are very fast and use deltas to keep the changes (which is good in labs). To use linked clones we need a golden image: a VM configured to the desired state.

First create the VM: deploy it from an existing template or create it from scratch. VM hostname and networking details will be configured at deployment during guest OS customization. For this to work we need VMware tools installed in the VM and a customization specification created in vCenter Server. 

No other special configuration is needed for the VM.

Optional step (vRA agent installation): if you don't plan to run scripts inside the guest OS of the vRA managed VM, you can skip this step. The installation is pretty easy since VMware already provides a script that handles it. Go to the vRA appliance URL and download the script on your Linux VM:

 wget  https://vra_app_fqdn/software/download/prepare_vra_template_linux.tar.gz --no-check-certificate

Then extract the script from the archive and run it:

tar -xvzf prepare_vra_template_linux.tar.gz
cd prepare_vra_template_linux
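Running the extracted script would then be along these lines; the exact script file name inside the archive is an assumption and may differ between vRA builds:

```shell
# Run the preparation script as root (script name is an assumption)
sudo ./prepare_vra_template.sh
```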

Choose the default agent type (vSphere), add the address of the vRealize Appliance and of the Manager Service, accept the key fingerprints for the certificates, set the download timeout, and install JRE (if not already in the VM).

Now that we have a VM with all the utils inside (VMware Tools and optionally the vRA agent), we create the snapshot that will be the base for the linked clones.

At this point we log in to the vRA portal and start working on our service creation. Go to Design > Blueprints and start creating a new blueprint. Type the name of the blueprint, assign a unique ID or leave the automatically generated one, and, if you want, limit the number of deployments per request. Add lease days to control the sprawl of deployed VMs (especially for temporary environments) and a period of time you want the item to be archived before deletion (when the lease expires).

Since this is a demo, I've added a default lease of 1 day and no archival (automatic deletion after the lease expires). On the NSX Settings tab, choose the NSX transport zone and whether you want to isolate the VMs deployed from this blueprint (allow only internal traffic between the VMs).

Pressing the OK button will take you to the canvas. From the Machine Types category, drag and drop vSphere (vCenter) Machine.

From the Network&Security category, drag and drop On-Demand Routed Network.

Select the vSphere__vCenter__Machine_1 component on the canvas and fill in the configuration details. Add the number of instances that can be deployed in a request.

Add the build information: how the VM will be created (linked clone), where to clone it from, and what customization specification to use:

Specify the VM's consumed resources: number of CPUs, memory, storage. Take care when configuring these values: if you allow 10 instances per deployment, each with a maximum of 8 vCPU and 32 GB of RAM, you may end up with a deployment using 80 vCPU and 320 GB of RAM. This is where approval workflows come into play.

Finally we need to connect the VM to the network, but first we'll configure the network component. On the canvas select the On-Demand_Routed_Network_1 component and choose the parent network profile (the profile that was created in part 1).

Go back to the vSphere component, go to the Network tab and click New. From the drop-down box select the network name.

Lastly, add a custom property for the VM to define the operating system being used.

At this moment we've configured how to create the VM, how to create the network, and how to link the VM to the network. Press Finish and then Publish the blueprint:

Once the blueprint has been published, it will appear under Administration > Catalog Management > Catalog Items. Select the new catalog item, press Configure, and map it to the service created at the beginning of the post.

The service will appear in the Catalog tab, and you can press Request to deploy a new instance of it. To see what is happening, go to the Requests tab, select the request, press View Details and, when the request details open, press Execution Information.

Here you will see that the VXLAN has been created on demand and the DLR reconfigured. The VM has also been created and attached to the new VXLAN. The process can also be monitored in vCenter Server.

After provisioning finishes successfully, the components are displayed in the Items tab, from where they can be managed using day 2 operations.