Monday, May 21, 2018

Automate Veeam protection groups using PowerShell

Veeam Backup & Replication 9.5 Update 3 adds management for Veeam Agent for Linux and Veeam Agent for Microsoft Windows. The capabilities include automated deployment of agents, centralized configuration and management of backup jobs for protected computers, and centralized management of backups created by agents.

Protection groups are used to handle the management of computers in the VBR inventory. Protection groups are containers in the inventory used to manage computers of the same type: for example, Windows laptops or CentOS servers. They automate the deployment and management of agents, since they allow performing tasks at the group level rather than at the individual computer level. At the protection group level you define the scheduling options for discovering protected computers, the distribution server from which agent binaries are downloaded, and the agent installation options.

In this post we'll explore an automated way of creating protection groups and adding computers to them using the Veeam PowerShell extension.

The script creates a new protection group and adds a list of computers to it. It takes as input the following parameters:
  • protection group name
  • computer list
  • rescan policy type: daily or periodically
  • rescan hour for daily rescans
  • rescan period in hours for periodic rescans
  • automatically reboot computers if necessary
Before running the script, make sure you have connected to the VBR server using the Connect-VBRServer cmdlet. During the run, the script will prompt for the credentials of the user that will install the agent.
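
For reference, a minimal connection sketch; the server name below is a placeholder, not from the original post:

 # connect to the backup server under the current Windows account
 # (use the -User and -Password parameters to connect as a different account)
 Connect-VBRServer -Server "vbr01.mydomain.local"
 # ... run the protection group script ...
 Disconnect-VBRServer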

If the protection group already exists, the following message will be displayed and execution stopped:
"Protection group: Protection_Group_Name already exists. Use another name for protection group."

After successful execution, the newly created protection group is displayed in the VBR console, Inventory view, under Physical & Cloud Infrastructure. Right click it and select Properties. On the first tab you'll see that the group has been created by PowerShell.

On the Computers tab, select one computer and press Set User to display the credentials entered during the script run. The credentials' description also notes that they have been added by PowerShell:

Finally, on the Options tab you can see that the parameters configured at the start of the script have been applied: in this case, a periodic rescan every 6 hours and automatic reboot:


The script configures automatic agent installation; if the computers are reachable and the credentials entered are valid and have the appropriate rights, the status of the computers displayed in the VBR console is "Installed".
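
To double-check from PowerShell instead of the console, something along these lines should work, assuming the Get-VBRDiscoveredComputer cmdlet is available in your VBR version:

 # list the computers discovered under the newly created protection group
 $pg = Get-VBRProtectionGroup -Name "All Linux Servers"
 Get-VBRDiscoveredComputer -ProtectionGroup $pg | Format-List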

Finally, the code listing:



# parameters
$protectionGroupName = "All Linux Servers"
$newComputers = @("192.168.1.2","192.168.1.3","192.168.1.4")
$rescanPolicyType = "periodically" # other value: "daily"; any other value defaults to "daily"
$rescanTime = "16:30"
$rescanPeriod = 1 # rescan period in hours for "periodically" - can be 1, 2, 3, 4, 6, 8, 12, 24
$rebootComputer = "false" # "true" sets the -RebootIfRequired flag; any other value does not

# function definition
function NewProtectionGroup($protectionGroupName, $newComputers, $rescanTime, $rescanPolicyType, $rescanPeriod, $rebootComputer) {
  Write-Host -foreground yellow "Enter credentials for computers in protection group " $protectionGroupName
  $creds = Get-Credential
  $newCreds = Add-VBRCredentials -Credential $creds -Description "powershell added creds for $protectionGroupName" -Type Linux
  $newComputersCreds = $newComputers | ForEach { New-VBRIndividualComputerCustomCredentials -HostName $_ -Credentials $newCreds}
  $newContainer = New-VBRIndividualComputerContainer -CustomCredentials $newComputersCreds
 if ($rescanPolicyType -eq "daily") {
  $dailyOptions = New-VBRDailyOptions -Type Everyday -Period $rescanTime
  $scanSchedule = New-VBRProtectionGroupScheduleOptions -PolicyType Daily -DailyOptions  $dailyOptions
 } elseif ($rescanPolicyType -eq "periodically") {
  $periodicallyOptions = New-VBRPeriodicallyOptions -PeriodicallyKind Hours -FullPeriod $rescanPeriod
  $scanSchedule = New-VBRProtectionGroupScheduleOptions -PolicyType Periodically -PeriodicallyOptions  $periodicallyOptions
 } else {
  Write-host -foreground red "Unknown rescan policy type" $rescanPolicyType
  Write-host -foreground red "Falling back to daily"
  $dailyOptions = New-VBRDailyOptions -Type Everyday -Period $rescanTime
  $scanSchedule = New-VBRProtectionGroupScheduleOptions -PolicyType Daily -DailyOptions  $dailyOptions
 }
 if ($rebootComputer -eq "true") {
  $deployment = New-VBRProtectionGroupDeploymentOptions -InstallAgent -UpgradeAutomatically -RebootIfRequired
 } else {
  $deployment = New-VBRProtectionGroupDeploymentOptions -InstallAgent -UpgradeAutomatically
 }
  $protectionGroup = Add-VBRProtectionGroup -Name $protectionGroupName -Container $newContainer -ScheduleOptions $scanSchedule -DeploymentOptions $deployment
  # rescan and install
  Rescan-VBREntity -Entity $protectionGroup -Wait
}

# Script body
if (Get-VBRProtectionGroup -Name $protectionGroupName -ErrorAction SilentlyContinue) {
 Write-Host -foreground red "Protection group:" $protectionGroupName "already exists. Use another name for protection group."
} else {
 NewProtectionGroup -protectionGroupName $protectionGroupName -newComputers $newComputers -rescanTime $rescanTime -rescanPolicyType $rescanPolicyType -rescanPeriod $rescanPeriod -rebootComputer $rebootComputer
}
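
If you need to re-run the script while testing, drop the existing group first. A minimal cleanup sketch, assuming the Remove-VBRProtectionGroup cmdlet is available in your version (parameter names may differ slightly):

 # remove the test protection group so the script can recreate it
 $pg = Get-VBRProtectionGroup -Name "All Linux Servers"
 Remove-VBRProtectionGroup -ProtectionGroup $pg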

Tuesday, May 15, 2018

NSX integration with vRealize Automation 7.4 - part 2

In part 1 of this post we presented the configuration at the vRA level. In this post we'll see how to create a service in the Service Catalog for programmatic NSX consumption.

First let's remember the main concepts of vRealize Automation service catalog:

  • catalog items are published in the service catalog for user consumption, e.g. a Linux VM or a 3-tier web app
  • catalog items can be grouped under different services: QA, Test&Dev, Web Apps, Linux Servers
  • a user is allowed to request a catalog item based on their entitlements; entitlements define who has access to catalog items and what actions they can perform

To start, we'll create a service called Linux VMs and a new entitlement called Allow Linux VMs. We'll entitle all users of the business group to the Linux VMs service. By using the service in the entitlement instead of individual items, we make sure that every new item mapped to this service will be automatically accessible to the users.

Administration > Catalog Management > Services

Administration > Catalog Management > Entitlements




Next we'll create a blueprint that deploys vSphere VMs. There are several ways to provision vSphere VMs; we will use linked clones because they are very fast and use delta disks to keep the changes (which is good in labs). To use linked clones we need to create a golden image: a VM configured to the desired state.

First create the VM: deploy it from an existing template or create it from scratch. The VM hostname and networking details will be configured at deployment during guest OS customization. For this to work we need VMware Tools installed in the VM and a customization specification created in vCenter Server.


No other special configuration is needed for the VM.
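
If you prefer creating the customization specification from the command line instead of the vSphere client, here is a minimal PowerCLI sketch; the spec name, domain and DNS server are placeholders and I'm assuming an existing Connect-VIServer session:

 # Linux customization spec: the hostname is derived from the VM name at clone time
 New-OSCustomizationSpec -Name "linux-lab-spec" -OSType Linux `
     -Domain "mydomain.local" -DnsServer "192.168.1.10" -NamingScheme vm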

Optional step (vRA agent installation): if you don't plan to run scripts inside the guest OS of the vRA-managed VM, you can skip this step. The installation should be pretty easy since VMware already provides a script that can handle it. Go to the vRA appliance URL and download the script on your Linux VM:

 wget  https://vra_app_fqdn/software/download/prepare_vra_template_linux.tar.gz --no-check-certificate

Then extract the script from the archive and run it:

tar -xvzf prepare_vra_template_linux.tar.gz
cd prepare_vra_template_linux
./prepare_vra_template.sh  

Choose the default agent type (vSphere), add the addresses of the vRealize Automation appliance and the Manager Service, accept the key fingerprints for the certificates, set the download timeout, and install the JRE (if it is not already in the VM).




Now we have a VM with all the utilities inside (VMware Tools and, optionally, the vRA agent) and we create the snapshot that will be the base for the linked clones.
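
The snapshot can be taken from the vSphere client or with a quick PowerCLI call; a sketch, assuming a connected vCenter session and a golden image VM named "linux-golden" (a placeholder name):

 # snapshot of the golden image; this becomes the base disk for the linked clones
 New-Snapshot -VM (Get-VM -Name "linux-golden") -Name "vra-base" -Description "Base for vRA linked clones"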

At this point we log in to the vRA portal and start working on our service creation. Go to Design > Blueprints and start creating a new blueprint. Type the name of the blueprint, assign a unique ID or leave the automatically generated one, and limit the number of deployments per request (if you want). Add lease days to control the sprawl of deployed VMs (especially for temporary environments) and set the period of time you want the item to be archived before deletion (when the lease expires).

Since this is a demo, I've added a default lease of 1 day and no archival (automatic deletion after the lease expires). On the NSX Settings tab, choose the NSX transport zone and whether you want to isolate the VMs deployed from this blueprint (allow only internal traffic between the VMs).







Pressing the OK button will take you to the canvas. From the Machine Types category, drag and drop vSphere (vCenter) Machine.


From the Network&Security category, drag and drop On-Demand Routed Network.


Select the vSphere__vCenter__Machine_1 component from the canvas and fill in the configuration details. Add the number of instances that can be deployed in a request.


Add the build information: how the VM will be created (linked clone), what to clone it from, and which customization specification to use:


Specify the resources the VM consumes: number of CPUs, memory, storage. Take care when configuring these values: if you allow 10 instances in a deployment, each instance with a maximum of 8 vCPUs and 32 GB of RAM, you may end up with a deployment using 80 vCPUs and 320 GB of RAM. This is where approval workflows come into play.



Finally we need to connect the VM to the network, but first we'll configure the network component. On the canvas, select the On-Demand_Routed_Network_1 component and choose the parent network profile (the profile that was created in part 1).


Go back to the vSphere component, go to the Network tab and click New. From the drop-down box select the network name.

Lastly, add a custom property on the VM to define the operating system that is being used.


At this moment we've configured how the VM is created, how the network is created, and we've linked the VM to the network. Press Finish and then Publish the blueprint:


Once the blueprint has been published, it will appear under Administration > Catalog Management > Catalog Items. Select the new catalog item, press Configure and map it to the service created at the beginning of the post.


The service will appear in the Catalog tab and you can press Request to deploy a new instance of it. To see what is happening, go to the Requests tab, select the request, press View Details and, when the request details open, press Execution Information.


Here you will see that the VXLAN has been created on demand and the DLR reconfigured. The VM has also been created and attached to the new VXLAN. The process can also be monitored in vCenter Server.
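
If you prefer watching from the command line, a small PowerCLI sketch (assuming an active Connect-VIServer session) that lists the most recently started vCenter tasks while the request is provisioning:

 # show the ten most recently started vCenter tasks (clone VM, reconfigure DLR, etc.)
 Get-Task | Sort-Object -Property StartTime -Descending |
     Select-Object -First 10 -Property Name, State, PercentComplete, StartTime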


After the provisioning finishes successfully, the components are displayed in the Items tab, from where they can be managed using day 2 operations.

Friday, May 4, 2018

NSX integration with vRealize Automation 7.4 - part 1

From time to time I find myself facing a new configuration of vRealize Automation. And since I don't do it very often I also find myself having forgotten some steps. For this reason I will write down my typical integration of NSX with vRA.

The post has two parts:

  • part 1 - describes the vRA configuration at infrastructure/tenant level
  • part 2 - focuses on creation of the service that consumes NSX


Requirements:

  • NSX  ( > 6.3.x) is installed and configured (VXLAN, distributed logical router, edge services gateways)
  • vRA 7.4 is deployed, tenant created, user directory integrated (optional) 
  • familiarity with vRA 


Goal: 

  • all vRA workloads are deployed to on-demand created networks and we do not worry about routing or virtual network creation
  • security for workloads is ensured using distributed firewall and security tags (but more on this in another post). 


First things first, a small diagram (sometimes my creative side kicks in :-)) of the desired state:


To solve L2/L3 requirements, we need the following:

  • vRA uses on-demand created VXLANs that use the Distributed Logical Router (DLR) as their default gateway
  • the DLR is connected to the Edge Services Gateway (ESG) via a transit network
  • a dynamic routing protocol runs between the DLR and the ESG
  • the ESG connects vRA workloads to the rest of the world via the External portgroup (which is a distributed portgroup)
  • the ESG may also run dynamic routing protocols
(please don't judge the usage of /24 in the diagram, in real life I am subnetting)

As stated earlier, NSX is already configured, with the DLR and ESG deployed. Let's see how to configure vRA.

Log on to the tenant as an IaaS administrator. Ideally, you would have the IaaS administrator and tenant administrator roles assigned to your account so you don't need to switch between roles. We need to create a vSphere endpoint, a fabric group, a business group, an NSX endpoint and a reservation.



Create vSphere (vCenter) Endpoint:

  • go to Infrastructure > Endpoints > Endpoints
  • New > Virtual > vSphere (vCenter)
  • on General tab: give it a name (vcenter1), add the URL of the vCenter Server API (https://vcenter1.mydomain.local/sdk), add credentials and press Test Connection
  • if all is good, press OK and you have created your endpoint

Create NSX Endpoint:

  • go to Infrastructure > Endpoints > Endpoints
  • New > Network and Security > NSX
  • on General tab: type the name of the endpoint, add the  URL to the NSX manager and the credentials
  • on Associations tab: map NSX endpoint with the previously created vSphere endpoint (this is a step that appeared in vRA 7.3 due to changes on how NSX is integrated) 

  • press Test Connection, and then OK


Create Fabric Group:
  • go to Infrastructure > Endpoints > Fabric Group > New
  • give it a name, add the Fabric administrators (users, user groups) and select the compute resources available (the list of compute resources is based on the permissions the user has in vCenter Server)


Now the compute resources are available (Infrastructure > Compute Resources > Compute Resources). Check on the compute resource that data collection has run successfully - hover over the compute resource and from the menu choose Data Collection.

Create Business Group:

  • go to Infrastructure > Users & Groups > Business Groups > New
  • on General tab: type in the name, add an e-mail address for alerts and test it
  • on Members tab: add the users/user groups for the following roles - Group manager role, Support role, Shared access role, User role - and press OK
Starting with vRA 7.3 there is a new role for business groups - the Shared access role, which can use and run actions on resources deployed by other users in the business group. It is a good addition, since I remember a client wanting this back in 2015.

Create Network Profiles
  • go to Infrastructure > Reservations > Network Profiles
  • New > External 
  • on General tab: for Transit VXLAN: add the name, subnet mask and gateway IP address 
  • on DNS tab: add DNS details
  • on Network Ranges tab: add the IP range that is usable (do not forget that there are 2 IPs already used by ESG and DLR) 
  • New > Routed 
  • on General tab: use the external network profile created previously, and add the subnetting details:
    • subnet mask (could be /24 - 254 IPs) - the whole range given to vRA for its workloads
    • range subnet mask (can be /29 - 6 usable IPs) - for each application/group of applications deployed; a /24 split into /29 ranges gives 32 ranges of 6 usable IPs each
  • on Network Ranges tab: press Generate Ranges
At least the following two network profiles will be displayed:




Create Reservation:
  • go to Infrastructure > Reservations 
  • New >  vSphere (vCenter)
  • on General tab: type in the Name of the reservation, tenant name (if multiple tenants exist), business group for which the reservation is created, priority (in case multiple reservations exist for the same business group)
  • on Resources tab: select the compute resource, put in a machine quota (if needed), select the amount of memory, select the datastores and the storage quota, and select the resource pool (if one has been defined in vCenter Server)
  • on Network tab: select network adapter (transit VXLAN), transport zone, DLR and network profile
  • finalize the task by pressing OK
And we are set: we have compute, storage and network resources available for consumption. In the next post we will create a service and see how to consume NSX on demand.