Wednesday, February 27, 2013

NetApp SnapMirror traditional volumes to flex volumes

SnapMirror comes in two flavors: volume SnapMirror (VSM) and qtree SnapMirror (QSM). One major difference between the two is that VSM can be synchronous or asynchronous, while QSM is asynchronous only. Another important difference is that VSM is block-based replication, while QSM is logical replication. More differences can be found in TR-3446; however, the most important one (for me and this post) is the type of volumes each replication method supports:
  • VSM: TradVol → TradVol and FlexVol → FlexVol
  • QSM: TradVol → TradVol, FlexVol → FlexVol and TradVol → FlexVol
QSM supports replication from traditional volumes to flexible volumes, and that is the reason why I used QSM to migrate a client's data off an old filer running traditional volumes. The source filer is Filer1 and the destination filer is Filer2. Filer1 holds vol1 (traditional) and Filer2 holds client1 (flexible). Since this is QSM, the destination for vol1 will be a qtree on the client1 volume; the destination qtree must not be created beforehand, as SnapMirror creates it during the initial transfer.

On each filer, check that the SnapMirror license exists; if it does not, add it:
Filer1# license add XXXXX
Filer2# license add XXXXX

Check snapmirror status:
Filer1# snapmirror status
Snapmirror is on.
Filer2# snapmirror status
Snapmirror is on.
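If either filer reports "Snapmirror is off.", turn it on first (7-Mode command, shown here on Filer1 - the same applies to Filer2):
Filer1# snapmirror on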

Make sure each filer can resolve the other in its /etc/hosts file (a best practice is to mount /vol/vol0 and edit the file from a host; otherwise use the infamous rdfile and wrfile commands). Note that wrfile overwrites the whole file, so re-enter the existing entries as well:
Filer1# wrfile /etc/hosts
# All other host entries that already exist on Filer 1
# Filer2
192.168.1.2 Filer2
Filer2# wrfile /etc/hosts
# All other host entries that already exist on Filer 2
# Filer1
192.168.1.1 Filer1

Authorize destination filer to perform replication on source filer:
Filer1# options snapmirror.access host=Filer2
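Alternatively - if snapmirror.access is left at its default value of legacy - the destination hostname can be added to /etc/snapmirror.allow on the source instead (wrfile -a appends rather than overwriting):
Filer1# wrfile -a /etc/snapmirror.allow Filer2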

To keep source and destination synchronized and keep each incremental transfer small, configure a SnapMirror schedule (/etc/snapmirror.conf):
Filer2# wrfile /etc/snapmirror.conf
Filer1:/vol/vol1/- Filer2:/vol/client1/vol1 - 0 8,16,0 * *

Synchronizations will run automatically every day, every 8 hours starting at 0:00 (so at 0:00, 8:00 and 16:00).
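The fields of the snapmirror.conf entry are: source, destination, options (a single hyphen means defaults), then a cron-like schedule of minute, hour, day of month and day of week. Annotated, the line above reads:

# source              destination                options  minute  hour    dom  dow
Filer1:/vol/vol1/-    Filer2:/vol/client1/vol1   -        0       8,16,0  *    *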

The first synchronization is the baseline transfer, and it is manually initialized on the destination (SnapMirror is destination driven):
Filer2# snapmirror initialize -S Filer1:/vol/vol1/- Filer2:/vol/client1/vol1
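SnapMirror starts the baseline transfer and its progress can be checked on the destination; once the baseline completes, the relationship shows as Snapmirrored. The output looks roughly like this (the lag value is just an illustration):

Filer2# snapmirror status
Snapmirror is on.
Source              Destination               State         Lag       Status
Filer1:/vol/vol1/-  Filer2:/vol/client1/vol1  Snapmirrored  00:08:12  Idle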

Note that in both snapmirror.conf and in the snapmirror commands, the syntax for the source is a bit different: Filer1:/vol/vol1/-. The trailing hyphen ensures that the traditional volume vol1 is treated as a qtree and not as a volume. When synchronization is finished or no longer needed, the relationship has to be stopped. Do a final synchronization first:
Filer2# snapmirror update -S Filer1:/vol/vol1/- Filer2:/vol/client1/vol1

Quiesce (pause) the destination qtree:
Filer2# snapmirror quiesce /vol/client1/vol1

Break the SnapMirror relationship on the destination filer:
Filer2# snapmirror break /vol/client1/vol1

Release the relationship on the source filer, which removes the source-side relationship information and the SnapMirror snapshots that are no longer needed:
Filer1# snapmirror release /vol/vol1/- Filer2:/vol/client1/vol1

And do not forget to export the new qtree:
Filer2# exportfs -p sec=sys,rw=192.168.1.0/24,root=192.168.1.0/24,nosuid /vol/client1/vol1

As a best practice, use a different network link for SnapMirror traffic than for production traffic. Replication tends to use a lot of bandwidth, although QSM, being logical replication, does not usually fill up a link the way VSM can. If a dedicated network is not possible, bandwidth usage can be limited (see the examples after this list):
  • add kbps=value_in_kbps in snapmirror.conf file
  • use -k option when issuing snapmirror command
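For example, to throttle the scheduled transfers to 2000 kbps (the value is just an illustration), the snapmirror.conf entry becomes:

Filer1:/vol/vol1/- Filer2:/vol/client1/vol1 kbps=2000 0 8,16,0 * *

and a manual transfer can be throttled the same way:

Filer2# snapmirror update -k 2000 -S Filer1:/vol/vol1/- Filer2:/vol/client1/vol1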

Friday, February 22, 2013

Guest customization - vCloud Director and vSphere

One nice feature that VMware Tools brings is the possibility to inject and run scripts directly in the virtual machine - be it bash or batch. I've been using this feature, called guest customization, when deploying new virtual machines either in a vCloud organization or directly in a vSphere environment.

Guest customization in vSphere using PowerCLI

I'll start with vSphere, since I've done more customization in this environment than in vCloud. The task is simple: deploy a Linux VM from a template and provide SSH connectivity. After template deployment, the VM runs an empty OS - no IPs, no routes, no DNS, no repositories... nothing. The solution is the following PowerCLI cmdlet: Invoke-VMScript. I will not explain the syntax since it can be found here. I'll just show what it can do in the next piece of code:

Start-VM $vmname

### wait until VMware Tools are running inside the guest
$vmtoolsstatus = ""
do {
    Start-Sleep -Seconds 5
    $vm = Get-VM $vmname
    $vmtoolsstatus = $vm.Guest.ExtensionData.ToolsRunningStatus
    echo "Tools not running... waiting 5 seconds"
} while ($vmtoolsstatus -ne "guestToolsRunning")

echo "configuring hostname"

### build the bash one-liner that sets the hostname in /etc/sysconfig/network (RedHat style)
$hostnamecfg = "sed 's/HOSTNAME\(.*\)/HOSTNAME=${vmname}.${domain}/g' -i /etc/sysconfig/network; hostname $vmname"

### inject the script into the guest OS and run it
Invoke-VMScript -VM $(Get-VM $vmname) -GuestUser $GuestUser -GuestPassword $GuestPassword -ScriptType bash -ScriptText $hostnamecfg

First the VM is started. Then a loop waits until VMware Tools are running. Next, the script is built in the text variable $hostnamecfg - in this case a sed on the /etc/sysconfig/network file (RedHat-based distro). Last, the Invoke-VMScript cmdlet injects the $hostnamecfg script into the guest OS and runs it. This way you can set up network interfaces, routes, yum repositories, NTP, DNS, SSH, or start and stop services. For example, changing the yum repo for CentOS to a local repository:


echo "configuring YUM repo"
if ( $template -eq "template-CentOS6") {
$yumcfg = " sed -i '/mirrorlist=http:\/\/mirrorlist\.centos\.org/s/^/# /' /etc/yum.repos.d/CentOS-Base.repo;"
$yumcfg += " sed -i 's/#baseurl=http:\/\/mirror\.centos\.org/baseurl=http:\/\/yumlocal\.vmlab\.local/' /etc/yum.repos.d/CentOS-Base.repo;"
}

Invoke-VMScript -VM $(Get-VM $vmname) -GuestUser $GuestUser -GuestPassword $GuestPassword  -ScriptType bash -ScriptText $yumcfg 
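As another illustration of the same pattern (a minimal sketch, not from my original deployment scripts - the interface name, IP and gateway are assumptions), you can write an ifcfg file, bring the interface up, and use the cmdlet's ScriptOutput property to verify the result:

echo "configuring eth0"

### hypothetical addressing - replace with your own values
$netcfg =  "echo 'DEVICE=eth0' > /etc/sysconfig/network-scripts/ifcfg-eth0;"
$netcfg += "echo 'BOOTPROTO=static' >> /etc/sysconfig/network-scripts/ifcfg-eth0;"
$netcfg += "echo 'IPADDR=192.168.52.10' >> /etc/sysconfig/network-scripts/ifcfg-eth0;"
$netcfg += "echo 'NETMASK=255.255.255.0' >> /etc/sysconfig/network-scripts/ifcfg-eth0;"
$netcfg += "echo 'GATEWAY=192.168.52.1' >> /etc/sysconfig/network-scripts/ifcfg-eth0;"
$netcfg += "echo 'ONBOOT=yes' >> /etc/sysconfig/network-scripts/ifcfg-eth0;"
$netcfg += "ifup eth0; ip addr show eth0"

$result = Invoke-VMScript -VM $(Get-VM $vmname) -GuestUser $GuestUser -GuestPassword $GuestPassword -ScriptType bash -ScriptText $netcfg
### ScriptOutput holds whatever the script printed inside the guest
echo $result.ScriptOutput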


Guest customization in vCloud Director


vCloud Director brings guest customization directly into the portal. After the VM has been deployed from a vApp template, open the virtual machine properties, go to Guest OS Customization, and paste the script into the Customization Script field:


#!/bin/bash
# if a route file for eth0 already exists, update the gateway; otherwise create it
if [ -f /etc/sysconfig/network-scripts/route-eth0 ]
then
    sed -i 's/\.5\.1/\.52\.1/g' /etc/sysconfig/network-scripts/route-eth0
else
    echo "192.168.60.0/24 via 192.168.52.1" >> /etc/sysconfig/network-scripts/route-eth0
fi


In the example above, a bash script checks whether the route file for eth0 exists. If it does, it changes the gateway of the existing route (sed); if it does not, it creates route-eth0 and adds the route. Upon Power On (or Power On and Force Recustomization) the script is run inside the guest OS.
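One detail worth knowing: on Linux guests, vCloud Director calls the customization script twice, passing "precustomization" as the first argument before network customization and "postcustomization" after it. If the script should only act once the network has been configured, check that argument - a minimal sketch of the pattern:

#!/bin/bash
# only act after guest network customization has completed
if [ "$1" = "postcustomization" ]; then
    echo "192.168.60.0/24 via 192.168.52.1" >> /etc/sysconfig/network-scripts/route-eth0
fi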

Adding the route on a Windows OS means replacing the bash script with a batch one:

@echo off
route add 192.168.60.0 mask 255.255.255.0 192.168.52.1 -p

The same rule as for vSphere applies: you can customize the guest OS (Linux or Windows) with whatever configuration you need, as far as scripting allows. I am not discussing workflow automation tools (vCenter Orchestrator) or configuration management software (Puppet) here. It is just about making life easier for some repetitive tasks using basic scripting.

Friday, February 15, 2013

PowerCLI - multiple vMotion to prepare host for maintenance

One of the clusters I work on has recently been upgraded to new servers with 128 GB of RAM each. This makes the cluster RAM pool huge compared to the actual need. The VMs are distributed more or less equally among the hosts in the cluster. During maintenance periods I have to offload all VMs to another host, which would automatically be taken care of by DRS when the "Enter maintenance mode" command is issued on a host. However, when the host is back online, nobody brings the VMs back. The load on the other hosts is still too small for DRS to bother migrating anything. To solve my issue and not leave a host completely empty, I use a small PowerCLI script instead of the "Enter maintenance mode" command.

The workflow I follow is: offload the host with the script, manually put the host in maintenance mode, do my thing, bring the host back online and finally load the host back using the script.
The script takes 3 input parameters: source host, destination host and type of migration. An "out" migration offloads a particular host, while an "in" migration loads it back. For "in" to work, a CSV file named vms_source-hostname.csv must exist. The file is created automatically by an "out" migration in the path from where the script is run, and it contains the names of the powered-on VMs. Source host and destination host must remain the same for both the "out" and "in" migrations.

Usage:
  • offloading (out) - vMotions all powered-on VMs from vmhost1 to vmhost2
./Mass-vMotion -srcHost vmhost1.example.com -dstHost vmhost2.example.com -migType out
  • loading (in) - vMotions all powered-on VMs that belonged to vmhost1, from vmhost2 back to vmhost1 (it gets its input from the file created by a previous "out" run of the script)
./Mass-vMotion -srcHost vmhost1.example.com -dstHost vmhost2.example.com -migType in
Changing the migration type means changing the migType parameter from "out" to "in". The script issues the migrations with the -RunAsync parameter set, so it will saturate the vMotion bandwidth.
  
Param(
    [Parameter(Mandatory=$True,Position=0)][string][ValidateNotNullOrEmpty()] $srcHost,
    [Parameter(Mandatory=$True,Position=1)][string][ValidateNotNullOrEmpty()] $dstHost,
    [Parameter(Mandatory=$True,Position=2)][string][ValidateNotNullOrEmpty()] $migType
)

if ((Get-VMHost $srcHost -ErrorAction "SilentlyContinue") -and (Get-VMHost $dstHost -ErrorAction "SilentlyContinue")) {
    if ($migType -eq "out") ### OUT type migration
    {
        echo "Offloading powered on VMs from $srcHost to $dstHost"
        $expfile = "vms_${srcHost}.csv"
        ### save the names of the powered-on VMs so an "in" run can bring them back
        Get-VMHost $srcHost | Get-VM |
            Where-Object {$_.ExtensionData.Runtime.PowerState -eq "poweredOn"} |
            Select-Object Name | Export-Csv $expfile
        ### migrate them asynchronously to the destination host
        Get-VMHost $srcHost | Get-VM |
            Where-Object {$_.ExtensionData.Runtime.PowerState -eq "poweredOn"} |
            Move-VM -Destination $dstHost -RunAsync
    }
    elseif ($migType -eq "in") ### IN type migration
    {
        echo "Bringing back VMs from $dstHost to $srcHost"
        $expfile = "vms_${srcHost}.csv"
        if (Test-Path $expfile)
        {
            $vmfile = Import-Csv $expfile
            foreach ($vm in $vmfile) { $vmn = $vm.Name; Move-VM -VM $vmn -Destination $srcHost -RunAsync }
            Remove-Item $expfile
        }
        else { echo "VM file $expfile does not exist" }
    }
    else { Write-Warning "Please input correct migration type: 'out' or 'in'" }
}
else { Write-Warning "Host does not exist! Values entered are srcHost: $srcHost and dstHost: $dstHost" }

Thursday, February 7, 2013

vCloud Director storage calculation and allocation

A couple of days ago, a colleague of mine hit the storage limit while trying to deploy a new vApp in an Organization vDC. Because on paper he should have had enough space, he calculated the total size the VMs occupied on storage and compared it with what vCloud Director reported. After doing the math, he told me that the vApps take 417 GB while vCD reports 451 GB. Where did 34 GB go? My first thought was the swap file of each VM: swap file size = total VM RAM - reserved memory.
But the swap file is created at runtime, and most of the VMs were powered off. Even if vCloud Director reserved the whole swap space from the beginning, regardless of VM power state, that would imply all memory is reserved by default - which was not the case for the Allocation Pool model.

vCloud Director allocates space using the following formula:
Storage size allocated = Total storage of virtual machines + Total memory of virtual machines + Storage of templates/media
 
Storage equivalent to the VMs' memory is allocated regardless of the VMs' power state and of memory reservations - covering the scenario in which a 0% memory guarantee is configured for the vDC. This explained the missing 34 GB: it is the total memory of the deployed VMs. Memory of VMs inside templates is not included in the space allocation.

Why is this important? Because there is a common misunderstanding of resource usage and allocation in vCloud Director, especially regarding storage. A vDC with 10 vCPU, 10 GB vRAM and 100 GB HDD will never be able to accommodate 10 VMs with 1 vCPU, 1 GB vRAM and 10 GB HDD each: those 10 VMs would need 10 x 10 GB of disk plus 10 x 1 GB for memory - 110 GB allocated against a 100 GB limit.
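If you want to estimate the allocation yourself from the vSphere side, here is a rough PowerCLI sketch (the resource pool name is hypothetical - Org vDCs are backed by resource pools - and it ignores templates and media; CapacityGB and MemoryGB require a recent PowerCLI):

### sum provisioned disk and configured memory for the VMs backing an Org vDC
$vms = Get-ResourcePool "OrgvDC-Client1" | Get-VM
$diskGB = ($vms | Get-HardDisk | Measure-Object -Property CapacityGB -Sum).Sum
$memGB = ($vms | Measure-Object -Property MemoryGB -Sum).Sum
echo ("Estimated allocation: {0} GB disk + {1} GB vRAM = {2} GB" -f $diskGB, $memGB, ($diskGB + $memGB))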

Next time you calculate necessary storage space, be sure not to forget vRAM (or templates).