
Saturday, March 11, 2023

Clear DNS Cache on VCSA after ESXi IP Address Update

I recently had to make some changes and modify the ESXi management IP addresses. Once each ESXi host had been put into maintenance mode and removed from the vCenter Server inventory, I updated the DNS records (A and PTR). After checking that DNS resolution worked, I tried to re-add the hosts to vCenter Server using their FQDNs, but it failed with a "no route to host" error. This is caused by the DNS client cache on the VCSA.

To fix it quickly instead of waiting for the cache to expire, SSH to the VCSA appliance and run the following commands:

systemctl restart dnsmasq

systemctl restart systemd-resolved

Once the services are restarted, you can add the ESXi hosts again using their FQDNs.
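
If you manage the hosts with PowerCLI, re-adding them by FQDN can be scripted as well. Below is a minimal sketch, assuming an existing Connect-VIServer session; the cluster name, host FQDN and credentials are placeholders:

# hypothetical cluster, host and credentials - adjust to your environment
$cluster = Get-Cluster "Compute-01"
Add-VMHost -Name "esx01.example.org" -Location $cluster -User root -Password 'VMware1!' -Force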

Monday, May 11, 2020

VMs not Powering On in Nested ESXi Running on vSphere 7.0 and Options for Nested Lab

After upgrading my physical home lab to vSphere 7.0, I tried to power on the VMs in my nested environment to prepare a demo for an upcoming VMUG meeting. However, I couldn't get any VM to start in a nested ESXi 7.0 host running on top of a physical ESXi 7.0 host. What actually happened is that the nested ESXi host crashed.

I found the following article warning about this issue affecting an entire family of CPUs - Intel Skylake. My home lab runs Intel Coffee Lake CPUs in 8th-gen Intel NUCs, and it seems they are affected too. It does not affect older CPUs, as is the case with my Ivy Bridge i5. Bottom line: until a patch or fix reaches mainstream vSphere 7.0, you won't be able to power on VMs in a nested ESXi 7.0 host running on top of ESXi 7.0. The rest of the functionality is there and working.

I had to do my demo using the physical vSphere 7.0 environment and come back to the lab later to find a workaround. I found out there are two options that actually work at the moment:

  • option 1 - physical ESXi 7.0 running nested ESXi 6.7
  • option 2 - physical ESXi 6.7 running nested ESXi 7.0
Keeping the physical ESXi hosts on 7.0 and downgrading the nested hosts to 6.7 may seem the simpler path, unless your use case is to test the new features and products. You could test those on the physical hosts, but that means running all your tests on the base ESXi hosts, which could lead to a partial or full lab rebuild and defeats the purpose of having a nested lab. So you are left with option 2: temporarily downgrade the physical ESXi hosts to 6.7. My use case requires powering on nested VMs, so option 2 is my choice.

I keep the physical lab on a very simple configuration so that the hosts can be easily rebuilt (reconfigured). Before downgrading, a few aspects need to be considered:
  • are any VMs upgraded to the latest virtual hardware (version 17)? Those VMs will not work on vSphere 6.7 (a quick PowerCLI check is sketched below)
  • clean up vCenter Server: remove the hosts from clusters and from the vCenter Server inventory. Reusing the same hardware will cause datastore conflicts if the cleanup is not done (also sketched below)
  • how the actual downgrade will take place (pressing Shift+R at boot will not find any older installation, even if the host was upgraded from 6.7)
  • hostnames and IP addresses

With all this in mind, I embarked on the journey of fresh ESXi 6.7 installs that would allow me to run nested ESXi 7.0.
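
The first two items on the checklist can be verified from PowerCLI before touching the hosts. This is only a rough sketch, assuming you are still connected to the vCenter Server that manages the lab; the cluster name is just an example:

# VMs already on virtual hardware version 17 will not run on ESXi 6.7
Get-VM | Where {$_.ExtensionData.Config.Version -eq "vmx-17"} | Select Name

# inventory cleanup before the reinstall: maintenance mode, then remove from vCenter Server
Get-Cluster "Lab-Cluster" | Get-VMHost | foreach {
    Set-VMHost -VMHost $_ -State Maintenance -Confirm:$false
    Remove-VMHost -VMHost $_ -Confirm:$false
}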





Thursday, March 7, 2019

vCenter Server Restore with Veeam Backup & Replication

Recently I went through the process of testing a vCenter Server Appliance restore for the most unfortunate case: the actual vCenter Server is gone. Since the tests were being done with a production appliance, it was decided to restore it without network connectivity. Let's see how this went.

Test scenario
  • distributed switches only
  • VCSA
  • Simple restore test: put VCSA back in production using a standalone host connected to VBR
Since vCenter is "gone", the first thing to do is to attach a standalone ESXi host directly to VBR. The host will be used for restores (this is a good answer to the network team's "why do you need connectivity to ESXi when you have vCenter Server?"). The process is simple: open the VBR console, go to Backup Infrastructure and add the ESXi host.

You will need to type in the hostname or IP address and the root account. Since the vCenter Server was not actually gone, we had to use the host's IP address instead of its FQDN, because the host was already seen through the vCenter Server connection with the FQDN.

Next, start an entire VM restore:


In the restore wizard, select the restore point (by default the last one), then select Restore to a different location, or with different settings:


Make sure to select the standalone host:

Leave the default resource pool and datastore, but check that the selected datastore has sufficient free space. Leave the default folder; however, if you still have the source VM, change the restored VM's name:

Select the network to connect to. Actually, disconnect the network of the restored VM - that was the scenario, right? Since the purpose of this article is not to make you go through the same experience we had, let's not disconnect it. You will see why shortly:

Keep the defaults for the next screens and start the restore (without automatically powering on the VM after the restore).


A few minutes later the VM is restored and connected to the distributed port group. 

We started out to test a restore with the VM disconnected, but in this article we didn't disconnect it. Here is why: when we initially disconnected the network of the restored VM, we got an error right after the VM was registered with the host, and the restore failed.


The same error was received when trying to connect to a distributed port group configured with ephemeral binding. The logs show that the restore process actually tries to modify the network configuration of an existing VM, and that fails when VBR is connected directly to the host. When the port group is not changed for the restored VM, the restore process skips updating the network configuration. Of course, updating works with a standard switch port group.


In short, the following restore scenarios will work when restoring directly through a standalone host:
  • restore VCSA to the same distributed port group to which the source VM is connected
  • restore VCSA to a standard port group
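
If the standalone host has no suitable standard port group for the VCSA, one can be prepared in advance with PowerCLI. This is only a sketch, assuming a direct PowerCLI connection to the standalone host; the host name, uplink and VLAN ID are examples:

# temporary standard switch and port group to land the restored VCSA on
$vmhost = Get-VMHost "esx-restore.example.org"
$vss = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch-Restore" -Nic "vmnic1"
New-VirtualPortGroup -VirtualSwitch $vss -Name "PG-VCSA-Restore" -VLanId 100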



Friday, February 15, 2013

PowerCLI - multiple vMotion to prepare host for maintenance

One of the clusters I work on has recently been upgraded to new servers with 128 GB of RAM. This makes the cluster RAM pool huge compared to the actual need. The VMs are distributed more or less equally among the hosts in the cluster. During maintenance periods I have to offload all VMs to another host, which DRS takes care of automatically when the "Enter maintenance mode" command is issued on a host. However, when the host is back online nobody brings the VMs back; the load on the other hosts is still too small for DRS to bother migrating anything. To solve this and not leave a host completely empty, I use a small PowerCLI script to evacuate and reload the host instead of relying on "Enter maintenance mode" alone.

The workflow I follow is: offload the host with the script, manually put the host into maintenance mode, do my thing, bring the host back online and finally load the host back using the script.
The script takes 3 parameters as input: source host, destination host and type of migration. An "out" migration offloads a particular host, while an "in" migration loads it back. For "in" to work, a CSV file named vms_source-hostname.csv must exist. The file is created automatically by the "out" migration in the path from where the script is run and contains the names of the powered-on VMs. The source host and destination host must remain the same for both "out" and "in" migrations.

Usage:
  • offloading (out) - vMotions all powered on VMs from vmhost1 to vmhost2
./Mass-vMotion -srcHost vmhost1.example.com -dstHost vmhost2.example.com -migType out
  • loading (in) - vMotions all powered on VMs that belonged to vmhost1, from vmhost2 to vmhost1 (it gets its input from a file created by a previous "out" run of the script)
 ./Mass-vMotion -srcHost vmhost1.example.com -dstHost vmhost2.example.com -migType in
Changing the migration type means changing the migType parameter from "out" to "in". The script runs with the -RunAsync parameter set, so it can saturate the vMotion bandwidth.
  
Param(
[Parameter(Mandatory=$True,Position=0)][string][ValidateNotNullOrEmpty()] $srcHost,
[Parameter(Mandatory=$True,Position=1)][string][ValidateNotNullOrEmpty()] $dstHost,
[Parameter(Mandatory=$True,Position=2)][string][ValidateNotNullOrEmpty()] $migType
)

if ((Get-VMHost $srcHost -ErrorAction "SilentlyContinue") -and (Get-VMHost $dstHost -ErrorAction "SilentlyContinue")) {
if ($migType -eq "out") ### OUT type migration
{
echo "Offloading powered on VMs from $srcHost to $dstHost"
$expfile = "vms_${srcHost}.csv"
# save the names of the powered on VMs so a later "in" migration can bring them back
Get-VMHost $srcHost | Get-VM |
    Where {$_.ExtensionData.Runtime.PowerState -eq "poweredOn"} |
    Select Name |
    Export-Csv $expfile
# migrate the powered on VMs to the destination host
Get-VMHost $srcHost | Get-VM |
    Where {$_.ExtensionData.Runtime.PowerState -eq "poweredOn"} |
    Move-VM -Destination $dstHost -RunAsync
}
elseif ($migType -eq "in") ### IN type migration
{
echo "Bringing back VMs from $dstHost to $srcHost"
$expfile = "vms_${srcHost}.csv"
if ( Test-Path $expfile)
{
$vmfile=Import-Csv $expfile
foreach ($vm in $vmfile) {$vmn=$vm.Name; Move-Vm -VM $vmn -Destination $srcHost -RunAsync}
Remove-Item $expfile
}
else { echo "VM file $expfile does not exist" }
}
else { Write-Warning "Please input correct migration type: 'out' or 'in'" }
}
else { Write-Warning "Host does not exist! Values entered are srcHost: $srcHost and dstHost: $dstHost" }

Saturday, February 2, 2013

Masking and unmasking LUNs from ESXi

In this post I'll show how to mask and unmask a LUN with the MASK_PATH plugin using esxcli on ESXi 5.1.

 MASKING

The datastore to be masked is called shared1-iscsi-ssd and it is an iSCSI datastore. To find out the device identifier, SSH to the ESXi host (or connect to vMA) and run:

~ # esxcli storage vmfs extent list

It will return the volume name, VMFS UUID, number of extents, device name and partition number for all VMFS datastores mounted on the host. For shared1-iscsi-ssd the device name is t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________  (t10 naming).

The ESXi host uses the SCSI INQUIRY command to gather data about target devices and to generate a device name that is unique across all hosts. These names can have one of the following formats:
  • naa.number 
  • t10.number
  • eui.number
In this case, FreeNAS LUNs are identified using t10 naming. More details about the device can be displayed by running:
~ # esxcli storage core device list -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________

   Display Name: FreeBSD iSCSI Disk (t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________)
   Has Settable Display Name: true
   Size: 51199
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
   Vendor: FreeBSD
   Model: iSCSI Disk
   Revision: 0123
   SCSI Level: 5
   Is Pseudo: false
   Status: degraded
   Is RDM Capable: true
   Is Local: false
   Is Removable: false
   Is SSD: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: unknown
   Attached Filters:
   VAAI Status: supported
   Other UIDs: vml.02000000003000000034127b79695343534920
   Is Local SAS Device: false
   Is Boot USB Device: false


The output provides a lot of information about the device, such as its size, status, whether it can be used as an RDM, whether it is seen as SSD and whether it supports VAAI.
In order to mask the LUN, we will apply claim rules to each of its paths. List the paths to the device:
~ # esxcli  storage core path list -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
iqn.1998-01.com.vmware:ex5103-5b8ecbe2-00023d000006,iqn.2011-03.example.org.istgt:t1,t,1-t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
   UID: iqn.1998-01.com.vmware:ex5103-5b8ecbe2-00023d000006,iqn.2011-03.example.org.istgt:t1,t,1-t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
   Runtime Name: vmhba33:C1:T1:L0
   Device: t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
   Device Display Name: FreeBSD iSCSI Disk (t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________)
   Adapter: vmhba33
   Channel: 1
   Target: 1
   LUN: 0
   Plugin: NMP
   State: active
   Transport: iscsi
   Adapter Identifier: iqn.1998-01.com.vmware:ex5103-5b8ecbe2
   Target Identifier: 00023d000006,iqn.2011-03.example.org.istgt:t1,t,1
   Adapter Transport Details: iqn.1998-01.com.vmware:ex5103-5b8ecbe2
   Target Transport Details: IQN=iqn.2011-03.example.org.istgt:t1 Alias= Session=00023d000006 PortalTag=1
   Maximum IO Size: 131072

iqn.1998-01.com.vmware:ex5103-5b8ecbe2-00023d000003,iqn.2011-03.example.org.istgt:t1,t,1-t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________

   Runtime Name: vmhba33:C0:T1:L0
   Plugin: NMP
   State: active

...

There are 2 paths (the listing for the second path has been truncated). This listing also presents a lot of useful information - including runtime name, multipathing plugin type and path state.
If you are looking for the preferred path, use the following command to display the paths from the point of view of the multipathing plugin:

~ # esxcli storage nmp path list -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________

iqn.1998-01.com.vmware:ex5103-5b8ecbe2-00023d000006,iqn.2011-03.example.org.istgt:t1,t,1-t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
   Runtime Name: vmhba33:C1:T1:L0
   Device: t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
   Device Display Name: FreeBSD iSCSI Disk (t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________)
   Group State: active
   Array Priority: 0
   Storage Array Type Path Config: SATP VMW_SATP_DEFAULT_AA does not support path configuration.
   Path Selection Policy Path Config: {current: yes; preferred: yes}

iqn.1998-01.com.vmware:ex5103-5b8ecbe2-00023d000003,iqn.2011-03.example.org.istgt:t1,t,1-t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
   Runtime Name: vmhba33:C0:T1:L0
   Device: t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
   Device Display Name: FreeBSD iSCSI Disk (t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________)
   Group State: active
   Array Priority: 0
   Storage Array Type Path Config: SATP VMW_SATP_DEFAULT_AA does not support path configuration.
   Path Selection Policy Path Config: {current: no; preferred: no}


Next we'll start adding claim rules for the paths. Display the current rule list:
~ # esxcli storage core claimrule list

Rule Class   Rule  Class    Type       Plugin     Matches
----------  -----  -------  ---------  ---------  ---------------------------------
MP              0  runtime  transport  NMP        transport=usb
MP              1  runtime  transport  NMP        transport=sata
MP              2  runtime  transport  NMP        transport=ide
MP              3  runtime  transport  NMP        transport=block
MP              4  runtime  transport  NMP        transport=unknown
MP            101  runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            101  file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP          65535  runtime  vendor     NMP        vendor=* model=*


Add one rule for each path we want to mask, using the runtime name information:
~ # esxcli storage core claimrule add -r 200 -t location -A vmhba33 -C 0 -T 1 -L 0 -P MASK_PATH
~ # esxcli storage core claimrule add -r 201 -t location -A vmhba33 -C 1 -T 1 -L 0 -P MASK_PATH

Display the rules:
~# esxcli storage core claimrule list
Rule Class   Rule  Class    Type       Plugin     Matches
----------  -----  -------  ---------  ---------  ----------------------------------------
MP              0  runtime  transport  NMP        transport=usb
MP              1  runtime  transport  NMP        transport=sata
MP              2  runtime  transport  NMP        transport=ide
MP              3  runtime  transport  NMP        transport=block
MP              4  runtime  transport  NMP        transport=unknown
MP            101  runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            101  file     vendor     MASK_PATH  vendor=DELL model=Universal Xport
MP            200  file     location   MASK_PATH  adapter=vmhba33 channel=0 target=1 lun=0
MP            201  file     location   MASK_PATH  adapter=vmhba33 channel=1 target=1 lun=0
MP          65535  runtime  vendor     NMP        vendor=* model=*



As you can see in the Class column, the rules are added only in the configuration file (/etc/vmware/esx.conf), so we need to load the new rules:
~ # esxcli storage core claimrule load

~ # esxcli storage core claimrule list | grep vmhba33
MP            200  runtime  location   MASK_PATH  adapter=vmhba33 channel=0 target=1 lun=0
MP            200  file     location   MASK_PATH  adapter=vmhba33 channel=0 target=1 lun=0
MP            201  runtime  location   MASK_PATH  adapter=vmhba33 channel=1 target=1 lun=0
MP            201  file     location   MASK_PATH  adapter=vmhba33 channel=1 target=1 lun=0


Unclaim the paths to the device, then run the loaded claim rules so the paths are reclaimed by MASK_PATH:
 ~ # esxcli storage core claiming reclaim  -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________

~ # esxcli storage core claiming unclaim -t location -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
 

~ # esxcli storage core claimrule run

Now the device is no longer seen by the host. A simple test is to try to add the datastore in vSphere Client (Configuration - Hardware - Storage - Add Storage) and compare the availability of the device on a masked and an unmasked ESXi host.
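
The claim rules can also be checked remotely through PowerCLI's Get-EsxCli interface instead of SSH. A small sketch, assuming a connected PowerCLI session; the host name is illustrative (newer PowerCLI versions prefer Get-EsxCli -V2 with the .Invoke() syntax):

# equivalent of 'esxcli storage core claimrule list', run from PowerCLI
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.example.org")
$esxcli.storage.core.claimrule.list() | Where {$_.Plugin -eq "MASK_PATH"}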

UNMASKING

The way to do it is pretty straightforward: delete the claim rules from the file, reload the claim rules, reclaim the paths and run the claim rules.
List the existing rules:

~ # esxcli storage core claimrule list | grep vmhba33
MP            200  runtime  location   MASK_PATH  adapter=vmhba33 channel=1 target=1 lun=0
MP            200  file     location   MASK_PATH  adapter=vmhba33 channel=1 target=1 lun=0
MP            201  runtime  location   MASK_PATH  adapter=vmhba33 channel=0 target=1 lun=0
MP            201  file     location   MASK_PATH  adapter=vmhba33 channel=0 target=1 lun=0


Remove the rules and check:
~ # esxcli storage core claimrule remove -r 200
~ # esxcli storage core claimrule remove -r 201

~ # esxcli storage core claimrule list | grep vmhba33
MP            200  runtime  location   MASK_PATH  adapter=vmhba33 channel=1 target=1 lun=0
MP            201  runtime  location   MASK_PATH  adapter=vmhba33 channel=0 target=1 lun=0


Reload the claim rules from the file:
~ # esxcli storage core claimrule load

Reclaim the device:
~ # esxcli storage core claiming reclaim -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
~ # esxcli storage core claimrule run


Check that the device is available and that all paths are up:
~ # esxcli storage core device list -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________
~ # esxcli storage nmp path list -d t10.FreeBSD_iSCSI_Disk______0050568cdac0010_________________


VMware KB 1009449 also presents the masking procedure for both ESXi 4.x and 5.0.

Saturday, January 26, 2013

PowerCLI - Starting and stopping SSH

From time to time I need to connect to ESXi hosts using SSH. I know it is not a best practice, but still I have to do it. And the simplest task turns into a pain: starting and stopping the SSH server on several ESXi hosts. The following small piece of PowerCLI does the job just fine:
  • starting ssh service
Get-Cluster CLS01 | Get-VMHost | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"} | Start-VMHostService 
  • stopping ssh service
Get-Cluster CLS01 | Get-VMHost | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"} | Stop-VMHostService -Confirm:$false
Since Get-VMHostService returns all the services on an ESXi host, you can use this approach to start or stop any service.
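
If you are not sure about a service key, you can list all the services first - a quick check using the same example cluster:

Get-Cluster CLS01 | Get-VMHost | Get-VMHostService | Select VMHost, Key, Label, Running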

Wednesday, January 23, 2013

VMware DirectPath I/O - adding passthrough PCI devices to VM

VMware DirectPath I/O allows a guest OS to directly access physical PCI and PCIe devices connected to the host. There are several things to check before proceeding with the configuration:
  • a maximum of 6 PCI devices can be attached to a VM
  • the VM hardware version must be 7 or later
  • Intel VT-d or AMD IOMMU must be enabled
  • PCI devices are connected and marked as available 
In vSphere Client, go to Configuration - Advanced Settings (Hardware) and check whether the device is active (green icon). If no device is displayed, go to Edit and select your device. In some cases the host will need a reboot - the device will show an orange icon.

In /etc/vmware/esx.conf the modification is recorded as:
/device/000:000:27.0/owner = "passthru"

Go to VM - Edit Settings and add the device.
A memory reservation equal to the full memory size of the VM will be created automatically. However, the reservation is not removed when the device is removed from the VM, so be sure to clean up. If there is no memory reservation, powering on the VM will fail with the following error:


Another way to add or remove passthrough devices is PowerCLI. The Add-PassthroughDevice cmdlet does not create the memory reservation, so it has to be done in a second step:
get-vmhost HostName | get-passthroughdevice | Where {$_.State -eq "Active"} | add-PassThroughDevice -VM VMName
foreach ($vm in (get-vm VMName)) {get-vmresourceconfiguration $vm | set-vmresourceconfiguration -MemReservationMB $vm.MemoryMB}
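
To confirm that the reservation was set (and later that it was cleaned up), the current value can be read back - a quick check, with VMName as a placeholder:

# should match the VM memory size while the passthrough device is attached
Get-VM VMName | Get-VMResourceConfiguration | Select VM, MemReservationMB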


Removal of PCI device and memory reservation cleanup:
get-vm VMName | get-passthroughdevice | remove-passthroughdevice -Confirm:$false
get-vm VMName |get-vmresourceconfiguration | set-vmresourceconfiguration -MemReservationMB 0


A VM using DirectPath I/O does not support the following features:
  • snapshots
  • suspend and resume
  • HA
  • FT
  • DRS (the VM can be part of DRS cluster, but it cannot be migrated across hosts)
  • hot adding and removal of devices

Tuesday, January 1, 2013

My shiny and new white box

It's the new year and it started with a resolution (or more like a hope, or both): I am trying to post on the blog once a week.

This one is about the white box I acquired before Christmas. Its main purpose is to replace the old one, which had only 8 GB of RAM and was wasting a lot of my time. Since I could not upgrade the old one, I had to buy a completely new computer. It all revolved around the idea of being able to accommodate a vCloud Director deployment, and after a bit of googling I came up with the following:
- Intel Core i5-3470 - 4 cores @ 3.2 GHz, 77 W (my current Q9400 eats up 95W), no HT, but a lot of nice virtualization technologies: VT-x, VT-d,VT-x with EPT;
- since max RAM supported by the CPU is 32 GB then max it was: 4 x 8 GB Corsair Vengeance DDR3 dual channel @ 1600 MHz;
- motherboard ASUS P8Z77-V LX2 - Intel Z77 express chipset - integrated gigabit NIC and video graphics, SATA3, SATA2 and loaded with UEFI (my first UEFI home usage device);
- the case to hold it all is Antec NSK4482 with 380 W power source.

It all added up to around 550 EUR. The drives are reused from the old box: 1 x Mushkin Callisto 60 GB SSD, 1 x Intel 330 120 GB SSD and 1 x Western Digital Caviar Black 640 GB SATA 2.

After moving the drives from the old computer to the new one, the only configuration I had to do was to select the management interface in the ESXi DCUI. Since the new rig has only 3 fans (CPU, case and PSU), it is also very quiet (the old one is a real noise-making machine with 5 fans). I am very happy with the arrangement, but the only tests I have done are with the current infrastructure (1 x root ESXi, 2 x virtualized ESXi, VCSA, NetApp filer, VSC, FreeNAS and AD). All VMs are kept on the Intel SSD. I will install vCD as soon as possible and see how it runs.

A week later I got a glimpse of a friend's white box, which uses a Shuttle SH67H3 barebone. If you want a smaller footprint, go for the Shuttle. It also has an interesting CPU cooling system that uses the fan from the power supply, so there is only 1 fan in the whole system. However, it is 100 EUR more expensive and there is no room for 3 drives.

I was not sure what to do with the old white box, so I sacrificed a 2 GB USB stick, installed ESXi on it and plugged it into the first USB port I saw. This way I have another ESXi host with 8 GB of RAM that can also be used.

Wednesday, May 23, 2012

NFS traffic rate limiting on Juniper switches

"And now for something completely different"... 

The configuration of the ESXi infrastructure I worked with is a bit tricky: the VMs are hosted on an NFS filer, but the same VMs also mount NFS exports from that filer, much like in the picture below:


The physical bandwidth between the access switch where the ESXi hosts are connected and the core switches where the filer is connected is limited. So I had a lot of ESXi servers, with a lot more VMs, competing over the same physical links. Luckily, I had control over the access switches - Juniper. The next step was elementary: ensure enough bandwidth for ESXi and give some to the VMs - rate limiting using JunOS firewall filters.

And this is how it was done. First, a policer was created for NFS traffic coming from the VMs (guest OS) which limits the allocated bandwidth - in my case 200 Mbit/s with a burst size of 10 MB. When the limit is reached, packets are discarded. This way two things are achieved: a decent 200 Mbit/s of bandwidth is ensured for the VMs, and small files (up to 10 MB) are transferred very fast to the filer (no limits). When the VMs demand a lot of resources, the policer steps in and ensures that the critical ESXi vmkernel traffic gets its share.

[edit firewall]
set policer policer-NFS-1 if-exceeding bandwidth-limit 200m burst-size-limit 10m
set policer policer-NFS-1 then discard

Then, the firewall filter is created. The filter matches all traffic that goes to the IP address of the Filer and applies the policer to it:

[edit firewall family inet]
set filter limit-vlan100-NFS term term-1 from destination-address 192.168.100.192
set filter limit-vlan100-NFS term term-1 then policer policer-NFS-1
set filter limit-vlan100-NFS term term-default then accept

Last, the firewall filter is applied to the interface - in this case VLAN 100:

[edit]
set interfaces vlan unit 100 family inet filter input limit-vlan100-NFS

The downside is that firewall filtering adds a bit of a load on the CPUs of the switches. Care should be taken when implementing such solutions (as always).

Monday, May 21, 2012

Virtual port group details

I know there are tools that gather all this information, but not in the form I needed, when and where I needed it - during vmnic migration and port group reconfiguration. The idea was to redistribute traffic across different vmnics and keep failover at the vSwitch level. In order to check the status of the port groups on each host, I used the following:

$report = @()

Get-VMHost | foreach {
    foreach ($vsw in Get-VirtualSwitch -VMHost $_)
    {
        $hostName = $_.Name
        foreach ($vpg in Get-VirtualPortGroup -VirtualSwitch $vsw)
        {
            $row = " " | Select hostName, vpgName, vpgInherit, vpgAN, vpgSN, vpgUN, vpgFailover, vpgLB
            $vpgnicteaming = Get-NicTeamingPolicy -VirtualPortGroup $vpg

            $row.hostName = $hostName
            $row.vpgName = $vpgnicteaming.VirtualPortGroup.Name
            $row.vpgInherit = $vpgnicteaming.IsFailoverOrderInherited
            $row.vpgAN = $vpgnicteaming.ActiveNic
            $row.vpgSN = $vpgnicteaming.StandbyNic
            $row.vpgUN = $vpgnicteaming.UnusedNic
            $row.vpgFailover = $vpgnicteaming.NetworkFailoverDetectionPolicy
            $row.vpgLB = $vpgnicteaming.LoadBalancingPolicy

            $report += $row
        }
    }
}
$report

What I am looking for is, for each virtual port group, the active, standby and unused interfaces, and the failover and load balancing policies in use. After running the script I got the following listing (reduced here):

hostName    : testbox.esxi
vpgName     : VM Network
vpgInherit  : True
vpgAN       : {vmnic0}
vpgSN       :
vpgUN       :
vpgFailover : LinkStatus
vpgLB       : LoadBalanceSrcId

hostName    : testbox.esxi
vpgName     : vmk_vMotion
vpgInherit  : True
vpgAN       : {vmnic1}
vpgSN       :
vpgUN       :
vpgFailover : LinkStatus
vpgLB       : LoadBalanceSrcId

My setup separates traffic at the vmnic level - vmnic0 is used for VM traffic, while vmnic1 is used for vMotion. Both vmnic0 and vmnic1 are attached to the same vSwitch.

Thursday, May 17, 2012

Automate VM deployment from templates

The title is a little pretentious, but the following piece of PowerCLI does the job well enough. The only things I have to do are to have the templates ready and to modify the input file.
The script takes a CSV file as input, parses each row and initializes the variables used by the New-VM PowerCLI cmdlet:

# avoid PowerShell's automatic $input variable; loop over the imported rows
$rows = Import-Csv "deploy_from_template.csv"
foreach ($row in $rows) {
$vmname = $row.VmName
$respool = $row.ResPool
$location = $row.Location
$datastore = $row.Datastore
$template = $row.Template
echo "deploying " $vmname
New-Vm -Name $vmname -ResourcePool $respool -Location $location -Datastore $datastore -Template $template -RunAsync
}

-RunAsync allows the execution of the next line in the script without waiting for the current task to end, so use it wisely.

The csv file has the following structure, but you can put any other parameters in it and modify the script accordingly:
VmName,ResPool,Location,Datastore,Template
websrv01,Pool_Web,Client1,DS001,Template_RHEL6_Web
dbsrv01,Pool_DB,Client1,DS005,Template_RHEL5.5_DB
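
The script assumes an existing vCenter Server connection; a typical run would look like this (the server name and the script file name are just examples):

Connect-VIServer vcenter.example.com
.\Deploy-FromTemplate.ps1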

Tuesday, May 15, 2012

PowerCLI - ESXi network interfaces

I have added new gigabit NICs to all the ESXi servers, and before making any modifications to the traffic flows I wanted to check that the network was up and running. So the following script came in handy:


### Get pnics and their link speed

get-vmhost | foreach {
    $_.Name
    $pnics = $_.NetworkInfo.ExtensionData2.NetworkInfo.Pnic
    foreach ($p in $pnics) {
        $row = ""
        $row += $p.Device + " " + $p.LinkSpeed.SpeedMB
        $row
    }
}


After finishing all the ESXi networking reconfigurations - mainly redistributing traffic flows onto the newly installed NICs - I felt the need to have a quick look over the work just done. The idea was to check that the ESXi hosts had the correct vmk interfaces and the correct IP addresses. So, another small bit of scripting:


### Get vmk interfaces and IP addresses for all hosts
foreach ($vmh in get-vmhost)
{
    $row = ""
    $row += $vmh.Name + " "
    foreach ($i in $vmh.NetworkInfo.ExtensionData2.NetworkConfig.Vnic)
    {
        $vmk = $i.Device
        $row += $vmk + " "
        $vmkip = $i.Spec.Ip.IpAddress
        $row += $vmkip + " "
        $vmksubnet = $i.Spec.Ip.SubnetMask
        $row += $vmksubnet + " "
    }
    $row
}