Monday, June 15, 2020

Being a vExpert

I applied to the vExpert program for the first time 2 years ago. I was kind of pushed by one of my VMUG co-leaders, who had been accepted earlier that year. So, I applied too. I still remember how it felt when I got the acceptance letter. It gave me joy and a peculiar sense of pride. All of a sudden there were 2 vExperts in Romania. I didn't realize its true potential until I had access to this beautiful community of people driven by their passion and to the resources made available to me:
  • access to a global network of techies through dedicated Slack channels 
  • VMware licenses for my lab for 1 year 
  • dedicated webinars for vExperts
  • parties at VMworld (this will wait a bit for now) 
  • more people and knowledge through the vExpert subprograms 
  • increased visibility on social media: Twitter, LinkedIn
More goodies: I got licenses and complimentary subscriptions from partners like HyTrust, Runecast, Veeam, Pluralsight and others.

Having the tools, I was able to rebuild my own lab and test new products and features. I was able to expand my knowledge with new technologies. This boosted my confidence, but it also made me think about how I can give back more. 

Giving back, for me, is mostly time: the blog posts, the VMUG meetings I help organize and sometimes even speak at, and the chats I have with my peers. But you can take other paths: be a public speaker, write a book or an article, be a customer evangelizing within your organization or at public events, be a passionate member of a partner organization. There are many ways in which you can contribute, you just need to apply (here).

Sunday, June 14, 2020

Veeam NAS Backup - File Restore Options

We are going to look at the restore options available for files backed up using a NAS backup job. Once you have successfully completed a file share backup, you get the following restore options:

  • restore entire file share 
  • rollback to a point in time - actually a fast restore of the entire file share 
  • files and folders - allows you to pick individual files and folders

Let's take a look at each of them.

Restore file share 

Once started, the restore wizard will ask you to select a specific restore point 


The restore location can be the original server or another server

You will choose how to process the restore when files already exist in the destination: keep existing files, replace older files, replace newer files, or overwrite. In the same tab you can choose to keep the security attributes and permissions of the restored files.

Press Finish on the Summary page and wait for the restore process to finish. In my case 51 files have been restored out of 750 on the share:


Rollback to a point in time
In case you don't want to go through the whole process above, select the second restore option, pick the point in time and the file share will be reverted to it:


Files and folders
This option lets you restore specific files or folders. It opens a searchable file explorer. Three different views are available in the file explorer:

  • latest - presents a list with the latest versions of the files 

  • all time - displays all versions of the file on the share 

  • selected - presents the version of a file existing in a specific restore point 

Looking at the different views above you will notice that not only the file size is different, but also the number of objects displayed in each view. This reflects the actual state of the file share when each backup was taken. 

After the file has been selected, you can restore it to the same location or copy it locally. 

If you choose not to overwrite the original file, you will see both the original and the restored file in the share


Tuesday, June 2, 2020

Backup vSAN 7 File Share with Veeam Backup & Replication 10


This is a blog post about two of the new features that were released this year in vSAN 7 and in Veeam Backup & Replication v10:
  • vSAN File Service
  • Veeam NAS Backup 

As you've already guessed, we're going to create a file share on vSAN, put files on it and back it up with Veeam Backup & Replication (VBR) using the native NAS support.

At a high level, the environment looks as follows:



A 2-node vSAN 7 cluster runs in a nested environment. Each node has 4 vCPU, 32 GB of RAM, a 20 GB cache SSD and a 200 GB data SSD. A witness appliance has also been deployed. VBR v10 is configured with the minimum hardware: 2 vCPU and 8 GB of RAM. Both the proxy and repository roles are installed on the same VM as the backup server, as well as the embedded MSSQL Express DB. Network connectivity between vSAN and VBR is 1 Gbit.

Neither the configuration of the vSAN cluster nor the installation of VBR is in scope of this post, as there are many resources out there. I've used William Lam's nested library for the vSAN nodes and downloaded the witness appliance directly from the VMware site.

The prerequisites for the following steps are a running vSAN cluster and a VBR installation. You will also need 2 IPs reserved for the File Server virtual appliances (one for each node) and a working DNS server with records added for the 2 IPs.

Part 1 - vSAN File Service

vSAN File Service provides file shares on top of vSAN storage. It supports the NFS 3 and NFS 4 protocols. It is built on the vSAN Distributed File System (vDFS) and is integrated with vSAN Storage Policy Based Management.

Enable vSAN File Services

At the cluster level go to Configure - vSAN - Services and press Enable for File Service


Select whether the File Server agent OVF appliance will be downloaded automatically or uploaded manually:

Give a namespace for the file share, and type in the DNS server address and domain

Select a portgroup, and type in the netmask and default gateway IP


Type in the IP addresses for each of the nodes

Review the configuration and press Finish. The installation process will start deploying the agents on each of the vSAN nodes.

Once it is finished you will see 2 new components deployed on the vSAN cluster: the File Service Agents. These are virtual appliances running Photon OS 1 and Docker, and they act as NFS file servers.

Create NFS file share

It's time to create the share and put some files on it. To create the share, go to Configure - vSAN - File Services Share. Enter the name of the share, the storage policy, and the soft and hard quota limits.

You can also assign labels to the file share. Quotas can be changed at a later time. Next, define access control: which IP addresses have access to this share, what type of access, and whether to protect the share with root squash.

Lastly, review the settings and create the share. Once created, you will see it under vSAN - File Services Share:


Now the file share is ready to be used. Because in part 2 we will use Veeam's NAS backup, we've added a small set of files to the share using a Linux VM. First get the share path by selecting the file share and pressing Copy on URL:

Next ssh to your favorite Linux VM and mount the file share:
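A minimal sketch, assuming the copied URL points to something like vsan-fs01.lab.local:/vsanfs/share01 (hypothetical file server name and share path):

# create a mount point and mount the vSAN NFS share
sudo mkdir -p /mnt/share01
sudo mount -t nfs vsan-fs01.lab.local:/vsanfs/share01 /mnt/share01

# confirm the share is mounted before writing files to it
df -h /mnt/share01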

To create random files of random sizes, we've used a simple script found here, slightly adapted.

for n in {1..500}; do
    dd if=/dev/urandom of=file$( printf %03d "$n" ).bin bs=100 count=$(( RANDOM + 1024 ))
done

The workload tested was:
- 200 files ranging between a few hundred KB and a few MB
- 50 files ranging from a few MB to a few tens of MB

The script can be adapted to create larger workloads with thousands of files. Since the whole lab is nested, performance testing was not in scope.
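To double-check what ended up on the share, you can run something like this from the mount point (/mnt/share01 in the hypothetical mount example above):

cd /mnt/share01
ls | wc -l        # number of generated files
du -sh .          # total size of the generated workload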

Part 2 - Veeam NAS Backup 

The architecture for NAS backup requires at a minimum a file share (in our case NFS from vSAN), file proxy, cache repository, backup repository and a Veeam Backup Server.

Additionally, secondary and archive repositories can be added to the infrastructure. In our lab all components are deployed on a single VM - the backup server.

File proxy - the component that acts as the data mover, transporting data from the source (file share) to the backup repository. It is used in backup and restore activities.

Cache repository - a location that stores temporary metadata. This metadata is used to reduce the load on the file share during incremental backups.

Backup repository - the main location of the backups.

Add File Share 
In VBR Console go to Inventory - Add File Share

Select NFS share and specify the path to the share


Select the File proxy, cache repository and backup speed


Review the summary and finish the configuration.

With the file share added, we will put it in a backup job. Right-click it and add it to a new backup job (or use an existing one).



Select the file share to back up


Select the destination repository, the backup policy and an archive repository, if any.

Select a secondary target if required (backup copy for short term)


Enable the job to run automatically

Review the settings and run the job.


The initial execution backs up all 250 files on the share. For the incremental run, we've removed and recreated the first 50 files on the share:


for n in {1..50}; do 
  rm -f file$( printf %03d "$n" ).bin
  dd if=/dev/urandom of=file$( printf %03d "$n" ).bin bs=100 count=$(( RANDOM + 1024 ))
done

As you can see, the cache repository is pretty effective in determining which files need to be backed up:

Thursday, May 28, 2020

Static route on dual homed vSphere Replication appliance

I recently went through the process of upgrading and troubleshooting a vSphere Replication environment. What was particular about that environment is that the vSphere Replication appliances had 2 network interfaces.


The first interface (eth0) has the default gateway, but it is not used for replication traffic. The second interface (eth1) is connected to the portgroup that also connects to the ESXi replication vmkernel portgroup. So, replication traffic is supposed to go over eth1. The main site and the DR site have networks in different subnets, but connectivity is possible over the replication network. Since hosts in the protected site (main site) need to communicate with the vSphere Replication server in the DR site, we need to force this communication to go over the replication network.

The solution is pretty simple: add a static route on each appliance to reach the opposite site over the replication network, as follows:

route add -net 192.168.200.0/24 gw 192.168.100.1
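To confirm the route is active, list the routing table on the appliance:

route -n
# or, equivalently
ip route show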

The route is not persistent and will be lost upon reboot. To make it persistent, we need to add it to a configuration file. vSphere Replication 8.1 and 8.2 run on VMware Photon OS 2.0. Normally you add the static route to the configuration file of the network interface where you want to have it. In my case, /etc/systemd/network/10-eth1.network:

[Match]
Name=eth1
[Network]
Address=192.168.100.11/24
DHCP=no
Domains=admnet.vodafone.com
[DHCP]
UseDNS=false

[ROUTE]
Destination=192.168.200.0/24
Gateway=192.168.100.1

However this did not work and the route was not picked up at reboot. Then I tried a different approach. I needed to be sure the route add command would be run every time the appliance restarts, so I added it as a service. I first created the service configuration file called staticroute.service (a name of my choice). The file is created in /lib/systemd/system/ and contains the following:

[Unit]
Description=Add static route for eth1
After=local-fs.target network-online.target network.target
Wants=local-fs.target network-online.target network.target

[Service]
ExecStart=/usr/sbin/route add -net 192.168.200.0/24 gw 192.168.100.1
Type=oneshot

[Install]
WantedBy=multi-user.target

Finally I've created a symbolic link for the file:

cd /lib/systemd/system/multi-user.target.wants/
ln -s ../staticroute.service staticroute.service

Once you do that, you can run ls -la to display the files and you will see your staticroute.service
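As a side note (I have not re-tested this on the appliance), systemctl can create an equivalent symlink for you under /etc/systemd/system/multi-user.target.wants/:

systemctl daemon-reload
systemctl enable staticroute.service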


This will ensure the static route is created at every reboot. Make sure to add the routes in both sites. To test the communication, you only need to traceroute the ESXi host replication IP from the opposite site.
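For example, assuming traceroute is available on the system you are testing from, and using 192.168.200.21 as a hypothetical replication vmkernel IP of an ESXi host in the DR site:

traceroute 192.168.200.21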


Monday, May 11, 2020

VMs not Powering On in Nested ESXi Running on vSphere 7.0 and Options for Nested Lab

After upgrading my physical home lab to vSphere 7.0, I tried to power on the VMs in my nested environment to prepare a demo for an upcoming VMUG meeting. However, I couldn't get any VM to start in the nested ESXi 7.0 running on top of a physical ESXi 7.0. What actually happened is that the nested ESXi host crashed.

I found the following article warning about this issue, which affects an entire family of CPUs - Intel Skylake. My home lab runs Intel Coffee Lake CPUs on gen 8 Intel NUCs and it seems they are affected too. It does not affect older CPUs, as is the case with my Ivy Bridge i5. Bottom line: until a patch or fix comes into mainstream vSphere 7.0, you won't be able to power on VMs in a nested ESXi 7.0 running on top of an ESXi 7.0. The rest of the functionality is there and working.

I had to do my demo using the physical vSphere 7 and later come back to the lab to find a workaround. I found out there are two options that actually work at the moment:

  • option 1 - physical ESXi 7.0 running nested ESXi 6.7
  • option 2 - physical ESXi 6.7 running nested ESXi 7.0
Keeping the physical ESXi on 7.0 and downgrading the nested hosts to 6.7 may seem the simpler path, unless your use case is to test the new features and products. You could do that with the physical hosts, but that would mean running all your tests on the base ESXi hosts, which could lead to a partial or full lab rebuild. This approach defeats the idea of having a nested lab. So you are left with option 2: temporarily downgrade the physical ESXi to 6.7. My use case requires powering on nested VMs, so option 2 is my choice.

I keep the physical lab on a very simple configuration so that I am able to easily rebuild (reconfigure) the hosts. Before going for the downgrade, a few aspects need to be considered:
  • are any VMs upgraded to the latest virtual hardware (version 17)? Those VMs will not work on vSphere 6.7 (see the snippet after this list)
  • cleanup vCenter Server: remove hosts from clusters and from the vCenter Server inventory. Reusing the same hardware will cause datastore conflicts if a cleanup is not done.
  • how the actual downgrade will take place (pressing Shift+R at boot will not find any older install, even if it was an upgrade from 6.7)
  • hostnames and IP addresses
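A quick way to spot such VMs is to list them from an SSH session on the host. This is only a sketch; it assumes the virtual hardware version shows up as vmx-17 in the Version column of the output:

vim-cmd vmsvc/getallvms | grep vmx-17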

Having all this in mind, I embarked on the journey of fresh ESXi 6.7 installs that will allow me to run nested ESXi 7.0. 





Friday, May 8, 2020

vSphere Distributed Resource Scheduling - DRS

DRS is a core technology for resource management in a vSphere cluster. It has been around since ESX 3 and it's a battle-proven feature without which vSphere clusters would not look the same. But what does it actually do?
At a high level, it enables you to use the resources of the ESXi hosts in a cluster as an aggregated pool of resources. Drilling a bit into what it does, we'll see that:

  • it provides virtual machine admission control - are there enough resources in the cluster to power on a VM
  • it provides initial placement of a VM - what is the most appropriate host to power on the VM
  • it is responsible for resource pools - quantifiable and aggregated resources to be consumed by a VM or group of VMs
  • it is responsible for resource allocation to VMs or resource pools using shares, reservations and limits
  • it balances the load in the cluster 

vSphere 7 comes with an important change in the logic DRS uses. Until vSphere 7, DRS would try to balance the load by looking at the cluster: if a host was overloaded at some point in time, it would try to balance it by migrating VMs to less utilized hosts. The checking cycle was 5 minutes. Starting with vSphere 7, the focus has shifted to the VM. DRS calculates a per-VM score called virtual machine happiness. Looking at the VM and running every minute, it provides a better way of load balancing and ensuring placement of VMs.


Let's look at some of the features in DRS as they appear in the UI. As you can see above, at the cluster level you can see the score of the cluster (an average of the scores of each VM) as well as the score buckets for VMs. All my VMs are happy in the 80-100% bucket, meaning they have all the resources they require. Going to the VM view, we'll see the individual VM scores as well as some of the monitored metrics, such as CPU % ready, swapped or ballooned memory:



DRS is enabled at cluster level. Once enabled, four tabs get activated.

Automation tab
The first choice is how much freedom you give to DRS: Automation Level.

There are 3 levels you can choose from

  • manual - generates recommendations for initial placement and migrations, but you have to actually apply the recommendations. Hence it requires manual intervention every time. Very good when you need to do some troubleshooting. 
  • partially automated - initial placement of VMs is done by DRS, but migrations are kept at recommendation level. 
  • fully automated - DRS will take care of both initial placement and migrations
Once you have decided which automation level to use, you will choose the threshold for which migrations should be made. The slider is scaled from conservative to aggressive. DRS looks at the imbalance in the cluster, and the five levels on the slider determine how big that imbalance can be. A conservative setting will not generate migration recommendations for load balancing. An aggressive setting will calculate a very small imbalance threshold. This translates to anything from almost no migrations (except for specific cases like putting a host into maintenance mode) to a lot of migrations. 

Predictive DRS has been introduced with vSphere 6.5 and it utilizes metrics from vRealize Operations Manager to balance predicted cluster load and workload spikes. 

Virtual Machine Automation enables VM-level overrides of DRS and HA settings. When enabled, you can specify at Cluster - Configure - VM Overrides the VMs for which you want to change the default settings, such as excluding them from migration recommendations:



Additional Options tab

VM Distribution instructs DRS to try and evenly distribute the VMs on hosts. It is a soft limit that will not be enforced over migration recommendations. 

CPU Over-Commitment enforces the defined ratio of vCPU/core. When enabled, DRS will not allow VMs to power on if the ratio would be exceeded. This helps keep some clusters in the realm of performance. The max value is 32, this being the maximum vCPU/core ratio for vSphere 7. 
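As a rough worked example (hypothetical numbers): a 2-host cluster with 8 physical cores per host has 16 cores in total. With a CPU over-commitment setting of 4 vCPU/core, DRS would block further power-on operations once 16 x 4 = 64 vCPUs are already powered on in the cluster.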


Scalable shares is a new feature introduced in vSphere 7. You can find very good articles here and here. In a nutshell, scalable shares make sure that the shares allocated to a VM actually take into consideration the share priority (high, normal, low) and avoid situations where VMs in resource pools with lower priority get more resources than VMs in resource pools with higher priority. This situation is called the resource pool priority-pie paradox.
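A quick illustration of the paradox, assuming the default resource pool CPU share values (High = 8000, Normal = 4000, Low = 2000): a High resource pool containing 20 VMs leaves each VM with roughly 8000 / 20 = 400 shares, while a Low resource pool containing 2 VMs gives each VM 2000 / 2 = 1000 shares, so the "low priority" VMs actually get more. Scalable shares avoid this by scaling the pool's share value with the number of objects inside it.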

Power Management tab


When activated, Distributed Power Management (DPM) looks at the cluster utilization and consolidates VMs on fewer hosts in order to power off the remaining hosts and save energy. For more details, you may look at this article 

Advanced options tab


The tab displays advanced options that have been set for DRS through the UI or manually.

This has been a small introduction to DRS as it looks now in vSphere 7. There are a lot of features and details that have been barely touched or not touched at all. For a deep dive, I recommend the famous Clustering Deep Dive book although I am waiting for an updated version.

Friday, May 1, 2020

vSphere 7 Local Disk Fresh Install and VMFS-L

I just got my new Intel NUC and it was time to install it. So I popped in the vSphere 7 USB stick and started installing. Ten minutes later I was looking at the freshly installed system and noticed that the hard drive was much smaller than expected - 337 GB out of a 500 GB raw drive. The vSphere 6.7 NUC with the same drive had a capacity of 458 GB. So what happened to the 120 GB of space?

Looking at the partition layout, the new 120 GB VMFS-L partition caught my attention.
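If you prefer the command line over the UI, the partition table can also be listed from an SSH session on the host. The device name below is hypothetical; list /vmfs/devices/disks/ first to find yours:

# list local devices, then show the partition table of the boot device
ls /vmfs/devices/disks/
partedUtil getptbl /vmfs/devices/disks/t10.ATA_____Samsung_SSD_860_EVO_500GB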

Because I am an engineer and I read the manual after the fact, I started reading the vSphere 7 storage requirements in the official documentation. The VMFS-L partition is used as the ESX-OSData partition, replacing the scratch partition. It stores logs, coredumps and configuration. However, I cannot lose 120 GB on each of my NUCs, and I am already running vSphere 7 from a USB stick. So two questions came up:
1.  How to recover some of the 120GB from VMFS-L
2. How to install vSphere 7

1.  How to recover some of the 120GB from VMFS-L

I used the install USB stick to install vSphere 7 on it. However, this didn't work since vSphere 7 would find the big VMFS-L partition and use it, which made removing it impossible.


I also turned off scratch 

Then I booted up a vSphere 6.7 USB installation. Now I could access the disk and remove the VMFS-L partition. 

2. How to install vSphere 7

Since I already had vSphere 7 running off a 16 GB USB stick, I figured it should be able to run with less capacity (again, it is my home lab, I wouldn't do all these tricks in production). Hence I installed vSphere 6.7 on the local disk, and that got me to the following layout:


Then I added the host to vCSA 7.0 and upgraded to vSphere 7 using Lifecycle Manager, which got me to a better looking final partition layout. The upgrade uses the existing core dump, locker, and scratch partitions to create the ESX-OSData volume. 



It seems that it's better to read first, even if you are just playing in your lab. In my defense, I had installed vSphere 7 before, but that was on nested ESXi hosts and they had a small dedicated boot drive.

For more details on how to change scratch partitions you can also look at the following KB