Wednesday, December 2, 2020

Using NSX load balancer as a monitoring tool for RESTful APIs

We are going to look at a different use case for the NSX load balancer: using it as a monitoring tool for external APIs.

Our core platform integrates with other systems using RESTful APIs. Even though these systems are built with high availability in mind, they are sometimes highly unavailable. They are also part of the critical path for our core platform. Not being able to reach them creates trouble in the form of incident tickets, because we fail to deliver services to our customers. So we needed a way to monitor those APIs.

We know that the systems are monitored, but we don't have access to those tools. We have our own tools, but they do not offer a simple and efficient way to check the status of a RESTful API. Ideally we don't want to introduce another monitoring tool. However, the core platform runs on top of NSX and already uses NSX load balancers for its internal services. So why not use the load balancers to monitor the external services?

We created a service monitor and a pool in the load balancer for each of the external systems. This way NSX monitors the status of each system's RESTful API and generates alerts whenever it is down. The status of the pool is then checked by the core platform. All communication between the core platform and the external APIs remains direct; it does not go through the load balancer.

The pool contains the RESTful API endpoint of the system that we use to connect directly from the core platform. 

The service monitor uses GET requests to check the availability of the RESTful API. 


Nothing fancy, just basic load balancer configuration. Half a configuration, actually, because we stop here: no traffic goes through the load balancer to these pools. But whenever the external system is not reachable, the load balancer knows it, because the external system is now a member in one of its pools:


The status of the member in the pool is accessible through the RESTful API of NSX Manager. 

GET /api/4.0/edges/{edge-id}/loadbalancer/statistics

<status>DOWN</status>
<failureCause>layer 7 response error, code:400 Bad Request</failureCause>
<lastStateChangeTime>2020-12-02 18:20:43</lastStateChangeTime>
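
For illustration only, here is a minimal PowerShell sketch of how this status could be polled and parsed. The manager address, edge ID and credentials are placeholders, and the XML element names are inferred from the sample output above, so they may need adjusting for your NSX version; this is a starting point, not our actual implementation.

# Hypothetical example - NSX Manager address, edge ID and credentials are placeholders
$nsxManager = "nsx-manager.example.local"
$edgeId     = "edge-1"
$cred       = Get-Credential    # NSX Manager account with read access

$uri = "https://$nsxManager/api/4.0/edges/$edgeId/loadbalancer/statistics"
# On PowerShell 7+ add -SkipCertificateCheck if the manager uses a self-signed certificate
$response = Invoke-WebRequest -Uri $uri -Method Get -Credential $cred -UseBasicParsing
[xml]$stats = $response.Content

# Report any pool member that is not UP (element names assumed from the sample above)
foreach ($pool in $stats.loadBalancerStatusAndStats.pool) {
    foreach ($member in $pool.member) {
        if ($member.status -ne "UP") {
            "{0}/{1} is {2}: {3}" -f $pool.name, $member.name, $member.status, $member.failureCause
        }
    }
}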

This way the core platform knows the status of its external systems before doing anything. More importantly, the core platform can now act on that status. In this case it will wait a specific period of time before trying to use the system again.

It is a pretty simple solution. It is also pretty obvious that the APIs should have been monitored from the start. We relied too much on the availability of those APIs and used a fire-and-forget approach. That approach was far from optimal: it impacted our KPIs and created additional operational workload.

Tuesday, November 3, 2020

About vSphere Cluster sizing, vMotions and DRS

This post applies to those who are (still) running older versions of vSphere. In my case it is vSphere 6.5.

We recently came across a 16-host cluster in a very peculiar situation. Out of the 400 VMs in the cluster, 200 were hosted on 2 hosts and the other 200 on the remaining 14 hosts. We were in the process of migrating VMs to this cluster from another one, but we were expecting DRS to distribute the VMs more evenly across the hosts. No, the VMs did not compete for memory or CPU, but having 100 VMs on the same host while other hosts are running fewer than 20 can cause issues in the case of a host failure event.

We were aware that DRS is not aware of vSphere HA, but that was not the cause here, since the VMs had been live migrated into the cluster. The VMs did not compete for memory or CPU because the hosts had sufficient resources and the average VM memory size in this case was small.

The issue was fixed with manual redistribution across the cluster and human selection of destination hosts during vMotion. One of the lessons we learned is to design the host size to match the average VM size and to have a pretty good idea of how many VMs we want to run on a host. If 100 is not acceptable, then make the hosts smaller (or the VMs bigger :-) )

Let's do a bit of math too:

We have 300 VMs with an average memory size of 12 GB and an average CPU size of 3 vCPUs, for a total of 3600 GB of RAM and 900 vCPUs. For a 1:3 physical core to vCPU oversubscription we need 300 physical cores; a 24-core CPU with HT enabled provides 48 threads. On a dual-socket server we get 96 threads, so we could fit the 300 VMs on 3 servers with a 1:3.125 oversubscription ratio. Add 1.5 TB of RAM to each ESXi host and you have your 100 VMs per host. But this is exactly the case we wanted to avoid.

The alternative is to downsize to smaller CPUs, less RAM and more physical ESXi hosts. Let's aim for 60 VMs per host. We will need 5 hosts to accommodate the load, each ESXi host with 60 threads and 720 GB of RAM. Between the two, I would choose the second one. I would rather right-size the host capacity to fit the workload than pile a lot of resources into a few large hosts. And don't forget about the N+1 failure tolerance of the cluster.
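
To make the trade-off easier to play with, here is a small PowerShell sketch of the same arithmetic. The variable names and the 1:3 ratio against hyper-threads are my own assumptions for illustration; plug in your own averages.

# Hypothetical sizing helper - inputs mirror the example above
$vmCount       = 300
$avgVmMemGB    = 12
$avgVmVcpu     = 3
$vmsPerHost    = 60      # how many VMs we accept to lose in a single host failure
$vcpuPerThread = 3       # 1:3 vCPU oversubscription against hyper-threads

$hostCount      = [math]::Ceiling($vmCount / $vmsPerHost)                      # 5 hosts
$memPerHostGB   = $vmsPerHost * $avgVmMemGB                                    # 720 GB
$threadsPerHost = [math]::Ceiling(($vmsPerHost * $avgVmVcpu) / $vcpuPerThread) # 60 threads

"{0} hosts, each with {1} GB RAM and {2} HT threads (add one more host for N+1)" -f $hostCount, $memPerHostGB, $threadsPerHost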

Tuesday, October 27, 2020

PowerCLI - Optimizing Scripts

Not sure when and how this drive to optimize appeared in me, but I've seen it taking control in different situations and driving other people crazy. So I thought why not give it a try on the blog also. Maybe something good comes out of it. 

I will go over a couple of simple concepts that will help speed up the execution of PowerCLI scripts. The first one is API calls, more specifically the number of calls.

Let's use the following example: 

  • we have a list of VMs and we need to get the total used space by those VMs

The use case can be approached in two ways:
  • get the size of each VM in the list and sum them up
$totalUsedSpace = 0
foreach ($vmName in $vmList) {
	# one Get-VM call (one API call to vCenter Server) for each VM in the list
	$totalUsedSpace += (Get-VM -Name $vmName).UsedSpaceGB
}
  
In this case we use the Get-VM cmdlet for each VM in the list. Each Get-VM call is actually an API call to vCenter Server, so for a list of 10 VMs we make 10 API calls; for 100 VMs, 100 calls. The immediate effect is that the script takes a long time to execute, and we increase the load on vCenter Server with each call.

But we can also get all VMs from vCenter Server in one call and then go through them:
  • get all VMs in vCenter Server and then check each VM against the list
# a single API call that retrieves all VMs from vCenter Server
$allVms = Get-VM

$totalUsedSpace = 0
foreach ($vm in $allVms) {
	foreach ($vmName in $vmList) {
		if ($vm.Name -eq $vmName) {
			$totalUsedSpace += $vm.UsedSpaceGB
		}
	}
}

The advantage of this version is that we make only one API call. The disadvantage is that we retrieve all objects and then run nested "for" loops, and this can take a long time, especially when there are a lot of objects in vCenter Server. Let's take a look at some execution times.

The data set is made of more than 6000 VMs in vCenter Server ($allVms), and we are looking for the total size of a list of 100 VMs ($vmList).

For 100 VMs, the first script (one API call for each VM in the list) takes around 275 seconds. The second script takes 14 seconds. Even with the nested loops, it takes far less time than doing hundreds of API calls. If we increase the $vmList size to 300 VMs, the first script takes almost 3 times as long, while the second only increases by a few seconds, to 16 seconds. That increase comes from running the nested loops over a bigger list: the more we grow $vmList, the more time it takes. At 600 VMs in $vmList, it takes 21 seconds to run. At this point we'd like to see if there is a way to reduce the complexity of the script and make it faster.
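
The timings above come from my lab; if you want to reproduce this kind of measurement yourself, Measure-Command is a simple way to do it (a sketch, assuming an existing vCenter connection and a populated $vmList):

# Time the per-VM approach; the same pattern works for the other variants
$elapsed = Measure-Command {
	$totalUsedSpace = 0
	foreach ($vmName in $vmList) {
		$totalUsedSpace += (Get-VM -Name $vmName).UsedSpaceGB
	}
}
"Per-VM API calls took {0:N1} seconds" -f $elapsed.TotalSeconds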

Let's see how to get rid of the nested loops. For this to work, we'll use a hash table (key-value pairs) created from VM names and their used space. Once the hash table is created, we look up only the VMs in $vmList:
# one API call to retrieve all VMs
$allVms = Get-VM

# build a hash table of VM name -> used space
$hashVmSize = @{}
foreach ($vm in $allVms) {
	$hashVmSize.Add($vm.Name, $vm.UsedSpaceGB)
}

# hash table lookups instead of a nested loop
$totalUsedSpace = 0
foreach ($vmName in $vmList) {
	$totalUsedSpace += $hashVmSize[$vmName]
}

For the new script, the time to execute for the 600 VMs in $vmList is less than 12 seconds (almost half the time of the nested loops script). The complexity reduction comes from the number of operations executed. If we count how many times the loop bodies run, the hash table script runs its loops almost 7000 times, while the nested loops script runs its inner comparison more than 3.6 million times (yes, millions).

What we've seen so far:
  • calling the API is expensive - 1 call is better than 10 calls 
  • nested loops are not scalable (not even nice) 
  • hash tables can help
This doesn't mean that if you want to see the size of 2 VMs you should bring 6000 VMs from vCenter Server. It means that the next time your script takes 20 minutes, maybe there is a faster way to do it.
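
For a short list, one middle ground (a sketch of my own, not from the measurements above) is to ask vCenter Server only for the VMs you care about, since Get-VM accepts an array of names:

# a single Get-VM invocation for just the VMs in the list, then sum their used space
$totalUsedSpace = (Get-VM -Name $vmList | Measure-Object -Property UsedSpaceGB -Sum).Sum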

Tuesday, September 29, 2020

VMware Cloud - What's New

As part of this year's VMworld news, we'll take a quick look at what's new in VMware Cloud. In the past 5 years, VMware Cloud has expanded from its beginnings with IBM Cloud to all major cloud providers (AWS, Azure, GCP and so on). VMware also acquired new technologies to help develop its any-cloud, any-app strategy. It's only fair to say that the following picture is impressive and has to be included here to understand what VMware Cloud has become.





This is a dear picture to me, since I had my (small) part in the IBM team that made the IBM Cloud partnership possible (yeah, a bit of bragging never hurts). However, in this post we'll look at three other topics:

  • VMware Cloud on Dell EMC
  • VMware Cloud on AWS
  • VMware Cloud DRaaS


VMware Cloud on Dell EMC


VMware Cloud on Dell EMC is a fully managed on-premises cloud solution. The client buys a rack preinstalled with servers and preconfigured with the VMware Software-Defined Datacenter (SDDC). VMware manages the whole infrastructure, while the client's only worry is to find room for the rack in a datacenter, either their own or a co-located one. So far this offering is available only in the US, but it will soon come to other parts of the world, so it's time to take a closer look.





The solution comes with regulatory compliance and certifications: ISO 27001, CSA, SOC 2, CCPA, GDPR.


For the hardware nodes, a t-shirt size approach is available, with the latest addition being the X1d.xlarge node type, which scales to 1.5 TB of memory and almost 62 TB of NVMe storage. You can put up to 24 such nodes in a rack, which adds up to some pretty impressive values.




A new feature allows segmenting the 24 nodes in a rack into different workload clusters, for use cases such as special licensing requirements. It allows for up to 8 clusters (3-node clusters) in a rack. To get your workloads in and out, you can now use VMware HCX based migrations.


This is a VMware managed service which includes end-to-end lifecycle management of the solution, 24x7 support and a 4-hour break-fix SLA.



VMware Cloud on AWS


Since its release in 2017, it has become one of the most talked about and most actively developed integrations in the portfolio. It not only lets you run native VMware workloads on AWS bare metal across a global footprint of datacenters, it also provides direct access from VMware VMs and their services to AWS services.



When we talk about what's new in this area we can refer to:

  • core SDDC 
    • i3en.metal instances with vSAN compression enabled
    • HCX enhancements to avoid hairpinning / tromboning and to ease migration by logically grouping and migrating applications 
    • multi-edge SDDC for improved North-South network bandwidth 
    • multi SDDC connection with VMware Transit Connect 
    • vCenter Linking within an SDDC group which will bring the inventory of multiple vCenter Servers under the same pane of glass 
  • operations and automation 
    • vRealize Operations Cloud enhancements 
    • vRealize Log Insight Cloud enhancements
    • vRealize Network Insight Cloud for network visibility 
    • vRealize Cloud Automation enhancements for IaC, Terraform service
    • vRealize Orchestrator support for workflow automation
  • workloads 
    • Tanzu support for K8S runtime and management
  • disaster recovery 
    • improvements to the hot DRaaS solution with VMware Site Recovery Manager
    • new on-demand DRaaS with VMware Cloud Disaster Recovery 


VMware Cloud Disaster Recovery 


The newest offering in VMware's cloud portfolio is based on the recent Datrium acquisition and provides on-demand Disaster Recovery as a Service. Even if it can be seen as overlapping with the traditional Site Recovery Manager and vSphere Replication based solution, the main difference is that with the new offering you pay only for what you use. Instead of having to build your whole DR site somewhere in the cloud, on-demand DRaaS allows you to keep a minimal number of hosts by replicating to cloud-based storage. When required, VMs are powered on on demand on VMC SDDC capacity.


The following table summarizes the differences between on-demand DRaaS and hot DRaaS. 



If you want to optimize costs ahead of RPO, this is the way to go.






Monday, June 15, 2020

Being a vExpert

I applied to the vExpert program for the first time 2 years ago. I was kind of pushed by one of my VMUG co-leaders who had been accepted earlier that year. So, I applied too. I still remember how it felt when I got the acceptance letter. It gave me joy and a peculiar sense of pride. All of a sudden there were 2 vExperts in Romania. I didn't realize its true potential until I got access to this beautiful community of people driven by their passion and to the resources made available to me:
  • access to a global network of techies through dedicated Slack channels 
  • VMware licenses for my lab for 1 year 
  • dedicated webinars for vExperts
  • parties at VMworld (this will wait a bit for now) 
  • more people and knowledge through the subprogram 
  • increased visibility on social media: Twitter, LinkedIn
More goodies: I got licenses and complimentary subscriptions from partners like HyTrust, Runecast, Veeam, Pluralsight and others.

Having the tools, I was able to rebuild my own lab and test new products and features. I was able to expand my knowledge with new technologies. This boosted my confidence, but it also made me think about how I could give back more.

For me, giving back is mostly time: the blog posts, the VMUG meetings I help organize and sometimes even speak at, and the chats I have with my peers. But you can take other paths: be a public speaker, write a book or an article, be a customer evangelizing within your organization or at public events, be a passionate member of a partner organization. There are many ways in which you can contribute; you just need to apply (here).

Sunday, June 14, 2020

Veeam NAS Backup - File Restore Options

We are going to look at the restore options available for files backed up using a NAS backup job. Once you have successfully completed a file share backup, you get the following restore options:

  • restore entire file share 
  • rollback to a point in time - it is actually a fast entire file share restore 
  • files and folders - allows you to pick individual files and folders

Let's take a look at each of them.

Restore file share 

Once started, the restore wizard will ask you to select a specific restore point.


The restore location can be the original server or another server.

You then choose how to process the restore when files already exist in the destination: keep existing files, replace older files, replace newer files, or overwrite. In the same tab you can choose to keep the security attributes and permissions of the restored files.

Press Finish on the Summary page and wait for the restore process to finish. In my case 51 files have been restored out of 750 on the share:


Rollback to a point in time
In case you don't want to go through the whole process above, select the second restore option, pick the point in time and the file share will be reverted to it:


Files and folders
This option lets you restore specific files or folders. It opens a searchable file explorer. Three different views are available in the file explorer:

  • latest - presents a list with the latest versions of the files 

  • all time - displays all versions of the file on the share 

  • selected - presents the version of a file existing in a specific restore point 

Looking at the different views above, you will notice that not only the file sizes differ, but also the number of objects displayed in each view. This reflects the actual state of the file share when each backup was taken.

After a file has been selected, you can restore it to the original location or copy it locally.

In case you do not overwrite the original file, you will see both files in the share:


Tuesday, June 2, 2020

Backup vSAN 7 File Share with Veeam Backup & Replication 10


This is a blog post about two of the new features released this year in vSAN 7 and in Veeam Backup & Replication v10:
  • vSAN File Service
  • Veeam NAS Backup 

As you've already guessed, we're going to create a file share on vSAN, put files on it and back it up with Veeam Backup & Replication (VBR) using the native NAS support.

At a high level, the environment looks like this:



A 2-node vSAN 7 cluster runs in a nested environment. Each node has 4 vCPUs, 32 GB of RAM, a 20 GB cache SSD and a 200 GB data SSD. A witness appliance has also been deployed. VBR v10 is configured with the minimum hardware: 2 vCPUs and 8 GB of RAM. Both the proxy and repository roles are installed on the same VM as the backup server, together with the embedded MSSQL Express DB. Network connectivity between vSAN and VBR is 1 Gbit.

Neither the configuration of the vSAN cluster nor the installation of VBR is in scope for this post, as there are many resources out there. I've used William Lam's nested library for the vSAN nodes and downloaded the witness appliance directly from the VMware site.

The prerequisites for the following steps are a running vSAN cluster and a VBR installation. You will also need 2 IPs reserved for the File Server virtual appliances (one for each node) and a working DNS server with records added for the 2 IPs.

Part 1 - vSAN File Service

vSAN File Service provides file shares on top of vSAN storage. It supports the NFS 3 and NFS 4 protocols. It is built on the vSAN Distributed File System (vDFS) and is integrated with vSAN Storage Policy Based Management.

Enable vSAN File Services

At the cluster level, go to Configure - vSAN - Services and press Enable for File Service.


Select whether the File Server agent OVF appliance will be downloaded automatically or uploaded manually:

Provide a namespace for the file share, and type in the DNS server address and domain.

Select a port group, and type in the netmask and the default gateway IP.


Type in the IP addresses for each of the nodes

Review the configuration and press Finish. The installation process will start deploying the agents on each of the vSAN nodes.

Once it is finished, you will see 2 new components deployed on the vSAN cluster: the File Service Agents. These are virtual appliances running Photon OS 1 and Docker, and they act as NFS file servers.

Create NFS file share

It's time to create the share and put some files on it. To create the share, go to Configure - vSAN - File Services Share. Enter the name of the share, the storage policy, and the soft and hard quota limits.

You can also assign labels to the file share. Quotas can be changed at a later time. Next, define the access control: which IP addresses have access to this share, what type of access they have, and whether to protect the share with root squash.

Lastly, review the settings and create the share. Once created, you will see it under vSAN - File Services Share:


Now the file share is ready to be used. Because in part 2 we will use Veeam's NAS backup, we've added a small set of files to the share using a Linux VM. First, get the share path by selecting the file share and pressing Copy URL:

Next, SSH to your favorite Linux VM and mount the file share:

To create random files with random sizes, we've used a simple script found here and slightly adapted it:

# create 500 files with random content; bs=100 bytes times a random count gives a random size
for n in {1..500}; do
    dd if=/dev/urandom of=file$( printf %03d "$n" ).bin bs=100 count=$(( RANDOM + 1024 ))
done

The workload tested was:
- 200 files ranging between a few hundred KB and a few MB
- 50 files ranging from a few MB to a few tens of MB

The script can be adapted to create larger workloads with thousands of files. Since the whole lab is nested, performance testing was not in scope.

Part 2 - Veeam NAS Backup 

The architecture for NAS backup requires, at a minimum, a file share (in our case NFS from vSAN), a file proxy, a cache repository, a backup repository and a Veeam backup server.

Additionally, secondary and archive repositories can be added to the infrastructure. In our lab, all components are deployed on a single VM - the backup server.

File proxy - the component that acts as the data mover, transporting data from the source (file share) to the backup repository. It is used in backup and restore activities.

Cache repository - a location that stores temporary metadata. This metadata is used to reduce the load on the file share during incremental backups.

Backup repository - the main location of the backups.

Add File Share 
In the VBR console, go to Inventory - Add File Share.

Select NFS share and specify the path to the share


Select the File proxy, cache repository and backup speed


Review the summary and finish the configuration.

With the file share added, we will include it in a backup job. Right-click it and add it to a new backup job (or use an existing one).



Select the file share to back up.


Select the destination repository, the backup policy and, if any, the archive repository.

Select a secondary target if required (backup copy for short term)


Enable the job to run automatically

Review the settings and run the job.


The initial run backs up all 250 files on the share. For the incremental, we've removed and recreated the first 50 files on the share:


# delete the first 50 files and recreate them with new random content
for n in {1..50}; do 
  rm -f file$( printf %03d "$n" ).bin
  dd if=/dev/urandom of=file$( printf %03d "$n" ).bin bs=100 count=$(( RANDOM + 1024 ))
done

As you can see, the cache repository is pretty effective in determining which files need to be backed up: