This is a blog post about two of the new features that were released this year in vSAN 7 and in Veeam Backup & Replication v10:
- vSAN File Service
- Veeam NAS Backup
As you've already guessed, we're going to create a file share on vSAN, put files on it and back it up with Veeam Backup & Replication (VBR) using the native NAS support.
At a high level the environment looks as follows:
A 2-node vSAN 7 cluster runs in a nested environment. Each node has 4 vCPU, 32 GB of RAM, a 20 GB cache SSD and a 200 GB data SSD. A witness appliance has also been deployed. VBR v10 is configured with the minimum hardware: 2 vCPU and 8 GB of RAM. Both the proxy and repository roles are installed on the same VM as the backup server, along with the embedded MSSQL Express DB. Network connectivity between vSAN and VBR is 1 Gbit.
The configuration of the vSAN cluster and the installation of VBR are not in scope of this blog, as there are many resources out there. I've used William Lam's nested library for the vSAN nodes and downloaded the witness appliance directly from the VMware site.
The prerequisites for the following steps are a running vSAN cluster and a VBR installation. You will also need 2 IPs reserved for the File Server virtual appliances (one for each node) and a working DNS server with records added for the 2 IPs.
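As a quick sanity check before enabling File Service, you can confirm the DNS records resolve. The names and addresses below are hypothetical values for this lab; replace them with your own reservations:

# forward lookups for the two File Server appliance names (hypothetical)
nslookup fs01.lab.local    # should return the first reserved IP, e.g. 192.168.1.41
nslookup fs02.lab.local    # should return the second reserved IP, e.g. 192.168.1.42
# reverse lookups are worth checking as well
nslookup 192.168.1.41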
Part 1 - vSAN File Service
vSAN File Service provides file shares on top of vSAN storage. It supports the NFS 3 and NFS 4 protocols. It is built on the vSAN Distributed File System (vDFS) and is integrated with vSAN Storage Policy Based Management.
Enable vSAN File Services
At the cluster level, go to Configure - vSAN - Services and press Enable under File Service.
Select whether the File Server agent OVF appliance will be downloaded automatically or uploaded manually:
Give a namespace for the file shares, and type in the DNS server address and domain.
Select a port group, and type in the netmask and the default gateway IP.
Type in the IP addresses for each of the nodes
Review the configuration and press Finish. The installation process will start deploying the agents on each of the vSAN nodes.
Once it is finished, you will see 2 new components deployed on the vSAN cluster: the File Service Agents. These are virtual appliances running Photon OS 1 and Docker, and they act as NFS file servers.
Create NFS file share
It's time to create the share and put some files on it. To create the share, go to Configure - vSAN - File Services Share. Enter the name of the share, the storage policy, and the soft and hard quota limits.
You can also assign labels to the file share; quotas can be changed at a later time. Next define access control: which IP addresses have access to this share, what type of access they get, and whether to protect the share with root squash.
Lastly, review the settings and create the share. Once created, you will see it under vSAN - File Services Share:
Now the file share is ready to be used. Because in part 2 we will use Veeam's NAS backup, we've added a small set of files to the share using a Linux VM. First get the share path by selecting the file share and pressing Copy URL:
Next ssh to your favorite Linux VM and mount the file share:
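As a minimal sketch (the server name and export path below are hypothetical; use the URL copied in the previous step):

# create a mount point and mount the vSAN NFS share
# replace the server and export path with the URL copied from the vSphere Client
sudo mkdir -p /mnt/vsan-share
sudo mount -t nfs vsan-fs01.lab.local:/vsanfs/TestShare /mnt/vsan-share
cd /mnt/vsan-share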
To create random files with random sizes, we've used a simple script found here, slightly adapted:
for n in {1..500}; do dd if=/dev/urandom of=file$( printf %03d "$n" ).bin bs=100 count=$(( RANDOM + 1024 )); done
The workload tested was:
- 200 files ranging between a few hundred KB and a few MB
- 50 files ranging from a few MB to a few tens of MB
The script can be adapted to create larger workloads with thousands of files. Since the whole lab is nested, performance testing was not in scope.
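As an illustration only (the file names, block sizes and counts below are made up, not the exact values used in the test), the loop can be split into two runs to roughly match the two groups above:

# ~200 files between roughly 100 KB and 1 MB (hypothetical sizes)
for n in {1..200}; do dd if=/dev/urandom of=small$( printf %03d "$n" ).bin bs=1K count=$(( (RANDOM % 900) + 100 )); done
# ~50 files between roughly 1 MB and 20 MB (hypothetical sizes)
for n in {1..50}; do dd if=/dev/urandom of=large$( printf %03d "$n" ).bin bs=1M count=$(( (RANDOM % 20) + 1 )); done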
Part 2 - Veeam NAS Backup
The architecture for NAS backup requires at a minimum a file share (in our case NFS from vSAN), file proxy, cache repository, backup repository and a Veeam Backup Server.
Additionally, secondary and archive repositories can be added to the infrastructure. In our lab all components are deployed on a single VM - the backup server.
File proxy - the component that acts as the data mover, transporting data from the source (file share) to the backup repository. It is used in backup and restore activities.
Cache repository - a location that stores temporary metadata. This metadata is used to reduce the load on the file share during incremental backups.
Backup repository - the main location of the backups.
Add File Share
In the VBR console go to Inventory - Add File Share.
Select NFS share and specify the path to the share
Select the File proxy, cache repository and backup speed
Review the summary and finish the configuration.
With the file share added, we will add it to a backup job. Right-click it and add it to a new backup job (or use an existing one).
Select the file share to back up
Select destination repository, backup policy and archive repository if any.
Select a secondary target if required (backup copy for short term)
Enable the job to run automatically
Review the settings and run the job.
The initial execution backs up all 250 files on the share. For the incremental run, we've removed and recreated the first 50 files on the share:
for n in {1..50}; do rm -f file$( printf %03d "$n" ).bin; dd if=/dev/urandom of=file$( printf %03d "$n" ).bin bs=100 count=$(( RANDOM + 1024 )); done
As you can see, the cache repository is pretty effective in determining which files need to be backed up: