As part of a larger Veeam project I was looking for the most efficient Veeam setup with minimal impact on the virtualization and storage environment during the backup window. The main requirement was to deliver constant performance at any time. My tests quickly showed that Veeam NetApp Backup from Storage Snapshot is the best transport mode to reach this goal.
What efficient means in this context:
- Minimal impact to the production environment
- Additional costs should be avoided
Main parameters of the infrastructure:
- VMware vSphere 6.5
- NFS Datastores
- NetApp ONTAP 9
- Veeam Backup & Replication 9.5
- Isolated NFS network (no routing, etc.)
Efficient Veeam NetApp Backup from Storage Snapshot design
The excellent whitepaper NetApp and Veeam Backup & Replication 9.5: Configuration Guide and Best Practices by Stefan Renner presents many concepts for highly efficient NetApp backups with Veeam. The premium solution is definitely the NetApp integration of SnapMirror / SnapVault with backup from the secondary site. With this method, the impact on the components at the production site is minimal. On the other hand, this concept generates additional costs because of the second array and the data duplication. Unfortunately, that is the reason why my design has to do without NetApp SnapMirror or NetApp SnapVault.
In addition to the base networks (LAN and NFS), this design needs a new network for the backup traffic. The new backup network connects the Veeam backup proxy to an additional NetApp SVM network interface to transfer the data via Direct Storage Access.
The design only makes sense when dedicated interfaces are used for the Backup, NFS and LAN networks. The SVM that exports the VMware datastores needs at least two networks, Backup and NFS. The management network (LAN) is optional.
Veeam NetApp Backup from Storage Snapshot procedure:
- Veeam creates VMware VM Snapshot, if necessary
- Veeam creates NetApp Volume Snapshot
- Veeam deletes VMware VM Snapshot, if necessary
- NetApp creates Read-Only export of the NetApp Volume Snapshot
- Veeam Proxy reads Snapshot data with the integrated NFS Client
- Veeam deletes NetApp Volume Snapshot
As the procedure shows, one of the main benefits of this concept is that the VM snapshot stays open only for a short period of time compared to the Hot Add or network transport mode. The result is a much quicker consolidation of the VM snapshot and a reduced impact on the VM itself. Combined with transferring the backup traffic over the dedicated NetApp interface, the impact on the production environment is minimized.
Disadvantages of the concept:
- Restore will be processed via network transport mode
- If a VM snapshot already exists before the Veeam backup starts, the network transport mode will be used as well
It is possible to enhance the concept with an additional proxy for the virtual appliance (Hot Add) transport mode; this proxy is used if a VM snapshot already exists. If this additional backup proxy is placed in the backup network, the traffic through the firewall is dramatically reduced.
Even with the additional Hot Add proxy, the restore process will use the network transport mode. At the moment, the only way to prevent Veeam from falling back from a Direct NFS restore to the network transport mode is to give the Direct NFS proxy access to the NFS network.
Veeam NetApp Backup from Storage Snapshot configuration
For the setup of this backup method we need to take a look at three components: the NetApp SVM configuration, the Veeam setup and the vSphere infrastructure.
NetApp SVM configuration
To serve NFS datastores to ESXi hosts from a NetApp SVM, only one interface in the NFS network is necessary. For this concept, however, an additional interface for the backup traffic must be added.
In my test environment the interface svm-nfs_data represents the NFS network and svm-nfs_backup the backup network. As mentioned earlier, both interfaces use different Ethernet ports as backing.
To serve NFS traffic to the ESXi hosts and the backup proxy, both networks need to be added to the export policy.
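The two interfaces and the export policy rules can be sketched with the ONTAP CLI. This is only an illustration: the SVM name, LIF names, nodes, ports, addresses and policy name below are assumptions based on my lab naming, not taken from the actual environment.

```shell
# ONTAP CLI sketch - all names, ports and subnets are lab examples

# Data LIF for the ESXi hosts (NFS network)
network interface create -vserver svm-nfs -lif svm-nfs_data -role data -data-protocol nfs -home-node cluster01-01 -home-port e0d -address 192.168.10.10 -netmask 255.255.255.0

# Additional data LIF for the Veeam backup traffic (backup network)
network interface create -vserver svm-nfs -lif svm-nfs_backup -role data -data-protocol nfs -home-node cluster01-01 -home-port e0e -address 192.168.20.10 -netmask 255.255.255.0

# Allow both networks (ESXi hosts and backup proxy) in the export policy
vserver export-policy rule create -vserver svm-nfs -policyname vmware -clientmatch 192.168.10.0/24 -rorule sys -rwrule sys -superuser sys
vserver export-policy rule create -vserver svm-nfs -policyname vmware -clientmatch 192.168.20.0/24 -rorule sys -rwrule sys -superuser sys
```

The key point is the second rule: without the backup network in the export policy, the Direct NFS proxy cannot mount the snapshot export.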
The first step to enable the storage integration is adding the NetApp cluster to the Veeam Backup & Replication server (my test lab runs version 9.5 Update 3). At this point only management traffic between the Veeam Backup & Replication server and the NetApp cluster IP takes place. The Direct NFS proxy will be used in later steps, such as the scan of the NetApp volumes.
It is also possible to use the default option “Create required NFS export rules automatically”. I deselected this option to do some further tests with modified export rules.
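Adding the cluster can also be scripted with the Veeam PowerShell snap-in. A minimal sketch, assuming the cluster management name and the credentials are placeholders for your environment:

```powershell
# Load the Veeam snap-in and register the NetApp cluster (management IP/name)
Add-PSSnapin VeeamPSSnapIn
Add-NetAppHost -Name "cluster01.lab.local" -UserName "admin" -Password "P@ssw0rd"
```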
Excluding all NetApp SVM root volumes will speed up the scan process in larger environments a little bit.
With the Veeam PowerShell SnapIn we can gather some more details about the managed NetApp Cluster:
PS C:\> (Get-NetAppHost).NaOptions

ConnectionOptions                  : Veeam.Backup.SanPlugin.NetApp.CDomNaHostConnectionOptions
DomContainer                       : Veeam.Backup.Common.CDomContainer
HostType                           : NaCluster
IsMetroClusterEnabled              : False
MetroClusterPartner                :
IsMetroClusterAlive                : False
IsHAPairEnabled                    : False
VolumesRescanMode                  : ExceptExcluded
SelectedSanProtocols               : NFS
CreateNfsExportRulesAutomatically  : False
IsRescanProxyAutoSelect            : False
HAPairPartner                      :
IsNeedToShowRetentionForSnapMirror : False
License                            : FlexClone, SnapRestore, Fcp, Iscsi, Nfs, SnapVaultPrimary, SnapVaultSecondary, SnapMirror
IsVFilerLicensed                   : False
IsFlexCloneLicensed                : True
IsSnapRestoreLicensed              : True
IsFcpLicensed                      : True
IsIscsiLicensed                    : True
IsNfsLicensed                      : True
IsSnapVaultPrimary                 : True
IsSnapVaultSecondary               : True
IsSnapMirror                       : True
IsHAPairLicensed                   : False
The existing NetApp licenses are especially interesting: the available licenses influence the possible restore options on the NetApp side. You can find the procedure on page 11 (Restore: NFS Protocol (ONTAP)) in the whitepaper NetApp and Veeam Backup & Replication 9.5: Configuration Guide and Best Practices by Stefan Renner.
The next component to configure is the Direct NFS proxy. Unlike the Hot Add proxy, this proxy type has no dependency on the vSphere VM that needs to be backed up. It can even be a physical server or run in a different vCenter.
Within this design the configuration of a Preferred Backup Network is an optional step. It is only necessary when the NetApp SVM can be accessed via different networks or proxies.
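Creating the two proxies can also be done with the Veeam PowerShell snap-in. A sketch under the assumption that both Windows servers are already added to Veeam; the server names match my lab, and the exact -TransportMode values may differ between Veeam versions (the output below reports San and HotAdd):

```powershell
# Direct Storage Access (Direct NFS) proxy in the backup network
$srv = Get-VBRServer -Name "Veeam-02.lab.local"
Add-VBRViProxy -Server $srv -TransportMode San -MaxTasks 2

# Hot Add proxy for VMs that already carry a VM snapshot
$srv = Get-VBRServer -Name "Veeam-03.lab.local"
Add-VBRViProxy -Server $srv -TransportMode HotAdd -MaxTasks 1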
The final setup uses these backup proxies, one for Direct Storage Access and one for Hot Add:
PS C:\> (Get-VBRViProxy -Name Veeam-02.lab.local).Options

TransportMode                    : San
FailoverToNetwork                : True
UseSsl                           : False
IsAutoVddkMode                   : True
IsAutoDetectDisks                : False
MaxTasksCount                    : 2
IsAutoDetectAffinityRepositories : True

PS C:\> (Get-VBRViProxy -Name Veeam-03.lab.local).Options

TransportMode                    : HotAdd
FailoverToNetwork                : True
UseSsl                           : False
IsAutoVddkMode                   : True
IsAutoDetectDisks                : True
MaxTasksCount                    : 1
IsAutoDetectAffinityRepositories : True
The last step is enabling “Primary Storage integration” in all backup jobs (it’s the default option).
If this option is not enabled in one or more jobs but the Veeam and NetApp setup allows Veeam NetApp Backup from Storage Snapshot, a Direct NFS backup without a storage snapshot will be done instead (thanks for the clarification, Niels Engelen!).
Task log without Storage Integration:
Using backup proxy Veeam-02.lab.local for disk Festplatte 1 [nfs]
Task log with Storage Integration:
Using backup proxy Veeam-02.lab.local for retrieving Festplatte 1 data from storage snapshot
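The two wordings above make it easy to check in bulk which transport a job actually used. A small Python sketch (the function name and categories are my own, based only on the two log lines shown above):

```python
def classify_transport(log_line: str) -> str:
    """Classify a Veeam task log line by the wordings shown above.

    "... data from storage snapshot"  -> storage integration was used
    "... for disk ..."                -> plain Direct NFS without snapshot
    Anything else                     -> unknown
    """
    if "from storage snapshot" in log_line:
        return "storage-snapshot"
    if "Using backup proxy" in log_line and "for disk" in log_line:
        return "direct-nfs"
    return "unknown"

print(classify_transport(
    "Using backup proxy Veeam-02.lab.local for retrieving Festplatte 1 data from storage snapshot"))
# storage-snapshot
print(classify_transport(
    "Using backup proxy Veeam-02.lab.local for disk Festplatte 1 [nfs]"))
# direct-nfs
```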
My test lab only has a minimalistic vSphere setup. But even in a production setup, no additional configuration is necessary to leverage Veeam NetApp Backup from Storage Snapshot.
For more details about a proper VMware NFS setup with NetApp, please refer to NetApp TR-4597 VMware vSphere with ONTAP.