For some time now, I have been focusing more on NetApp Data ONTAP again. As usual, automation is a focal point of my interest. NetApp offers plenty of options for automation, for example the Data ONTAP PowerShell Toolkit, the ONTAP REST APIs, the Ansible modules for NetApp, and many more. This blog post covers how to create a NetApp NFS Export with Ansible.
NetApp is one of Red Hat’s certified Ansible module vendors and ships a broad range of certified, supported Ansible modules. At the moment, only nine vendors are on the list of Ansible certified modules.
Getting started with NetApp and Ansible
If you use a development environment similar to the one I introduced in a previous blog post, only one additional Python library is needed to use the NetApp Data ONTAP modules.
pip install netapp-lib
The Ansible modules for NetApp Data ONTAP work similarly to the VMware vSphere modules: the host is ‘localhost’, and all connection details are specified per module/task.
Playbook example task for a new Aggregate:
- name: Create Aggregate
  na_ontap_aggregate:
    state: present
    service_state: online
    name: "{{ aggr_name }}"
    disk_count: 5
    wait_for_online: True
    time_out: 300
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true
    validate_certs: false
If you are interested in NetApp automation with Ansible and NetApp automation in general, netapp.io is a great source of news and knowledge. Also check out their Slack workspace “thePub” and visit the #configurationmgmt channel for Ansible-related discussions.
Ansible Collections
With the release of Ansible 2.9, Collections have officially been introduced. NetApp is an early adopter of Ansible Collections and has already released its Data ONTAP modules as a Collection on Ansible Galaxy.
You can install and use Ansible Collections in different ways; one of the most common is the installation with the ‘ansible-galaxy’ command and a given fully qualified collection name (FQCN).
# Latest Version
ansible-galaxy collection install netapp.ontap

# Specific Version
ansible-galaxy collection install netapp.ontap:19.10.0
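If you prefer to track the Collection in a requirements file, ‘ansible-galaxy’ can install from it as well. A minimal sketch follows; the file name and the pinned version are only examples, not something this post requires:

# requirements.yml
collections:
  - name: netapp.ontap
    version: 19.10.0

# Install everything listed in the requirements file
ansible-galaxy collection install -r requirements.yml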

Once the Ansible Collection is installed, you can specify its usage in your Playbook. If the Collection is not available, the default module (if it exists) will be used.
- name: NetApp NFS Setup
  hosts: localhost
  gather_facts: no
  collections:
    - netapp.ontap
  vars:
    aggr_name: test
  tasks:
    - name: Create Aggregate
      na_ontap_aggregate:
        state: present
        service_state: online
        name: "{{ aggr_name }}"
        disk_count: 5
        wait_for_online: True
        time_out: 300
        hostname: "{{ netapp_hostname }}"
        username: "{{ netapp_username }}"
        password: "{{ netapp_password }}"
        https: true
        validate_certs: false

To make sure that the module from the Collection is used, you can use the fully qualified collection name (FQCN) in the tasks. In that case, if the Collection module is not available, the playbook or role task execution throws an error.
---
- name: Create Aggregate
  netapp.ontap.na_ontap_aggregate:
    state: present
    service_state: online
    name: "{{ aggr_name }}"
    disk_count: 5
    wait_for_online: True
    time_out: 300
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true
    validate_certs: false
Create NetApp NFS Export with Ansible
For this blog post, I used Ansible 2.9.0 with the NetApp Data ONTAP modules from Collection version 19.10.1 to create the NetApp NFS export with Ansible. As in my prior Ansible project, I created a Role to be more flexible in development and sharing.
My netapp_nfs Role executes the following tasks if all steps are enabled:
- Create Aggregate
- Create SVM
- Configure Broadcast Domain
- Create Interface
- Configure NFS Service
- Create NFS Export
- Create Volume and set Junction Path
As the Role is available on Ansible Galaxy, you can use the ‘ansible-galaxy’ command for the installation.
ansible-galaxy install mycloudrevolution.netapp_nfs
Usage of the Role
The netapp_nfs Role can be included in any Ansible Playbook. The individual Role tasks can be enabled or disabled by setting the ‘create_*’ variables to ‘true’ or ‘false’. This design might be useful if, for example, you want to reuse an existing Aggregate.
You can find all available variables, except the connection details, in the definition of the defaults for the Role. If you want to override a default variable, just define it in the Playbook.
---
# defaults file for netapp_nfs

## role tasks
create_aggr: true
create_svm: true
create_broadcast: true
create_interface: true
create_vol: true
create_nfs: true
create_export: true
verify_export: false

## role vars
aggr_name: aggr_data002
broadcast_name: data_domain
broadcast_ports: ["netapp-01:e0a", "netapp-01:e0b"]
broadcast_ports_default: ["netapp-01:e0c", "netapp-01:e0d"]
if_home_port: e0b
if_home_node: netapp-01
if_address: 10.0.2.13
if_netmask: 255.255.255.0
vserver_name: data002
vol_name: vol_data002
vol_size: 1024
vol_size_unit: mb
export_policy_name: data002
export_policy_rule_client: 10.0.2.0/24
mount_directory: /mnt/verify
Playbook example for the execution of the netapp_nfs Role with all tasks enabled:
- name: NetApp NFS Setup
  hosts: localhost
  gather_facts: no
  vars:
    create_aggr: true
    create_svm: true
    create_broadcast: true
    create_interface: true
    create_vol: true
    create_nfs: true
    create_export: true
    verify_export: true
    netapp_hostname: 10.0.2.11
    netapp_username: admin
    netapp_password: <Passw0rd>
  roles:
    - netapp_nfs
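To run the Role, save the Playbook to a file and execute it with the ‘ansible-playbook’ command; the file name below is just an example:

ansible-playbook netapp_nfs_setup.yml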

The newly created NFS Export is now ready to be mounted.
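A quick way to check the result is to mount the new export on a Linux client. The Role offers the ‘verify_export’ switch together with the ‘mount_directory’ variable for this; the task below is only my own minimal sketch of such a verification step, not the Role’s actual implementation, and it assumes root privileges on a host that matches the export policy rule:

# Sketch: mount the export at the junction path "/{{ vol_name }}" via the NFS LIF
- name: Mount the new NFS Export for verification
  become: true
  mount:
    path: "{{ mount_directory }}"
    src: "{{ if_address }}:/{{ vol_name }}"
    fstype: nfs
    state: mounted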

Use of existing objects
If you want to use an existing object, like an Aggregate, you just need to disable the corresponding task by setting ‘create_aggr’ to ‘false’ and define the required ‘aggr_name’ variable.
Playbook example for the execution of the netapp_nfs Role with existing Aggregate:
- name: NetApp NFS Setup
  hosts: localhost
  gather_facts: no
  vars:
    create_aggr: false
    create_svm: true
    create_broadcast: true
    create_interface: true
    create_vol: true
    create_nfs: true
    create_export: true
    verify_export: true
    netapp_hostname: 10.0.2.11
    netapp_username: admin
    netapp_password: <Passw0rd>
    aggr_name: ExistingAggr001
  roles:
    - netapp_nfs
Role Task details
Create Aggregate
---
- name: Create Aggregate
  netapp.ontap.na_ontap_aggregate:
    state: present
    service_state: online
    name: "{{ aggr_name }}"
    disk_count: 5
    wait_for_online: True
    time_out: 300
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true
    validate_certs: false

Create SVM
---
- name: Create SVM
  netapp.ontap.na_ontap_svm:
    state: present
    name: "{{ vserver_name }}"
    root_volume: "vol0_{{ vserver_name }}"
    root_volume_aggregate: "{{ aggr_name }}"
    root_volume_security_style: unix
    allowed_protocols: nfs
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true
    validate_certs: false

Configure Broadcast Domain
---
- name: Modify Default Broadcast Domain
  netapp.ontap.na_ontap_broadcast_domain:
    state: present
    name: Default
    mtu: 1500
    ipspace: Default
    ports: "{{ broadcast_ports_default }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    hostname: "{{ netapp_hostname }}"
    https: true
    validate_certs: false

- name: Create Broadcast Domain
  netapp.ontap.na_ontap_broadcast_domain:
    state: present
    name: "{{ broadcast_name }}"
    mtu: 1500
    ipspace: Default
    ports: "{{ broadcast_ports }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    hostname: "{{ netapp_hostname }}"
    https: true
    validate_certs: false

Create Interface
---
- name: Create NFS Interface
  netapp.ontap.na_ontap_interface:
    state: present
    interface_name: "if_{{ vserver_name }}"
    home_port: "{{ if_home_port }}"
    home_node: "{{ if_home_node }}"
    role: data
    protocols: nfs
    admin_status: up
    failover_policy: local-only
    firewall_policy: mgmt
    is_auto_revert: true
    address: "{{ if_address }}"
    netmask: "{{ if_netmask }}"
    vserver: "{{ vserver_name }}"
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true
    validate_certs: false

Configure NFS Service
---
- name: Change NFS Status
  netapp.ontap.na_ontap_nfs:
    state: present
    service_state: started
    vserver: "{{ vserver_name }}"
    nfsv3: enabled
    nfsv4: enabled
    nfsv40_acl: enabled
    nfsv40_read_delegation: enabled
    nfsv40_referrals: enabled
    nfsv40_write_delegation: enabled
    nfsv41: enabled
    nfsv41_acl: enabled
    nfsv41_read_delegation: enabled
    nfsv41_write_delegation: enabled
    tcp: enabled
    udp: enabled
    vstorage_state: enabled
    showmount: enabled
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true
    validate_certs: false

Create NFS Export
---
- name: Modify default Export Policy Rule
  netapp.ontap.na_ontap_export_policy_rule:
    state: present
    policy_name: default
    vserver: "{{ vserver_name }}"
    client_match: 0.0.0.0/0
    ro_rule: any
    rw_rule: none
    super_user_security: none
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: yes
    validate_certs: false

- name: Create New Export Policy Rule
  netapp.ontap.na_ontap_export_policy_rule:
    state: present
    name: "{{ export_policy_name }}"
    vserver: "{{ vserver_name }}"
    client_match: "{{ export_policy_rule_client }}"
    rw_rule: any
    ro_rule: any
    protocol: nfs,nfs3,nfs4
    super_user_security: any
    allow_suid: true
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: yes
    validate_certs: false

Create Volume and set Junction Path
---
- name: Create Volume
  netapp.ontap.na_ontap_volume:
    state: present
    name: "{{ vol_name }}"
    junction_path: "/{{ vol_name }}"
    is_infinite: false
    aggregate_name: "{{ aggr_name }}"
    size: "{{ vol_size }}"
    size_unit: "{{ vol_size_unit }}"
    space_guarantee: none
    policy: "{{ export_policy_name }}"
    volume_security_style: unix
    percent_snapshot_space: 60
    vserver: "{{ vserver_name }}"
    wait_for_completion: true
    hostname: "{{ netapp_hostname }}"
    username: "{{ netapp_username }}"
    password: "{{ netapp_password }}"
    https: true
    validate_certs: false


I just discovered your blog this morning (due to a reddit post) and I love it! It’s full of good stuff. I do have one question though: When you use variables like “{{ netapp_hostname }}”, where are these defined? Is this covered in an earlier blog post, or are you using Tower to inject them?
Thanks, Tim.
The variable is defined in the ‘vars:’ section of the playbook. You can find an example in chapter 2.1.