Document URL: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html-single/Administration_Guide/index.html

Section Number and Name: 7.2.3.3. Configuring NFS-Ganesha using Gluster CLI, 7.2.3.3.1. Prerequisites to run NFS-Ganesha

Describe the issue: As discussed in the threads below, IPv6 need not be enabled for the nfs-ganesha cluster setup and functionality to work. The same needs to be corrected in the admin guide.

Suggestions for improvement:

Additional information:
Shashank, could you please verify and confirm the same? Thanks!
Verified this by following the below steps:

1) Edit /etc/default/grub and append ipv6.disable=1 to GRUB_CMDLINE_LINUX as below on all ganesha nodes:

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel_dhcp37-44/root rd.lvm.lv=rhel_dhcp37-44/swap rhgb quiet ipv6.disable=1"

2) Run the grub2-mkconfig command to regenerate the grub.cfg file on all ganesha nodes:

[root@dhcp37-44 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-327.13.1.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.13.1.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-604c6de4495746dc9269f481bf71d86a
Found initrd image: /boot/initramfs-0-rescue-604c6de4495746dc9269f481bf71d86a.img
done

3) Reboot all the nodes.

4) Observe that IPv6 is not enabled after the reboot:

[root@dhcp37-44 ~]# ip a
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:87:30:9d brd ff:ff:ff:ff:ff:ff
    inet 10.70.37.44/23 brd 10.70.37.255 scope global dynamic eth0
       valid_lft 86387sec preferred_lft 86387sec
3: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 52:f0:ff:0c:94:08 brd ff:ff:ff:ff:ff:ff

5) Enable nfs-ganesha on the cluster and observe that there are no failures:

[root@dhcp37-44 ~]# gluster nfs-ganesha enable
Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted pool. Do you still want to continue? (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success

[root@dhcp37-44 ~]# pcs status
Cluster name: G1466405410.4
Last updated: Tue Jun 21 04:03:09 2016
Last change: Tue Jun 21 04:02:43 2016 by root via cibadmin on dhcp37-44.lab.eng.blr.redhat.com
Stack: corosync
Current DC: dhcp37-44.lab.eng.blr.redhat.com (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
4 nodes and 16 resources configured

Online: [ dhcp37-111.lab.eng.blr.redhat.com dhcp37-173.lab.eng.blr.redhat.com dhcp37-220.lab.eng.blr.redhat.com dhcp37-44.lab.eng.blr.redhat.com ]

Full list of resources:

 Clone Set: nfs_setup-clone [nfs_setup]
     Started: [ dhcp37-111.lab.eng.blr.redhat.com dhcp37-173.lab.eng.blr.redhat.com dhcp37-220.lab.eng.blr.redhat.com dhcp37-44.lab.eng.blr.redhat.com ]
 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ dhcp37-111.lab.eng.blr.redhat.com dhcp37-173.lab.eng.blr.redhat.com dhcp37-220.lab.eng.blr.redhat.com dhcp37-44.lab.eng.blr.redhat.com ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ dhcp37-111.lab.eng.blr.redhat.com dhcp37-173.lab.eng.blr.redhat.com dhcp37-220.lab.eng.blr.redhat.com dhcp37-44.lab.eng.blr.redhat.com ]
 dhcp37-44.lab.eng.blr.redhat.com-cluster_ip-1  (ocf::heartbeat:IPaddr):  Started dhcp37-44.lab.eng.blr.redhat.com
 dhcp37-220.lab.eng.blr.redhat.com-cluster_ip-1 (ocf::heartbeat:IPaddr):  Started dhcp37-220.lab.eng.blr.redhat.com
 dhcp37-111.lab.eng.blr.redhat.com-cluster_ip-1 (ocf::heartbeat:IPaddr):  Started dhcp37-111.lab.eng.blr.redhat.com
 dhcp37-173.lab.eng.blr.redhat.com-cluster_ip-1 (ocf::heartbeat:IPaddr):  Started dhcp37-173.lab.eng.blr.redhat.com

PCSD Status:
  dhcp37-44.lab.eng.blr.redhat.com: Online
  dhcp37-220.lab.eng.blr.redhat.com: Online
  dhcp37-111.lab.eng.blr.redhat.com: Online
  dhcp37-173.lab.eng.blr.redhat.com: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/disabled
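For reference, a quick way to re-check on each node that IPv6 really is off after the reboot is sketched below. It is only a sanity check under the assumption that IPv6 was disabled via the ipv6.disable=1 kernel parameter as in step 1, not part of the documented procedure:

# Confirm the kernel was booted with the parameter appended in step 1
grep -o 'ipv6.disable=1' /proc/cmdline

# When IPv6 is disabled at boot, /proc/sys/net/ipv6 is not created at all
[ -d /proc/sys/net/ipv6 ] && echo "ipv6 still enabled" || echo "ipv6 disabled"

# No interface should report an inet6 address
ip -6 addr show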
Thanks a lot, Shashank. Also, could you please run the regression suites that test ganesha functionality and failover/failback?
Ran the basic failover/failback and I/O cases below, and they passed with IPv6 disabled on the system:

test_1_ha_failover_v4 (__main__.gluster_tests) ... ok
test_2_ha_failover_v3 (__main__.gluster_tests) ... ok
test_3_ha_setup_io_v4 (__main__.gluster_tests) ... ok
test_4_ha_failover_failback_subdir_v4 (__main__.gluster_tests) ... ok
test_5_ha_failover_fops_v3 (__main__.gluster_tests) ... ok
test_6_test_ha_failover_multiple_clients_v4 (__main__.gluster_tests) ... ok
test_7_ha_failover_failback_twiceB_v4 (__main__.gluster_tests) ... ok
test_8_ha_failover_failback_rebootv4 (__main__.gluster_tests) ... ok
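Outside of the automated suite above, a single failover/failback round can also be exercised by hand, which is roughly what those cases cover. The sketch below reuses one of the lab node names from the earlier pcs status output and assumes the pcs syntax shipped with RHEL 7.2; it is an illustration, not the actual test code:

# Take one ganesha node out of service; pacemaker should move its
# cluster_ip-1 VIP resource to one of the surviving nodes (failover)
pcs cluster standby dhcp37-44.lab.eng.blr.redhat.com
pcs status | grep cluster_ip

# Bring the node back; after the NFS grace period completes, the VIP is
# expected to return to it (failback)
pcs cluster unstandby dhcp37-44.lab.eng.blr.redhat.com
pcs status | grep cluster_ip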
Hi Soumya,

The following bullet point from 7.2.4.3.1. Prerequisites to run NFS-Ganesha has now been removed:

* IPv6 must be enabled on the host interface which is used by the NFS-Ganesha daemon. To enable IPv6 support, perform the following steps:
  * Comment or remove the line "options ipv6 disable=1" in the /etc/modprobe.d/ipv6.conf file.
  * Reboot the system.

Let me know if there are any more changes that need to be made in the Admin guide.

http://jenkinscat.gsslab.pnq.redhat.com:8080/view/Gluster/job/doc-Red_Hat_Gluster_Storage-3.1.3-Administration_Guide%20%28html-single%29/lastBuild/artifact/tmp/en-US/html-single/index.html#sect-NFS_Ganesha

Thanks!
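(For anyone cross-checking an existing host: the removed bullet referred to the modprobe-based way of disabling IPv6. A quick, hypothetical check for that old setting could look like the following; the file path is the one quoted in the removed text and may simply not exist on a given system.)

# Look for the modprobe option the removed bullet asked readers to comment out or delete;
# no output (or a missing file) means that method is not in use on this host
grep 'options ipv6 disable=1' /etc/modprobe.d/ipv6.conf 2>/dev/null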
The changes look good to me. Thanks!
Thanks, Soumya. Moving the bug to ON_QA.
Since the existing statement about enabling IPv6 under the prerequisites section of nfs-ganesha has been removed, and since IPv6 is no longer a requirement to set up nfs-ganesha, marking this bug as Verified.