Description of problem:
Failed to unexport a volume while the ganesha cluster is in failover state. Error message in console:
-------------------------
volume set: failed: Staging failed on dhcp37-62.lab.eng.blr.redhat.com. Error: Dynamic export addition/deletion failed. Please see log file for details

Version-Release number of selected component (if applicable):
nfs-ganesha-gluster-2.4.1-7.el7rhgs.x86_64
nfs-ganesha-2.4.1-7.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-13.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a 4-node ganesha cluster.
2. Create a volume and export it.
3. Bring down the ganesha service on one of the nodes (service nfs-ganesha stop).
4. Ensure the ganesha cluster is in failover state.
5. Unexport the volume from another node.

Actual results:
Unexporting the volume fails while the ganesha cluster is in failover state.

Expected results:
The unexport should succeed.

Additional info:
No log messages related to this failure are seen in ganesha.log.

[root@dhcp37-145 ~]# showmount -e
Export list for dhcp37-145.lab.eng.blr.redhat.com:
/nfsvol1 (everyone)
[root@dhcp37-145 ~]# gluster volume set nfsvol1 ganesha.enable off
volume set: failed: Staging failed on dhcp37-62.lab.eng.blr.redhat.com. Error: Dynamic export addition/deletion failed. Please see log file for details
[root@dhcp37-145 ~]# showmount -e
Export list for dhcp37-145.lab.eng.blr.redhat.com:
[root@dhcp37-145 ~]#

A similar failure is seen in the following scenario as well:
1. Shut down one of the nodes; the ganesha cluster will be in failover state.
2. Export the volume.
3. Bring the node back up.
4. Unexport the volume.
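The reproduction steps above can be condensed into a command sequence. This is a sketch only, not an executable test case: it assumes a running 4-node ganesha cluster, and the hostnames (dhcp37-62, dhcp37-145) and volume name (nfsvol1) are taken from the transcript in this report.

```shell
# On any cluster node: confirm the volume is currently exported.
showmount -e localhost

# On one node (e.g. dhcp37-62): stop the ganesha service to put the
# cluster into failover state.
service nfs-ganesha stop

# On a surviving node (e.g. dhcp37-145): attempt the unexport.
# With the fix absent, this fails with:
#   "Staging failed on dhcp37-62... Dynamic export addition/deletion failed"
gluster volume set nfsvol1 ganesha.enable off

# Verify whether the export was removed despite the reported failure.
showmount -e localhost
```

Note that in the transcript above the second showmount shows an empty export list even though the volume set command reported failure, i.e. the unexport partially took effect on the surviving node while staging failed on the node whose ganesha service was down.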
Patch posted upstream for review: https://review.gluster.org/#/c/17081/
Verified this bug on:
# rpm -qa | grep ganesha
nfs-ganesha-gluster-2.4.4-10.el7rhgs.x86_64
nfs-ganesha-debuginfo-2.4.4-10.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-31.el7rhgs.x86_64
nfs-ganesha-2.4.4-10.el7rhgs.x86_64

The volume is unexported successfully when the ganesha cluster is in failover state. Hence moving this bug to verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:2774