Description of problem:
At times, running "gluster nfs-ganesha disable" fails with the error message below:

[root@gqas009 ganesha]# gluster nfs-ganesha disable
Disabling NFS-Ganesha will tear down entire ganesha cluster across the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha: failed: nfs-ganesha is already (null)d.
[root@gqas009 ganesha]#

In the error message above, the string contains "(null)d" instead of "disabled". Jiffin and I found that this issue occurs mostly when the command is run without the nfs-ganesha global option ever having been enabled in the storage pool. The cause is a NULL string dereference in glusterd_op_stage_set_ganesha() when the priv->opts dict does not contain the global key.

Version-Release number of selected component (if applicable):
3.2

How reproducible:
Fairly

Steps to Reproduce:
1. On a fresh setup, without enabling NFS-Ganesha, run the disable command.

Actual results:
The error message contains an invalid string.

Expected results:
The error message should be correct.

Additional info:
Patch posted upstream for review https://review.gluster.org/#/c/16791/
Verified with the latest build. Running nfs-ganesha disable for the first time, without ever enabling it, no longer gives the "nulld" error message it gave before:

gluster nfs-ganesha disable
Disabling NFS-Ganesha will tear down entire ganesha cluster across the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha: failed: nfs-ganesha is already disabled.

It now gives the proper error message. Marking the BZ verified.

nfs-ganesha-2.4.4-4.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-24.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.4-4.el7rhgs.x86_64
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774