Description of problem:
Currently, running "gluster nfs-ganesha disable" immediately begins tearing down the nfs-ganesha cluster. In some cases the customer may not actually want to disable the cluster, so the CLI should first prompt for a "yes or no" confirmation and only proceed with the disable if the user confirms.

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-5.el7rhgs.x86_64
nfs-ganesha-2.2.0-10.el7rhgs.x86_64

How reproducible:
Always

Actual results:
# gluster nfs-ganesha disable
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success

Expected results:
The command should prompt for yes/no confirmation and proceed accordingly.

Additional info:
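For illustration, a minimal sketch of the kind of confirmation behaviour being requested (a hypothetical standalone script, not the actual gluster CLI code; the exact wording and flow are assumptions):

#!/bin/sh
# Ask before tearing down the ganesha cluster; abort on anything but "y"/"yes".
printf "Disabling NFS-Ganesha will tear down entire ganesha cluster across the trusted pool. Do you still want to continue? (y/n) "
read answer
case "$answer" in
    y|Y|yes|YES)
        echo "This will take a few minutes to complete. Please wait .."
        # ... proceed with the actual disable steps here ...
        ;;
    *)
        echo "Aborting nfs-ganesha disable."
        exit 1
        ;;
esac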
This bug was accidentally moved from POST to MODIFIED via an error in automation; please see mmccune with any questions.
Verified this bug with the latest build, 3.7.9-1, and it is working as expected.

[root@dhcp46-247 exports]# rpm -qa|grep glusterfs
glusterfs-3.7.9-1.el7rhgs.x86_64
glusterfs-api-3.7.9-1.el7rhgs.x86_64
glusterfs-cli-3.7.9-1.el7rhgs.x86_64
glusterfs-fuse-3.7.9-1.el7rhgs.x86_64
glusterfs-geo-replication-3.7.9-1.el7rhgs.x86_64
glusterfs-libs-3.7.9-1.el7rhgs.x86_64
glusterfs-ganesha-3.7.9-1.el7rhgs.x86_64
glusterfs-rdma-3.7.9-1.el7rhgs.x86_64
glusterfs-client-xlators-3.7.9-1.el7rhgs.x86_64
glusterfs-server-3.7.9-1.el7rhgs.x86_64

[root@dhcp46-247 exports]# rpm -qa|grep ganesha
nfs-ganesha-gluster-2.3.1-1.el7rhgs.x86_64
glusterfs-ganesha-3.7.9-1.el7rhgs.x86_64
nfs-ganesha-2.3.1-1.el7rhgs.x86_64

Whenever "gluster nfs-ganesha disable" is executed, it now prompts the user with a (y/n) option:

[root@dhcp46-247 exports]# gluster nfs-ganesha disable
Disabling NFS-Ganesha will tear down entire ganesha cluster across the trusted pool. Do you still want to continue? (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success

Based on the above observation, marking this bug as Verified.
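For scripted or automated use, assuming this confirmation follows the same pattern as gluster's other interactive prompts (e.g. volume stop/delete), it should be possible to skip the question by running the CLI in script mode or by piping the answer on stdin, for example:

# gluster --mode=script nfs-ganesha disable
# echo y | gluster nfs-ganesha disable

These invocations reflect the usual gluster CLI behaviour and were not separately verified as part of this bug.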
Doc text looks good to me.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1240