Description of problem: Once `gluster nfs-ganesha disable` is executed, it is supposed to bring down nfs-ganesha, dismantle the pcs cluster, and clear the corresponding rpcbind entries. However, the rpcbind entries are not cleared on one of the nodes.

Version-Release number of selected component (if applicable):
glusterfs-3.7.0-3.el6rhs.x86_64
nfs-ganesha-2.2.0-0.el6.x86_64

How reproducible: most of the time

Steps to Reproduce:
1. Create a volume of type 6x2.
2. Bring up nfs-ganesha after completing the prerequisites.
3. Check the cluster status.
4. Dismantle the nfs-ganesha cluster.

Actual results:
As can be seen below, the rpcbind entries are not cleared on one of the nodes (rhs-client21).

rhs-client21.lab.eng.blr.redhat.com
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  49863  status
    100024    1   tcp  33582  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  58276  mountd
    100005    1   tcp  33539  mountd
    100005    3   udp  58276  mountd
    100005    3   tcp  33539  mountd
    100021    4   udp  40756  nlockmgr
    100021    4   tcp  34556  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad
-----------
rhs-client23.lab.eng.blr.redhat.com
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  52147  status
    100024    1   tcp  41318  status
-----------
rhs-client36.lab.eng.blr.redhat.com
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  42625  status
    100024    1   tcp  42220  status
-----------
rhs-hpc-srv3.lab.eng.blr.redhat.com
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  46021  status
    100024    1   tcp  34998  status

Expected results:
The rpcbind entries should be cleared on all nodes of the nfs-ganesha cluster. This is a serious problem, because the stale entries prevent bringing Gluster NFS back up.

Additional info:
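The leftover registrations above can be detected mechanically. Below is a minimal sketch (the helper name `check_rpcbind_clean` is hypothetical, not part of the gluster tooling); it assumes that on a cleanly dismantled node only the portmapper and status services remain registered:

```shell
# Hypothetical helper: read "rpcinfo -p" output on stdin and flag any
# service other than portmapper/status, which are the only registrations
# expected to survive a clean "gluster nfs-ganesha disable".
check_rpcbind_clean() {
    # Skip the header line; collect distinct service names besides
    # portmapper and status (column 5 of rpcinfo -p output).
    leftover=$(awk 'NR > 1 && $5 !~ /^(portmapper|status)$/ { print $5 }' | sort -u)
    if [ -n "$leftover" ]; then
        printf 'stale services: %s\n' "$leftover"
        return 1
    fi
    echo clean
}
```

Run as `rpcinfo -p | check_rpcbind_clean` on each node after the disable; a nonzero exit status corresponds to the leak seen on rhs-client21 above.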
team-nfs
This bug is a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1114574
The doc text has been edited. Please sign off so it can be included in Known Issues.
We have corrected the doc text. Kindly update it accordingly.
Included the edited text.
*** Bug 1114574 has been marked as a duplicate of this bug. ***
Verified this bug with the latest build, 3.7.9-1, and it is working as expected. Once the ganesha cluster is up and running, rpcinfo shows all the required services and their respective entries, as below, on all the nodes of the cluster.

[root@dhcp46-247 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  37616  status
    100024    1   tcp  56243  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad

Disable ganesha on the cluster:

[root@dhcp46-247 ~]# gluster nfs-ganesha disable
Disabling NFS-Ganesha will tear down entire ganesha cluster across the trusted pool. Do you still want to continue? (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success

Check the rpcinfo on all the cluster nodes:

[root@dhcp46-247 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  37616  status
    100024    1   tcp  56243  status

[root@dhcp46-26 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  40217  status
    100024    1   tcp  49807  status

[root@dhcp47-139 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  37206  status
    100024    1   tcp  50879  status

[root@dhcp46-202 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  39103  status
    100024    1   tcp  58245  status

Based on the above observation, marking this bug as Verified.
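The per-node verification above can also be reduced to comparing the set of registered RPC program numbers. A minimal sketch (the helper name `programs_registered` is hypothetical; it assumes plain `rpcinfo -p` output):

```shell
# Hypothetical filter: list the distinct RPC program numbers from
# "rpcinfo -p" output (column 1, header skipped). After a successful
# disable, a clean node should report exactly 100000 (portmapper)
# and 100024 (status).
programs_registered() {
    awk 'NR > 1 { print $1 }' | sort -un | paste -sd' ' -
}
```

For example, `test "$(rpcinfo -p | programs_registered)" = "100000 100024"` succeeds on each of the four nodes shown above after the disable.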
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2016:1288