Bug 1227169

Summary: nfs-ganesha: rpcinfo is not cleared of nfs entries even after disable
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: Saurabh <saujain>
Component: nfs-ganesha Assignee: Jiffin <jthottan>
Status: CLOSED ERRATA QA Contact: Shashank Raj <sraj>
Severity: urgent Docs Contact:
Priority: unspecified    
Version: unspecified CC: asriram, jthottan, mlawrenc, mzywusko, nlevinki, rcyriac, rhinduja, rhs-bugs, sashinde, skoduri
Target Milestone: --- Keywords: ZStream
Target Release: RHGS 3.1.3   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: nfs-ganesha-2.3.1-1 Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2016-06-23 05:35:19 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1233533    
Bug Blocks: 1087818, 1216951, 1299184    

Description Saurabh 2015-06-02 04:20:41 UTC
Description of problem:

Once gluster nfs-ganesha disable is executed, it is supposed to bring down nfs-ganesha, dismantle the pcs cluster, and clear the rpcbind entries accordingly.
But that is not the case on one of the nodes.
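
A quick way to confirm all three effects on each node after the disable (a minimal sketch; the exact service-management command depends on the platform, and the grep pattern is only illustrative):

  service nfs-ganesha status                              # ganesha process should be stopped (systemctl on RHEL 7)
  pcs status                                              # pcs cluster should be torn down
  rpcinfo -p | grep -E 'nfs|mountd|nlockmgr|rquotad'      # should print nothing once entries are cleared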


Version-Release number of selected component (if applicable):
glusterfs-3.7.0-3.el6rhs.x86_64
nfs-ganesha-2.2.0-0.el6.x86_64

How reproducible:
most of the time

Steps to Reproduce:
1. Create a 6x2 (distributed-replicate) volume.
2. Bring up nfs-ganesha after completing the prerequisites.
3. Check the cluster status.
4. Dismantle the nfs-ganesha cluster (a sketch of the command sequence is given below).
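
A rough sketch of the command sequence for the steps above, assuming a volume named testvol; the volume name, brick layout, and host names are illustrative only:

  gluster volume create testvol replica 2 <12 bricks spread across the nodes>
  gluster volume start testvol
  gluster nfs-ganesha enable        # step 2: bring up nfs-ganesha
  pcs status                        # step 3: check the HA cluster status
  rpcinfo -p                        # nfs/mountd/nlockmgr/rquotad entries should be registered
  gluster nfs-ganesha disable       # step 4: dismantle the nfs-ganesha cluster
  rpcinfo -p                        # the nfs-related entries should now be gone on every node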

Actual results:

As can be seen below, the rpcbind entries are not cleared on one of the nodes:
rhs-client21.lab.eng.blr.redhat.com
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  49863  status
    100024    1   tcp  33582  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  58276  mountd
    100005    1   tcp  33539  mountd
    100005    3   udp  58276  mountd
    100005    3   tcp  33539  mountd
    100021    4   udp  40756  nlockmgr
    100021    4   tcp  34556  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad
-----------
rhs-client23.lab.eng.blr.redhat.com
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  52147  status
    100024    1   tcp  41318  status
-----------
rhs-client36.lab.eng.blr.redhat.com
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  42625  status
    100024    1   tcp  42220  status
-----------
rhs-hpc-srv3.lab.eng.blr.redhat.com
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  46021  status
    100024    1   tcp  34998  status


Expected results:
rpcbind entries should be cleared on all nodes of the nfs-ganesha cluster.
Stale entries are a big problem when bringing Gluster NFS (glusterfs-nfs) back up, since it cannot register with rpcbind while they remain.
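
If stale registrations do linger after a failed teardown, they can be removed from rpcbind by hand so that Gluster NFS can register again. This is only a workaround sketch, not the fix delivered for this bug; the program/version pairs below are taken from the rpcinfo output above:

  rpcinfo -d 100003 3     # nfs v3
  rpcinfo -d 100003 4     # nfs v4
  rpcinfo -d 100005 1     # mountd v1
  rpcinfo -d 100005 3     # mountd v3
  rpcinfo -d 100021 4     # nlockmgr v4
  rpcinfo -d 100011 1     # rquotad v1
  rpcinfo -d 100011 2     # rquotad v2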

Additional info:

Comment 2 Vivek Agarwal 2015-06-04 07:46:18 UTC
team-nfs

Comment 3 Jiffin 2015-07-13 07:12:20 UTC
This bug is duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1114574

Comment 4 monti lawrence 2015-07-22 17:20:15 UTC
The doc text has been edited. Please sign off so it can be included in Known Issues.

Comment 5 Soumya Koduri 2015-07-27 09:11:10 UTC
We have corrected the doc text. Kindly update it accordingly.

Comment 6 Anjana Suparna Sriram 2015-07-28 02:30:33 UTC
Included the edited text.

Comment 8 Jiffin 2016-01-27 07:15:40 UTC
*** Bug 1114574 has been marked as a duplicate of this bug. ***

Comment 12 Shashank Raj 2016-03-30 11:19:12 UTC
Verified this bug with the latest build (3.7.9-1), and it is working as expected.

Once the ganesha cluster is up and running, rpcinfo shows all the required services and their respective entries, as below, on all nodes of the cluster.

[root@dhcp46-247 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  37616  status
    100024    1   tcp  56243  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad

Disable ganesha on the cluster:

[root@dhcp46-247 ~]# gluster nfs-ganesha disable
Disabling NFS-Ganesha will tear down entire ganesha cluster across the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success


Check the rpcinfo on all the cluster nodes:

[root@dhcp46-247 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  37616  status
    100024    1   tcp  56243  status

[root@dhcp46-26 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  40217  status
    100024    1   tcp  49807  status

[root@dhcp47-139 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  37206  status
    100024    1   tcp  50879  status

[root@dhcp46-202 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  39103  status
    100024    1   tcp  58245  status


Based on the above observation, marking this bug as Verified.

Comment 15 errata-xmlrpc 2016-06-23 05:35:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1288