Bug 1227169 - nfs-ganesha: rpcinfo is not cleared of nfs entries even after disable
Summary: nfs-ganesha: rpcinfo is not cleared of nfs entries even after disable
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: nfs-ganesha
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Jiffin
QA Contact: Shashank Raj
URL:
Whiteboard:
Duplicates: 1114574
Depends On: 1233533
Blocks: 1087818 1216951 1299184
 
Reported: 2015-06-02 04:20 UTC by Saurabh
Modified: 2016-11-08 03:53 UTC
CC List: 10 users

Fixed In Version: nfs-ganesha-2.3.1-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-23 05:35:19 UTC
Embargoed:


Attachments


Links
System ID | Private | Priority | Status | Summary | Last Updated
Red Hat Product Errata RHEA-2016:1288 | 0 | normal | SHIPPED_LIVE | nfs-ganesha update for Red Hat Gluster Storage 3.1 update 3 | 2016-06-23 09:12:51 UTC

Description Saurabh 2015-06-02 04:20:41 UTC
Description of problem:

Once "gluster nfs-ganesha disable" is executed, it is supposed to bring down nfs-ganesha and dismantle the pcs cluster, and the corresponding rpcbind entries should be cleared.
However, that is not the case on one of the nodes.
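
A quick way to spot such leftovers (a hedged sketch; the grep pattern simply filters for the nfs-ganesha services that appear in the rpcinfo output further down in this report):

    # list only the registrations that should disappear after disable
    rpcinfo -p | grep -E 'nfs|mountd|nlockmgr|rquotad'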


Version-Release number of selected component (if applicable):
glusterfs-3.7.0-3.el6rhs.x86_64
nfs-ganesha-2.2.0-0.el6.x86_64

How reproducible:
most of the time

Steps to Reproduce:
1. create a volume of type 6x2
2. bring up nfs-ganesha after completing the pre-requisites
3. check the cluster status
4. dismantle the nfs-ganesha cluster (example commands below)
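
A rough command sketch of the above steps (hostnames, volume name, and brick paths are placeholders, not taken from this report):

    # 1. create a 6x2 distribute-replicate volume (12 bricks, replica 2)
    gluster volume create testvol replica 2 \
        server{1..6}:/bricks/brick1/testvol server{1..6}:/bricks/brick2/testvol
    gluster volume start testvol

    # 2. bring up nfs-ganesha once the HA pre-requisites (shared storage,
    #    ganesha-ha.conf) are in place
    gluster nfs-ganesha enable

    # 3. check the cluster status
    pcs status

    # 4. dismantle the nfs-ganesha cluster
    gluster nfs-ganesha disable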

Actual results:

As can be seen below, the rpcbind entries are not cleared on one of the nodes:
rhs-client21.lab.eng.blr.redhat.com
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  49863  status
    100024    1   tcp  33582  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  58276  mountd
    100005    1   tcp  33539  mountd
    100005    3   udp  58276  mountd
    100005    3   tcp  33539  mountd
    100021    4   udp  40756  nlockmgr
    100021    4   tcp  34556  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad
-----------
rhs-client23.lab.eng.blr.redhat.com
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  52147  status
    100024    1   tcp  41318  status
-----------
rhs-client36.lab.eng.blr.redhat.com
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  42625  status
    100024    1   tcp  42220  status
-----------
rhs-hpc-srv3.lab.eng.blr.redhat.com
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  46021  status
    100024    1   tcp  34998  status


Expected results:
rpcbind entries should be cleared on all nodes of the nfs-ganesha cluster.
Leaving them behind is a big problem when trying to bring glusterfs-nfs back up.
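
Until that is fixed, the stale registrations have to be removed by hand before glusterfs-nfs can register again; a hedged sketch of such a cleanup (run as root on the affected node; the program/version numbers come from the rpcinfo output above):

    # unregister the leftover nfs-ganesha services from rpcbind
    rpcinfo -d 100003 3    # nfs v3
    rpcinfo -d 100003 4    # nfs v4
    rpcinfo -d 100005 1    # mountd v1
    rpcinfo -d 100005 3    # mountd v3
    rpcinfo -d 100021 4    # nlockmgr v4
    rpcinfo -d 100011 1    # rquotad v1
    rpcinfo -d 100011 2    # rquotad v2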

Additional info:

Comment 2 Vivek Agarwal 2015-06-04 07:46:18 UTC
team-nfs

Comment 3 Jiffin 2015-07-13 07:12:20 UTC
This bug is duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1114574

Comment 4 monti lawrence 2015-07-22 17:20:15 UTC
Doc text is edited. Please sign off to be included in Known Issues.

Comment 5 Soumya Koduri 2015-07-27 09:11:10 UTC
We have corrected the doc text. Kindly update the same.

Comment 6 Anjana Suparna Sriram 2015-07-28 02:30:33 UTC
Included the edited text.

Comment 8 Jiffin 2016-01-27 07:15:40 UTC
*** Bug 1114574 has been marked as a duplicate of this bug. ***

Comment 12 Shashank Raj 2016-03-30 11:19:12 UTC
Verified this bug with the latest build 3.7.9-1, and it is working as expected.

Once the ganesha cluster is up and running, rpcinfo shows all the required services and their respective entries as below on all the nodes of the cluster.

[root@dhcp46-247 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  37616  status
    100024    1   tcp  56243  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100021    4   udp  32000  nlockmgr
    100021    4   tcp  32000  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad

Disable ganesha on the cluster:

[root@dhcp46-247 ~]# gluster nfs-ganesha disable
Disabling NFS-Ganesha will tear down entire ganesha cluster across the trusted pool. Do you still want to continue?
 (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success
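
As a side check (not part of the original verification output), the pcs teardown can be confirmed with:

    pcs status    # after a successful disable this is expected to report that the cluster is not running on this node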


Check the rpcinfo on all the cluster nodes:

[root@dhcp46-247 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  37616  status
    100024    1   tcp  56243  status

[root@dhcp46-26 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  40217  status
    100024    1   tcp  49807  status

[root@dhcp47-139 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  37206  status
    100024    1   tcp  50879  status

[root@dhcp46-202 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  39103  status
    100024    1   tcp  58245  status
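
For reference, a small loop like the following can run the same check from one node (the node names here are placeholders, not the hosts above):

    # count leftover nfs/mountd/nlockmgr/rquotad registrations on each cluster node
    for h in node1 node2 node3 node4; do
        echo -n "$h: "
        ssh "$h" "rpcinfo -p | grep -cE 'nfs|mountd|nlockmgr|rquotad'"
    done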


Based on the above observation, marking this bug as Verified.

Comment 15 errata-xmlrpc 2016-06-23 05:35:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1288

