Bug 1233533 - After nfs-ganesha disable, gluster NFS doesn't start
Summary: After nfs-ganesha disable, gluster NFS doesn't start
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Deadline: 2015-08-28
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: doc-Administration_Guide
Version: rhgs-3.1
Hardware: x86_64
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Bhavana
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks: 1216951 1227169 1256227 1351546
 
Reported: 2015-06-19 06:29 UTC by Apeksha
Modified: 2017-03-24 10:19 UTC
15 users

Fixed In Version: nfs-ganesha-2.2.0-10
Doc Type: Bug Fix
Doc Text:
With this fix, when the nfs-ganesha option is turned off, gluster NFS can be started by turning off the nfs.disable option or by creating a new volume.
Clone Of:
Environment:
Last Closed: 2017-03-24 10:19:03 UTC
Embargoed:



Description Apeksha 2015-06-19 06:29:43 UTC
Description of problem:
After nfs-ganesha disable, gluster NFS doesn't start

Version-Release number of selected component (if applicable):
nfs-ganesha-2.2.0-3.el6rhs.x86_64
glusterfs-3.7.1-3.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Set up a nfs-ganesha HA cluster
2. Create a volume and export it using ganesha.enable on
3. Unexport the volume using ganesha.enable off
4. Tear down the cluster: gluster nfs-ganesha disable
5. Set nfs.disable to off for the volume
6. The volume is not exported
7. A newly created volume also does not get exported (see the condensed CLI sketch below; the full transcript is under Additional info)
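A condensed form of the above steps as gluster CLI commands, assuming the HA prerequisites are already in place (this is only a sketch mirroring the full transcript under Additional info; testvol is the volume used there):

  # gluster nfs-ganesha enable
  # gluster volume set testvol ganesha.enable on
  # gluster volume set testvol ganesha.enable off
  # gluster nfs-ganesha disable
  # gluster volume set testvol nfs.disable off
  # showmount -e localhost        <-- gluster NFS does not come up; no volumes exported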

Actual results:
Volumes are not exported via gluster NFS

Expected results:
Volumes must be exported via gluster NFS


Additional info:
[root@nfs1 ~]# gluster v set testvol ganesha.enable on
volume set: success

[root@nfs1 ~]# showmount -e localhost
Export list for localhost:
/testvol (everyone)

[root@nfs1 ~]# gluster v set testvol ganesha.enable off
volume set: success

[root@nfs1 ~]# showmount -e localhost
Export list for localhost:

[root@nfs1 ~]# gluster nfs-ganesha disable

[root@nfs1 ~]# gluster v set testvol nfs.disable off
volume set: success

[root@nfs1 ~]# gluster v info
 
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: 25885433-5044-4baa-9012-cee002411f97
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.37.153:/rhs/brick1/gluster_shared_storage_brick0
Brick2: 10.70.37.124:/rhs/brick1/gluster_shared_storage_brick1
Brick3: 10.70.37.103:/rhs/brick1/gluster_shared_storage_brick2
Options Reconfigured:
ganesha.enable: off
nfs.disable: on
performance.readdir-ahead: on
nfs-ganesha: disable
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 5f598c50-9a06-44ad-8c60-4a8877f37843
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.153:/rhs/brick1/brick1/testvol_brick0
Brick2: 10.70.37.124:/rhs/brick1/brick1/testvol_brick1
Brick3: 10.70.37.103:/rhs/brick1/brick1/testvol_brick2
Brick4: 10.70.37.177:/rhs/brick1/brick0/testvol_brick3
Brick5: 10.70.37.153:/rhs/brick1/brick2/testvol_brick4
Brick6: 10.70.37.124:/rhs/brick1/brick2/testvol_brick5
Brick7: 10.70.37.103:/rhs/brick1/brick2/testvol_brick6
Brick8: 10.70.37.177:/rhs/brick1/brick1/testvol_brick7
Brick9: 10.70.37.153:/rhs/brick1/brick3/testvol_brick8
Brick10: 10.70.37.124:/rhs/brick1/brick3/testvol_brick9
Brick11: 10.70.37.103:/rhs/brick1/brick3/testvol_brick10
Brick12: 10.70.37.177:/rhs/brick1/brick2/testvol_brick11
Options Reconfigured:
ganesha.enable: off
features.cache-invalidation: off
nfs.disable: off
performance.readdir-ahead: on
nfs-ganesha: disable
[root@nfs1 ~]# showmount -e localhost
rpc mount export: RPC: Unable to receive; errno = Connection refused

[root@nfs1 rhs]# gluster v create dummyvol 10.70.37.153:/tmp/brick99 force
volume create: dummyvol: success: please start the volume to access data

[root@nfs1 rhs]# gluster v start dummyvol
volume start: dummyvol: success

[root@nfs1 rhs]# showmount -e localhost
rpc mount export: RPC: Unable to receive; errno = Connection refused

[root@nfs1 rhs]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  40487  status
    100024    1   tcp  39535  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  56709  mountd
    100005    1   tcp  51154  mountd
    100005    3   udp  56709  mountd
    100005    3   tcp  51154  mountd
    100021    4   udp  58606  nlockmgr
    100021    4   tcp  52195  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad



Workaround: restart the rpcbind service and force-start the volume:
[root@nfs1 rhs]# service rpcbind restart
Stopping rpcbind:                                          [  OK  ]
Starting rpcbind:                                          [  OK  ]
[root@nfs1 rhs]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper

[root@nfs1 rhs]# gluster v start dummyvol  force
volume start: dummyvol: success

[root@nfs1 rhs]# showmount -e localhost
Export list for localhost:
/dummyvol *
/testvol  *

Comment 3 monti lawrence 2015-07-22 18:50:53 UTC
Doc text is edited. Please sign off to be included in Known Issues.

Comment 4 Soumya Koduri 2015-07-27 09:16:27 UTC
Updated the doc text. Kindly check the same.

Comment 5 Anjana Suparna Sriram 2015-07-27 18:10:49 UTC
Included the edited text.

Comment 8 Jiffin 2015-08-10 05:51:56 UTC
The patch got merged in upstream nfs-ganesha https://review.gerrithub.io/#/c/242232/

Comment 14 Jiffin 2015-09-09 08:35:24 UTC
The patch got merged in upstream https://review.gerrithub.io/#/c/242232/

Comment 16 Saurabh 2015-10-26 08:42:33 UTC
The fix for this bz cleans up the rpcbind entries, but it does not actually trigger glusterfs-nfs to come up, whereas the whole intention of this bz was to bring up glusterfs-nfs once nfs-ganesha is disabled.

Please provide insights into why glusterfs-nfs is not brought up programmatically once nfs-ganesha is disabled.

Comment 17 Jiffin 2015-10-26 13:06:15 UTC
As far as I remember from the discussion with Meghana, the issue given to me was to clean up the ports used by nfs-ganesha properly, so that gluster NFS can come up without any obstacles (rpcbind-related issues) in situations like creating a new volume, restarting a volume, etc. Automatic triggering of gluster NFS was not mentioned at that point of time.

And one more thing to add: as far as my understanding of the code goes, the "nfs.disable" option is set to "on" when the user enables "nfs-ganesha", but it is not turned back to "off" when the user disables "nfs-ganesha".

Should we handle these two scenarios when nfs-ganesha disable is performed?

Comment 18 Saurabh 2015-10-29 04:20:55 UTC
(In reply to Jiffin from comment #17)
> As far as I remember from the discussion with Meghana, the issue given to me
> was to clean up the ports used by nfs-ganesha properly, so that gluster NFS
> can come up without any obstacles (rpcbind-related issues) in situations
> like creating a new volume, restarting a volume, etc. Automatic triggering
> of gluster NFS was not mentioned at that point of time.
> 
> And one more thing to add: as far as my understanding of the code goes, the
> "nfs.disable" option is set to "on" when the user enables "nfs-ganesha",
> but it is not turned back to "off" when the user disables "nfs-ganesha".
> 
> Should we handle these two scenarios when nfs-ganesha disable is performed?

Yes, we should.

Comment 19 Jiffin 2015-10-29 06:13:03 UTC
(In reply to Saurabh from comment #18)
> (In reply to Jiffin from comment #17)
> > As far as I remember from the discussion with Meghana, the issue given to
> > me was to clean up the ports used by nfs-ganesha properly, so that gluster
> > NFS can come up without any obstacles (rpcbind-related issues) in
> > situations like creating a new volume, restarting a volume, etc. Automatic
> > triggering of gluster NFS was not mentioned at that point of time.
> > 
> > And one more thing to add: as far as my understanding of the code goes,
> > the "nfs.disable" option is set to "on" when the user enables
> > "nfs-ganesha", but it is not turned back to "off" when the user disables
> > "nfs-ganesha".
> > 
> > Should we handle these two scenarios when nfs-ganesha disable is
> > performed?
> 
> Yes, we should.

As per the current design, when "gluster nfs-ganesha enable" is performed, we turn on "nfs.disable" for all the volumes. But in the reverse operation, "gluster nfs-ganesha disable", we do not change the "nfs.disable" option back, so there is no point in bringing up gluster NFS.

This behavior should be changed if we plan to bring back gluster NFS, i.e. we should not leave the "nfs.disable" option turned on. Otherwise gluster NFS will be unaware of the previously exported volumes when it comes back online.
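For reference, the resulting per-volume state can be checked and flipped back manually with the gluster CLI. This is only a sketch using the volume from this report; per the design described above, "nfs.disable" stays "on" after nfs-ganesha disable until it is reset by hand:

  # gluster volume info testvol | grep nfs.disable    <-- still reports "nfs.disable: on"
  # gluster volume set testvol nfs.disable off        <-- manual, per-volume re-enable of gluster NFS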

Comment 20 Niels de Vos 2015-10-29 08:29:05 UTC
(In reply to Jiffin from comment #17)
> And one more thing to add, right now as far as my understanding of code the
> option "nfs.disable" set to "on" for when user enable "nfs-ganesha" but it
> is not turn to "off" when user disable "nfs-ganesha"
> 
> Should we need to handle these two scenarios when nfs-ganesha disable is
> performed???

I do not think we should automatically set "nfs.disable" back to the default when disabling the "nfs-ganesha" option. Many users disable Gluster/NFS for particular volumes, and we should not try to re-enable it for them.

From my perspective, adding something like this to the documentation should be sufficient:

  After disabling the "nfs-ganesha" option, the Gluster volumes will not
  automatically be exported with Gluster/NFS. You will need to enable
  Gluster/NFS for each volume that you want to export with:

    # gluster volume reset $VOLUME nfs.disable

  This will then automatically start the Gluster/NFS process on all storage
  servers.
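For example (illustrative, using the volume name from this report; the showmount output follows the format of the workaround transcript above):

    # gluster volume reset testvol nfs.disable
    # showmount -e localhost
    Export list for localhost:
    /testvol *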


Saurabh, what do you think?

Comment 21 Kaleb KEITHLEY 2015-11-02 15:25:21 UTC
Yes, Niels is correct. We cannot blindly enable gnfs after disabling ganesha nfs.

Users may enable gnfs on a volume by volume basis, or across the board.

Should we consider persisting the per-volume state of gnfs and restoring it? I'm not sure it's worth it; we intend to deprecate gnfs over the next few releases.

Comment 22 Vivek Agarwal 2015-11-17 07:01:40 UTC
Per discussion with Saurabh, we will add an extra step to the documentation.

Comment 27 Soumya Koduri 2016-06-15 14:05:36 UTC
The current behaviour is as per design. As mentioned in the comments above, we cannot track the volumes that were exported by gluster NFS prior to the NFS-Ganesha setup in order to re-export them post teardown. This will not be fixed and probably needs to be documented as expected behaviour.

Comment 29 Bhavana 2017-02-08 11:49:37 UTC
An additional note has been added (as another bullet point) right after the example for tearing down the HA cluster in section 7.2.4.4.2, Configuring the HA Cluster.

http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Administration_Guide-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#sect-NFS_Ganesha

Comment 30 surabhi 2017-02-10 08:13:07 UTC
As per discussion and agreement, it is mentioned in the doc that gluster NFS will not come up automatically after nfs-ganesha disable. A note has been added to the doc to enable gluster NFS manually after disabling NFS-Ganesha.
Verified the content.

Comment 31 Rejy M Cyriac 2017-03-24 10:19:03 UTC
RHGS 3.2.0 GA completed on 23 March 2017

