Description of problem:
After nfs-ganesha disable, Gluster NFS doesn't start.

Version-Release number of selected component (if applicable):
nfs-ganesha-2.2.0-3.el6rhs.x86_64
glusterfs-3.7.1-3.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Set up a Ganesha HA cluster.
2. Create a volume and export it using ganesha.enable on.
3. Unexport the volume using ganesha.enable off.
4. Tear down the cluster: gluster nfs-ganesha disable.
5. Set nfs.disable to off for the volume.
6. The volume is not exported.
7. A newly created volume also does not get exported.

Actual results:
Volumes are not exported via Gluster NFS.

Expected results:
Volumes must get exported via Gluster NFS.

Additional info:

[root@nfs1 ~]# gluster v set testvol ganesha.enable on
volume set: success
[root@nfs1 ~]# showmount -e localhost
Export list for localhost:
/testvol (everyone)
[root@nfs1 ~]# gluster v set testvol ganesha.enable off
volume set: success
[root@nfs1 ~]# showmount -e localhost
Export list for localhost:
[root@nfs1 ~]# gluster nfs-ganesha disable
[root@nfs1 ~]# gluster v set testvol nfs.disable off
volume set: success
[root@nfs1 ~]# gluster v info

Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: 25885433-5044-4baa-9012-cee002411f97
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.37.153:/rhs/brick1/gluster_shared_storage_brick0
Brick2: 10.70.37.124:/rhs/brick1/gluster_shared_storage_brick1
Brick3: 10.70.37.103:/rhs/brick1/gluster_shared_storage_brick2
Options Reconfigured:
ganesha.enable: off
nfs.disable: on
performance.readdir-ahead: on
nfs-ganesha: disable

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 5f598c50-9a06-44ad-8c60-4a8877f37843
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.153:/rhs/brick1/brick1/testvol_brick0
Brick2: 10.70.37.124:/rhs/brick1/brick1/testvol_brick1
Brick3: 10.70.37.103:/rhs/brick1/brick1/testvol_brick2
Brick4: 10.70.37.177:/rhs/brick1/brick0/testvol_brick3
Brick5: 10.70.37.153:/rhs/brick1/brick2/testvol_brick4
Brick6: 10.70.37.124:/rhs/brick1/brick2/testvol_brick5
Brick7: 10.70.37.103:/rhs/brick1/brick2/testvol_brick6
Brick8: 10.70.37.177:/rhs/brick1/brick1/testvol_brick7
Brick9: 10.70.37.153:/rhs/brick1/brick3/testvol_brick8
Brick10: 10.70.37.124:/rhs/brick1/brick3/testvol_brick9
Brick11: 10.70.37.103:/rhs/brick1/brick3/testvol_brick10
Brick12: 10.70.37.177:/rhs/brick1/brick2/testvol_brick11
Options Reconfigured:
ganesha.enable: off
features.cache-invalidation: off
nfs.disable: off
performance.readdir-ahead: on
nfs-ganesha: disable

[root@nfs1 ~]# showmount -e localhost
rpc mount export: RPC: Unable to receive; errno = Connection refused

[root@nfs1 rhs]# gluster v create dummyvol 10.70.37.153:/tmp/brick99 force
volume create: dummyvol: success: please start the volume to access data
[root@nfs1 rhs]# gluster v start dummyvol
volume start: dummyvol: success
[root@nfs1 rhs]# showmount -e localhost
rpc mount export: RPC: Unable to receive; errno = Connection refused
[root@nfs1 rhs]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  40487  status
    100024    1   tcp  39535  status
    100003    3   udp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   udp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  56709  mountd
    100005    1   tcp  51154  mountd
    100005    3   udp  56709  mountd
    100005    3   tcp  51154  mountd
    100021    4   udp  58606  nlockmgr
    100021    4   tcp  52195  nlockmgr
    100011    1   udp   4501  rquotad
    100011    1   tcp   4501  rquotad
    100011    2   udp   4501  rquotad
    100011    2   tcp   4501  rquotad

Workaround: restart the rpcbind service and force-start the volume.

[root@nfs1 rhs]# service rpcbind restart
Stopping rpcbind: [ OK ]
Starting rpcbind: [ OK ]
[root@nfs1 rhs]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
[root@nfs1 rhs]# gluster v start dummyvol force
volume start: dummyvol: success
[root@nfs1 rhs]# showmount -e localhost
Export list for localhost:
/dummyvol *
/testvol  *
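For reference, the workaround above can be condensed into a short shell sketch. This is only an illustration assuming root access on the affected storage node; looping over "gluster volume list" is a convenience and not part of the original workaround, which only force-started dummyvol:

# Restart rpcbind to drop the stale registrations left behind by NFS-Ganesha.
service rpcbind restart

# Force-start each volume so glusterfs-nfs re-registers its NFS/MOUNT services
# with rpcbind (force is needed because the volumes are already started).
for vol in $(gluster volume list); do
    gluster volume start "$vol" force
done

# Confirm the exports are visible again.
showmount -e localhost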
Doc text is edited. Please sign off to be included in Known Issues.
Updated the doc text. Kindly check the same.
Included the edited text.
The patch got merged in upstream nfs-ganesha https://review.gerrithub.io/#/c/242232/
The fix for the bz includes cleaning up the rpcbind entries, but it does not actually trigger glusterfs-nfs to come up, whereas the whole intention of this bz was to bring up glusterfs-nfs once nfs-ganesha is disabled. Please provide insight into why glusterfs-nfs is not brought up programmatically once nfs-ganesha is disabled.
As far as I remember from the discussion with Meghana, the issue given to me was to clean up the ports used by nfs-ganesha properly, so that Gluster NFS can come up without any obstacles (rpcbind-related issues) in situations like creating a new volume, restarting a volume, etc. Automatic triggering of Gluster NFS was not mentioned at that point in time.

And one more thing to add: as far as my understanding of the code goes, the option "nfs.disable" is set to "on" when the user enables "nfs-ganesha", but it is not turned back to "off" when the user disables "nfs-ganesha".

Should we handle these two scenarios when nfs-ganesha disable is performed?
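For illustration, "cleaning the ports" roughly corresponds to removing the stale rpcbind registrations that NFS-Ganesha leaves behind. A hedged manual approximation (not the actual patch) using the standard RPC program numbers visible in the rpcinfo output above:

# List the current registrations; after tearing down NFS-Ganesha, nfs (100003),
# mountd (100005), nlockmgr (100021) and status (100024) may still be registered.
rpcinfo -p localhost

# As root, unregister the stale entries so a later glusterfs-nfs start can
# register its own ports (rpcinfo -d takes the program number and version).
rpcinfo -d 100003 3
rpcinfo -d 100003 4
rpcinfo -d 100005 1
rpcinfo -d 100005 3
rpcinfo -d 100021 4
rpcinfo -d 100024 1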
(In reply to Jiffin from comment #17)
> As far as I remember from the discussion with Meghana, the issue given to me
> was to clean up the ports used by nfs-ganesha properly, so that Gluster NFS
> can come up without any obstacles (rpcbind-related issues) in situations like
> creating a new volume, restarting a volume, etc. Automatic triggering of
> Gluster NFS was not mentioned at that point in time.
>
> And one more thing to add: as far as my understanding of the code goes, the
> option "nfs.disable" is set to "on" when the user enables "nfs-ganesha", but
> it is not turned back to "off" when the user disables "nfs-ganesha".
>
> Should we handle these two scenarios when nfs-ganesha disable is performed?

Yes, we should.
(In reply to Saurabh from comment #18)
> (In reply to Jiffin from comment #17)
> > As far as I remember from the discussion with Meghana, the issue given to me
> > was to clean up the ports used by nfs-ganesha properly, so that Gluster NFS
> > can come up without any obstacles (rpcbind-related issues) in situations like
> > creating a new volume, restarting a volume, etc. Automatic triggering of
> > Gluster NFS was not mentioned at that point in time.
> >
> > And one more thing to add: as far as my understanding of the code goes, the
> > option "nfs.disable" is set to "on" when the user enables "nfs-ganesha", but
> > it is not turned back to "off" when the user disables "nfs-ganesha".
> >
> > Should we handle these two scenarios when nfs-ganesha disable is performed?
>
> Yes, we should.

As per the current design, when "gluster nfs-ganesha enable" is performed, we turn on "nfs.disable" for all the volumes. But in the reverse operation, "gluster nfs-ganesha disable", we do not change the "nfs.disable" option back, so there is no point in bringing up Gluster NFS. This behaviour should be changed if we plan to bring Gluster NFS back, i.e. we should not turn on the "nfs.disable" option. Otherwise Gluster NFS will be unaware of the previously exported volumes when it comes back online.
(In reply to Jiffin from comment #17)
> And one more thing to add: as far as my understanding of the code goes, the
> option "nfs.disable" is set to "on" when the user enables "nfs-ganesha", but
> it is not turned back to "off" when the user disables "nfs-ganesha".
>
> Should we handle these two scenarios when nfs-ganesha disable is performed?

I do not think we should automatically set "nfs.disable" back to the default when disabling the "nfs-ganesha" option. Many users disable Gluster/NFS for particular volumes, and we should not try to enable that.

From my perspective, adding something like this to the documentation should be sufficient:

  After disabling the "nfs-ganesha" option, the Gluster volumes will not
  automatically be exported with Gluster/NFS. You will need to enable
  Gluster/NFS for each volume that you want to export with

      # gluster volume reset $VOLUME nfs.disable

  This will then automatically start the Gluster/NFS process on all storage
  servers.

Saurabh, what do you think?
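To make that documentation step concrete, a hedged example of re-enabling Gluster/NFS for every volume after "gluster nfs-ganesha disable" (looping over "gluster volume list" is just one convenient way; reset only the volumes that should actually be exported):

# Run as root on any node in the trusted storage pool.
for vol in $(gluster volume list); do
    gluster volume reset "$vol" nfs.disable
done

# Verify that the Gluster/NFS exports are back.
showmount -e localhost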
Yes, Niels is correct. We cannot blindly enable gNFS after disabling Ganesha NFS. Users may enable gNFS on a volume-by-volume basis, or across the board. Should we consider persisting the per-volume state of gNFS and restoring it? I'm not sure it's worth it; we intend to deprecate gNFS over the next few releases.
Per discussion with Saurabh, we will put an extra step in documentation.
The current behaviour is as per design. As mentioned in the comments above, we cannot track the volumes which were exported by Gluster NFS prior to the NFS-Ganesha setup in order to re-export them post teardown. This shall not be fixed and probably needs to be documented as expected behaviour.
An additional note is added (as another bullet point) right after providing the example for tearing down the HA cluster in section 7.2.4.4.2. Configuring the HA Cluster. http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.2-Administration_Guide-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#sect-NFS_Ganesha
As per discussion and agreement, it is mentioned in the doc that Gluster NFS will not come up automatically after ganesha disable. A note is included in the doc to enable Gluster NFS manually after disabling NFS-Ganesha. Verified the content.
RHGS 3.2.0 GA completed on 23 March 2017