Description of problem:
=======================
Getting a "NFS Server N/A" entry in the volume status by default in rhgs 3.2 (glusterfs-3.8.4-1).

]# gluster volume status
Status of volume: Dis
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/tmp/1                          49154     0          Y       32200
NFS Server on localhost                     N/A       N/A        N       N/A

Task Status of Volume Dis
------------------------------------------------------------------------------
There are no active volume tasks

]# gluster volume get Dis nfs.disable
Option                                  Value
------                                  -----
nfs.disable                             on
[root@]#

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.8.4-1

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Have a setup with glusterfs-3.8.4-1 bits.
2. Create a simple distribute volume and start it.
3. Check the volume status.
(OR)
Have a 3.1.3 setup with a volume, update to 3.8.4-1, and check the volume status.

Actual results:
===============
The "NFS Server N/A" entry is shown in the volume status by default.

Expected results:
=================
By default, the "NFS Server N/A" entry should not show in the volume status.

Additional info:
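A minimal reproduction sketch, assuming a single-node test setup; the volume name, node, and brick path follow the status output above, and "force" is only there because a /tmp brick is normally rejected:

# create and start a simple distribute volume
gluster volume create Dis node1:/tmp/1 force
gluster volume start Dis

# the reported issue is the "NFS Server ... N/A" row in this output
gluster volume status Dis

# confirm that gluster NFS is disabled by default on this build
gluster volume get Dis nfs.disable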
The gluster NFS server has been disabled by default for GlusterFS-3.8 upstream [1]. This is done to encourage users to use NFS-Ganesha for their NFS needs. If RHGS requires this, it can be enabled downstream. [1]: https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.0.md#glusternfs-disabled-by-default
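If gluster NFS is still required on a particular volume, it can be turned back on per volume through the existing nfs.disable option. A minimal sketch, assuming the volume "Dis" from the report:

# re-enable gluster NFS for this volume and confirm the NFS server comes online
gluster volume set Dis nfs.disable off
gluster volume status Dis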
(In reply to Kaushal from comment #2)
> The gluster NFS server has been disabled by default for GlusterFS-3.8
> upstream [1]. This is done to encourage users to use NFS-Ganesha for their
> NFS needs.
>
> If RHGS requires this, it can be enabled downstream.
>
> [1]: https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.0.md#glusternfs-disabled-by-default

The entry should not show in the volume status by default, right from the first time the volume is created and started.
This happens when we update from 3.1.3 to 3.2.
Have you made sure to bump up the op-version?
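For reference, a hedged sketch of checking and bumping the cluster op-version after the upgrade; the target value depends on the installed release, so <op-version> below is a placeholder rather than a number from this report:

# current operating version recorded by the local glusterd
grep operating-version /var/lib/glusterd/glusterd.info

# bump the cluster op-version (replace <op-version> with the value documented for the installed release)
gluster volume set all cluster.op-version <op-version>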
I don't see this issue when the op-version is bumped up, hence I am closing this bug.
I am reopening this bug based on the info in comment 4.

For the 3.1.3 volumes we see "NFS Server N/A" after updating to 3.2 and bumping the op-version, but for new volumes created after the update we do not see it.

Volume options shown in the volume info should be the same by default for volumes of the same type.
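One way to compare the two cases is to check whether nfs.disable is recorded against each volume. A sketch, assuming "Dis" is a volume carried over from 3.1.3 and "NewDis" is a hypothetical volume created after the update:

# options explicitly stored in each volume's configuration
gluster volume info Dis | grep -i nfs.disable
gluster volume info NewDis | grep -i nfs.disable

# effective values, including defaults
gluster volume get Dis nfs.disable
gluster volume get NewDis nfs.disable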
(In reply to Byreddy from comment #7)
> I am reopening this bug based on the info in comment 4.
>
> For the 3.1.3 volumes we see "NFS Server N/A" after updating to 3.2 and
> bumping the op-version, but for new volumes created after the update we do
> not see it.
Byreddy, apologies! This issue is valid for the 3.2 upgrade path. However, what I mentioned in comment 6 is also true, but only with the patch http://review.gluster.org/#/c/15568/ applied.
upstream mainline : http://review.gluster.org/15568
upstream 3.9      : http://review.gluster.org/15652
downstream patch  : https://code.engineering.redhat.com/gerrit/#/c/87434
Verified this bug using the build 3.8.4-3.

The fix is working well and I am not seeing the "NFS Server N/A" entry in the volume status after updating a 3.1.3 volume to the available 3.2 build.

Moving to verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0486.html