Bug 1378342

Summary: Getting "NFS Server N/A" entry in the volume status by default.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Byreddy <bsrirama>
Component: glusterd
Assignee: Atin Mukherjee <amukherj>
Status: CLOSED ERRATA
QA Contact: Byreddy <bsrirama>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.2
CC: bsrirama, kaushal, rhinduja, rhs-bugs, storage-qa-internal, vbellur
Target Milestone: ---
Keywords: Reopened
Target Release: RHGS 3.2.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: glusterfs-3.8.4-3
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1383005 (view as bug list)
Environment:
Last Closed: 2017-03-23 05:48:13 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1351528, 1383005

Description Byreddy 2016-09-22 08:14:00 UTC
Description of problem:
=======================
Getting "NFS Server  N/A"  entry in the volume status by default in rhgs 3.2 ( glusterfs-3.8.4-1 ) 

]# gluster volume status
Status of volume: Dis
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/tmp/1                   49154     0          Y       32200
NFS Server on localhost                     N/A       N/A        N       N/A  
 
Task Status of Volume Dis
------------------------------------------------------------------------------
There are no active volume tasks

]# gluster volume get Dis nfs.disable
Option                                  Value                                   
------                                  -----                                   
nfs.disable                             on                                      
[root@]# 


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.8.4-1


How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Have a setup running the glusterfs-3.8.4-1 bits.
2. Create a simple distribute volume and start it.
3. Check the volume status (a command sketch follows after these steps).

(OR)

Have a 3.1.3 setup with a volume, update to glusterfs-3.8.4-1, and check the volume status.
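
A minimal command sketch of the fresh-install reproduction, assuming a single-node setup running glusterfs-3.8.4-1; the volume name "Dis" and brick path /bricks/b1 are illustrative:

]# gluster volume create Dis node1:/bricks/b1 force
]# gluster volume start Dis
]# gluster volume status Dis            # the "NFS Server ... N/A" row shows up by default
]# gluster volume get Dis nfs.disable   # reports "on", since gNFS is disabled by default in 3.8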


Actual results:
===============
"NFS Server  N/A"  entry in the volume status by default 


Expected results:
=================
By default "NFS Server  N/A" should not show in the volume status.


Additional info:

Comment 2 Kaushal 2016-09-22 08:28:04 UTC
The gluster NFS server has been disabled by default for GlusterFS-3.8 upstream [1]. This is done to encourage users to use NFS-Ganesha for their NFS needs.

If RHGS requires this, it can be enabled downstream.

[1]: https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.0.md#glusternfs-disabled-by-default
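
For completeness, a hedged sketch of how the Gluster NFS server could still be enabled on a specific volume that needs it, using the nfs.disable option shown above (the volume name is illustrative, and this assumes the gNFS bits are still shipped in the build):

]# gluster volume set Dis nfs.disable off
]# gluster volume status Dis   # the NFS Server row should now report a real port and PID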

Comment 3 Byreddy 2016-09-22 08:36:50 UTC
(In reply to Kaushal from comment #2)
> The gluster NFS server has been disabled by default for GlusterFS-3.8
> upstream [1]. This is done to encourage users to use NFS-Ganesha for their
> NFS needs.
> 
> If RHGS requires this, it can be enabled downstream.
> 
> [1]: https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.0.md#glusternfs-disabled-by-default

The entry should not appear in the volume status by default the first time, right after a volume is created and started.

Comment 4 Byreddy 2016-09-22 09:17:44 UTC
This is also seen when we update from 3.1.3 to 3.2.

Comment 5 Atin Mukherjee 2016-09-22 11:27:04 UTC
Have you made sure to bump up the op-version?

Comment 6 Atin Mukherjee 2016-09-24 08:24:00 UTC
I don't see this issue when the op-version is bumped up, hence closing this bug.
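
For reference, a sketch of checking and bumping the cluster op-version after the upgrade; the target number below is only an assumed example and the correct value should be taken from the release notes of the installed build:

]# grep operating-version /var/lib/glusterd/glusterd.info
]# gluster volume set all cluster.op-version 31001   # 31001 is an assumed op-version for an RHGS 3.2 / glusterfs-3.8.x build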

Comment 7 Byreddy 2016-09-28 11:09:25 UTC
I am reopening this bug based on the information in comment 4.

For volumes created on 3.1.3 we still see the "NFS Server N/A" entry after updating to 3.2 and bumping up the op-version, but for new volumes created after the update we do not see it.

By default, the volume options shown in volume info should be the same for all volumes of the same type.

Comment 9 Atin Mukherjee 2016-09-28 12:02:13 UTC
Byreddy,

Apologies! This issue is valid for the 3.2 upgrade path. However, what I mentioned in comment 6 also holds once the patch http://review.gluster.org/#/c/15568/ is applied.

Comment 12 Atin Mukherjee 2016-10-18 05:28:45 UTC
upstream mainline : http://review.gluster.org/15568
upstream 3.9 : http://review.gluster.org/15652
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/87434

Comment 14 Byreddy 2016-10-27 10:19:42 UTC
Verified this bug using build glusterfs-3.8.4-3.

The fix is working well and I am not seeing the "NFS Server N/A" entry in the volume status after updating a 3.1.3 volume to the available 3.2 build.

Moving to the verified state.

Comment 17 errata-xmlrpc 2017-03-23 05:48:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html