Bug 1226817 - nfs-ganesha: new volume creation tries to bring up glusterfs-nfs even when nfs-ganesha is already on
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: nfs-ganesha
Version: 3.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: low
Target Milestone: ---
Target Release: RHGS 3.1.1
Assigned To: Kaleb KEITHLEY
QA Contact: Apeksha
Keywords: ZStream
Depends On:
Blocks: 1202842 1216951 1240614 1251815 1251857 1254419
 
Reported: 2015-06-01 04:16 EDT by Saurabh
Modified: 2016-01-19 01:15 EST
CC List: 9 users

See Also:
Fixed In Version: glusterfs-3.7.1-12
Doc Type: Bug Fix
Doc Text:
NFS-Ganesha always runs on a subset of nodes in the trusted storage pool, so when a new volume was created, Gluster-NFS could be started on the nodes outside that subset. As a consequence, the same volume was exported via NFS-Ganesha on one node and via Gluster-NFS on another. With this fix, Gluster-NFS is disabled when the nfs-ganesha option is enabled, so either NFS-Ganesha or Gluster-NFS, but never both, exports a volume in the trusted storage pool.
Story Points: ---
Clone Of:
: 1251857
Environment:
Last Closed: 2015-10-05 03:09:48 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker: Red Hat Product Errata
Tracker ID: RHSA-2015:1845
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-10-05 07:06:22 EDT

Description Saurabh 2015-06-01 04:16:46 EDT
Description of problem:
Once we have nfs-ganesha up and running, creating a new volume still tries to bring up glusterfs-nfs, though unsuccessfully.

This is visible when you check the gluster volume status for the newly created volume:

[root@nfs1 ~]# gluster volume create vol3 replica 2 10.70.37.148:/rhs/brick1/d1r1-vol3 10.70.37.77:/rhs/brick1/d1r2-vol3 10.70.37.76:/rhs/brick1/d2r1-vol3 10.70.37.69:/rhs/brick1/d2r2-vol3 10.70.37.148:/rhs/brick1/d3r1-vol3 10.70.37.77:/rhs/brick1/d3r2-vol3 10.70.37.76:/rhs/brick1/d4r1-vol3 10.70.37.69:/rhs/brick1/d4r2-vol3 10.70.37.148:/rhs/brick1/d5r1-vol3 10.70.37.77:/rhs/brick1/d5r2-vol3 10.70.37.76:/rhs/brick1/d6r1-vol3 10.70.37.69:/rhs/brick1/d6r2-vol3
volume create: vol3: success: please start the volume to access data
[root@nfs1 ~]# gluster volume start vol3
volume start: vol3: success
[root@nfs1 ~]# 
[root@nfs1 ~]# 
[root@nfs1 ~]# gluster volume status vol3
Status of volume: vol3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.148:/rhs/brick1/d1r1-vol3    49159     0          Y       10547
Brick 10.70.37.77:/rhs/brick1/d1r2-vol3     49164     0          Y       11666
Brick 10.70.37.76:/rhs/brick1/d2r1-vol3     49158     0          Y       21786
Brick 10.70.37.69:/rhs/brick1/d2r2-vol3     49158     0          Y       5755 
Brick 10.70.37.148:/rhs/brick1/d3r1-vol3    49160     0          Y       10564
Brick 10.70.37.77:/rhs/brick1/d3r2-vol3     49165     0          Y       11684
Brick 10.70.37.76:/rhs/brick1/d4r1-vol3     49159     0          Y       21811
Brick 10.70.37.69:/rhs/brick1/d4r2-vol3     49159     0          Y       5772 
Brick 10.70.37.148:/rhs/brick1/d5r1-vol3    49161     0          Y       10581
Brick 10.70.37.77:/rhs/brick1/d5r2-vol3     49166     0          Y       11701
Brick 10.70.37.76:/rhs/brick1/d6r1-vol3     49160     0          Y       21830
Brick 10.70.37.69:/rhs/brick1/d6r2-vol3     49160     0          Y       5789 
NFS Server on localhost                     N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       10607
NFS Server on 10.70.37.76                   N/A       N/A        N       N/A  
Self-heal Daemon on 10.70.37.76             N/A       N/A        Y       21856
NFS Server on 10.70.37.77                   N/A       N/A        N       N/A  
Self-heal Daemon on 10.70.37.77             N/A       N/A        Y       11727
NFS Server on 10.70.37.69                   N/A       N/A        N       N/A  
Self-heal Daemon on 10.70.37.69             N/A       N/A        Y       5825 
 
Task Status of Volume vol3
------------------------------------------------------------------------------
There are no active volume tasks



Version-Release number of selected component (if applicable):
glusterfs-3.7.0-2.el6rhs.x86_64
nfs-ganesha-2.2.0-0.el6.x86_64

How reproducible:
always

Steps to Reproduce:
1. Create a volume of type 6x2 and start it.
2. Bring up nfs-ganesha after completing all the prerequisites.
3. Create another volume of any type.
4. Run gluster volume status <name of newly created volume> (a shell sketch of these steps follows below).
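A minimal shell sketch of the steps above (added for illustration, not from the original report; hostnames, brick paths and the volume names vol-a and vol-b are placeholders, and the nfs-ganesha prerequisites such as shared storage and the HA configuration are assumed to be already in place):

# Step 1: create and start a 6x2 distributed-replicate volume (placeholder bricks)
gluster volume create vol-a replica 2 \
    host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1 host4:/bricks/b1 \
    host1:/bricks/b2 host2:/bricks/b2 host3:/bricks/b2 host4:/bricks/b2 \
    host1:/bricks/b3 host2:/bricks/b3 host3:/bricks/b3 host4:/bricks/b3
gluster volume start vol-a

# Step 2: enable NFS-Ganesha cluster-wide (after completing the prerequisites)
gluster nfs-ganesha enable

# Step 3: create another volume of any type
gluster volume create vol-b replica 2 host1:/bricks/b4 host2:/bricks/b4
gluster volume start vol-b

# Step 4: check its status; before the fix this listed "NFS Server ... Online N" rows
gluster volume status vol-b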

Actual results:
Step 4 produces the output shown in the Description section: the NFS Server (glusterfs-nfs) entries are listed for every node but remain offline (Online = N).

Expected results:
We should have a mechanism to detect that nfs-ganesha is already running; in that case the newly created volume should accept it as the NFS server instead of trying to bring up glusterfs-nfs.
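One way such a check could look from the command line (a sketch only, not part of any fix; ganesha.nfsd is the usual daemon name and gluster/nfs the usual volfile id for Gluster-NFS, but process patterns may differ between versions):

# Is NFS-Ganesha already running on this node?
pgrep -x ganesha.nfsd >/dev/null && echo "NFS-Ganesha is running"

# Is the Gluster-NFS server process running on this node?
pgrep -f 'glusterfs.*gluster/nfs' >/dev/null && echo "Gluster-NFS is running"

# List the exports currently served on this node (answered by whichever NFS server is up)
showmount -e localhost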

Additional info:
Comment 4 Divya 2015-06-22 06:50:11 EDT
Hi,

Please provide the doc text for this known issue in the Doc Text field.
Comment 6 monti lawrence 2015-07-22 16:27:34 EDT
Doc text is edited. Please sign off to be included in Known Issues.
Comment 7 Anjana Suparna Sriram 2015-07-27 14:03:10 EDT
Please review and sign off the edited text to be included in Known Issues.
Comment 8 Soumya Koduri 2015-07-28 03:34:32 EDT
doc text looks good to me.
Comment 12 Apeksha 2015-08-26 08:06:06 EDT
With the fix, creating a new volume while nfs-ganesha is enabled no longer tries to bring up gluster-nfs:

 Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.137:/rhs/brick1/brick0/testvol_brick0   49153     0          Y       28806
Brick 10.70.37.56:/rhs/brick1/brick1/testvol_brick1    49153     0          Y       28066
Brick 10.70.37.100:/rhs/brick1/brick1/testvol_brick2   49153     0          Y       28059
Brick 10.70.37.150:/rhs/brick1/brick0/testvol_brick3   49152     0          Y       27916
Brick 10.70.37.137:/rhs/brick1/brick1/testvol_brick4   49154     0          Y       28827
Brick 10.70.37.56:/rhs/brick1/brick2/testvol_brick5    49154     0          Y       28084
Brick 10.70.37.100:/rhs/brick1/brick2/testvol_brick6   49154     0          Y       28077
Brick 10.70.37.150:/rhs/brick1/brick1/testvol_brick7   49153     0          Y       27934
Brick 10.70.37.137:/rhs/brick1/brick2/testvol_brick8   49155     0          Y       28853
Brick 10.70.37.56:/rhs/brick1/brick3/testvol_brick9    49155     0          Y       28102
Brick 10.70.37.100:/rhs/brick1/brick3/testvol_brick10  49155     0          Y       28095
Brick 10.70.37.150:/rhs/brick1/brick2/testvol_brick11  49154     0          Y       27952
Self-heal Daemon on localhost               N/A       N/A        Y       28876
Self-heal Daemon on 10.70.37.150            N/A       N/A        Y       27987
Self-heal Daemon on 10.70.37.100            N/A       N/A        Y       28114
Self-heal Daemon on 10.70.37.56             N/A       N/A        Y       28123
 
Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks


Verified on glusterfs-3.7.1-12.el7rhgs.x86_64
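A quick check of the fixed behaviour (a sketch, using the testvol name from the output above): a volume created after nfs-ganesha was enabled should show no "NFS Server" rows at all in gluster volume status, whereas before the fix they appeared on every node with Online = N.

# Expect 0 with the fix; before the fix each node contributed an "NFS Server ... N" row
gluster volume status testvol | grep -c "NFS Server"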
Comment 14 Divya 2015-09-29 02:07:35 EDT
Please review and sign-off the edited doc text.
Comment 15 Jiffin 2015-09-29 02:32:29 EDT
Requires minor modification 

NFS-Ganesha always runs on a subset of nodes in the trusted storage pool, so when a new volume was created, Gluster-NFS could be started on the nodes outside that subset. As a consequence, the same volume was exported via NFS-Ganesha on one node and via Gluster-NFS on another. With this fix, Gluster-NFS is disabled when the nfs-ganesha option is enabled, so either NFS-Ganesha or Gluster-NFS, but never both, exports a volume in the trusted storage pool.
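To illustrate the "either NFS-Ganesha or Gluster-NFS" statement, the exports can be queried per node (a sketch; the addresses below are the pool nodes from comment 12, and showmount is answered by whichever NFS service is running, so a node with no NFS service simply returns an RPC error):

# With the fix, a volume should only appear in the export list of the nodes
# running NFS-Ganesha; no node should export it via Gluster-NFS at the same time.
for node in 10.70.37.137 10.70.37.56 10.70.37.100 10.70.37.150; do
    echo "== $node =="
    showmount -e "$node" || true
done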
Comment 17 errata-xmlrpc 2015-10-05 03:09:48 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html
