Bug 1226817
| Summary | nfs-ganesha: new volume creation tries to bring up glusterfs-nfs even when nfs-ganesha is already on |
|---|---|
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Reporter | Saurabh <saujain> |
| Component | nfs-ganesha |
| Assignee | Kaleb KEITHLEY <kkeithle> |
| Status | CLOSED ERRATA |
| QA Contact | Apeksha <akhakhar> |
| Severity | low |
| Priority | high |
| Docs Contact | |
| Version | rhgs-3.1 |
| CC | asriram, asrivast, divya, jthottan, kkeithle, mzywusko, nlevinki, skoduri, vagarwal |
| Target Milestone | --- |
| Keywords | ZStream |
| Target Release | RHGS 3.1.1 |
| Hardware | x86_64 |
| OS | Linux |
| Whiteboard | |
| Fixed In Version | glusterfs-3.7.1-12 |
| Doc Type | Bug Fix |
| Doc Text | NFS-Ganesha always runs on a subset of nodes in the trusted storage pool, so when a new volume was created, Gluster-NFS could be started on the nodes outside that subset. As a consequence, the same volume was exported via NFS-Ganesha on one node and via Gluster-NFS on another. With this fix, Gluster-NFS is disabled when the nfs-ganesha option is enabled, so either NFS-Ganesha or Gluster-NFS, but not both, exports the volume in the trusted storage pool. (A hedged command sketch of this behavior follows the table below.) |
| Story Points | --- |
| Clone Of | |
| | 1251857 (view as bug list) |
| Environment | |
| Last Closed | 2015-10-05 07:09:48 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | --- |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Bug Depends On | |
| Bug Blocks | 1202842, 1216951, 1240614, 1251815, 1251857, 1254419 |
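The behavior described in the Doc Text can be illustrated with a minimal CLI sketch. This is not output or commands taken from the reporter's setup: the volume name, server names, and brick paths are placeholders, and it assumes an RHGS 3.1.x pool where the NFS-Ganesha HA cluster has already been configured.

```sh
# Enable NFS-Ganesha for the trusted storage pool (this is the step that,
# with the fix, keeps Gluster-NFS disabled pool-wide).
gluster nfs-ganesha enable

# Create and start a new volume while NFS-Ganesha is already enabled.
# Volume name, hosts, and brick paths below are illustrative placeholders.
gluster volume create testvol replica 2 \
    server1:/rhs/brick1/b0 server2:/rhs/brick1/b1
gluster volume start testvol

# With the fix, the new volume comes up without Gluster-NFS, so no
# "NFS Server on ..." rows should appear in the status output.
gluster volume status testvol

# The per-volume Gluster-NFS state can also be checked via nfs.disable.
gluster volume info testvol | grep nfs.disable
```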
Description (Saurabh, 2015-06-01 08:16:46 UTC)
Hi, please provide the doc text for this known issue in the Doc Text field.

Doc text is edited. Please sign off to be included in Known Issues.

Please review and sign off the edited text to be included in Known Issues.

Doc text looks good to me.

Now a new ganesha volume creation does not try to bring up Gluster-NFS:

```
Status of volume: testvol
Gluster process                                        TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.137:/rhs/brick1/brick0/testvol_brick0   49153     0          Y       28806
Brick 10.70.37.56:/rhs/brick1/brick1/testvol_brick1    49153     0          Y       28066
Brick 10.70.37.100:/rhs/brick1/brick1/testvol_brick2   49153     0          Y       28059
Brick 10.70.37.150:/rhs/brick1/brick0/testvol_brick3   49152     0          Y       27916
Brick 10.70.37.137:/rhs/brick1/brick1/testvol_brick4   49154     0          Y       28827
Brick 10.70.37.56:/rhs/brick1/brick2/testvol_brick5    49154     0          Y       28084
Brick 10.70.37.100:/rhs/brick1/brick2/testvol_brick6   49154     0          Y       28077
Brick 10.70.37.150:/rhs/brick1/brick1/testvol_brick7   49153     0          Y       27934
Brick 10.70.37.137:/rhs/brick1/brick2/testvol_brick8   49155     0          Y       28853
Brick 10.70.37.56:/rhs/brick1/brick3/testvol_brick9    49155     0          Y       28102
Brick 10.70.37.100:/rhs/brick1/brick3/testvol_brick10  49155     0          Y       28095
Brick 10.70.37.150:/rhs/brick1/brick2/testvol_brick11  49154     0          Y       27952
Self-heal Daemon on localhost                          N/A       N/A        Y       28876
Self-heal Daemon on 10.70.37.150                       N/A       N/A        Y       27987
Self-heal Daemon on 10.70.37.100                       N/A       N/A        Y       28114
Self-heal Daemon on 10.70.37.56                        N/A       N/A        Y       28123

Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks
```

Verified on glusterfs-3.7.1-12.el7rhgs.x86_64.

Please review and sign off the edited doc text.

Requires minor modification:

NFS-Ganesha always runs on a subset of nodes in the trusted storage pool, so when a new volume was created, Gluster-NFS could be started on the nodes outside that subset. As a consequence, the same volume was exported via NFS-Ganesha on one node and via Gluster-NFS on another. With this fix, Gluster-NFS is disabled when the nfs-ganesha option is enabled, so either NFS-Ganesha or Gluster-NFS, but not both, exports the volume in the trusted storage pool.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html
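Beyond the volume status output above, a verifier might run a few extra checks on each node. The following is a sketch only: the virtual IP is a placeholder, and it assumes the standard nfs-ganesha systemd unit shipped with RHGS 3.1 on RHEL 7.

```sh
# No Gluster-NFS process should be running on any node in the pool.
pgrep -fa 'glusterfs.*nfs' || echo "no glusterfs-nfs process"

# nfs-ganesha should be the only NFS server active on the ganesha nodes
# (assumes the nfs-ganesha.service unit name).
systemctl is-active nfs-ganesha

# The volume should be exported only through NFS-Ganesha; <ganesha-VIP>
# is a placeholder for one of the cluster's virtual IPs.
showmount -e <ganesha-VIP>
```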