Description of problem:
Once nfs-ganesha is up and running, creating a new volume still tries to bring up glusterfs-nfs, though unsuccessfully. This is visible in the gluster status of the newly created volume:

[root@nfs1 ~]# gluster volume create vol3 replica 2 10.70.37.148:/rhs/brick1/d1r1-vol3 10.70.37.77:/rhs/brick1/d1r2-vol3 10.70.37.76:/rhs/brick1/d2r1-vol3 10.70.37.69:/rhs/brick1/d2r2-vol3 10.70.37.148:/rhs/brick1/d3r1-vol3 10.70.37.77:/rhs/brick1/d3r2-vol3 10.70.37.76:/rhs/brick1/d4r1-vol3 10.70.37.69:/rhs/brick1/d4r2-vol3 10.70.37.148:/rhs/brick1/d5r1-vol3 10.70.37.77:/rhs/brick1/d5r2-vol3 10.70.37.76:/rhs/brick1/d6r1-vol3 10.70.37.69:/rhs/brick1/d6r2-vol3
volume create: vol3: success: please start the volume to access data
[root@nfs1 ~]# gluster volume start vol3
volume start: vol3: success
[root@nfs1 ~]# gluster volume status vol3
Status of volume: vol3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.148:/rhs/brick1/d1r1-vol3    49159     0          Y       10547
Brick 10.70.37.77:/rhs/brick1/d1r2-vol3     49164     0          Y       11666
Brick 10.70.37.76:/rhs/brick1/d2r1-vol3     49158     0          Y       21786
Brick 10.70.37.69:/rhs/brick1/d2r2-vol3     49158     0          Y       5755
Brick 10.70.37.148:/rhs/brick1/d3r1-vol3    49160     0          Y       10564
Brick 10.70.37.77:/rhs/brick1/d3r2-vol3     49165     0          Y       11684
Brick 10.70.37.76:/rhs/brick1/d4r1-vol3     49159     0          Y       21811
Brick 10.70.37.69:/rhs/brick1/d4r2-vol3     49159     0          Y       5772
Brick 10.70.37.148:/rhs/brick1/d5r1-vol3    49161     0          Y       10581
Brick 10.70.37.77:/rhs/brick1/d5r2-vol3     49166     0          Y       11701
Brick 10.70.37.76:/rhs/brick1/d6r1-vol3     49160     0          Y       21830
Brick 10.70.37.69:/rhs/brick1/d6r2-vol3     49160     0          Y       5789
NFS Server on localhost                     N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       10607
NFS Server on 10.70.37.76                   N/A       N/A        N       N/A
Self-heal Daemon on 10.70.37.76             N/A       N/A        Y       21856
NFS Server on 10.70.37.77                   N/A       N/A        N       N/A
Self-heal Daemon on 10.70.37.77             N/A       N/A        Y       11727
NFS Server on 10.70.37.69                   N/A       N/A        N       N/A
Self-heal Daemon on 10.70.37.69             N/A       N/A        Y       5825

Task Status of Volume vol3
------------------------------------------------------------------------------
There are no active volume tasks

Version-Release number of selected component (if applicable):
glusterfs-3.7.0-2.el6rhs.x86_64
nfs-ganesha-2.2.0-0.el6.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a volume of type 6x2 and start it.
2. Bring up nfs-ganesha after completing all the prerequisites.
3. Create another volume of any type.
4. Run gluster volume status <name of newly created volume>.

Actual results:
Step 4 produces the output shown in the description: an "NFS Server" process is listed as offline (N) on every node, because glusterd still tries to start glusterfs-nfs even though nfs-ganesha is serving the exports.

Expected results:
There should be a mechanism to detect that nfs-ganesha is already running; a newly created volume should then be exported through nfs-ganesha instead of attempting to bring up glusterfs-nfs.

Additional info:
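A possible manual check and workaround while the issue is open (editor's sketch, not part of the original report; it assumes the standard gluster CLI used above, that the NFS-Ganesha daemon process is named ganesha.nfsd, and reuses vol3 from the example):

[root@nfs1 ~]# pgrep -x ganesha.nfsd                     # non-empty output means nfs-ganesha is running on this node
[root@nfs1 ~]# gluster volume set vol3 nfs.disable on    # tell glusterd not to start Gluster-NFS for this volume
[root@nfs1 ~]# gluster volume status vol3                # the offline "NFS Server" rows should no longer be listed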
Hi, Please provide the doc text for this known issue in the Doc Text field.
Doc text is edited. Please sign off to be included in Known Issues.
Please review and sign off the edited text to be included in Known Issues.
doc text looks good to me.
https://code.engineering.redhat.com/gerrit/#/c/55401/
With nfs-ganesha enabled, creating a new volume no longer tries to bring up Gluster-NFS; no "NFS Server" rows appear in the status output:

Status of volume: testvol
Gluster process                                        TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.137:/rhs/brick1/brick0/testvol_brick0   49153     0          Y       28806
Brick 10.70.37.56:/rhs/brick1/brick1/testvol_brick1    49153     0          Y       28066
Brick 10.70.37.100:/rhs/brick1/brick1/testvol_brick2   49153     0          Y       28059
Brick 10.70.37.150:/rhs/brick1/brick0/testvol_brick3   49152     0          Y       27916
Brick 10.70.37.137:/rhs/brick1/brick1/testvol_brick4   49154     0          Y       28827
Brick 10.70.37.56:/rhs/brick1/brick2/testvol_brick5    49154     0          Y       28084
Brick 10.70.37.100:/rhs/brick1/brick2/testvol_brick6   49154     0          Y       28077
Brick 10.70.37.150:/rhs/brick1/brick1/testvol_brick7   49153     0          Y       27934
Brick 10.70.37.137:/rhs/brick1/brick2/testvol_brick8   49155     0          Y       28853
Brick 10.70.37.56:/rhs/brick1/brick3/testvol_brick9    49155     0          Y       28102
Brick 10.70.37.100:/rhs/brick1/brick3/testvol_brick10  49155     0          Y       28095
Brick 10.70.37.150:/rhs/brick1/brick2/testvol_brick11  49154     0          Y       27952
Self-heal Daemon on localhost                          N/A       N/A        Y       28876
Self-heal Daemon on 10.70.37.150                       N/A       N/A        Y       27987
Self-heal Daemon on 10.70.37.100                       N/A       N/A        Y       28114
Self-heal Daemon on 10.70.37.56                        N/A       N/A        Y       28123

Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks

Verified on glusterfs-3.7.1-12.el7rhgs.x86_64
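For completeness, one way to confirm that the new volume is exported by NFS-Ganesha rather than Gluster-NFS (editor's sketch; the ganesha host/VIP placeholder is hypothetical, and showmount only answers if NFSv3/MNT is enabled on the ganesha server):

[root@nfs1 ~]# showmount -e <ganesha-node-or-VIP>        # testvol should appear in the NFS-Ganesha export list
[root@nfs1 ~]# gluster volume get testvol nfs.disable    # expected value: on (volume get is available in glusterfs 3.7+)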
Please review and sign-off the edited doc text.
Requires minor modification.

NFS-Ganesha always runs on a subset of the nodes in the trusted storage pool, so when a new volume is created, Gluster-NFS can be started on the nodes outside that subset. As a consequence, the same volume is exported via NFS-Ganesha on one node and via Gluster-NFS on another. With the fix, Gluster-NFS is disabled when the nfs-ganesha option is enabled, so either NFS-Ganesha or Gluster-NFS, but not both, exports the volume in the trusted storage pool.
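A minimal sketch of the post-fix behaviour (editor's illustration; it assumes the gluster nfs-ganesha CLI shipped with this integration, and the hostnames and brick paths are hypothetical):

[root@nfs1 ~]# gluster nfs-ganesha enable                # cluster-wide switch to NFS-Ganesha; Gluster-NFS is disabled
[root@nfs1 ~]# gluster volume create vol4 replica 2 server1:/rhs/brick1/b1 server2:/rhs/brick1/b2
[root@nfs1 ~]# gluster volume start vol4
[root@nfs1 ~]# gluster volume status vol4                # no offline "NFS Server" rows should be listed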
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1845.html