Description of problem:
Once nfs-ganesha is up and running, creating a new volume still tries to bring up glusterfs-nfs, though unsuccessfully. This is visible when you check the gluster status for the newly created volume.

Version-Release number of selected component (if applicable):
glusterfs-3.7.1

How reproducible:
Always

Steps to Reproduce:
1. Create a volume of type 6x2 and start it.
2. Bring up nfs-ganesha after completing all the prerequisites.
3. Create another volume of any type.
4. Run: gluster volume status <name of newly created volume>

Actual results:
Step 4 shows the response displayed in the description section.

Expected results:
There should be a mechanism to detect that nfs-ganesha is already running; the new volume should then accept it as the NFS server instead of trying to bring up glusterfs-nfs.
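For reference, the reproduction steps above can be sketched as CLI commands. The hostnames (server1..server4), brick paths, and volume names here are hypothetical placeholders, and the nfs-ganesha prerequisites (shared storage volume, ganesha-ha.conf, etc.) are assumed to be in place already:

```shell
# 1. Create and start a 6x2 (distributed-replicated) volume.
#    Brace expansion below yields 12 bricks = 6 replica pairs.
gluster volume create vol1 replica 2 \
    server{1..4}:/bricks/b{1..3}/vol1 force
gluster volume start vol1

# 2. Enable NFS-Ganesha cluster-wide (prerequisites assumed done).
gluster nfs-ganesha enable

# 3. Create and start another volume of any type.
gluster volume create newvol replica 2 server{1..2}:/bricks/b4/newvol
gluster volume start newvol

# 4. Check its status; with the bug present, an (unsuccessful)
#    glusterfs-nfs entry shows up for the new volume.
gluster volume status newvol
```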
REVIEW: http://review.gluster.org/11871 (Set nfs.disable to "on" when global NFS-Ganesha key is enabled) posted (#1) for review on master by Meghana M (mmadhusu)
REVIEW: http://review.gluster.org/11871 (Set nfs.disable to "on" when global NFS-Ganesha key is enabled) posted (#2) for review on master by Meghana M (mmadhusu)
REVIEW: http://review.gluster.org/11871 (Set nfs.disable to "on" when global NFS-Ganesha key is enabled) posted (#3) for review on master by Meghana M (mmadhusu)
COMMIT: http://review.gluster.org/11871 committed in master by Kaleb KEITHLEY (kkeithle)
------
commit cdf238e7c90273beff73617481d19d77fc8014db
Author: Meghana M <mmadhusu>
Date:   Mon Aug 3 03:03:07 2015 +0530

    Set nfs.disable to "on" when global NFS-Ganesha key is enabled

    "nfs.disable" gets set to "on" for all the existing volumes when the
    command "gluster nfs-ganesha enable" is executed. When a new volume is
    created, it gets exported via Gluster-NFS on the nodes outside the
    NFS-Ganesha cluster. To fix this, the "nfs.disable" key is set to "on"
    before starting the volume whenever the global option is set to
    "enable".

    Change-Id: I7ce58928c36eadb8c122cded5bdcea271a0a4ffa
    BUG: 1251857
    Signed-off-by: Meghana M <mmadhusu>
    Reviewed-on: http://review.gluster.org/11871
    Reviewed-by: jiffin tony Thottan <jthottan>
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
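The behavior after this commit can be illustrated with a short CLI sketch. The volume name, hostname, and brick path are hypothetical; `gluster volume get` is assumed to be available (it was introduced around the glusterfs-3.7 series):

```shell
# With the global NFS-Ganesha key enabled...
gluster nfs-ganesha enable

# ...a freshly created volume now has nfs.disable set to "on" before
# it is started, so glusterfs-nfs is never brought up for it.
gluster volume create newvol server1:/bricks/b1/newvol
gluster volume start newvol

# Verify: the option should now read "on" for the new volume.
gluster volume get newvol nfs.disable
```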
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ was fixed in a GlusterFS release and closed. Hence this mainline BZ is being closed as well.
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user