The issue here is that:

* the CLI (enabling ganesha.enable on) increments the .export_added counter on the local node and exports the volume with the export_id generated from that counter, whereas
* refresh-config just copies the export files from the node it is executed on to the other nodes in the cluster, then unexports the volume using the export_id of that first node and re-exports it with the new export file.

So in the customer setup the two nodes had different counter values: the .export_added value was 52 on node1 and 37 on node2. Now, when ganesha.enable is turned on for a new volume, it succeeds on node1 with export_id 53, but on node2, where the counter yields export_id 38, it can fail because another volume may already be exported with that ID. When refresh-config is then executed, it tries to unexport the volume with id 53 on node2, which again fails, but the subsequent add_export (re-export) succeeds once the export config file has been copied over from node1.
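For concreteness, here is a minimal sketch of the per-node bookkeeping described above. The paths (/etc/ganesha/.export_added, /etc/ganesha/exports/export.<volname>.conf) and the exact config layout are assumptions for illustration, not taken from the glusterfs sources; the point is only that the counter, and therefore the generated export_id, is node-local.

# Hedged sketch (not the actual glusterfs scripts): per-node export_id
# bookkeeping when "ganesha.enable on" is run.  Paths and config layout
# are assumptions for illustration only.
GANESHA_DIR="/etc/ganesha"
VOL="$1"

# Each node keeps its own counter, so a node that missed earlier enable
# operations falls behind (e.g. 52 on node1 vs 37 on node2).
count=$(cat "${GANESHA_DIR}/.export_added" 2>/dev/null || echo 0)
export_id=$((count + 1))
echo "${export_id}" > "${GANESHA_DIR}/.export_added"

# The node-local id is baked into the volume's export config.
cat > "${GANESHA_DIR}/exports/export.${VOL}.conf" <<EOF
EXPORT {
    Export_Id = ${export_id};
    Path = "/${VOL}";
    Pseudo = "/${VOL}";
    FSAL {
        Name = "GLUSTER";
        Hostname = "localhost";
        Volume = "${VOL}";
    }
}
EOF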
REVIEW: http://review.gluster.org/13459 (ganesha: Read export_id on each node while performing refresh-config) posted (#1) for review on release-3.7 by soumya k (skoduri)
REVIEW: http://review.gluster.org/13459 (ganesha: Read export_id on each node while performing refresh-config) posted (#2) for review on release-3.7 by soumya k (skoduri)
REVIEW: http://review.gluster.org/13459 (ganesha: Read export_id on each node while performing refresh-config) posted (#3) for review on release-3.7 by soumya k (skoduri)
COMMIT: http://review.gluster.org/13459 committed in release-3.7 by Kaleb KEITHLEY (kkeithle)
------
commit e0e633cdce7586af92490730257ed7f0cffcff61
Author: Soumya Koduri <skoduri>
Date:   Wed Feb 17 15:34:44 2016 +0530

    ganesha: Read export_id on each node while performing refresh-config

    As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1309238#c1,
    there could be cases which shall result in having different ExportIDs
    for the same volume on each node forming the ganesha cluster. Hence
    during refresh-config, it is necessary to read the ExportID on each of
    those nodes and re-export that volume with the same ID.

    Change-Id: I44058352fe977ccc649d378da3b68bbfb992fcd7
    BUG: 1309238
    Signed-off-by: Soumya Koduri <skoduri>
    Reviewed-on: http://review.gluster.org/13459
    CentOS-regression: Gluster Build System <jenkins.com>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
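In other words, each node should determine its own Export_Id before unexporting and keep that ID when re-exporting from the freshly copied config. The following is a hedged sketch of that idea only, not the actual patch; the file locations, the sed rewrite, and the exact DBus arguments are assumptions.

# Hedged sketch of the per-node refresh-config step described in the
# commit message; not the actual change.  Paths and DBus argument
# details are assumptions.
VOL="$1"
CONF="/etc/ganesha/exports/export.${VOL}.conf"

# 1. Read the Export_Id that *this* node currently uses for the volume,
#    instead of reusing the id from the node where refresh-config ran.
local_id=$(grep -E 'Export_Id' "$CONF" | grep -oE '[0-9]+' | head -n1)

# 2. Unexport with the locally read id.
dbus-send --print-reply --system --dest=org.ganesha.nfsd \
    /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport \
    uint16:"${local_id}"

# 3. Keep the same id in the freshly copied config, then re-export, so
#    the volume's Export_Id stays stable on this node.
sed -i "s/^[[:space:]]*Export_Id.*/    Export_Id = ${local_id};/" "$CONF"
dbus-send --print-reply --system --dest=org.ganesha.nfsd \
    /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport \
    string:"$CONF" string:"EXPORT(Path=/${VOL})"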
REVIEW: http://review.gluster.org/13726 (ganesha: Read export_id on each node while performing refresh-config) posted (#1) for review on master by Kaleb KEITHLEY (kkeithle)
REVIEW: http://review.gluster.org/13726 (ganesha: Read export_id on each node while performing refresh-config) posted (#2) for review on master by Kaleb KEITHLEY (kkeithle)
COMMIT: http://review.gluster.org/13726 committed in master by Kaleb KEITHLEY (kkeithle)
------
commit ef1b79a86714e235a7430e2eb95acceb83cfc774
Author: Soumya Koduri <skoduri>
Date:   Wed Feb 17 15:34:44 2016 +0530

    ganesha: Read export_id on each node while performing refresh-config

    As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1309238#c1,
    there could be cases which shall result in having different ExportIDs
    for the same volume on each node forming the ganesha cluster. Hence
    during refresh-config, it is necessary to read the ExportID on each of
    those nodes and re-export that volume with the same ID.

    BUG: 1309238
    Change-Id: Id39b3a0ce2614ee611282ff2bee04cede1fc129d
    Signed-off-by: Soumya Koduri <skoduri>
    Reviewed-on: http://review.gluster.org/13459
    CentOS-regression: Gluster Build System <jenkins.com>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Reviewed-on: http://review.gluster.org/13726
    Tested-by: Kaleb KEITHLEY <kkeithle>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.9, please open a new bug report. glusterfs-3.7.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-March/025922.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user