REVIEW: http://review.gluster.org/14372 (glusterd/geo-rep: slave volume uuid to identify a geo-rep session) posted (#2) for review on release-3.8 by Saravanakumar Arumugam (sarumuga)
COMMIT: http://review.gluster.org/14372 committed in release-3.8 by Niels de Vos (ndevos)
------
commit 9ace7ecc2a278ac06dd5a0744be9a85679d8ceca
Author: Saravanakumar Arumugam <sarumuga>
Date: Tue Dec 29 19:22:36 2015 +0530

    glusterd/geo-rep: slave volume uuid to identify a geo-rep session

    Problem:
    Currently, it is possible to create multiple geo-rep sessions from a
    Master host to Slave hosts that all belong to the same Slave volume.

    For example: consider Master host M1 having volume tv1, and Slave
    volume tv2 spanning two Slave hosts S1 and S2. It is currently
    possible to create a geo-rep session from M1(tv1) to S1(tv2) as well
    as from M1(tv1) to S2(tv2). When only the Slave host is changed, the
    session is treated as a new geo-rep session, because the Slave host
    and Slave volume together identify the Slave side.

    It is also possible to create both a root and a non-root geo-rep
    session between the same Master volume and Slave volume; this should
    be avoided as well.

    Solution:
    To prevent these duplicate sessions, use the Slave volume UUID to
    identify the Slave. This makes it possible to detect that a session
    already exists for the same Slave volume and to avoid creating it
    again through a different host. When session creation is forced in
    the above scenario, the existing geo-rep session directory is renamed
    to reference the newly mentioned Slave host.

    Change-Id: I9239759cbc0d15dad63c48b8cf62950bb687c7c8
    BUG: 1336704
    Signed-off-by: Saravanakumar Arumugam <sarumuga>
    Signed-off-by: Aravinda VK <avishwan>
    Reviewed-on: http://review.gluster.org/13111
    Reviewed-by: Kotresh HR <khiremat>
    Tested-by: Kotresh HR <khiremat>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
    CentOS-regression: Gluster Build System <jenkins.com>
    (cherry picked from commit a9128cda34b1f696b717ba09fa0ac5a929be8969)

Reviewed-on: http://review.gluster.org/14372
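The core idea is that a session keyed by (Master volume, Slave volume UUID) collapses all Slave hosts of one Slave volume into a single session. Below is a minimal Python sketch of that logic; it is not the actual glusterd code (which is C), and every name in it (GeorepRegistry, create, session_dir) is a hypothetical illustration, not a Gluster API.

    import os

    class GeorepRegistry:
        """Sketch: track geo-rep sessions keyed by (master volume,
        slave volume UUID), so two slave hosts of the same slave
        volume resolve to one and the same session."""

        def __init__(self, workdir):
            self.workdir = workdir
            # (master_vol, slave_vol_uuid) -> session directory name
            self.sessions = {}

        def session_dir(self, master_vol, slave_host, slave_vol):
            # Mirrors a "<master>_<slavehost>_<slavevol>" directory layout.
            return "%s_%s_%s" % (master_vol, slave_host, slave_vol)

        def create(self, master_vol, slave_host, slave_vol,
                   slave_vol_uuid, force=False):
            key = (master_vol, slave_vol_uuid)
            existing = self.sessions.get(key)
            if existing is None:
                # No session for this slave volume yet: create it.
                new_dir = self.session_dir(master_vol, slave_host, slave_vol)
                os.makedirs(os.path.join(self.workdir, new_dir),
                            exist_ok=True)
                self.sessions[key] = new_dir
                return new_dir
            if not force:
                # Same slave volume (by UUID), possibly via another
                # slave host: refuse to create a duplicate session.
                raise RuntimeError(
                    "geo-rep session between %s and slave volume %s "
                    "already exists" % (master_vol, slave_vol))
            # Forced: rename the existing session directory so it
            # references the newly mentioned slave host.
            new_dir = self.session_dir(master_vol, slave_host, slave_vol)
            os.rename(os.path.join(self.workdir, existing),
                      os.path.join(self.workdir, new_dir))
            self.sessions[key] = new_dir
            return new_dir

Using the example from the commit message, with a made-up UUID:

    reg = GeorepRegistry("/tmp/georep")
    reg.create("tv1", "S1", "tv2", "uuid-1234")   # creates tv1_S1_tv2
    # reg.create("tv1", "S2", "tv2", "uuid-1234") would raise: same slave volume
    reg.create("tv1", "S2", "tv2", "uuid-1234", force=True)  # renames to tv1_S2_tv2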
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user