REVIEW: http://review.gluster.org/14372 (glusterd/geo-rep: slave volume uuid to identify a geo-rep session) posted (#2) for review on release-3.8 by Saravanakumar Arumugam (firstname.lastname@example.org)
COMMIT: http://review.gluster.org/14372 committed in release-3.8 by Niels de Vos (email@example.com)
Author: Saravanakumar Arumugam <firstname.lastname@example.org>
Date: Tue Dec 29 19:22:36 2015 +0530
glusterd/geo-rep: slave volume uuid to identify a geo-rep session
Currently, it is possible to create multiple geo-rep sessions from
the Master host to Slave host(s), where the Slave host(s) belong
to the same volume.
Consider Master Host M1 having volume tv1 and Slave volume tv2,
which spans across two Slave hosts S1 and S2.
Currently, it is possible to create a geo-rep session from
M1(tv1) to S1(tv2) as well as from M1(tv1) to S2(tv2).
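For illustration only, using the hosts and volumes from the scenario above,
both of the following create commands would currently succeed, yielding two
sessions for the same Slave volume tv2:

    # both commands target the same Slave volume tv2, only the Slave host differs
    gluster volume geo-replication tv1 S1::tv2 create push-pem
    gluster volume geo-replication tv1 S2::tv2 create push-pem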
When the Slave Host alone is modified, it is identified as a new geo-rep
session (as slave host and slave volume together identify the Slave side).
Also, it is possible to create both root and non-root geo-rep sessions between
the same Master volume and Slave volume. This should also be avoided.
This multiple geo-rep session creation must be avoided; in order to
avoid it, use the Slave volume uuid to identify a Slave.
This way, we can identify whether a session has already been created for
the same Slave volume and avoid creating it again (using a different host).
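A minimal shell sketch of the idea, assuming the Slave volume uuid is read
from 'gluster volume info' output (the actual change records and compares
this uuid inside glusterd):

    # resolve the uuid of the Slave volume, regardless of which Slave host is named
    slave_uuid=$(gluster --remote-host=S2 volume info tv2 | awk '/^Volume ID:/ {print $3}')
    # if slave_uuid matches the uuid stored for an existing session of tv1,
    # the new 'create' request is rejected unless 'force' is given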
When the session creation is forced in the above scenario, rename
the existing geo-rep session directory to reflect the new Slave Host mentioned.
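An indicative example of the forced case, assuming the usual session
directory layout /var/lib/glusterd/geo-replication/<mastervol>_<slavehost>_<slavevol>:

    # the existing session directory is renamed to reflect the new Slave host,
    # e.g. tv1_S1_tv2 -> tv1_S2_tv2, instead of creating a second session
    gluster volume geo-replication tv1 S2::tv2 create push-pem force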
Signed-off-by: Saravanakumar Arumugam <email@example.com>
Signed-off-by: Aravinda VK <firstname.lastname@example.org>
Reviewed-by: Kotresh HR <email@example.com>
Tested-by: Kotresh HR <firstname.lastname@example.org>
Smoke: Gluster Build System <email@example.com>
NetBSD-regression: NetBSD Build System <firstname.lastname@example.org>
Reviewed-by: Atin Mukherjee <email@example.com>
CentOS-regression: Gluster Build System <firstname.lastname@example.org>
(cherry picked from commit a9128cda34b1f696b717ba09fa0ac5a929be8969)
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.
glusterfs-3.8.0 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.