Bug 1336704 - [geo-rep]: Multiple geo-rep sessions to the same slave are allowed for different users
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.8.0
Hardware: x86_64 Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: bugs@gluster.org
Keywords: Triaged, ZStream
Depends On: 1261838 1294813
Blocks: 1335728
Reported: 2016-05-17 05:02 EDT by Saravanakumar
Modified: 2016-06-16 10:06 EDT
CC: 9 users

See Also:
Fixed In Version: glusterfs-3.8rc2
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1294813
Environment:
Last Closed: 2016-06-16 10:06:58 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Comment 1 Vijay Bellur 2016-05-17 05:03:48 EDT
REVIEW: http://review.gluster.org/14372 (glusterd/geo-rep: slave volume uuid to identify a geo-rep session) posted (#2) for review on release-3.8 by Saravanakumar Arumugam (sarumuga@redhat.com)
Comment 2 Vijay Bellur 2016-05-23 04:14:02 EDT
COMMIT: http://review.gluster.org/14372 committed in release-3.8 by Niels de Vos (ndevos@redhat.com) 
------
commit 9ace7ecc2a278ac06dd5a0744be9a85679d8ceca
Author: Saravanakumar Arumugam <sarumuga@redhat.com>
Date:   Tue Dec 29 19:22:36 2015 +0530

    glusterd/geo-rep: slave volume uuid to identify a geo-rep session
    
    Problem:
    Currently, it is possible to create multiple geo-rep sessions from
    the Master host to Slave host(s), where the Slave host(s) belong
    to the same volume.
    
    For example:
    Consider Master Host M1 having volume tv1 and Slave volume tv2,
    which spans across two Slave hosts S1 and S2.
    Currently, it is possible to create geo-rep session from
    M1(tv1) to S1(tv2) as well as from M1(tv1) to S2(tv2).
    
    When only the Slave host is modified, it is identified as a new geo-rep
    session (since the Slave host and Slave volume together identify the
    Slave side).
    
    Also, it is possible to create both root and non-root geo-rep sessions
    between the same Master volume and Slave volume. This should also be
    avoided.
    
    Solution:
    This creation of multiple geo-rep sessions must be avoided; to avoid it,
    use the Slave volume uuid to identify a Slave. This way, we can detect
    whether a session has already been created for the same Slave volume and
    avoid creating it again (via a different host).
    
    When session creation is forced in the above scenario, the existing
    geo-rep session directory is renamed to reflect the newly mentioned
    Slave host.
    
    Change-Id: I9239759cbc0d15dad63c48b8cf62950bb687c7c8
    BUG: 1336704
    Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/13111
    Reviewed-by: Kotresh HR <khiremat@redhat.com>
    Tested-by: Kotresh HR <khiremat@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    (cherry picked from commit a9128cda34b1f696b717ba09fa0ac5a929be8969)
    Reviewed-on: http://review.gluster.org/14372
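
Illustration (not part of the commit): a minimal shell sketch of the scenario above, assuming the stock geo-replication CLI. The volume and host names tv1, tv2, S1 and S2 are taken from the commit's example; the exact rejection message printed by the patched glusterd is not quoted here.

    # Create a geo-rep session from master volume tv1 to slave volume tv2,
    # addressing the slave through host S1.
    gluster volume geo-replication tv1 S1::tv2 create push-pem

    # Before the fix, this second create was accepted as a distinct session
    # because only the slave host changed. With the fix, glusterd resolves
    # tv2 to its volume uuid, finds that a session for that uuid already
    # exists, and rejects the command.
    gluster volume geo-replication tv1 S2::tv2 create push-pem

    # Forcing the create does not add a second session; the existing session
    # directory is renamed to use the newly specified slave host (S2).
    gluster volume geo-replication tv1 S2::tv2 create push-pem force

    # Verify that only a single session exists for tv1 -> tv2.
    gluster volume geo-replication tv1 S2::tv2 status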
Comment 3 Niels de Vos 2016-06-16 10:06:58 EDT
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
