Bug 1335728 - [geo-rep]: Multiple geo-rep sessions to the same slave are allowed for different users
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: geo-replication
3.7.11
x86_64 Linux
high Severity high
: ---
: ---
Assigned To: Aravinda VK
: Triaged, ZStream
Depends On: 1261838 1294813 1336704
Blocks:
 
Reported: 2016-05-13 02:24 EDT by Aravinda VK
Modified: 2016-06-28 08:17 EDT
9 users

See Also:
Fixed In Version: glusterfs-3.7.12
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1294813
Environment:
Last Closed: 2016-06-28 08:17:24 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Description Aravinda VK 2016-05-13 02:24:01 EDT
+++ This bug was initially created as a clone of Bug #1294813 +++

+++ This bug was initially created as a clone of Bug #1261838 +++

Description of problem:
=======================

If the slave volume is already part of a geo-rep session, any further attempt to use the same slave volume should be rejected. Currently this is allowed when the existing session was established as root and another attempt is made as a non-root user.

In this case the master volume info file is appended with both slave entries: one as <slavehostname> and another as <user>@<slavehostname>.

Now, if either geo-rep session is deleted (for example, the root session), all of the sessions get deleted.
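The failure mode can be illustrated with a small sketch (a hypothetical model, not the actual glusterd code): if sessions are keyed only by the raw slave URL string, then "slavehost::slave" and "user@slavehost::slave" look like two distinct sessions even though they target the same slave volume.

```python
# Hypothetical model of the buggy behavior: sessions are keyed by the
# raw slave URL string, so a root and a non-root entry for the same
# slave volume are treated as two distinct sessions.
sessions = {}

def create_session(master_vol, slave_url):
    key = (master_vol, slave_url)
    if key in sessions:
        raise ValueError("session already exists")
    sessions[key] = {"status": "Created"}

create_session("master", "10.70.46.167::slave")             # root session
create_session("master", "geoaccount@10.70.46.167::slave")  # non-root, wrongly allowed

# Both entries now coexist in the master volume info file.
assert len(sessions) == 2
```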

Version-Release number of selected component (if applicable):
=============================================================



How reproducible:
=================

Always

Steps to Reproduce:
===================
1. Create and start a geo-rep session between master and slave {root}
2. Create another geo-rep session between the same master and slave, using the same slave host but with a user account {non-root}

Actual results:
===============

The second session is created successfully


Expected results:
=================

If the slave volume is already part of a geo-rep session, any subsequent attempt to create a different geo-rep session with it should fail

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-09-10 05:45:05 EDT ---

This bug is automatically being proposed for the current z-stream release of Red Hat Gluster Storage 3 by setting the release flag 'rhgs-3.1.z' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Rahul Hinduja on 2015-09-10 05:45:49 EDT ---

[root@georep1 ~]# gluster volume geo-replication master 10.70.46.167::slave status 
 
MASTER NODE    MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
-----------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1     root          10.70.46.167::slave    10.70.46.167    Active     Changelog Crawl    2015-09-10 08:47:07          
georep1        master        /rhs/brick2/b5     root          10.70.46.167::slave    10.70.46.167    Active     Changelog Crawl    2015-09-10 08:47:07          
georep1        master        /rhs/brick3/b9     root          10.70.46.167::slave    10.70.46.167    Active     Changelog Crawl    2015-09-10 08:47:07          
georep3        master        /rhs/brick1/b3     root          10.70.46.167::slave    10.70.46.165    Passive    N/A                N/A                          
georep3        master        /rhs/brick2/b7     root          10.70.46.167::slave    10.70.46.165    Passive    N/A                N/A                          
georep3        master        /rhs/brick3/b11    root          10.70.46.167::slave    10.70.46.165    Active     Changelog Crawl    2015-09-10 08:47:12          
georep2        master        /rhs/brick1/b2     root          10.70.46.167::slave    10.70.46.100    Passive    N/A                N/A                          
georep2        master        /rhs/brick2/b6     root          10.70.46.167::slave    10.70.46.100    Passive    N/A                N/A                          
georep2        master        /rhs/brick3/b10    root          10.70.46.167::slave    10.70.46.100    Passive    N/A                N/A                          
georep4        master        /rhs/brick1/b4     root          10.70.46.167::slave    10.70.46.142    Passive    N/A                N/A                          
georep4        master        /rhs/brick2/b8     root          10.70.46.167::slave    10.70.46.142    Passive    N/A                N/A                          
georep4        master        /rhs/brick3/b12    root          10.70.46.167::slave    10.70.46.142    Passive    N/A                N/A                          
[root@georep1 ~]# 
[root@georep1 ~]# 
[root@georep1 ~]# gluster volume geo-replication master geoaccount@SLAVENODE::slave create push-pem
Invalid Url: geoaccount@SLAVENODE::slave
geo-replication command failed
[root@georep1 ~]# gluster volume geo-replication master geoaccount@10.70.46.167::slave create push-pem
10.70.46.167::slave is not empty. Please delete existing files in 10.70.46.167::slave and retry, or use force to continue without deleting the existing files.
geo-replication command failed
[root@georep1 ~]# gluster volume geo-replication master geoaccount@10.70.46.167::slave create push-pem force
Creating geo-replication session between master & geoaccount@10.70.46.167::slave has been successful
[root@georep1 ~]#

[root@georep1 ~]# gluster volume geo-replication master 10.70.46.167::slave status 
 
MASTER NODE    MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
-----------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1     root          10.70.46.167::slave    10.70.46.167    Active     Changelog Crawl    2015-09-10 08:47:07          
georep1        master        /rhs/brick2/b5     root          10.70.46.167::slave    10.70.46.167    Active     Changelog Crawl    2015-09-10 08:47:07          
georep1        master        /rhs/brick3/b9     root          10.70.46.167::slave    10.70.46.167    Active     Changelog Crawl    2015-09-10 08:47:07          
georep3        master        /rhs/brick1/b3     root          10.70.46.167::slave    10.70.46.165    Passive    N/A                N/A                          
georep3        master        /rhs/brick2/b7     root          10.70.46.167::slave    10.70.46.165    Passive    N/A                N/A                          
georep3        master        /rhs/brick3/b11    root          10.70.46.167::slave    10.70.46.165    Active     Changelog Crawl    2015-09-10 08:47:12          
georep2        master        /rhs/brick1/b2     root          10.70.46.167::slave    10.70.46.100    Passive    N/A                N/A                          
georep2        master        /rhs/brick2/b6     root          10.70.46.167::slave    10.70.46.100    Passive    N/A                N/A                          
georep2        master        /rhs/brick3/b10    root          10.70.46.167::slave    10.70.46.100    Passive    N/A                N/A                          
georep4        master        /rhs/brick1/b4     root          10.70.46.167::slave    10.70.46.142    Passive    N/A                N/A                          
georep4        master        /rhs/brick2/b8     root          10.70.46.167::slave    10.70.46.142    Passive    N/A                N/A                          
georep4        master        /rhs/brick3/b12    root          10.70.46.167::slave    10.70.46.142    Passive    N/A                N/A                          
[root@georep1 ~]# gluster volume geo-replication master geoaccount@10.70.46.167::slave status 
 
MASTER NODE    MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                             SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED          
---------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1     geoaccount    geoaccount@10.70.46.167::slave    N/A           Created    N/A             N/A                  
georep1        master        /rhs/brick2/b5     geoaccount    geoaccount@10.70.46.167::slave    N/A           Created    N/A             N/A                  
georep1        master        /rhs/brick3/b9     geoaccount    geoaccount@10.70.46.167::slave    N/A           Created    N/A             N/A                  
georep3        master        /rhs/brick1/b3     geoaccount    geoaccount@10.70.46.167::slave    N/A           Created    N/A             N/A                  
georep3        master        /rhs/brick2/b7     geoaccount    geoaccount@10.70.46.167::slave    N/A           Created    N/A             N/A                  
georep3        master        /rhs/brick3/b11    geoaccount    geoaccount@10.70.46.167::slave    N/A           Created    N/A             N/A                  
georep2        master        /rhs/brick1/b2     geoaccount    geoaccount@10.70.46.167::slave    N/A           Created    N/A             N/A                  
georep2        master        /rhs/brick2/b6     geoaccount    geoaccount@10.70.46.167::slave    N/A           Created    N/A             N/A                  
georep2        master        /rhs/brick3/b10    geoaccount    geoaccount@10.70.46.167::slave    N/A           Created    N/A             N/A                  
georep4        master        /rhs/brick1/b4     geoaccount    geoaccount@10.70.46.167::slave    N/A           Created    N/A             N/A                  
georep4        master        /rhs/brick2/b8     geoaccount    geoaccount@10.70.46.167::slave    N/A           Created    N/A             N/A                  
georep4        master        /rhs/brick3/b12    geoaccount    geoaccount@10.70.46.167::slave    N/A           Created    N/A             N/A                  
[root@georep1 ~]# 
[root@georep1 ~]# 
[root@georep1 ~]# 
[root@georep1 ~]# gluster volume geo-replication status 
 
MASTER NODE    MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                                   SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1     root          ssh://10.70.46.167::slave               10.70.46.167    Active     Changelog Crawl    2015-09-10 08:47:07          
georep1        master        /rhs/brick2/b5     root          ssh://10.70.46.167::slave               10.70.46.167    Active     Changelog Crawl    2015-09-10 08:47:07          
georep1        master        /rhs/brick3/b9     root          ssh://10.70.46.167::slave               10.70.46.167    Active     Changelog Crawl    2015-09-10 08:47:07          
georep3        master        /rhs/brick1/b3     root          ssh://10.70.46.167::slave               10.70.46.165    Passive    N/A                N/A                          
georep3        master        /rhs/brick2/b7     root          ssh://10.70.46.167::slave               10.70.46.165    Passive    N/A                N/A                          
georep3        master        /rhs/brick3/b11    root          ssh://10.70.46.167::slave               10.70.46.165    Active     Changelog Crawl    2015-09-10 08:47:12          
georep2        master        /rhs/brick1/b2     root          ssh://10.70.46.167::slave               10.70.46.100    Passive    N/A                N/A                          
georep2        master        /rhs/brick2/b6     root          ssh://10.70.46.167::slave               10.70.46.100    Passive    N/A                N/A                          
georep2        master        /rhs/brick3/b10    root          ssh://10.70.46.167::slave               10.70.46.100    Passive    N/A                N/A                          
georep4        master        /rhs/brick1/b4     root          ssh://10.70.46.167::slave               10.70.46.142    Passive    N/A                N/A                          
georep4        master        /rhs/brick2/b8     root          ssh://10.70.46.167::slave               10.70.46.142    Passive    N/A                N/A                          
georep4        master        /rhs/brick3/b12    root          ssh://10.70.46.167::slave               10.70.46.142    Passive    N/A                N/A                          
georep1        master        /rhs/brick1/b1     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep1        master        /rhs/brick2/b5     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep1        master        /rhs/brick3/b9     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep3        master        /rhs/brick1/b3     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep3        master        /rhs/brick2/b7     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep3        master        /rhs/brick3/b11    geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep2        master        /rhs/brick1/b2     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep2        master        /rhs/brick2/b6     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep2        master        /rhs/brick3/b10    geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep4        master        /rhs/brick1/b4     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep4        master        /rhs/brick2/b8     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep4        master        /rhs/brick3/b12    geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
[root@georep1 ~]#








[root@georep1 ~]# #gluster volume geo-replication master 10.70.46.100::slave create push-pem force
[root@georep1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.70.46.100
The authenticity of host '10.70.46.100 (10.70.46.100)' can't be established.
ECDSA key fingerprint is 90:6f:e9:10:13:74:f5:96:92:e5:1c:40:ba:49:89:a2.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@10.70.46.100's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@10.70.46.100'"
and check to make sure that only the key(s) you wanted were added.

[root@georep1 ~]# gluster volume geo-replication master 10.70.46.100::slave create push-pem force
Creating geo-replication session between master & 10.70.46.100::slave has been successful
[root@georep1 ~]# 
[root@georep1 ~]# gluster volume geo-replication status 
 
MASTER NODE    MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                                   SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1     root          ssh://10.70.46.167::slave               10.70.46.167    Active     Changelog Crawl    2015-09-10 08:47:07          
georep1        master        /rhs/brick2/b5     root          ssh://10.70.46.167::slave               10.70.46.167    Active     Changelog Crawl    2015-09-10 08:47:07          
georep1        master        /rhs/brick3/b9     root          ssh://10.70.46.167::slave               10.70.46.167    Active     Changelog Crawl    2015-09-10 08:47:07          
georep3        master        /rhs/brick1/b3     root          ssh://10.70.46.167::slave               10.70.46.165    Passive    N/A                N/A                          
georep3        master        /rhs/brick2/b7     root          ssh://10.70.46.167::slave               10.70.46.165    Passive    N/A                N/A                          
georep3        master        /rhs/brick3/b11    root          ssh://10.70.46.167::slave               10.70.46.165    Active     Changelog Crawl    2015-09-10 08:47:12          
georep2        master        /rhs/brick1/b2     root          ssh://10.70.46.167::slave               10.70.46.100    Passive    N/A                N/A                          
georep2        master        /rhs/brick2/b6     root          ssh://10.70.46.167::slave               10.70.46.100    Passive    N/A                N/A                          
georep2        master        /rhs/brick3/b10    root          ssh://10.70.46.167::slave               10.70.46.100    Passive    N/A                N/A                          
georep4        master        /rhs/brick1/b4     root          ssh://10.70.46.167::slave               10.70.46.142    Passive    N/A                N/A                          
georep4        master        /rhs/brick2/b8     root          ssh://10.70.46.167::slave               10.70.46.142    Passive    N/A                N/A                          
georep4        master        /rhs/brick3/b12    root          ssh://10.70.46.167::slave               10.70.46.142    Passive    N/A                N/A                          
georep1        master        /rhs/brick1/b1     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep1        master        /rhs/brick2/b5     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep1        master        /rhs/brick3/b9     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep3        master        /rhs/brick1/b3     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep3        master        /rhs/brick2/b7     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep3        master        /rhs/brick3/b11    geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep2        master        /rhs/brick1/b2     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep2        master        /rhs/brick2/b6     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep2        master        /rhs/brick3/b10    geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep4        master        /rhs/brick1/b4     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep4        master        /rhs/brick2/b8     geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep4        master        /rhs/brick3/b12    geoaccount    ssh://geoaccount@10.70.46.167::slave    N/A             Created    N/A                N/A                          
georep1        master        /rhs/brick1/b1     root          ssh://10.70.46.100::slave               N/A             Created    N/A                N/A                          
georep1        master        /rhs/brick2/b5     root          ssh://10.70.46.100::slave               N/A             Created    N/A                N/A                          
georep1        master        /rhs/brick3/b9     root          ssh://10.70.46.100::slave               N/A             Created    N/A                N/A                          
georep3        master        /rhs/brick1/b3     root          ssh://10.70.46.100::slave               N/A             Created    N/A                N/A                          
georep3        master        /rhs/brick2/b7     root          ssh://10.70.46.100::slave               N/A             Created    N/A                N/A                          
georep3        master        /rhs/brick3/b11    root          ssh://10.70.46.100::slave               N/A             Created    N/A                N/A                          
georep2        master        /rhs/brick1/b2     root          ssh://10.70.46.100::slave               N/A             Created    N/A                N/A                          
georep2        master        /rhs/brick2/b6     root          ssh://10.70.46.100::slave               N/A             Created    N/A                N/A                          
georep2        master        /rhs/brick3/b10    root          ssh://10.70.46.100::slave               N/A             Created    N/A                N/A                          
georep4        master        /rhs/brick1/b4     root          ssh://10.70.46.100::slave               N/A             Created    N/A                N/A                          
georep4        master        /rhs/brick2/b8     root          ssh://10.70.46.100::slave               N/A             Created    N/A                N/A                          
georep4        master        /rhs/brick3/b12    root          ssh://10.70.46.100::slave               N/A             Created    N/A                N/A                          
[root@georep1 ~]#

--- Additional comment from Vijay Bellur on 2015-12-30 07:45:06 EST ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave vol uuid to geo-rep session [WIP]) posted (#4) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-01-05 10:37:40 EST ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave vol uuid to geo-rep session) posted (#6) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-01-06 07:49:07 EST ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave vol uuid to geo-rep session) posted (#7) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-02-18 06:12:05 EST ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave vol uuid to geo-rep session) posted (#8) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-02-22 08:49:26 EST ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave vol uuid to geo-rep session) posted (#9) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-02-24 10:34:39 EST ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave vol uuid to geo-rep session) posted (#10) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-03-03 09:00:18 EST ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave vol uuid to geo-rep session) posted (#11) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-03-07 03:39:13 EST ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave vol uuid in geo-rep session) posted (#12) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-03-14 04:43:45 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave vol uuid in geo-rep session) posted (#13) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-03-17 05:57:35 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave vol uuid in geo-rep session) posted (#14) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-03-22 07:15:38 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave volume uuid in geo-rep session) posted (#15) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Mike McCune on 2016-03-28 19:32:32 EDT ---

This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune@redhat.com with any questions

--- Additional comment from Vijay Bellur on 2016-03-30 06:15:22 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave volume uuid in geo-rep session) posted (#16) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-04-12 05:59:18 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave volume uuid in geo-rep session) posted (#17) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-04-14 03:50:21 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave volume uuid in geo-rep session) posted (#18) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-04-14 07:20:20 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave volume uuid in geo-rep session) posted (#19) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-03 11:13:10 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave volume uuid in geo-rep session) posted (#20) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-03 11:14:05 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave volume uuid in geo-rep session) posted (#21) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-03 11:28:27 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave volume uuid in geo-rep session) posted (#22) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-03 11:37:09 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: involve slave volume uuid in geo-rep session) posted (#23) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-04 08:40:57 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: slave volume uuid to identify a geo-rep session) posted (#24) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-04 08:43:59 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: slave volume uuid to identify a geo-rep session) posted (#25) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-04 08:49:31 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: slave volume uuid to identify a geo-rep session) posted (#26) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-04 09:04:52 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: slave volume uuid to identify a geo-rep session) posted (#27) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-05 05:33:12 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: slave volume uuid to identify a geo-rep session) posted (#28) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-06 08:06:57 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: slave volume uuid to identify a geo-rep session) posted (#29) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-08 15:16:08 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: slave volume uuid to identify a geo-rep session) posted (#30) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-09 07:35:49 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: slave volume uuid to identify a geo-rep session) posted (#31) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-10 15:56:26 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: slave volume uuid to identify a geo-rep session) posted (#32) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-11 11:15:22 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: slave volume uuid to identify a geo-rep session) posted (#33) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-12 00:39:15 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: slave volume uuid to identify a geo-rep session) posted (#34) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-12 07:04:31 EDT ---

REVIEW: http://review.gluster.org/13111 (geo-rep: slave volume uuid to identify a geo-rep session) posted (#35) for review on master by Saravanakumar Arumugam (sarumuga@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-12 09:39:12 EDT ---

REVIEW: http://review.gluster.org/13111 (glusterd/geo-rep: slave volume uuid to identify a geo-rep session) posted (#36) for review on master by Aravinda VK (avishwan@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-12 09:44:32 EDT ---

REVIEW: http://review.gluster.org/13111 (glusterd/geo-rep: slave volume uuid to identify a geo-rep session) posted (#37) for review on master by Aravinda VK (avishwan@redhat.com)

--- Additional comment from Vijay Bellur on 2016-05-13 02:23:02 EDT ---

COMMIT: http://review.gluster.org/13111 committed in master by Aravinda VK (avishwan@redhat.com) 
------
commit a9128cda34b1f696b717ba09fa0ac5a929be8969
Author: Saravanakumar Arumugam <sarumuga@redhat.com>
Date:   Tue Dec 29 19:22:36 2015 +0530

    glusterd/geo-rep: slave volume uuid to identify a geo-rep session
    
    Problem:
    Currently, it is possible to create multiple geo-rep sessions from
    the Master host to Slave host(s) that belong to the same Slave
    volume.
    
    For example:
    Consider Master host M1 with volume tv1, and Slave volume tv2
    which spans two Slave hosts S1 and S2.
    Currently, it is possible to create a geo-rep session from
    M1(tv1) to S1(tv2) as well as from M1(tv1) to S2(tv2).
    
    When only the Slave host is modified, it is identified as a new
    geo-rep session (since the Slave host and Slave volume together
    identify the Slave side).
    
    Also, it is possible to create both root and non-root geo-rep
    sessions between the same Master volume and Slave volume. This
    should also be avoided.
    
    Solution:
    To avoid this multiple geo-rep session creation, use the Slave
    volume uuid to identify a Slave. This way, we can detect that a
    session already exists for the same Slave volume (even when a
    different host is used) and refuse to create it again.
    
    When session creation is forced in the above scenario, rename the
    existing geo-rep session directory to use the newly mentioned
    Slave host.
    
    Change-Id: I9239759cbc0d15dad63c48b8cf62950bb687c7c8
    BUG: 1294813
    Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/13111
    Reviewed-by: Kotresh HR <khiremat@redhat.com>
    Tested-by: Kotresh HR <khiremat@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
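The idea behind the fix can be illustrated with a minimal sketch (this is not the actual glusterd code; `SessionRegistry` and its methods are hypothetical names for illustration): sessions are keyed by the Slave volume uuid rather than by `host:volume`, so the same Slave volume reached through a different host, or as a different user, cannot be registered twice — unless creation is forced, in which case the existing session is re-pointed at the new host, analogous to renaming the session directory in the real fix.

```python
class SessionRegistry:
    """Hypothetical model of uuid-based geo-rep session identification."""

    def __init__(self):
        # Key on (master volume, slave volume uuid), NOT on the slave host,
        # so M1(tv1)->S1(tv2) and M1(tv1)->S2(tv2) map to the same session.
        self._sessions = {}

    def create(self, master_vol, slave_host, slave_vol_uuid, force=False):
        key = (master_vol, slave_vol_uuid)
        if key in self._sessions and not force:
            raise ValueError(
                "geo-rep session between %s and slave volume %s already "
                "exists via host %s"
                % (master_vol, slave_vol_uuid, self._sessions[key]["host"]))
        # With force=True, update the existing session to the new slave
        # host (like renaming the existing session directory).
        self._sessions[key] = {"host": slave_host}


reg = SessionRegistry()
reg.create("tv1", "S1", "uuid-tv2")
try:
    # Same slave volume through another host: rejected.
    reg.create("tv1", "S2", "uuid-tv2")
except ValueError as e:
    print("rejected:", e)
# Forcing re-points the session at the new host.
reg.create("tv1", "S2", "uuid-tv2", force=True)
```

Under this scheme a root and a non-root session to the same Slave volume would also collide on the same uuid key, which is how both duplicate cases are caught by one check.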
Comment 1 Vijay Bellur 2016-05-13 02:25:36 EDT
REVIEW: http://review.gluster.org/14322 (glusterd/geo-rep: slave volume uuid to identify a geo-rep session) posted (#1) for review on release-3.7 by Aravinda VK (avishwan@redhat.com)
Comment 2 Vijay Bellur 2016-05-19 02:47:13 EDT
COMMIT: http://review.gluster.org/14322 committed in release-3.7 by Aravinda VK (avishwan@redhat.com) 
------
commit 70192bfe5f7f956843d094ec9cb484b23ce45556
Author: Saravanakumar Arumugam <sarumuga@redhat.com>
Date:   Tue Dec 29 19:22:36 2015 +0530

    glusterd/geo-rep: slave volume uuid to identify a geo-rep session
    
    Problem:
    Currently, it is possible to create multiple geo-rep sessions from
    the Master host to Slave host(s) that belong to the same Slave
    volume.
    
    For example:
    Consider Master host M1 with volume tv1, and Slave volume tv2
    which spans two Slave hosts S1 and S2.
    Currently, it is possible to create a geo-rep session from
    M1(tv1) to S1(tv2) as well as from M1(tv1) to S2(tv2).
    
    When only the Slave host is modified, it is identified as a new
    geo-rep session (since the Slave host and Slave volume together
    identify the Slave side).
    
    Also, it is possible to create both root and non-root geo-rep
    sessions between the same Master volume and Slave volume. This
    should also be avoided.
    
    Solution:
    To avoid this multiple geo-rep session creation, use the Slave
    volume uuid to identify a Slave. This way, we can detect that a
    session already exists for the same Slave volume (even when a
    different host is used) and refuse to create it again.
    
    When session creation is forced in the above scenario, rename the
    existing geo-rep session directory to use the newly mentioned
    Slave host.
    
    Change-Id: I9239759cbc0d15dad63c48b8cf62950bb687c7c8
    BUG: 1335728
    Signed-off-by: Saravanakumar Arumugam <sarumuga@redhat.com>
    Signed-off-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-on: http://review.gluster.org/13111
    Reviewed-by: Kotresh HR <khiremat@redhat.com>
    Tested-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
     (cherry picked from commit a9128cda34b1f696b717ba09fa0ac5a929be8969)
    Reviewed-on: http://review.gluster.org/14322
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
Comment 3 Kaushal 2016-06-28 08:17:24 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.12, please open a new bug report.

glusterfs-3.7.12 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-June/049918.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user