Bug 990057 - Dist-geo-rep : 'gluster volume geo <master_vol> <slave_ip>::<slave_vol> delete' command deletes all sessions having that slave_vol name, even though those sessions were created using different slave_ips from the same slave cluster
Summary: Dist-geo-rep : 'gluster volume geo <master_vol> <slave_ip>::<slave_vol> delet...
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
: ---
Assignee: Avra Sengupta
QA Contact: amainkar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-07-30 10:09 UTC by Rachana Patel
Modified: 2015-04-20 11:57 UTC (History)
4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-08-01 10:05:15 UTC
Embargoed:


Attachments (Terms of Use)

Description Rachana Patel 2013-07-30 10:09:59 UTC
Description of problem:
Dist-geo-rep : 'gluster volume geo <master_vol> <slave_ip>::<slave_vol> delete' command deletes all sessions having that slave_vol name, even though those sessions were created using different slave_ips from the same slave cluster

Version-Release number of selected component (if applicable):
3.4.0.13rhs-1.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. slave cluster has 4 servers.
rhsauto018
rhsauto031
rhsauto026
rhsauto027

2. create a session using rhsauto031

gluster volume geo test5  rhsauto031.lab.eng.blr.redhat.com::slave5 status
NODE                           MASTER   SLAVE                                        HEALTH         UPTIME       
-----------------------------------------------------------------------------------------------------------------
DVM4.lab.eng.blr.redhat.com    test5    rhsauto031.lab.eng.blr.redhat.com::slave5    Not Started    N/A          
DVM3.lab.eng.blr.redhat.com    test5    rhsauto031.lab.eng.blr.redhat.com::slave5    Not Started    N/A          
DVM5.lab.eng.blr.redhat.com    test5    rhsauto031.lab.eng.blr.redhat.com::slave5    Not Started    N/A          
DVM6.lab.eng.blr.redhat.com    test5    rhsauto031.lab.eng.blr.redhat.com::slave5    Not Started    N/A          
DVM1.lab.eng.blr.redhat.com    test5    rhsauto031.lab.eng.blr.redhat.com::slave5    Not Started    N/A          
DVM2.lab.eng.blr.redhat.com    test5    rhsauto031.lab.eng.blr.redhat.com::slave5    Not Started    N/A 

3. now create a session using another IP from the slave cluster, keeping the same slave volume name, and start that session
[root@DVM4 geo-replication]#  gluster volume geo test5  rhsauto018.lab.eng.blr.redhat.com::slave5 status
NODE                           MASTER   SLAVE                                        HEALTH    UPTIME       
------------------------------------------------------------------------------------------------------------
DVM4.lab.eng.blr.redhat.com    test5    rhsauto018.lab.eng.blr.redhat.com::slave5    faulty    N/A          
DVM2.lab.eng.blr.redhat.com    test5    rhsauto018.lab.eng.blr.redhat.com::slave5    faulty    N/A          
DVM3.lab.eng.blr.redhat.com    test5    rhsauto018.lab.eng.blr.redhat.com::slave5    faulty    N/A          
DVM6.lab.eng.blr.redhat.com    test5    rhsauto018.lab.eng.blr.redhat.com::slave5    faulty    N/A          
DVM5.lab.eng.blr.redhat.com    test5    rhsauto018.lab.eng.blr.redhat.com::slave5    faulty    N/A          
DVM1.lab.eng.blr.redhat.com    test5    rhsauto018.lab.eng.blr.redhat.com::slave5    faulty    N/A    

4. delete the session created in step 2 and verify that it also deletes the session created in step 3

[root@DVM4 geo-replication]#  gluster volume geo test5  rhsauto031.lab.eng.blr.redhat.com::slave5 delete
Deleting geo-replication session between test5 & rhsauto031.lab.eng.blr.redhat.com::slave5 has been successful

[root@DVM4 geo-replication]# gluster volume geo test5 status
No active geo-replication sessions for test5


Actual results:
Deleting one session deletes all sessions with that slave volume name.

Expected results:
Only the specified session should be deleted; sessions created with other slave IPs should remain.

Additional info:

Comment 2 Amar Tumballi 2013-08-01 10:05:15 UTC
The geo-replication in the new code works with the cluster-id, not with the 'IP/hostname' of the slave node. The 'IP/hostname' of the slave is required only to fetch the information of the remote cluster; afterwards all housekeeping happens on the cluster-id (a UUID).

If the IP address is different but the slave cluster is the same, then as far as geo-replication is concerned it is still the same session, and hence deleting it removes both.
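The identity rule described above can be sketched as follows. This is a hypothetical model, not gluster's actual code: the function name `session_key`, the `resolve_cluster` mapping, and the UUID value are illustrative assumptions; the point is only that both hostnames resolve to the same slave cluster UUID, so they name one and the same session.

```python
# Hypothetical model of geo-replication session identity (not gluster source).
# A session is keyed by (master volume, slave *cluster* UUID, slave volume),
# not by the hostname/IP used when the session was created.

def session_key(master_vol, slave_cluster_uuid, slave_vol):
    """Identify a session by master volume, slave cluster, and slave volume."""
    return (master_vol, slave_cluster_uuid, slave_vol)

# Both hostnames belong to the same slave cluster, so resolving either one
# yields the same cluster UUID (the value below is made up for illustration).
SLAVE_CLUSTER_UUID = "b0e2a8c4-0000-0000-0000-000000000000"
resolve_cluster = {
    "rhsauto031.lab.eng.blr.redhat.com": SLAVE_CLUSTER_UUID,
    "rhsauto018.lab.eng.blr.redhat.com": SLAVE_CLUSTER_UUID,
}

key_a = session_key("test5",
                    resolve_cluster["rhsauto031.lab.eng.blr.redhat.com"],
                    "slave5")
key_b = session_key("test5",
                    resolve_cluster["rhsauto018.lab.eng.blr.redhat.com"],
                    "slave5")

# The two "sessions" collapse to one key, so deleting one deletes "both".
assert key_a == key_b
```

Under this model the behavior reported above is by design, which matches the CLOSED NOTABUG resolution.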

