Description of problem:
Dist-geo-rep: the 'gluster volume geo <master_vol> <slave_ip>::<slave_vol> delete' command deletes all sessions having that slave_vol name, even though the sessions are between the same master and different slave clusters.

Version-Release number of selected component (if applicable):
3.4.0.13rhs-1.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Set up two slave clusters that use the same slave volume name:
   slave cluster 1: rhsauto018, rhsauto031, rhsauto026, rhsauto027 (volume name: slave4)
   slave cluster 2: 10.70.43.147, 10.70.43.114, 10.70.43.115, 10.70.43.103 (volume name: slave4)

2. Create one session with slave cluster 1 and another with slave cluster 2, then start both:

[root@DVM4 geo-replication]# gluster volume geo test4 rhsauto018.lab.eng.blr.redhat.com::slave4 create
Creating geo-replication session between test4 & rhsauto018.lab.eng.blr.redhat.com::slave4 has been successful

[root@DVM4 geo-replication]# gluster volume geo test4 rhsauto018.lab.eng.blr.redhat.com::slave4 status
NODE                           MASTER    SLAVE                                         HEALTH         UPTIME
-----------------------------------------------------------------------------------------------------------------
DVM5.lab.eng.blr.redhat.com    test4     rhsauto018.lab.eng.blr.redhat.com::slave4     Not Started    N/A
DVM2.lab.eng.blr.redhat.com    test4     rhsauto018.lab.eng.blr.redhat.com::slave4     Not Started    N/A
DVM6.lab.eng.blr.redhat.com    test4     rhsauto018.lab.eng.blr.redhat.com::slave4     Not Started    N/A
DVM1.lab.eng.blr.redhat.com    test4     rhsauto018.lab.eng.blr.redhat.com::slave4     Not Started    N/A
DVM3.lab.eng.blr.redhat.com    test4     rhsauto018.lab.eng.blr.redhat.com::slave4     Not Started    N/A

[root@DVM4 geo-replication]# gluster volume geo test4 rhsauto018.lab.eng.blr.redhat.com::slave4 start
Starting geo-replication session between test4 & rhsauto018.lab.eng.blr.redhat.com::slave4 has been successful

[root@DVM4 geo-replication]# gluster volume geo test4 10.70.43.103::slave4 create push-pem
Creating geo-replication session between test4 & 10.70.43.103::slave4 has been successful

[root@DVM4 geo-replication]# gluster volume geo test4 10.70.43.103::slave4 status
NODE                           MASTER    SLAVE                    HEALTH         UPTIME
--------------------------------------------------------------------------------------------
DVM3.lab.eng.blr.redhat.com    test4     10.70.43.103::slave4     Not Started    N/A
DVM1.lab.eng.blr.redhat.com    test4     10.70.43.103::slave4     Not Started    N/A
DVM2.lab.eng.blr.redhat.com    test4     10.70.43.103::slave4     Not Started    N/A
DVM5.lab.eng.blr.redhat.com    test4     10.70.43.103::slave4     Not Started    N/A
DVM6.lab.eng.blr.redhat.com    test4     10.70.43.103::slave4     Not Started    N/A

[root@DVM4 geo-replication]# gluster volume geo test4 10.70.43.103::slave4 start
Starting geo-replication session between test4 & 10.70.43.103::slave4 has been successful

[root@DVM4 geo-replication]# gluster volume geo test4 status
NODE                           MASTER    SLAVE                                               HEALTH             UPTIME
---------------------------------------------------------------------------------------------------------------------------
DVM3.lab.eng.blr.redhat.com    test4     ssh://10.70.43.103::slave4                          Initializing...    N/A
DVM3.lab.eng.blr.redhat.com    test4     ssh://rhsauto018.lab.eng.blr.redhat.com::slave4     faulty             N/A
DVM5.lab.eng.blr.redhat.com    test4     ssh://10.70.43.103::slave4                          Initializing...    N/A
DVM5.lab.eng.blr.redhat.com    test4     ssh://rhsauto018.lab.eng.blr.redhat.com::slave4     faulty             N/A
DVM2.lab.eng.blr.redhat.com    test4     ssh://10.70.43.103::slave4                          Initializing...    N/A
DVM2.lab.eng.blr.redhat.com    test4     ssh://rhsauto018.lab.eng.blr.redhat.com::slave4     faulty             N/A
DVM6.lab.eng.blr.redhat.com    test4     ssh://10.70.43.103::slave4                          Initializing...    N/A
DVM6.lab.eng.blr.redhat.com    test4     ssh://rhsauto018.lab.eng.blr.redhat.com::slave4     faulty             N/A
DVM1.lab.eng.blr.redhat.com    test4     ssh://10.70.43.103::slave4                          Initializing...    N/A
DVM1.lab.eng.blr.redhat.com    test4     ssh://rhsauto018.lab.eng.blr.redhat.com::slave4     faulty             N/A

3. Stop and delete one session, and observe that the other session with the same slave volume name has also been deleted:

[root@DVM4 geo-replication]# gluster volume geo test4 10.70.43.103::slave4 stop
Stopping geo-replication session between test4 & 10.70.43.103::slave4 has been successful

[root@DVM4 geo-replication]# gluster volume geo test4 status
NODE                           MASTER    SLAVE                                               HEALTH     UPTIME
-------------------------------------------------------------------------------------------------------------------
DVM5.lab.eng.blr.redhat.com    test4     ssh://10.70.43.103::slave4                          Stopped    N/A
DVM5.lab.eng.blr.redhat.com    test4     ssh://rhsauto018.lab.eng.blr.redhat.com::slave4     faulty     N/A
DVM3.lab.eng.blr.redhat.com    test4     ssh://10.70.43.103::slave4                          Stopped    N/A
DVM3.lab.eng.blr.redhat.com    test4     ssh://rhsauto018.lab.eng.blr.redhat.com::slave4     faulty     N/A
DVM2.lab.eng.blr.redhat.com    test4     ssh://10.70.43.103::slave4                          Stopped    N/A
DVM2.lab.eng.blr.redhat.com    test4     ssh://rhsauto018.lab.eng.blr.redhat.com::slave4     faulty     N/A
DVM6.lab.eng.blr.redhat.com    test4     ssh://10.70.43.103::slave4                          Stopped    N/A
DVM6.lab.eng.blr.redhat.com    test4     ssh://rhsauto018.lab.eng.blr.redhat.com::slave4     faulty     N/A
DVM1.lab.eng.blr.redhat.com    test4     ssh://10.70.43.103::slave4                          Stopped    N/A
DVM1.lab.eng.blr.redhat.com    test4     ssh://rhsauto018.lab.eng.blr.redhat.com::slave4     faulty     N/A

[root@DVM4 geo-replication]# gluster volume geo test4 10.70.43.103::slave4 delete
Deleting geo-replication session between test4 & 10.70.43.103::slave4 has been successful

[root@DVM4 geo-replication]# gluster volume geo test4 status
No active geo-replication sessions for test4

Actual results:
Deleting one session removes every session that has the same slave volume name.

Expected results:
Only the specified session should be deleted; sessions with the same slave volume name on a different slave cluster should remain.

Additional info:
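The behavior above is consistent with session metadata being keyed by master and slave volume names only, so two sessions to different slave clusters with the same volume name collapse onto one identifier. A minimal illustrative sketch (not the actual glusterd code; the key functions are hypothetical stand-ins for the storage-path scheme before and after the fix):

```shell
# Hypothetical session keys: the "old" scheme keeps only the slave volume
# name, while the "new" scheme also keeps the slave host.
old_key() { echo "${1}_${2##*::}"; }    # master + slave volume only
new_key() { echo "${1}_${2//::/_}"; }   # master + slave host + slave volume

old1=$(old_key test4 rhsauto018.lab.eng.blr.redhat.com::slave4)
old2=$(old_key test4 10.70.43.103::slave4)
new1=$(new_key test4 rhsauto018.lab.eng.blr.redhat.com::slave4)
new2=$(new_key test4 10.70.43.103::slave4)

echo "$old1 / $old2"    # identical keys: deleting one session removes both
echo "$new1 / $new2"    # distinct keys: the sessions stay independent
```

With the old scheme both sessions map to "test4_slave4", so a delete on either slave URL wipes the shared metadata; including the slave host in the key disambiguates them.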
The patch that fixes this bug changes the path where session info such as the config and status details is stored. As a result, after an rpm upgrade the previous session config will not be supported; new geo-rep sessions must be created after installing this patch.
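Since old session metadata is not carried over, each session has to be recreated after the upgrade. A sketch of the recreation steps, reusing the volume and host names from this report as examples (this is a cluster-side procedure, not something runnable standalone):

```shell
# Recreate each geo-rep session after upgrading; names are examples only.
MASTER_VOL=test4
for SLAVE in rhsauto018.lab.eng.blr.redhat.com::slave4 10.70.43.103::slave4; do
    gluster volume geo-replication "$MASTER_VOL" "$SLAVE" create push-pem
    gluster volume geo-replication "$MASTER_VOL" "$SLAVE" start
done
gluster volume geo-replication "$MASTER_VOL" status
```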
https://code.engineering.redhat.com/gerrit/#/c/11073/
Verified with 3.4.0.18rhs-1.el6rhs.x86_64; working as expected (deleting one session no longer deletes all sessions with the same slave volume name).
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1262.html