Bug 1341820

Summary: [geo-rep]: Upgrade from 3.1.2 to 3.1.3 breaks the existing geo-rep session
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Rahul Hinduja <rhinduja>
Component: geo-replication
Assignee: Saravanakumar <sarumuga>
Status: CLOSED ERRATA
QA Contact: Rahul Hinduja <rhinduja>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: amukherj, avishwan, csaba, rcyriac
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.1.3
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: glusterfs-3.7.9-8
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-06-23 05:25:34 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1311817

Description Rahul Hinduja 2016-06-01 19:51:35 UTC
Description of problem:
=======================

The existing geo-rep session becomes invalid after an upgrade from an older version (3.1.2) to 3.1.3. This is caused by a change introduced in 3.1.3 that identifies a geo-rep session by its slave volume UUID, which sessions created on 3.1.2 do not carry.
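For context, here is a minimal sketch of how the mismatch could be inspected on a master node, assuming the conventional on-disk layout: session metadata lives under /var/lib/glusterd/geo-replication/, and 3.1.3 additionally records the slave volume UUID there, which a session created on 3.1.2 lacks. The directory name and config key below are illustrative assumptions, not taken from this report.

# Hypothetical inspection on an upgraded master node
# (session directory name and config key are assumptions):
SESSION_DIR=/var/lib/glusterd/geo-replication/master_10.70.37.161_slave

# A session created by 3.1.3 is expected to record the slave volume UUID;
# one created by 3.1.2 does not, so glusterd no longer matches it.
grep -i slave_volume_uuid "$SESSION_DIR"/gsyncd.conf \
    || echo "no slave volume UUID recorded: session predates 3.1.3"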

Existing 3.1.2 geo-rep session:
===============================

[root@dhcp37-83 scripts]# gluster volume geo-replication master 10.70.37.161::slave status
 
MASTER NODE                          MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS    CRAWL STATUS       LAST_SYNCED                  
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
dhcp37-83.lab.eng.blr.redhat.com     master        /rhs/brick1/b1    root          10.70.37.161::slave    10.70.37.161    Active    Changelog Crawl    2016-06-02 01:04:24          
dhcp37-83.lab.eng.blr.redhat.com     master        /rhs/brick2/b3    root          10.70.37.161::slave    10.70.37.161    Active    Changelog Crawl    2016-06-02 01:04:24          
dhcp37-117.lab.eng.blr.redhat.com    master        /rhs/brick1/b2    root          10.70.37.161::slave    10.70.37.169    Active    Changelog Crawl    2016-06-02 01:04:28          
dhcp37-117.lab.eng.blr.redhat.com    master        /rhs/brick2/b4    root          10.70.37.161::slave    10.70.37.169    Active    Changelog Crawl    2016-06-02 01:04:28      
[root@dhcp37-83 scripts]#    
[root@dhcp37-83 scripts]# rpm -qa | grep geo-replication
glusterfs-geo-replication-3.7.5-19.el7rhgs.x86_64
[root@dhcp37-83 scripts]#

[root@dhcp37-83 scripts]# yum update gluster*
*    *    *    *    *    *    *    *    *    *   *
*    *    *    *    *    *    *    *    *    *   *
[root@dhcp37-83 scripts]# 
[root@dhcp37-83 master]# rpm -qa | grep geo-replication
glusterfs-geo-replication-3.7.9-7.el7rhgs.x86_64
[root@dhcp37-83 master]#
[root@dhcp37-83 master]# gluster volume geo-replication master 10.70.37.161::slave status
No active geo-replication sessions between master and 10.70.37.161::slave
[root@dhcp37-83 master]# gluster volume geo-replication master 10.70.37.161::slave start
Geo-replication session between master and 10.70.37.161::slave does not exist.
geo-replication command failed
[root@dhcp37-83 master]# 
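For reference, one assumed manual recovery on builds that lack the fix would be to re-run the create step, which regenerates the session metadata (now including the slave volume UUID) without resyncing data. This is a hedged sketch using the standard geo-rep CLI, not the shipped fix; glusterfs-3.7.9-8 handles the upgrade path itself.

# Assumed workaround sketch (not the shipped fix): recreate the session
# metadata in place, then start it again. Passwordless SSH to the slave
# must already exist from the original session setup.
gluster volume geo-replication master 10.70.37.161::slave create push-pem force
gluster volume geo-replication master 10.70.37.161::slave start
gluster volume geo-replication master 10.70.37.161::slave status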


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-geo-replication-3.7.9-7


How reproducible:
=================
Always


Steps to Reproduce:
===================
1. Upgrade from 3.1.2 to 3.1.3 with an existing geo-rep session (condensed command sequence below)
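Condensed from the transcript above, the reproduction on a master node amounts to:

# On a 3.1.2 master node with a working geo-rep session:
gluster volume geo-replication master 10.70.37.161::slave status    # session listed, Active

yum update gluster*    # upgrade packages to 3.1.3 (glusterfs-3.7.9-7)

gluster volume geo-replication master 10.70.37.161::slave status    # "No active geo-replication sessions"
gluster volume geo-replication master 10.70.37.161::slave start     # fails: session does not exist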

Actual results:
===============
The existing geo-rep session becomes invalid: status reports "No active geo-replication sessions", and start fails with "Geo-replication session between master and 10.70.37.161::slave does not exist."

Comment 4 Aravinda VK 2016-06-02 07:08:37 UTC
Upstream patch posted for the issue:
http://review.gluster.org/#/c/14425/

Comment 9 Rahul Hinduja 2016-06-05 17:51:32 UTC
Verified with build: glusterfs-3.7.9-8

Upgraded a 3.1.2 cluster with an existing geo-rep session to a 3.1.3 setup. The geo-rep session could be started after the upgrade, so I am moving this bug to the verified state.

Comment 12 errata-xmlrpc 2016-06-23 05:25:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240