Bug 1342452

Summary: upgrade path when slave volume uuid used in geo-rep session
Product: [Community] GlusterFS Reporter: Saravanakumar <sarumuga>
Component: geo-replication Assignee: Saravanakumar <sarumuga>
Status: CLOSED CURRENTRELEASE QA Contact:
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: 3.8.0 CC: bugs
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: glusterfs-3.8.0 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1337473 Environment:
Last Closed: 2016-06-16 12:33:36 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1337473, 1342453    
Bug Blocks:    

Description Saravanakumar 2016-06-03 09:55:13 UTC
Description of problem:

With the commit titled "slave volume uuid is involved as part of slave volume identification", the slave volume uuid is now part of the geo-replication session identification.

When a volume is already configured for geo-replication (without the above patch), the upgrade path should be clearly defined.
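
For reference, the slave volume uuid that becomes part of the session identification can be read from the slave volume's info output. A minimal illustration, using a hypothetical slave volume name ("slave-vol"):

    # Run on the slave cluster; "slave-vol" is a placeholder volume name
    gluster volume info slave-vol | grep "Volume ID"
    # Volume ID: <uuid of the slave volume>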

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a geo-replication session without the above-mentioned patch.
2. Install glusterfs with the above-mentioned patch.
3. The geo-replication session should continue to work after the upgrade.
4. We should be able to start the geo-rep session again (see the command sketch after this list).
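
A rough command-level sketch of those steps, using hypothetical volume and host names (master-vol, slave-host, slave-vol); exact invocations may differ per deployment:

    # Before the upgrade (glusterfs without the patch): create and start the session
    gluster volume geo-replication master-vol slave-host::slave-vol create push-pem
    gluster volume geo-replication master-vol slave-host::slave-vol start
    gluster volume geo-replication master-vol slave-host::slave-vol status

    # Upgrade glusterfs packages on the master and slave nodes to the patched build,
    # then confirm the existing session is still usable and can be restarted:
    gluster volume geo-replication master-vol slave-host::slave-vol status
    gluster volume geo-replication master-vol slave-host::slave-vol stop
    gluster volume geo-replication master-vol slave-host::slave-vol start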
Actual results:


Expected results:


Additional info:

Comment 1 Vijay Bellur 2016-06-03 13:44:18 UTC
REVIEW: http://review.gluster.org/14640 (glusterd/geo-rep: upgrade path when slave vol uuid involved) posted (#2) for review on release-3.8 by Saravanakumar Arumugam (sarumuga)

Comment 2 Vijay Bellur 2016-06-03 13:46:53 UTC
REVIEW: http://review.gluster.org/14640 (glusterd/geo-rep: upgrade path when slave vol uuid involved) posted (#3) for review on release-3.8 by Saravanakumar Arumugam (sarumuga)

Comment 3 Vijay Bellur 2016-06-05 05:19:08 UTC
REVIEW: http://review.gluster.org/14640 (glusterd/geo-rep: upgrade path when slave vol uuid involved) posted (#4) for review on release-3.8 by Saravanakumar Arumugam (sarumuga)

Comment 4 Vijay Bellur 2016-06-06 05:47:27 UTC
REVIEW: http://review.gluster.org/14640 (glusterd/geo-rep: upgrade path when slave vol uuid involved) posted (#5) for review on release-3.8 by Saravanakumar Arumugam (sarumuga)

Comment 5 Vijay Bellur 2016-06-06 08:05:48 UTC
REVIEW: http://review.gluster.org/14640 (glusterd/geo-rep: upgrade path when slave vol uuid involved) posted (#6) for review on release-3.8 by Saravanakumar Arumugam (sarumuga)

Comment 6 Vijay Bellur 2016-06-09 09:32:53 UTC
REVIEW: http://review.gluster.org/14640 (glusterd/geo-rep: upgrade path when slave vol uuid involved) posted (#7) for review on release-3.8 by Saravanakumar Arumugam (sarumuga)

Comment 7 Vijay Bellur 2016-06-13 10:37:01 UTC
COMMIT: http://review.gluster.org/14640 committed in release-3.8 by Aravinda VK (avishwan) 
------
commit 4e553071de6455d36ea49cb1d41ff9e57ca43bc8
Author: Saravanakumar Arumugam <sarumuga>
Date:   Thu May 19 21:13:04 2016 +0530

    glusterd/geo-rep: upgrade path when slave vol uuid involved
    
    slave volume uuid is involved in identifying a geo-replication
    session.
    
    This patch addresses upgrade path, where existing geo-rep session
    is gracefully upgraded to involve slave volume uuid.
    
    Change-Id: Ib7ff5109b161592f24fc86fc7e93a407655fab86
    BUG: 1342452
    Reviewed-on: http://review.gluster.org/#/c/14425/
    Signed-off-by: Saravanakumar Arumugam <sarumuga>
    Reviewed-on: http://review.gluster.org/14640
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Aravinda VK <avishwan>

Comment 8 Niels de Vos 2016-06-16 12:33:36 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user