Bug 1344607 - [geo-rep]: Add-Brick use case: create push-pem force on existing geo-rep fails
Summary: [geo-rep]: Add-Brick use case: create push-pem force on existing geo-rep fails
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.8.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Saravanakumar
QA Contact:
URL:
Whiteboard:
Depends On: 1342938 1342979 1344605
Blocks:
 
Reported: 2016-06-10 07:29 UTC by Saravanakumar
Modified: 2016-06-16 12:34 UTC
CC: 6 users

Fixed In Version: glusterfs-3.8.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1344605
Environment:
Last Closed: 2016-06-16 12:34:04 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Vijay Bellur 2016-06-13 10:42:45 UTC
REVIEW: http://review.gluster.org/14711 (glusterd/geo-rep: Avoid started status check if same host) posted (#1) for review on release-3.8 by Saravanakumar Arumugam (sarumuga)

Comment 2 Vijay Bellur 2016-06-13 14:05:35 UTC
REVIEW: http://review.gluster.org/14711 (glusterd/geo-rep: Avoid started status check if same host) posted (#2) for review on release-3.8 by Saravanakumar Arumugam (sarumuga)

Comment 3 Vijay Bellur 2016-06-13 14:40:08 UTC
COMMIT: http://review.gluster.org/14711 committed in release-3.8 by Niels de Vos (ndevos) 
------
commit 1364929574c1af7784ac47088d2f7507ee0103e4
Author: Saravanakumar Arumugam <sarumuga>
Date:   Mon Jun 6 14:44:35 2016 +0530

    glusterd/geo-rep: Avoid started status check if same host
    
    After carrying out an add-brick, session creation is carried out
    again so that the new brick is involved in the session. This needs
    to be done even if the session is in the Started state.
    
    When the slave uuid is involved as part of a session, the user is
    warned if the session is in the Started state. This check needs to
    be skipped when the slave host is the same, so that session
    creation can proceed.
    
    > Change-Id: Ic73edd5bd9e3ee55da96f5aceec0bafa14d3f3dd
    > BUG: 1342979
    > Signed-off-by: Saravanakumar Arumugam <sarumuga>
    > Reviewed-on: http://review.gluster.org/14653
    > CentOS-regression: Gluster Build System <jenkins.com>
    > Smoke: Gluster Build System <jenkins.com>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > Reviewed-by: Aravinda VK <avishwan>
    (cherry picked from commit c62493efadbcf5085bbd65a409eed9391301c154)
    
    Change-Id: Ic73edd5bd9e3ee55da96f5aceec0bafa14d3f3dd
    BUG: 1344607
    Signed-off-by: Saravanakumar Arumugam <sarumuga>
    Reviewed-on: http://review.gluster.org/14711
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Niels de Vos <ndevos>
    Smoke: Gluster Build System <jenkins.com>
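
For context, the workflow this patch addresses can be sketched as below. This is an illustrative sketch only: the volume, host, and brick names are placeholders, not values taken from this bug.

```shell
# Sketch of the add-brick + geo-rep re-create workflow fixed by this patch.
# All volume/host/brick names are hypothetical placeholders.

# 1. Expand the master volume with a new brick:
gluster volume add-brick mastervol node3:/bricks/brick3/mastervol

# 2. Re-run session creation with push-pem so the new brick is included;
#    "force" is needed because the session already exists and is Started:
gluster volume geo-replication mastervol slavehost::slavevol \
    create push-pem force
```

Before this fix, step 2 was rejected by the Started-status check even when the slave host was unchanged; the patch skips that check for the same slave host so session creation can proceed.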

Comment 4 Niels de Vos 2016-06-16 12:34:04 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

