Bug 1305755 - Start self-heal and display correct heal info after replace brick
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: disperse
Version: 3.7.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Ashish Pandey
Keywords: ZStream
Depends On: 1254121 1278284 1304686
Blocks: 1258313
Reported: 2016-02-09 02:25 EST by Ashish Pandey
Modified: 2016-04-19 03:25 EDT
CC: 6 users

Fixed In Version: glusterfs-3.7.9
Doc Type: Bug Fix
Clone Of: 1304686
Last Closed: 2016-04-19 03:25:49 EDT
Type: Bug


Comment 1 Vijay Bellur 2016-02-09 02:41:44 EST
REVIEW: http://review.gluster.org/13403 (cluster/ec: Automate heal for replace brick) posted (#1) for review on release-3.7 by Ashish Pandey (aspandey@redhat.com)
Comment 2 Vijay Bellur 2016-02-10 03:31:16 EST
COMMIT: http://review.gluster.org/13403 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu@redhat.com) 
------
commit 68c97f53561da413c80e6e22d364d00cfb3c8196
Author: Ashish Pandey <aspandey@redhat.com>
Date:   Thu Feb 4 12:07:36 2016 +0530

    cluster/ec: Automate heal for replace brick
    
    Problem:
    After a replace-brick command, the newly
    added brick does not contain the data that
    existed on the old brick.
    
    Solution:
    Perform a getxattr after all the bricks have
    been initialized. This triggers a heal of the
    brick root as soon as a version mismatch is
    detected on the newly added brick (a sketch
    of this check follows this comment).
    
    Also remove the tests from ec-new-entry.t
    that were needed to simulate automatic heal
    after a replace brick.
    
    master -
    http://review.gluster.org/#/c/13353/
    
    Change-Id: I08e3dfa565374097f6c08856325ea77727437e11
    BUG: 1305755
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
    Reviewed-on: http://review.gluster.org/13353
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Smoke: Gluster Build System <jenkins@build.gluster.com>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.com>
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
    Reviewed-on: http://review.gluster.org/13403
    Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
    Tested-by: Xavier Hernandez <xhernandez@datalab.es>
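
For illustration, the check behind this fix can be sketched in plain C: compare the trusted.ec.version extended attribute on the brick roots, since a mismatch on the newly added brick is what triggers the root heal. This is a minimal standalone sketch under stated assumptions, not the actual cluster/ec translator code; the brick paths are hypothetical, and in the real translator the comparison happens on the lookup/getxattr replies from each subvolume.

    /* Minimal sketch: detect an EC version mismatch between two brick
     * roots, the condition that makes the disperse translator heal the
     * brick root after a replace brick. Hypothetical paths; Linux only. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    #define EC_VERSION_XATTR "trusted.ec.version"

    /* Read the EC version xattr from a brick root; returns length or -1. */
    static ssize_t read_ec_version(const char *brick, char *buf, size_t len)
    {
        return getxattr(brick, EC_VERSION_XATTR, buf, len);
    }

    int main(void)
    {
        char old_ver[64], new_ver[64];
        ssize_t a = read_ec_version("/bricks/brick0", old_ver, sizeof(old_ver));
        ssize_t b = read_ec_version("/bricks/brick1", new_ver, sizeof(new_ver));

        if (a < 0 || b < 0) {
            perror("getxattr");
            return 1;
        }

        /* A differing version on the new brick means its root is stale. */
        if (a != b || memcmp(old_ver, new_ver, (size_t)a) != 0)
            printf("trusted.ec.version mismatch: root heal would be triggered\n");
        else
            printf("versions match: nothing to heal\n");

        return 0;
    }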
Comment 3 Mike McCune 2016-03-28 18:17:27 EDT
This bug was accidentally moved from POST to MODIFIED by an error in automation; please contact mmccune@redhat.com with any questions.
Comment 4 Kaushal 2016-04-19 03:25:49 EDT
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed in glusterfs-3.7.9, please open a new bug report.

glusterfs-3.7.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-March/025922.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
