Bug 1256245 - AFR: gluster v restart force or brick process restart doesn't heal the files
Summary: AFR: gluster v restart force or brick process restart doesn't heal the files
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.6.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On: 1239021 1253309
Blocks: 1223636 1255690 glusterfs-3.6.6
 
Reported: 2015-08-24 07:18 UTC by Ravishankar N
Modified: 2015-12-01 16:45 UTC

Fixed In Version: glusterfs-3.6.6
Doc Type: Bug Fix
Doc Text:
Clone Of: 1253309
Environment:
Last Closed: 2015-09-30 12:15:13 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ravishankar N 2015-08-24 07:18:23 UTC
+++ This bug was initially created as a clone of Bug #1253309 +++

Description of problem:

When one of the replica bricks is down and file operations are performed on the mount, a gluster volume restart or a brick process restart does not heal the files that need to be healed.

Version-Release number of selected component (if applicable):

glusterfs-3.7.1-7.el6rhs.x86_64


How reproducible:

100%

Steps to Reproduce:

1. Create a 2x2 distributed-replicate volume
2. FUSE-mount the volume
3. Create some files on the mount point
4. Kill one of the replica bricks
5. Rename the files from the mount point
6. Check gluster v heal <volname> info
7. Restart the volume or restart the brick process (see the reproduction sketch below)
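
A minimal reproduction sketch of the steps above. Hostnames (host1..host4), brick paths, the mount point and file names are placeholders, not the exact setup from this report:

    # On the storage nodes: create and start a 2x2 distributed-replicate volume
    gluster volume create vol0 replica 2 \
        host1:/rhs/brick1/b001 host2:/rhs/brick1/b002 \
        host3:/rhs/brick1/b003 host4:/rhs/brick1/b004
    gluster volume start vol0

    # On a client: FUSE-mount the volume and create some files
    mount -t glusterfs host1:/vol0 /mnt/vol0
    for i in $(seq 1 10); do touch /mnt/vol0/file$i; done

    # Kill one brick of a replica pair (gluster volume status shows brick PIDs)
    gluster volume status vol0
    kill -9 <brick-pid>

    # Rename the files while the brick is down, then list pending heals
    for i in $(seq 1 10); do mv /mnt/vol0/file$i /mnt/vol0/renamed$i; done
    gluster volume heal vol0 info

    # Restart the killed brick; the bug is that the pending entries are
    # not healed on this restart
    gluster volume start vol0 force
    gluster volume heal vol0 info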


Actual results:

Files are not healed


Expected results:

A volume restart or a brick process restart should heal the files that need to be healed.

Additional info:

Volume Name: vol0
Type: Distributed-Replicate
Volume ID: 53c64343-c537-428c-b7b7-a45f198c42a0
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.33.214:/rhs/brick1/b001
Brick2: 10.70.33.219:/rhs/brick1/b002
Brick3: 10.70.33.225:/rhs/brick1/b003
Brick4: 10.70.44.13:/rhs/brick1/b004
Options Reconfigured:
performance.readdir-ahead: on
features.uss: enable
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on
server.allow-insecure: on
features.barrier: disable
cluster.enable-shared-storage: enable
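
For reference, a volume with this layout would be created along the following lines (an illustrative sketch reconstructed from the bricks and options above, not the reporter's actual commands):

    gluster volume create vol0 replica 2 \
        10.70.33.214:/rhs/brick1/b001 10.70.33.219:/rhs/brick1/b002 \
        10.70.33.225:/rhs/brick1/b003 10.70.44.13:/rhs/brick1/b004
    gluster volume start vol0
    gluster volume quota vol0 enable              # corresponds to features.quota: on
    gluster volume set vol0 features.uss enable
    gluster volume set vol0 server.allow-insecure on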


--- Additional comment from Ravishankar N on 2015-07-03 05:45:57 EDT ---

Currently in AFR-v2, when a CHILD_UP notification is received, the index heal is triggered only on the child that came up. This does not help here, because the list of files that need to be healed to that child is actually captured on the other child of the replica. The fix is to trigger the index heal on all local children.

While this is a bug, it is not a blocker, because the files will eventually be healed within 10 minutes (the default heal timeout) or when a heal is explicitly launched via the gluster CLI.
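
For reference, both recovery paths are standard gluster CLI invocations (shown as a sketch with <volname> as a placeholder):

    # Launch the index heal explicitly instead of waiting for the timer
    gluster volume heal <volname>

    # The 10-minute default corresponds to cluster.heal-timeout (in seconds)
    # and can be tuned if needed
    gluster volume set <volname> cluster.heal-timeout 600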

--- Additional comment from Ravishankar N on 2015-08-13 09:15:30 EDT ---

http://review.gluster.org/#/c/11912/

--- Additional comment from Anand Avati on 2015-08-14 05:14:59 EDT ---

REVIEW: http://review.gluster.org/11912 (afr: launch index heal on local subvols up on a child-up event) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

--- Additional comment from Anand Avati on 2015-08-14 05:44:03 EDT ---

REVIEW: http://review.gluster.org/11912 (afr: launch index heal on local subvols up on a child-up event) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)

--- Additional comment from Anand Avati on 2015-08-21 01:29:18 EDT ---

REVIEW: http://review.gluster.org/11912 (afr: launch index heal on local subvols up on a child-up event) posted (#4) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Anand Avati on 2015-08-21 06:50:18 EDT ---

COMMIT: http://review.gluster.org/11912 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit e4cefd6c5915dd47c6b42098236df3901665f93a
Author: Ravishankar N <ravishankar>
Date:   Thu Aug 13 18:33:08 2015 +0530

    afr: launch index heal on local subvols up on a child-up event
    
    Problem:
    When a replica's child goes down and comes up, the index heal is
    triggered only on the child that just came up. This does not serve the
    intended purpose as the list of files that need to be healed
    to this child is actually captured on the other child of the replica.
    
    Fix:
    Launch index-heal on all local children of the replica xlator which just
    received a child up. Note that afr_selfheal_childup() eventually calls
    afr_shd_index_healer() which will not run the heal on non-local
    children.
    
    Signed-off-by: Ravishankar N <ravishankar>
    
    Change-Id: Ia23e47d197f983c695ec0bcd283e74931119ee55
    BUG: 1253309
    Reviewed-on: http://review.gluster.org/11912
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krutika Dhananjay <kdhananj>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>

Comment 1 Anand Avati 2015-08-24 07:24:26 UTC
REVIEW: http://review.gluster.org/11994 (afr: launch index heal on local subvols up on a child-up event) posted (#1) for review on release-3.6 by Ravishankar N (ravishankar)

Comment 2 Anand Avati 2015-08-27 07:05:39 UTC
REVIEW: http://review.gluster.org/11994 (afr: launch index heal on local subvols up on a child-up event) posted (#2) for review on release-3.6 by Ravishankar N (ravishankar)

Comment 3 Raghavendra Bhat 2015-09-30 12:15:13 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.6.6, please open a new bug report.

glusterfs-3.6.6 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-devel/2015-September/046821.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

