Bug 1402841 - Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
Summary: Files remain unhealed forever if shd is disabled and re-enabled while healing is in progress.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1403120 1403187 1403192
 
Reported: 2016-12-08 12:52 UTC by Ravishankar N
Modified: 2017-03-06 17:39 UTC (History)
CC List: 1 user

Fixed In Version: glusterfs-3.10.0
Clone Of:
Clones: 1403120 1403187 1403192
Environment:
Last Closed: 2017-03-06 17:39:10 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ravishankar N 2016-12-08 12:52:46 UTC
Description of problem:

1. Create a 1x2 replica volume using a 2-node cluster.
2. Fuse-mount the volume and create 2000 files.
3. Bring one brick down and write to those files, leading to 2000 pending data heals.
4. Bring the brick back up and launch index heal.
5. The shd log on the source brick prints completed heals for the processed files.
6. Before the heal completes, run `gluster vol set volname self-heal-daemon off`.
7. The heal stops as expected.
8. Re-enable the shd: `gluster vol set volname self-heal-daemon on`.
9. Observe the shd log: no files are getting healed.
10. Launching index heal manually also has no effect.

The only workaround is to restart shd with a `volume start force`.

Comment 1 Worker Ant 2016-12-08 12:55:33 UTC
REVIEW: http://review.gluster.org/16073 (syncop: fix conditional wait bug in parallel dir scan) posted (#1) for review on master by Ravishankar N (ravishankar)

Comment 2 Worker Ant 2016-12-09 05:27:13 UTC
REVIEW: http://review.gluster.org/16073 (syncop: fix conditional wait bug in parallel dir scan) posted (#2) for review on master by Ravishankar N (ravishankar)

Comment 3 Worker Ant 2016-12-09 10:24:25 UTC
COMMIT: http://review.gluster.org/16073 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit 2d012c4558046afd6adb3992ff88f937c5f835e4
Author: Ravishankar N <ravishankar>
Date:   Fri Dec 9 09:50:43 2016 +0530

    syncop: fix conditional wait bug in parallel dir scan
    
    Problem:
    The issue as seen by the user is detailed in the BZ, but what is
    happening is this: if the number of items in the wait queue == max-qlen,
    syncop_mt_dir_scan() does a pthread_cond_wait until the launched
    synctask workers dequeue the queue. But if a worker fails for some
    reason, the queue is never emptied, so further invocations of
    syncop_mt_dir_scan() are blocked forever.
    
    Fix: Made some changes to _dir_scan_job_fn
    
    - If a worker encounters an error while processing an entry, notify the
      readdir loop in syncop_mt_dir_scan() of the error but continue to process
      other entries in the queue, decrementing the qlen as and when we dequeue
      elements, and ending only when the queue is empty.
    
    - If the readdir loop in syncop_mt_dir_scan() gets an error from the
      worker, stop the readdir + queueing of further entries.
    
    Change-Id: I39ce073e01a68c7ff18a0e9227389245a6f75b88
    BUG: 1402841
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/16073
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
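
For illustration, here is a minimal, self-contained C sketch of the conditional-wait pattern the commit describes. It is not the actual syncop.c code; scan_queue, scanner_thread, worker_thread and process_entry are made-up names. The scanner blocks on a condition variable once the queue holds MAX_QLEN entries, so a worker that hits an error must record it and keep dequeuing and signalling rather than bailing out, while the scanner stops enqueuing further entries once an error has been reported.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_QLEN  4   /* analogous to max-qlen in the commit message */
#define N_ENTRIES 16  /* simulated directory entries */

struct scan_queue {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        int             qlen;   /* entries currently queued */
        int             error;  /* set by a worker on failure */
        bool            done;   /* scanner finished enqueuing */
};

/* Simulate healing an entry; every fourth one fails. */
static int
process_entry(int entry)
{
        return (entry % 4 == 3) ? -1 : 0;
}

static void *
worker_thread(void *arg)
{
        struct scan_queue *q = arg;
        int entry = 0, ret;

        pthread_mutex_lock(&q->lock);
        while (!q->done || q->qlen > 0) {
                if (q->qlen == 0) {
                        pthread_cond_wait(&q->cond, &q->lock);
                        continue;
                }
                q->qlen--;                      /* dequeue one entry */
                pthread_mutex_unlock(&q->lock);
                ret = process_entry(entry++);
                pthread_mutex_lock(&q->lock);
                if (ret < 0)
                        q->error = -1;          /* report, but keep draining */
                pthread_cond_signal(&q->cond);  /* wake a scanner waiting on a full queue */
        }
        pthread_mutex_unlock(&q->lock);
        return NULL;
}

static void *
scanner_thread(void *arg)
{
        struct scan_queue *q = arg;
        int i;

        for (i = 0; i < N_ENTRIES; i++) {
                pthread_mutex_lock(&q->lock);
                if (q->error) {                 /* stop queueing further entries */
                        pthread_mutex_unlock(&q->lock);
                        break;
                }
                while (q->qlen == MAX_QLEN)     /* queue full: wait for a worker */
                        pthread_cond_wait(&q->cond, &q->lock);
                q->qlen++;                      /* enqueue one entry */
                pthread_cond_signal(&q->cond);
                pthread_mutex_unlock(&q->lock);
        }

        pthread_mutex_lock(&q->lock);
        q->done = true;                         /* no more entries coming */
        pthread_cond_broadcast(&q->cond);
        pthread_mutex_unlock(&q->lock);
        return NULL;
}

int
main(void)
{
        struct scan_queue q = {
                .lock = PTHREAD_MUTEX_INITIALIZER,
                .cond = PTHREAD_COND_INITIALIZER,
        };
        pthread_t scanner, worker;

        pthread_create(&worker, NULL, worker_thread, &q);
        pthread_create(&scanner, NULL, scanner_thread, &q);
        pthread_join(scanner, NULL);
        pthread_join(worker, NULL);
        printf("scan finished, worker error = %d\n", q.error);
        return 0;
}

Built with `gcc -pthread`, the sketch should run to completion even though process_entry() fails for some entries. If the worker instead returned on the first error without draining the queue, the scanner could block forever in pthread_cond_wait(), which is the analogue of the hang reported in this bug.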

Comment 4 Shyamsundar 2017-03-06 17:39:10 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/

