Bug 1555201

Summary: After a replace brick command, self-heal takes some time to start healing files on disperse volumes
Product: [Community] GlusterFS
Reporter: Xavi Hernandez <jahernan>
Component: disperse
Assignee: Xavi Hernandez <jahernan>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 3.12
CC: bugs
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.12.8
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1547662
Environment:
Last Closed: 2018-04-24 06:53:38 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1547662
Bug Blocks:

Description Xavi Hernandez 2018-03-14 07:07:21 UTC
+++ This bug was initially created as a clone of Bug #1547662 +++

Description of problem:

After a replace brick command, self-heal takes some time to start reconstructing files, and once it starts it sometimes pauses for a while.

Version-Release number of selected component (if applicable): mainline


How reproducible:

always

Steps to Reproduce:
1. create a disperse volume
2. replace one brick
3. check new brick contents

Actual results:

The new brick is not populated immediately. It can take some time before files start being healed onto it, and the heal may pause intermittently.

Expected results:

Self-heal should start immediately after the brick is replaced and not stop until all files have been healed.

Additional info:

Comment 1 Worker Ant 2018-03-14 07:10:09 UTC
REVIEW: https://review.gluster.org/19718 (cluster/ec: avoid delays in self-heal) posted (#1) for review on release-3.12 by Xavi Hernandez

Comment 2 Worker Ant 2018-04-06 12:49:06 UTC
COMMIT: https://review.gluster.org/19718 committed in release-3.12 by "jiffin tony Thottan" <jthottan> with a commit message- cluster/ec: avoid delays in self-heal

Self-heal creates a thread per brick to sweep the index looking for
files that need to be healed. These threads are started before the
volume comes online, so initially they do nothing but wait for the
next sweep, which happens once per minute.

When a replace brick command is executed, the new graph is loaded and
all index sweeper threads are started. When all bricks have reported,
a getxattr request is sent to the root directory of the volume. This
causes a heal on it (because the new brick doesn't have good data)
and marks its contents as pending to be healed. Those pending entries
are only processed by the index sweeper thread on its next round, one
minute later.

This patch solves this problem by waking all index sweeper threads
after a successful check on the root directory.
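
As a rough illustration of this first change, the sketch below shows a
per-brick sweeper loop that normally wakes once per minute, plus a
wake-up call that the root-directory check could use to start a sweep
immediately. All names (sweeper_t, index_sweep, wake_all_sweepers) and
the plain pthread structure are assumptions made for this sketch, not
the actual GlusterFS ec/self-heal code.

    /* Hypothetical sketch, not GlusterFS source: one sweeper per brick,
     * a 60-second fallback timeout, and an explicit wake-up. */

    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    typedef struct sweeper {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        bool            wake;      /* set when an immediate sweep is requested */
        bool            running;   /* cleared on shutdown */
    } sweeper_t;

    /* Placeholder for the per-brick index scan (see the next sketch). */
    static void index_sweep(sweeper_t *s)
    {
        (void)s;
    }

    static void *sweeper_thread(void *arg)
    {
        sweeper_t *s = arg;

        pthread_mutex_lock(&s->lock);
        while (s->running) {
            struct timespec deadline;
            clock_gettime(CLOCK_REALTIME, &deadline);
            deadline.tv_sec += 60;            /* default: one sweep per minute */

            /* Sleep until woken explicitly or until the timeout expires. */
            while (!s->wake && s->running) {
                if (pthread_cond_timedwait(&s->cond, &s->lock, &deadline) != 0)
                    break;                    /* timed out: do the periodic sweep */
            }
            s->wake = false;

            pthread_mutex_unlock(&s->lock);
            index_sweep(s);                   /* scan the index and heal entries */
            pthread_mutex_lock(&s->lock);
        }
        pthread_mutex_unlock(&s->lock);
        return NULL;
    }

    /* Called once the check/heal of the root directory succeeds: every
     * brick's sweeper starts a sweep now instead of up to a minute later. */
    static void wake_all_sweepers(sweeper_t *sweepers, int count)
    {
        for (int i = 0; i < count; i++) {
            pthread_mutex_lock(&sweepers[i].lock);
            sweepers[i].wake = true;
            pthread_cond_signal(&sweepers[i].cond);
            pthread_mutex_unlock(&sweepers[i].lock);
        }
    }

The timed wait keeps the existing once-per-minute cadence as a
fallback, while the explicit signal removes the initial delay after a
replace brick.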

Additionally, the index sweeper thread scans the index directory
sequentially, and it can happen that healing a directory entry creates
more index entries that the current scan has already passed over and
therefore skips. Those remaining entries are only processed on the
next round, one minute later, and the same can happen again on that
round, so the heal runs in bursts and takes a long time to finish,
especially on volumes with many directory levels.

This patch solves this problem by immediately restarting the index
sweep if a directory has been healed.
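
The second change can be pictured as the scan loop below: if any entry
healed during the current pass was a directory (which may have queued
new index entries behind the scan position), the whole scan restarts
immediately instead of waiting for the next one-minute round. Again,
the function names and the plain readdir() loop are illustrative
assumptions, not the real index sweeper implementation.

    /* Hypothetical sketch, not GlusterFS source: restart the index scan
     * whenever a directory was healed during the current pass. */

    #include <dirent.h>
    #include <stdbool.h>

    /* Placeholder: heal the file identified by one index entry and report
     * whether it was a directory (whose children may now be in the index). */
    static bool heal_index_entry(const char *index_path, const char *name)
    {
        (void)index_path;
        (void)name;
        return false;
    }

    static void sweep_index_dir(const char *index_path)
    {
        bool rescan;

        do {
            rescan = false;

            DIR *dir = opendir(index_path);
            if (dir == NULL)
                return;

            struct dirent *entry;
            while ((entry = readdir(dir)) != NULL) {
                if (entry->d_name[0] == '.')
                    continue;                 /* skip ".", ".." and hidden names */
                if (heal_index_entry(index_path, entry->d_name))
                    rescan = true;            /* a directory was healed */
            }
            closedir(dir);
        } while (rescan);   /* sweep again right away: entries added behind the
                               current readdir position are picked up now instead
                               of on the next one-minute round */
    }

Restarting the whole scan is simpler than tracking exactly which
entries were added, and the loop terminates once a full pass completes
without healing any directory.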

Backport of:
> BUG: 1547662

Change-Id: I58d9ab6ef17b30f704dc322e1d3d53b904e5f30e
BUG: 1555201
Signed-off-by: Xavi Hernandez <jahernan>

Comment 3 Jiffin 2018-04-24 06:53:38 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.8, please open a new bug report.

glusterfs-3.12.8 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-devel/2018-April/054749.html
[2] https://www.gluster.org/pipermail/gluster-users/