Bug 1555203 - After a replace brick command, self-heal takes some time to start healing files on disperse volumes
Summary: After a replace brick command, self-heal takes some time to start healing files on disperse volumes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: 3.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Xavi Hernandez
QA Contact:
URL:
Whiteboard:
Depends On: 1547662
Blocks:
 
Reported: 2018-03-14 07:10 UTC by Xavi Hernandez
Modified: 2018-05-07 15:05 UTC (History)
CC List: 1 user

Fixed In Version: glusterfs-3.10.12
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1547662
Environment:
Last Closed: 2018-05-07 15:05:04 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Description Xavi Hernandez 2018-03-14 07:10:58 UTC
+++ This bug was initially created as a clone of Bug #1547662 +++

Description of problem:

After a replace brick operation, self-heal takes some time to start reconstructing files, and once it starts, it sometimes pauses for a while.

Version-Release number of selected component (if applicable): mainline


How reproducible:

always

Steps to Reproduce:
1. create a disperse volume
2. replace one brick
3. check new brick contents
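
For reference, a possible reproduction using the gluster CLI (the volume name, hostnames and brick paths below are only examples):

    # create and start a 2+1 disperse volume, then put some data on it
    gluster volume create testvol disperse 3 redundancy 1 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 force
    gluster volume start testvol
    mount -t glusterfs server1:/testvol /mnt/testvol
    mkdir -p /mnt/testvol/dir{1..10} && touch /mnt/testvol/dir{1..10}/file{1..50}

    # replace one brick, then watch the heal progress and the new brick
    gluster volume replace-brick testvol server3:/bricks/b1 \
        server3:/bricks/b2 commit force
    gluster volume heal testvol info
    ls -R /bricks/b2      # on server3: files should start appearing promptly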

Actual results:

The new brick is not filled immediately. Self-heal can take some time to start copying files, and it may pause intermittently while doing so.

Expected results:

Self-heal should trigger immediately after the brick is replaced and should not stop until all files have been healed.

Additional info:

Comment 1 Worker Ant 2018-03-14 07:13:00 UTC
REVIEW: https://review.gluster.org/19719 (cluster/ec: avoid delays in self-heal) posted (#1) for review on release-3.10 by Xavi Hernandez

Comment 2 Worker Ant 2018-03-16 13:36:16 UTC
COMMIT: https://review.gluster.org/19719 committed in release-3.10 by "Xavi Hernandez" <xhernandez> with a commit message- cluster/ec: avoid delays in self-heal

Self-heal creates a thread per brick that sweeps the index looking for
files that need to be healed. These threads are started before the
volume comes online, so they do nothing but wait for the next sweep,
which happens once per minute.

When a replace brick command is executed, the new graph is loaded and
all index sweeper threads are started. When all bricks have reported, a
getxattr request is sent to the root directory of the volume. This
causes a heal on it (because the new brick doesn't have good data)
and marks its contents as pending to be healed. Those pending entries
are only processed by the index sweeper thread on its next round, one
minute later.

This patch solves this problem by waking all index sweeper threads
after a successful check on the root directory.
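
For illustration only, a minimal sketch of the wake-up pattern just described, using plain POSIX threads; all names here (shd_index_sweeper, sweep_index, and so on) are hypothetical and are not the actual cluster/ec symbols:

    /* Hypothetical sketch: each per-brick sweeper sleeps up to 60 seconds
     * between index scans, but can be woken early once the check on the
     * root directory succeeds. Not the actual GlusterFS code. */
    #include <pthread.h>
    #include <time.h>

    extern void sweep_index(void *brick);   /* hypothetical: heal pending entries */

    static pthread_mutex_t shd_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  shd_wake = PTHREAD_COND_INITIALIZER;
    static int shd_wake_requested;

    static void *shd_index_sweeper(void *brick)
    {
        for (;;) {
            sweep_index(brick);

            struct timespec deadline;
            clock_gettime(CLOCK_REALTIME, &deadline);
            deadline.tv_sec += 60;           /* default: one sweep per minute */

            pthread_mutex_lock(&shd_lock);
            while (!shd_wake_requested &&
                   pthread_cond_timedwait(&shd_wake, &shd_lock, &deadline) == 0)
                ;                            /* loop on spurious wake-ups */
            shd_wake_requested = 0;
            pthread_mutex_unlock(&shd_lock);
        }
        return NULL;
    }

    /* Called after the getxattr on the root directory has succeeded:
     * wake every sweeper instead of letting it sleep out the minute. */
    static void shd_wake_all_sweepers(void)
    {
        pthread_mutex_lock(&shd_lock);
        shd_wake_requested = 1;
        pthread_cond_broadcast(&shd_wake);
        pthread_mutex_unlock(&shd_lock);
    }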

Additionally, the index sweep thread scans the index directory
sequentially, and healing a directory entry can create more index
entries that the current scan has already passed over. Those remaining
entries are only processed on the next round, one minute later, and the
same can happen again in that round, so the heal runs in bursts and
takes a long time to finish, especially on volumes with many directory
levels.

This patch solves this problem by immediately restarting the index
sweep if a directory has been healed.
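
Again purely as an illustration of this restart logic, a small sketch with hypothetical helpers (index_scan_begin, index_next_entry, heal_entry and index_scan_end are placeholders, not the real index/heal API):

    /* Hypothetical sketch: healing a directory adds new index entries that a
     * sequential scan may already have passed, so restart the scan instead of
     * leaving them for the next one-minute round. Not the actual GlusterFS code. */
    #include <stdbool.h>
    #include <stddef.h>

    extern void *index_scan_begin(void);                                /* hypothetical */
    extern bool  index_next_entry(void *scan, char *gfid, size_t len);  /* hypothetical */
    extern void  index_scan_end(void *scan);                            /* hypothetical */
    extern bool  heal_entry(const char *gfid, bool *is_directory);      /* hypothetical */

    static void sweep_index_once(void)
    {
        bool rescan;

        do {
            rescan = false;
            void *scan = index_scan_begin();
            char gfid[64];

            while (index_next_entry(scan, gfid, sizeof(gfid))) {
                bool is_dir = false;

                if (heal_entry(gfid, &is_dir) && is_dir) {
                    /* Healing a directory marks its children as pending,
                     * creating entries this scan may have skipped. */
                    rescan = true;
                }
            }
            index_scan_end(scan);
        } while (rescan);   /* restart immediately instead of waiting a minute */
    }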

Backport of:
> BUG: 1547662

Change-Id: I58d9ab6ef17b30f704dc322e1d3d53b904e5f30e
BUG: 1555203
Signed-off-by: Xavi Hernandez <jahernan>

Comment 3 Shyamsundar 2018-05-07 15:05:04 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.12, please open a new bug report.

glusterfs-3.10.12 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-April/000095.html
[2] https://www.gluster.org/pipermail/gluster-users/

