Bug 1547662 - After a replace brick command, self-heal takes some time to start healing files on disperse volumes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Xavi Hernandez
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1555198 1555201 1555203 1555261
 
Reported: 2018-02-21 16:44 UTC by Xavi Hernandez
Modified: 2018-08-29 22:25 UTC (History)
2 users

Fixed In Version: glusterfs-v4.1.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1555198 1555201 1555203 1555261
Environment:
Last Closed: 2018-06-20 18:01:20 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Xavi Hernandez 2018-02-21 16:44:26 UTC
Description of problem:

After a replace brick operation, self-heal takes some time to start reconstructing files, and once it starts, it sometimes pauses for a while.

Version-Release number of selected component (if applicable): mainline


How reproducible:

always

Steps to Reproduce:
1. create a disperse volume
2. replace one brick
3. check new brick contents
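
The steps above can be sketched with the gluster CLI. This is an illustrative reproduction on a live cluster, not runnable standalone; the volume name `disp`, server names, and brick paths are placeholders:

```
# 1. Create and start a 2+1 disperse volume (names/paths are examples)
gluster volume create disp disperse 3 redundancy 1 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
gluster volume start disp

# 2. Replace one brick with a fresh one
gluster volume replace-brick disp server3:/bricks/b3 \
    server3:/bricks/b3_new commit force

# 3. Check the new brick's contents; it should fill as heal proceeds
ls -lR /bricks/b3_new
gluster volume heal disp info
```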

Actual results:

The new brick is not filled immediately. Self-heal can take some time to start copying files, and it sometimes pauses mid-heal.

Expected results:

Self-heal should trigger immediately after the brick is replaced and should not stop until all files have been healed.

Additional info:

Comment 1 Worker Ant 2018-02-21 17:04:29 UTC
REVIEW: https://review.gluster.org/19609 (cluster/ec: avoid delays in self-heal) posted (#1) for review on master by Xavier Hernandez

Comment 2 Worker Ant 2018-03-14 03:13:00 UTC
COMMIT: https://review.gluster.org/19609 committed in master by "Amar Tumballi" <amarts@redhat.com> with a commit message: cluster/ec: avoid delays in self-heal

Self-heal creates a thread per brick to sweep the index looking for
files that need to be healed. These threads are started before the
volume comes online, so nothing is done but waiting for the next
sweep. This happens once per minute.

When a replace brick command is executed, the new graph is loaded and
all index sweeper threads started. When all bricks have reported, a
getxattr request is sent to the root directory of the volume. This
causes a heal on it (because the new brick doesn't have good data),
and marks its contents as pending to be healed. This is done by the
index sweeper thread on the next round, one minute later.

This patch solves this problem by waking all index sweeper threads
after a successful check on the root directory.

Additionally, the index sweep thread scans the index directory
sequentially, but it might happen that after healing a directory entry
more index entries are created but skipped by the current directory
scan. This causes the remaining entries to be processed on the next
round, one minute later. The same can happen in the next round, so the heal runs in bursts and takes a long time to finish, especially on volumes with many directory levels.

This patch solves this problem by immediately restarting the index
sweep if a directory has been healed.

Change-Id: I58d9ab6ef17b30f704dc322e1d3d53b904e5f30e
BUG: 1547662
Signed-off-by: Xavi Hernandez <jahernan@redhat.com>

Comment 3 Shyamsundar 2018-06-20 18:01:20 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

