+++ This bug was initially created as a clone of Bug #1547662 +++

Description of problem:
After a replace brick, self-heal takes some time to start reconstructing the files, and once it starts, it sometimes pauses for a while.

Version-Release number of selected component (if applicable): mainline

How reproducible: always

Steps to Reproduce:
1. Create a disperse volume
2. Replace one brick
3. Check the contents of the new brick

Actual results:
The new brick is not filled immediately. It can take some time before files start being reconstructed, and the process may pause intermittently.

Expected results:
Self-heal should trigger immediately after replacing the brick and should not stop until all files have been healed.

Additional info:
REVIEW: https://review.gluster.org/19719 (cluster/ec: avoid delays in self-heal) posted (#1) for review on release-3.10 by Xavi Hernandez
COMMIT: https://review.gluster.org/19719 committed in release-3.10 by "Xavi Hernandez" <xhernandez> with a commit message:

cluster/ec: avoid delays in self-heal

Self-heal creates a thread per brick to sweep the index looking for files that need to be healed. These threads are started before the volume comes online, so nothing is done but waiting for the next sweep. This happens once per minute.

When a replace brick command is executed, the new graph is loaded and all index sweeper threads are started. When all bricks have reported, a getxattr request is sent to the root directory of the volume. This causes a heal on it (because the new brick doesn't have good data) and marks its contents as pending to be healed. That healing is done by the index sweeper thread on the next round, one minute later.

This patch solves this problem by waking all index sweeper threads after a successful check on the root directory.

Additionally, the index sweep thread scans the index directory sequentially, but it might happen that after healing a directory entry more index entries are created but skipped by the current directory scan. This causes the remaining entries to be processed on the next round, one minute later. The same can happen in the next round, so the heal runs in bursts and takes a long time to finish, especially on volumes with many directory levels.

This patch solves this problem by immediately restarting the index sweep if a directory has been healed.

Backport of:
> BUG: 1547662

Change-Id: I58d9ab6ef17b30f704dc322e1d3d53b904e5f30e
BUG: 1555203
Signed-off-by: Xavi Hernandez <jahernan>
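For illustration, the following is a minimal, self-contained C sketch of the two behaviours the commit message describes: an index sweeper thread that normally sleeps up to a minute between rounds but can be woken early (as after the successful check on the volume root), and a sweep loop that restarts immediately when a directory was healed instead of leaving newly queued entries for the next round. All names here (index_sweeper, sweep_index_once, wake_all_sweepers) are hypothetical and simplified; this is not the actual Gluster self-heal daemon code.

```c
/* Hypothetical sketch, not the real Gluster shd/ec implementation:
 * one sweeper per brick, 60s rounds, early wake-up and immediate
 * restart when healing a directory may have produced new entries. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wake = PTHREAD_COND_INITIALIZER;
static bool wake_requested = false;

/* Stand-in for one pass over the brick's heal index. Returns true if a
 * directory entry was healed, which may have queued new index entries. */
static bool sweep_index_once(int brick)
{
    printf("brick %d: sweeping index\n", brick);
    /* ... heal pending entries here ... */
    return false; /* pretend nothing new was produced */
}

static void *index_sweeper(void *arg)
{
    int brick = *(int *)arg;

    for (;;) {
        /* Keep sweeping while healed directories may have added new
         * entries, instead of deferring them to the next round. */
        while (sweep_index_once(brick))
            ;

        /* Wait for the next round (60s) or an explicit wake-up. */
        pthread_mutex_lock(&lock);
        if (!wake_requested) {
            struct timespec ts;
            clock_gettime(CLOCK_REALTIME, &ts);
            ts.tv_sec += 60;
            pthread_cond_timedwait(&wake, &lock, &ts);
        }
        wake_requested = false;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Called after the check on the volume root succeeds (e.g. right after a
 * replace-brick), so sweepers do not sit idle until the next round. */
static void wake_all_sweepers(void)
{
    pthread_mutex_lock(&lock);
    wake_requested = true;
    pthread_cond_broadcast(&wake);
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    pthread_t thr;
    int brick = 0;

    pthread_create(&thr, NULL, index_sweeper, &brick);
    sleep(1);
    wake_all_sweepers(); /* simulate the post-replace-brick root check */
    sleep(2);
    return 0;
}
```

Compiled with `cc -pthread`, the sweeper performs its first pass, then starts an immediate second pass when woken, rather than waiting out the remainder of the 60-second round.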
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.12, please open a new bug report.

glusterfs-3.10.12 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-April/000095.html
[2] https://www.gluster.org/pipermail/gluster-users/