Bug 1651525 - Issuing a "heal ... full" on a disperse volume causes permanent high CPU utilization.
Summary: Issuing a "heal ... full" on a disperse volume causes permanent high CPU utilization.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: 5
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Ashish Pandey
QA Contact:
URL:
Whiteboard:
Depends On: 1636631
Blocks: 1644681
 
Reported: 2018-11-20 09:34 UTC by Ashish Pandey
Modified: 2019-03-27 13:40 UTC
CC List: 4 users

Fixed In Version: glusterfs-5.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1636631
Environment:
Last Closed: 2019-03-27 13:40:34 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:




Links
System ID Priority Status Summary Last Updated
Gluster.org Gerrit 21526 None None None 2018-11-20 09:34:37 UTC
Gluster.org Gerrit 21691 None Merged cluster/ec: prevent infinite loop in self-heal full 2018-11-29 15:32:30 UTC

Description Ashish Pandey 2018-11-20 09:34:38 UTC
+++ This bug was initially created as a clone of Bug #1636631 +++

Issuing a "heal ... full" on a disperse volume causes permanent
high CPU utilization. 

This occurs even when the volume is completely empty. The CPU usage
is not due to healing I/O activity.

This only happens on disperse volumes, not on replica volumes. 

It happens in GlusterFS version 3.12.14, but does not happen
in version 3.7.18.

The high CPU utilization comes from the 'glusterfs' SHD (self-heal
daemon) process and is easily noticed using 'top'.

The 'glustershd.log' file shows that the disperse volume full
sweep keeps restarting and running forever:

[2018-10-06 00:56:11.245106] I [MSGID: 122059] [ec-heald.c:415:ec_shd_full_healer] 0-disperse-vol-disperse-0: finished full sweep on subvol disperse-vol-client-0
The message "I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-0" repeated 2 times between [2018-10-06 00:56:11.243637] and [2018-10-06 00:56:11.246885]
[2018-10-06 00:56:11.247966] I [MSGID: 122059] [ec-heald.c:415:ec_shd_full_healer] 0-disperse-vol-disperse-0: finished full sweep on subvol disperse-vol-client-2
The message "I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-1" repeated 3 times between [2018-10-06 00:56:11.239731] and [2018-10-06 00:56:11.248470]
[2018-10-06 00:56:11.248553] I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-0
The message "I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-2" repeated 3 times between [2018-10-06 00:56:11.242392] and [2018-10-06 00:56:11.251262]
[2018-10-06 00:56:11.251330] I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-1
The message "I [MSGID: 122059] [ec-heald.c:415:ec_shd_full_healer] 0-disperse-vol-disperse-0: finished full sweep on subvol disperse-vol-client-2" repeated 2 times between [2018-10-06 00:56:11.247966] and [2018-10-06 00:56:11.253675]
[2018-10-06 00:56:11.253916] I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-2
The message "I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-0" repeated 5 times between [2018-10-06 00:56:11.248553] and [2018-10-06 00:56:11.256142]
[2018-10-06 00:56:11.256490] I [MSGID: 122059] [ec-heald.c:415:ec_shd_full_healer] 0-disperse-vol-disperse-0: finished full sweep on subvol disperse-vol-client-2
The message "I [MSGID: 122059] [ec-heald.c:415:ec_shd_full_healer] 0-disperse-vol-disperse-0: finished full sweep on subvol disperse-vol-client-0" repeated 8 times between [2018-10-06 00:56:11.245106] and [2018-10-06 00:56:11.257386]
[2018-10-06 00:56:11.257585] I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-0
[2018-10-06 00:56:11.258907] I [MSGID: 122059] [ec-heald.c:415:ec_shd_full_healer] 0-disperse-vol-disperse-0: finished full sweep on subvol disperse-vol-client-0
[2018-10-06 00:56:11.259098] I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-0
The message "I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-1" repeated 3 times between [2018-10-06 00:56:11.251330] and [2018-10-06 00:56:11.259751]
[2018-10-06 00:56:11.261599] I [MSGID: 122059] [ec-heald.c:415:ec_shd_full_healer] 0-disperse-vol-disperse-0: finished full sweep on subvol disperse-vol-client-0

The only way to reduce the high CPU utilization of the glusterfs SHD
process is to kill it and restart it. It then behaves fine
until the next "heal ... full" on the disperse volume.

--- Additional comment from Shyamsundar on 2018-10-23 10:54:18 EDT ---

Release 3.12 has been EOLd and this bug was still in the NEW state, hence moving the version to mainline so it can be triaged and appropriate action taken.

--- Additional comment from Xavi Hernandez on 2018-10-31 07:50:19 EDT ---

I've found the problem. Currently, when a directory is healed, a flag is set that forces the heal to be retried. This is necessary after a replace-brick because, after healing a directory, new entries to be healed may appear (the only bad entry right after a replace-brick is the root directory). In this case, a new iteration of the heal process immediately picks up those new entries and heals them, instead of going idle after completing a full sweep of the (previous) list of bad entries.

However, on a full self-heal this approach causes it to run endlessly. It first tries to heal the root directory, which succeeds. That success causes the flag to be set, even though no new entries have actually been added to be healed.
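
To illustrate the loop described above, here is a minimal, self-contained C sketch. It is not the actual ec-heald.c code: heal_entry(), full_sweep() and the 'rerun' flag are illustrative stand-ins. It only shows how a retry flag that is raised whenever any entry heals successfully turns a full sweep over an otherwise-empty volume into a busy loop:

/* Illustrative sketch only -- NOT the real ec-heald.c code. It mimics the
 * reported behaviour: the full-sweep healer raises a "rerun" flag whenever
 * any entry (including the root directory) heals successfully, so on an
 * empty volume the SHD keeps re-triggering full sweeps and never goes idle. */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for healing one entry; healing "/" always succeeds. */
static bool heal_entry(const char *path)
{
    printf("healing %s\n", path);
    return true;
}

/* Hypothetical stand-in for one full sweep over a brick. On an empty
 * volume the only entry visited is the root directory. */
static void full_sweep(bool *rerun)
{
    if (heal_entry("/")) {
        /* Bug: the flag is raised because *something* healed, even though
         * that heal did not add any new entries to process. */
        *rerun = true;
    }
}

int main(void)
{
    bool rerun = true;
    int sweeps = 0;

    /* The healer repeats while the rerun flag is raised, so this spins
     * forever on an idle volume (capped here so the demo terminates). */
    while (rerun && sweeps < 5) {
        rerun = false;
        printf("starting full sweep\n");
        full_sweep(&rerun);
        printf("finished full sweep\n");
        sweeps++;
    }
    /* A fix along the lines of the posted patch would raise the flag only
     * when the heal actually discovered new entries to be healed. */
    return 0;
}

The final comment in the sketch marks where the condition would conceptually have to change; the actual change is the one posted for review below ("cluster/ec: prevent infinite loop in self-heal full").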

--- Additional comment from Worker Ant on 2018-10-31 07:52:30 EDT ---

REVIEW: https://review.gluster.org/21526 (cluster/ec: prevent infinite loop in self-heal full) posted (#1) for review on master by Xavi Hernandez

--- Additional comment from Worker Ant on 2018-10-31 12:32:31 EDT ---

REVIEW: https://review.gluster.org/21526 (cluster/ec: prevent infinite loop in self-heal full) posted (#1) for review on master by Xavi Hernandez

Comment 1 Worker Ant 2018-11-20 09:37:23 UTC
REVIEW: https://review.gluster.org/21691 (cluster/ec: prevent infinite loop in self-heal full) posted (#1) for review on release-5 by Ashish Pandey

Comment 2 Worker Ant 2018-11-29 15:32:28 UTC
REVIEW: https://review.gluster.org/21691 (cluster/ec: prevent infinite loop in self-heal full) posted (#2) for review on release-5 by Shyamsundar Ranganathan

Comment 3 Shyamsundar 2019-03-27 13:40:34 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.2, please open a new bug report.

glusterfs-5.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-December/000117.html
[2] https://www.gluster.org/pipermail/gluster-users/

