Bug 1636631 - Issuing a "heal ... full" on a disperse volume causes permanent high CPU utilization.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Xavi Hernandez
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1644681 1651525
 
Reported: 2018-10-06 01:04 UTC by Jeff Byers
Modified: 2019-03-27 13:40 UTC
CC List: 4 users

Fixed In Version: glusterfs-5.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1644681 1651525
Environment:
Last Closed: 2019-03-25 16:31:17 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
Gluster.org Gerrit 21526 (Merged): cluster/ec: prevent infinite loop in self-heal full (last updated 2018-10-31 16:32:33 UTC)

Description Jeff Byers 2018-10-06 01:04:10 UTC
Issuing a "heal ... full" on a disperse volume causes permanent
high CPU utilization. 

This occurs even when the volume is completely empty. The CPU usage
is not due to healing I/O activity.

This only happens on disperse volumes, not on replica volumes. 

It happens in GlusterFS version 3.12.14, but does not happen
in version 3.7.18.

The high CPU utilization comes from the 'glusterfs' SHD (self-heal
daemon) process and is easily noticed using 'top'.

The 'glustershd.log' file shows that the disperse volume full
sweep keeps restarting and running forever:

[2018-10-06 00:56:11.245106] I [MSGID: 122059] [ec-heald.c:415:ec_shd_full_healer] 0-disperse-vol-disperse-0: finished full sweep on subvol disperse-vol-client-0
The message "I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-0" repeated 2 times between [2018-10-06 00:56:11.243637] and [2018-10-06 00:56:11.246885]
[2018-10-06 00:56:11.247966] I [MSGID: 122059] [ec-heald.c:415:ec_shd_full_healer] 0-disperse-vol-disperse-0: finished full sweep on subvol disperse-vol-client-2
The message "I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-1" repeated 3 times between [2018-10-06 00:56:11.239731] and [2018-10-06 00:56:11.248470]
[2018-10-06 00:56:11.248553] I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-0
The message "I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-2" repeated 3 times between [2018-10-06 00:56:11.242392] and [2018-10-06 00:56:11.251262]
[2018-10-06 00:56:11.251330] I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-1
The message "I [MSGID: 122059] [ec-heald.c:415:ec_shd_full_healer] 0-disperse-vol-disperse-0: finished full sweep on subvol disperse-vol-client-2" repeated 2 times between [2018-10-06 00:56:11.247966] and [2018-10-06 00:56:11.253675]
[2018-10-06 00:56:11.253916] I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-2
The message "I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-0" repeated 5 times between [2018-10-06 00:56:11.248553] and [2018-10-06 00:56:11.256142]
[2018-10-06 00:56:11.256490] I [MSGID: 122059] [ec-heald.c:415:ec_shd_full_healer] 0-disperse-vol-disperse-0: finished full sweep on subvol disperse-vol-client-2
The message "I [MSGID: 122059] [ec-heald.c:415:ec_shd_full_healer] 0-disperse-vol-disperse-0: finished full sweep on subvol disperse-vol-client-0" repeated 8 times between [2018-10-06 00:56:11.245106] and [2018-10-06 00:56:11.257386]
[2018-10-06 00:56:11.257585] I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-0
[2018-10-06 00:56:11.258907] I [MSGID: 122059] [ec-heald.c:415:ec_shd_full_healer] 0-disperse-vol-disperse-0: finished full sweep on subvol disperse-vol-client-0
[2018-10-06 00:56:11.259098] I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-0
The message "I [MSGID: 122059] [ec-heald.c:406:ec_shd_full_healer] 0-disperse-vol-disperse-0: starting full sweep on subvol disperse-vol-client-1" repeated 3 times between [2018-10-06 00:56:11.251330] and [2018-10-06 00:56:11.259751]
[2018-10-06 00:56:11.261599] I [MSGID: 122059] [ec-heald.c:415:ec_shd_full_healer] 0-disperse-vol-disperse-0: finished full sweep on subvol disperse-vol-client-0

The only way to stop the high CPU utilization of the SHD
glusterfs process is to kill it and restart it. It then behaves
normally until the next "heal ... full" on the disperse volume.
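
To make the restart pattern in the log easier to follow, here is a minimal, self-contained C sketch of how a full-sweep healer thread of this kind typically works. This is not the actual ec-heald.c code; the healer struct, the rerun flag, and the full_sweep() helper are illustrative assumptions. The point is that the thread should only burn CPU while a sweep is in progress; if something keeps setting the rerun flag, the wait never blocks and the loop emits back-to-back "starting/finished full sweep" messages like the ones above.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

struct healer {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    bool            rerun;   /* another sweep has been requested */
};

/* Stand-in for the real sweep; on an empty volume it finds nothing. */
static void full_sweep(struct healer *h)
{
    (void)h;
}

static void *full_healer(void *data)
{
    struct healer *h = data;

    for (;;) {
        pthread_mutex_lock(&h->mutex);
        while (!h->rerun)                    /* normally idles here */
            pthread_cond_wait(&h->cond, &h->mutex);
        h->rerun = false;
        pthread_mutex_unlock(&h->mutex);

        printf("starting full sweep\n");
        full_sweep(h);                       /* if this requests a rerun, */
        printf("finished full sweep\n");     /* the loop restarts at once */
    }
    return NULL;
}

int main(void)
{
    struct healer h = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
                        false };
    pthread_t thread;

    pthread_create(&thread, NULL, full_healer, &h);

    /* Request a single sweep, as a "heal ... full" would. */
    pthread_mutex_lock(&h.mutex);
    h.rerun = true;
    pthread_cond_signal(&h.cond);
    pthread_mutex_unlock(&h.mutex);

    sleep(1);   /* one sweep runs, then the thread goes idle again */
    return 0;
}

In the broken case (explained in the comments below), the sweep itself ends up requesting another rerun every time, so the thread never reaches the idle wait again.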

Comment 1 Shyamsundar 2018-10-23 14:54:18 UTC
Release 3.12 has been EOLed and this bug was still in the NEW state, hence the version is being moved to mainline so that it can be triaged and appropriate action taken.

Comment 2 Xavi Hernandez 2018-10-31 11:50:19 UTC
I've found the problem. Currently, when a directory is healed, a flag is set that forces the heal to be retried. This is necessary after a replace-brick because new entries to be healed can appear after a directory has been healed (immediately after a replace-brick, the only bad entry is the root directory). With the flag set, a new iteration of the heal process immediately picks up those new entries and heals them, instead of going idle after completing a full sweep of the (previous) list of bad entries.

However, on a full self-heal this approach causes it to run endlessly. The full heal first tries to heal the root directory, which succeeds. That success sets the flag even though no new entries have actually been added to be healed, so another full sweep is started immediately.
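
To make this concrete, here is a small self-contained C sketch of the flag handling being described. The names (heal_directory, new_entries, the rerun flag) are illustrative assumptions, not the actual cluster/ec code; they only contrast the current behaviour with the described fix.

#include <stdbool.h>

struct healer_state {
    bool rerun;   /* the full-sweep loop runs again while this is set */
};

/* Heals one directory and reports how many new bad entries it uncovered.
 * On an empty, healthy volume the root directory heals successfully and
 * uncovers nothing. */
static int heal_directory(const char *path, int *new_entries)
{
    (void)path;
    *new_entries = 0;
    return 0;
}

/* Behaviour described as buggy: every successful directory heal requests a
 * rerun. Since a full sweep always heals "/" successfully, it always
 * schedules another full sweep, forever. */
static void after_dir_heal_buggy(struct healer_state *h, int ret,
                                 int new_entries)
{
    (void)new_entries;
    if (ret == 0)
        h->rerun = true;
}

/* Behaviour matching the described fix: only request a rerun when the heal
 * actually uncovered new entries that still need processing (the
 * replace-brick case), not on every successful heal. */
static void after_dir_heal_fixed(struct healer_state *h, int ret,
                                 int new_entries)
{
    if (ret == 0 && new_entries > 0)
        h->rerun = true;
}

With the second variant, a full heal of a healthy or empty volume never sets the flag, so the healer goes idle after one sweep instead of looping forever.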

Comment 3 Worker Ant 2018-10-31 11:52:30 UTC
REVIEW: https://review.gluster.org/21526 (cluster/ec: prevent infinite loop in self-heal full) posted (#1) for review on master by Xavi Hernandez

Comment 4 Worker Ant 2018-10-31 16:32:31 UTC
REVIEW: https://review.gluster.org/21526 (cluster/ec: prevent infinite loop in self-heal full) posted (#1) for review on master by Xavi Hernandez

Comment 5 Shyamsundar 2019-03-25 16:31:17 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 6 Shyamsundar 2019-03-27 13:40:34 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-5.2, please open a new bug report.

glusterfs-5.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-December/000117.html
[2] https://www.gluster.org/pipermail/gluster-users/

