Bug 1870660
| Summary: | High CPU load caused by indexW kernel threads while vdo volume is unused | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Andy Walsh <awalsh> |
| Component: | kmod-kvdo | Assignee: | Ken Raeburn <raeburn> |
| Status: | CLOSED ERRATA | QA Contact: | Filip Suba <fsuba> |
| Severity: | unspecified | Docs Contact: | Marek Suchánek <msuchane> |
| Priority: | unspecified | | |
| Version: | 8.3 | CC: | awalsh, bgurney, corwin, fsuba, nikolay, zinchukpavlo |
| Target Milestone: | rc | Flags: | pm-rhel: mirror+ |
| Target Release: | 8.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | 6.2.4.11 | Doc Type: | Bug Fix |
| Doc Text: | Suggested text: In certain cases, the index kernel threads for a VDO volume would use a high amount of CPU time while idle. The behavior of the index threads has been adjusted to reduce CPU usage while idle. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-05-18 14:39:44 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Andy Walsh
2020-08-20 14:27:48 UTC
Thanks for the patches. I have compiled 6.2.4.14 and the indexW thread has gone quiet, so this seems to have been resolved. However, now that I look at the number of context switches, starting VDO volumes that are not in use on an idle system increases the context-switch rate by around 8,000 per second:

- VDO devices not started: 90 context switches/second
- VDO devices started but unused (2 devices): 8,000 context switches/second

The threads that consume the CPU seem to be:

[kvdo2:packerQ] [kvdo2:ackQ] [kvdo2:cpuQ0] [kvdo2:cpuQ1] [kvdo3:dedupeQ] [kvdo3:journalQ] [kvdo3:hashQ0] [kvdo3:bioQ0] [kvdo3:bioQ1] [kvdo3:bioQ2] [kvdo3:bioQ3] [kvdo3:cpuQ0] [kvdo3:cpuQ1]

Do we need to implement something similar for those too, or would that be too much of a performance hit?

Filip Suba:
Verified with vdo-6.2.4.14-14.el8. Regression testing passed.

Pavel Zinchuk:
I've tested with package vdo-6.2.4.14-14.el8. The issue still persists: high CPU context switching occurs. Filip, can you please recheck, because the issue is not fixed?

Ken Raeburn:
(In reply to Filip Suba from comment #6)
> Verified with vdo-6.2.4.14-14.el8. Regression testing passed.

(In reply to Pavel Zinchuk from comment #7)
> I've tested with package vdo-6.2.4.14-14.el8. The issue still persists [...]

The context switches from the other threads have a different cause and are tracked in BZ1886738. They will cause a small CPU load, but it should not be a high one -- I typically see something like 0.3% per thread, depending on the platform. The package vdo-6.2.4.14-14.el8 should fix the much higher context-switch load that was previously being caused by the indexW thread, and the high CPU load that it triggered. Are you seeing a high CPU load with this version of the package?

Pavel Zinchuk:
Yes, I see high CPU load in the VM of oVirt virtualization. It doesn't matter how many CPU cores I allocate to the VM, 4 or 8.
oVirt always detects CPU load in the range of 50-80% when VDO is enabled but not used. I see constant high CPU context switching from VDO, which causes high load in the oVirt virtualization. The high CPU load stops only when I disable the VDO service. It shouldn't be like this, I guess.

Ken Raeburn:
(In reply to Pavel Zinchuk from comment #9)
> I see high CPU load in the VM of oVirt virtualization. [...] The high CPU
> load stops only when I disable the VDO service.

Just to clarify -- a high load reported *within* the virtual machine, or a high load in the hypervisor environment, caused by the high timer interrupt rate keeping virtualization threads busy? I've seen the latter happen (and it's part of the reason BZ1886738 needs fixing), but if it's the former, what threads within the VM are active and how busy are they?
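The question of which kvdo threads are active and how busy they are can be answered inside the guest by sampling per-thread CPU ticks from `/proc`. The sketch below is illustrative and not part of the original thread; interactively, `top -H` or `pidstat -t` give the same view. It assumes a Linux system where kvdo worker threads appear as kernel threads whose names start with `kvdo`:

```python
import os

def kvdo_thread_times() -> dict[str, int]:
    """Map kvdo kernel-thread names to cumulative CPU ticks (utime + stime)."""
    times = {}
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
            if not name.startswith("kvdo"):
                continue
            with open(f"/proc/{pid}/stat") as f:
                # The comm field is parenthesized and may contain spaces,
                # so split off everything after the closing parenthesis.
                fields = f.read().rsplit(")", 1)[1].split()
            # fields[11] and fields[12] correspond to utime and stime
            # (fields 14 and 15 of /proc/<pid>/stat).
            times[name] = int(fields[11]) + int(fields[12])
        except (FileNotFoundError, ProcessLookupError):
            continue  # thread exited while we were scanning
    return times

if __name__ == "__main__":
    for name, ticks in sorted(kvdo_thread_times().items()):
        print(f"{name}: {ticks} ticks")
```

Sampling this twice a few seconds apart and diffing the tick counts shows which queues (packerQ, ackQ, cpuQ, bioQ, ...) are actually burning CPU while the volume is idle.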
Hi Ken,

The high load is reported by the hypervisor environment. Inside the VM I don't see really high CPU usage; I see only a high amount of CPU context switching. The high context switching causes load for the hypervisor environment, which is why the hypervisor reports high CPU usage for the VM.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (kmod-kvdo bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1588
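The context-switch rates discussed in this report (about 90/second idle versus roughly 8,000/second with two unused VDO volumes started) can be measured by sampling the system-wide `ctxt` counter in `/proc/stat`. This is a generic sketch, not taken from the report itself; `vmstat 1` reports the same figure in its `cs` column:

```python
import time

def context_switches() -> int:
    """Read the cumulative context-switch count from /proc/stat."""
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt "):
                return int(line.split()[1])
    raise RuntimeError("ctxt line not found in /proc/stat")

def switches_per_second(interval: float = 1.0) -> float:
    """Sample the counter twice and return the rate per second."""
    start = context_switches()
    time.sleep(interval)
    return (context_switches() - start) / interval

if __name__ == "__main__":
    print(f"{switches_per_second():.0f} context switches/second")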