Bug 1306341 - spinning rt tasks: hangs of jbd2 kworkers
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: kernel-rt
Version: 7.1
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 7.3
Assigned To: Clark Williams
QA Contact: Jiri Kastner
Depends On:
Blocks: 1442258
Reported: 2016-02-10 10:40 EST by Daniel Bristot de Oliveira
Modified: 2017-11-29 11:55 EST (History)
CC: 2 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-11-29 11:55:27 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Daniel Bristot de Oliveira 2016-02-10 10:40:53 EST
Description of problem:

Even when a CPU is isolated, it is not possible to keep some kworker jobs, such as jbd2, off that CPU. Hence, if an rt spinning thread runs on it for a long time, it can starve those kworkers, causing I/O stalls and hung-task messages.

One possible workaround is to increase the kworkers' priority, but as kworkers
are created on demand under SCHED_OTHER, there is no clean workaround - for
example, one may need a periodic script that checks and adjusts the
kworkers' priority.

For the rt spinning users, a perfect fix would be to be able to keep kworkers
like jbd2 off the isolated CPUs entirely.
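
The periodic-script workaround described above could be sketched as follows. This is a hypothetical sketch, not a tested fix from this bug: the CPU number, the FIFO priority, and the idea that boosting per-CPU kworkers is acceptable for the rest of the workload are all illustrative assumptions.

```shell
#!/bin/sh
# Hypothetical workaround sketch: periodically bump the per-CPU
# kworkers of an isolated CPU to a low SCHED_FIFO priority so that a
# spinning rt task cannot starve them indefinitely.
# CPU=3 and PRIO=2 are illustrative; run as root.
# Needs procps (pgrep) and util-linux (chrt).
CPU=3
PRIO=2
for pid in $(pgrep "kworker/$CPU:"); do
    # chrt can fail for threads that are exiting; ignore errors.
    chrt -f -p "$PRIO" "$pid" 2>/dev/null || true
done
```

Such a script would have to be re-run periodically (e.g. from a cron job or systemd timer), since new kworkers are spawned on demand at SCHED_OTHER, as the description notes.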

I will post a crash dump analysis of a report from a customer.

Version-Release number of selected component (if applicable):
Last seen on kernel 3.10.0-229.rt56.147.el6rt.x86_64.

But I have already seen it on many older kernels, and I have never seen a solution for it, even upstream.

How reproducible:
Not easily reproducible; so far it has only been seen with the customer's workload.

Steps to Reproduce:
1.
2.
3.

Actual results:
jbd2 hung tasks.

Expected results:
no hung tasks.

Additional info:
I am working on a vmcore RCA for this problem, reported by a customer.
Comment 2 Luiz Capitulino 2017-05-30 14:59:30 EDT
We're debugging a KVM-RT issue that looks similar:

Bug 1448770 - several tasks blocked for more than 600 seconds
(see stack trace in bug 1448770 comment 25)

However, we haven't been able to get a working vmcore yet, and I haven't been able to reproduce it myself.

Do you have a reproducer?
Comment 3 Daniel Bristot de Oliveira 2017-05-31 03:58:01 EDT
Unfortunately, we do not have a reproducer. Should we talk to storage/fs people?
Comment 4 Luiz Capitulino 2017-05-31 09:41:48 EDT
If they can help with getting a reproducer, yes. But I think it's possible that bug 1448770 is the same issue, and we have a reproducer for that one.

I also suspect that this issue is caused by workqueue NUMA scheduling, but I don't have enough data to confirm this yet (which would be very good news, since workqueue NUMA scheduling can be easily disabled).
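
(For reference: on kernels that carry the upstream unbound-workqueue NUMA affinity code, it can be disabled at boot time. Whether the specific kernel-rt build exposes this knob is an assumption that would need to be verified on the running kernel.)

```
# Kernel command line fragment (assumption: the running kernel-rt
# build carries the upstream workqueue.disable_numa boot parameter):
workqueue.disable_numa=1
```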
Comment 5 Luiz Capitulino 2017-05-31 09:45:39 EDT
Never mind the workqueue NUMA scheduling hypothesis, at least for bug 1448770. The issue can be reproduced even when workqueue NUMA scheduling is disabled.
Comment 6 Clark Williams 2017-11-29 11:55:27 EST
This bug has not been seen in months and can be worked around with the RT_RUNTIME_GREED feature. An actual fix to avoid starving kworker/ksoftirqd threads will require upstream RT architecture changes. Closing as WONTFIX.
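
For context, RT_RUNTIME_GREED is a scheduler feature carried in the RHEL-RT kernel (it was proposed upstream but is not in mainline). A hypothetical sketch of how it would typically be enabled, assuming a kernel-rt build that carries the feature and a debugfs mount at /sys/kernel/debug:

```shell
# Hypothetical tuning sketch; run as root on a kernel-rt build that
# carries RT_RUNTIME_GREED. With the feature enabled, RT throttling
# only takes effect when non-rt tasks (kworkers, ksoftirqd) actually
# need CPU time, instead of unconditionally throttling every period.
# The "|| true" guards let the sketch degrade gracefully when the
# knobs are absent or the user is unprivileged.
sysctl -w kernel.sched_rt_runtime_us=950000 || true
echo RT_RUNTIME_GREED > /sys/kernel/debug/sched_features || true
```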
