Bug 1306341

Summary: spinning rt tasks: hung of jbd2 kworkers
Product: Red Hat Enterprise Linux 7
Component: kernel-rt
kernel-rt sub component: Other
Reporter: Daniel Bristot de Oliveira <daolivei>
Assignee: Clark Williams <williams>
QA Contact: Jiri Kastner <jkastner>
Status: CLOSED WONTFIX
Severity: high
Priority: unspecified
CC: bhu, lcapitulino
Version: 7.1
Target Milestone: rc
Target Release: 7.3
Hardware: x86_64
OS: Linux
Type: Bug
Last Closed: 2017-11-29 16:55:27 UTC
Bug Blocks: 1442258

Description Daniel Bristot de Oliveira 2016-02-10 15:40:53 UTC
Description of problem:

Even when isolating a CPU, it is not possible to keep some kworker jobs, such as jbd2's, off that CPU. Hence, if an RT spinning thread runs for a long time, it can starve those kworkers, causing stuck I/O and hung-task messages.
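The failure scenario above can be sketched as follows. This is an illustrative, hypothetical trigger, not the customer's workload: the CPU number, FIFO priority, and duration are assumptions, and with the default DRY_RUN=1 the command is only printed, never run.

```shell
# Hypothetical trigger sketch: a SCHED_FIFO busy loop pinned to one CPU
# starves that CPU's per-CPU kworkers (e.g. jbd2's). CPU, PRIO, and SECS
# are illustrative assumptions; DRY_RUN=1 (default) only prints the command.
CPU=${CPU:-1}; PRIO=${PRIO:-50}; SECS=${SECS:-2}
cmd="taskset -c $CPU chrt -f $PRIO sh -c 'end=\$((\$(date +%s)+$SECS)); while [ \$(date +%s) -lt \$end ]; do :; done'"
if [ "${DRY_RUN:-1}" = 1 ]; then
    echo "$cmd"        # dry run: show what would be executed
else
    eval "$cmd"        # real run: needs root for chrt -f
fi
```

If the busy loop runs long enough on a CPU where a jbd2 kworker is queued, journal commits on that CPU stall and the hung-task watchdog fires.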

One possible workaround is to increase the kworkers' priority, but as kworkers
are created on demand under SCHED_OTHER, there is no clean way to do this;
for example, one may need a periodic script that checks and resets
kworker priorities.
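A minimal sketch of the periodic workaround described above: find jbd2 and kworker threads and bump them to a low SCHED_FIFO priority so an RT spinning task cannot starve them forever. The priority value, the name patterns, and the DRY_RUN switch are illustrative assumptions, not values from this report.

```shell
#!/bin/sh
# Hypothetical periodic workaround sketch: boost jbd2/kworker threads to a
# low SCHED_FIFO priority. KW_PRIO, the name patterns, and DRY_RUN are
# assumptions for illustration only.
KW_PRIO=${KW_PRIO:-2}
DRY_RUN=${DRY_RUN:-1}

boost() {                                  # boost <pid>
    if [ "$DRY_RUN" = 1 ]; then
        echo "chrt -f -p $KW_PRIO $1"      # dry run: just print the command
    else
        chrt -f -p "$KW_PRIO" "$1"         # real run: needs root
    fi
}

for pid in $(pgrep 'jbd2|kworker'); do
    boost "$pid"
done
```

Such a script would have to run periodically (cron or a systemd timer), since new kworkers are spawned on demand under SCHED_OTHER; it mitigates the starvation but does not keep the kworkers off the isolated CPU.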

For RT spinning users, a proper fix would be the ability to keep kworkers
such as jbd2's off isolated CPUs entirely.

I will post a crash dump analysis of a report from a customer.

Version-Release number of selected component (if applicable):
Last seen on kernel 3.10.0-229.rt56.147.el6rt.x86_64.

I have also seen it on many older kernels, and I have never seen a solution for it, even upstream.

How reproducible:
Not easily reproducible; so far seen only with the customer's workload.

Steps to Reproduce:

Actual results:
jbd2 hung tasks.

Expected results:
no hung tasks.

Additional info:
I am working on a vmcore RCA for this problem, reported by a customer.

Comment 2 Luiz Capitulino 2017-05-30 18:59:30 UTC
We're debugging a KVM-RT issue that looks similar:

Bug 1448770 - several tasks blocked for more than 600 seconds
(see stack trace in bug 1448770 comment 25)

However, we haven't been able to get a working vmcore yet, and I haven't been able to reproduce it myself.

Do you have a reproducer?

Comment 3 Daniel Bristot de Oliveira 2017-05-31 07:58:01 UTC
Unfortunately, we do not have a reproducer. Should we talk to storage/fs people?

Comment 4 Luiz Capitulino 2017-05-31 13:41:48 UTC
If they can help getting a reproducer, yes. But I think it's possible that bug 1448770 is the same issue and we have a reproducer for that one.

I also suspect that this issue is caused by workqueue NUMA scheduling, but I don't have enough data to confirm this yet (which would be very good news, since workqueue NUMA scheduling can be easily disabled).

Comment 5 Luiz Capitulino 2017-05-31 13:45:39 UTC
Never mind the workqueue NUMA scheduling hypothesis, at least for bug 1448770. The issue reproduces even when workqueue NUMA scheduling is disabled.

Comment 6 Clark Williams 2017-11-29 16:55:27 UTC
This bug has not been seen in months and can be worked around with the RT_RUNTIME_GREED feature. An actual fix to avoid starving kworker/ksoftirqd threads would require upstream RT architecture changes. Closing WONTFIX.
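For reference, the RT_RUNTIME_GREED mitigation mentioned above is toggled through the scheduler-features and RT-throttling interfaces. The sketch below is an assumed configuration, not taken from this report; the debugfs path and the exact feature name should be verified against the specific kernel-rt build.

```shell
# Assumed sketch of enabling the RT_RUNTIME_GREED workaround (verify the
# feature name and paths on your kernel-rt build): enable the scheduler
# feature, then leave a slice of each period for non-RT tasks such as
# kworkers, which the greedy RT runtime logic only enforces when non-RT
# tasks are actually runnable.
echo RT_RUNTIME_GREED > /sys/kernel/debug/sched_features
sysctl -w kernel.sched_rt_period_us=1000000   # 1 s scheduling period
sysctl -w kernel.sched_rt_runtime_us=950000   # RT may use 95%; 5% left for SCHED_OTHER
```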