Description of problem:
Even when a CPU is isolated, some kworker jobs, like jbd2's, cannot be kept off it. Hence, if an RT spinning thread runs for a long time on that CPU, it can starve those kworkers, causing I/O stalls and hung task messages. One possible workaround is to increase the kworkers' priority, but as kworkers are created on demand under SCHED_OTHER, there is no clean workaround - for example, one may need a periodic script to check and adjust kworker priorities. For the RT spinning users, a proper fix would be the ability to keep kworkers like jbd2's off isolated CPUs. I will post a crash dump analysis of a report from a customer.

Version-Release number of selected component (if applicable):
Last seen on kernel 3.10.0-229.rt56.147.el6rt.x86_64, but I have already seen it on many older kernels, and I have never seen a solution for it, even upstream.

How reproducible:
Not easily reproducible; for now only on the customer's workload.

Steps to Reproduce:
1.
2.
3.

Actual results:
jbd2 hung tasks.

Expected results:
No hung tasks.

Additional info:
I am working on a vmcore RCA for this problem, reported by a customer.
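The periodic-script workaround mentioned above could look like the sketch below. It promotes any kworker still under SCHED_OTHER to a low SCHED_FIFO priority so an RT spinning thread cannot starve it indefinitely. The priority (2) and the polling interval (10 s) are arbitrary assumptions, not values from this report; the script must run as root.

```shell
#!/bin/sh
# Workaround sketch (hypothetical values): periodically bump newly created
# kworker threads to SCHED_FIFO so they are not starved by an RT spinner.
while true; do
    for pid in $(pgrep '^kworker'); do
        # Only touch threads still running under SCHED_OTHER;
        # kworkers already promoted on a previous pass are skipped.
        if chrt -p "$pid" 2>/dev/null | grep -q SCHED_OTHER; then
            chrt -f -p 2 "$pid" 2>/dev/null
        fi
    done
    sleep 10
done
```

This is racy by construction - a kworker created between passes runs at SCHED_OTHER until the next iteration - which is exactly why the comment above calls it "not a clean workaround".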
We're debugging a KVM-RT issue that looks similar: Bug 1448770 - several tasks blocked for more than 600 seconds (see stack trace in bug 1448770 comment 25). However, we haven't been able to get a working vmcore yet, and I haven't been able to reproduce it myself. Do you have a reproducer?
Unfortunately, we do not have a reproducer. Should we talk to storage/fs people?
If they can help us get a reproducer, yes. But I think it's possible that bug 1448770 is the same issue, and we have a reproducer for that one. I also suspect that this issue is caused by workqueue NUMA scheduling, but I don't have enough data to confirm this yet (which would be very good news, since workqueue NUMA scheduling can be easily disabled).
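For reference, on kernels of this era workqueue NUMA affinity is controlled by the `workqueue.disable_numa` boot parameter, visible at runtime as a read-only module parameter:

```shell
# Check whether workqueue NUMA-aware scheduling is active
# ('N' means NUMA mode is enabled, 'Y' means it was disabled at boot).
cat /sys/module/workqueue/parameters/disable_numa

# To disable it, boot with the kernel command-line option:
#   workqueue.disable_numa=1
```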
Never mind the workqueue numa scheduling hypothesis, at least for bug 1448770. The issue can be reproduced even when workqueue numa scheduling is disabled.
This bug has not been seen in months and can be worked around with the RT_RUNTIME_GREED feature. An actual fix to avoid starving kworker/ksoftirqd threads would require upstream RT architecture changes. Closing as WONTFIX.