I am running 3.6.7-4.fc17.x86_64 on a 12-core/24-thread Intel Xeon X5670 system with 12GB RAM. When under load, top showed

  163 root  39  19  0  0  0  R  100.6  0.0  301:55.25  khugepaged

and the machine became very unresponsive.
Downgrading to 3.6.7-3.fc17.x86_64 seems to fix the problem.
3.6.7-3.fc17.x86_64 has the same problem.
I am also seeing this on several multi-core boxes (8-64 cores), but more regularly on boxes with lots of memory (>64GB). Programs running there will suddenly start showing very high CPU usage (e.g. 1400% compared to a 'normal' usage of 400-600%), and khugepaged will be running at 100%. In that state most other processes on the system cannot finish: e.g. a yum gets stuck after the dependency check, plain ps works, but ps auxfw gets stuck (and cannot be killed/backgrounded by Ctrl-[you name it]). Killing (-9) the ultra-high-CPU processes makes khugepaged return to <1% CPU time, and the processes on hold (the stuck yum or ps auxfw, most other processes) suddenly finish as intended.

I have seen that behavior on 3.6.7 (thought it would fix it), 3.5.5 (where I first saw it) and 3.6.6. Currently we are testing a box with 3.3.4, hoping not to see the bug there. The real problem is always reproducibility: it seems to occur with memory-intensive applications (>10GB RAM), sometimes multi-core ones, but not necessarily.
Btw, one of the programs I have never seen affected by 'being stuck' is top. Is there anything I could do to help solve this bug when I next see the issue?
Additional info, because I just stumbled upon it: on such a stuck system, with khugepaged at 100% and one or two big jobs stuck for the reasons above, running

  sync && echo 3 > /proc/sys/vm/drop_caches

will make khugepaged happy, i.e. it goes back to idle. All jobs that were stuck so far resume/finish, and the follow-up behaviour is as usual/expected. Interesting!
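For reference, roughly the sequence I use when it happens (as root; the ps invocation is just one way to watch khugepaged, from memory):

  ps -o pid,stat,%cpu,comm -C khugepaged    # sits near 100% CPU while everything is stuck
  sync                                      # flush dirty data first
  echo 3 > /proc/sys/vm/drop_caches         # drop pagecache, dentries and inodes
  ps -o pid,stat,%cpu,comm -C khugepaged    # should be back below 1%, and stuck jobs resume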
We're seeing pretty much the same thing as Jonathan described. See bug 888380 (https://bugzilla.redhat.com/show_bug.cgi?id=888380). This is on a box with 108GB of RAM.
*** Bug 888380 has been marked as a duplicate of this bug. ***
*** Bug 892212 has been marked as a duplicate of this bug. ***
The bug is still around in 3.6.11; we had a bunch of cases this week.
I reproducibly run into this error on my machine. We have a self-made build cluster that forks a lot of ccproxy instances. At some point the processes get stuck in "futex(0x3c58f00a10, FUTEX_WAKE_PRIVATE, 2147483647) = 0" and take close to forever to return from that situation. In "perf top" I also see a high load on "_raw_spin_lock_irqsave". My machine is a dual-processor Xeon(R) CPU X5650 (hexa-core) with 24GB of RAM. Switching off hyperthreading seemed to make things a little more stable (but this could be nonsense). A Java VM also triggers this behaviour if many threads are started: https://bbs.archlinux.org/viewtopic.php?id=155537
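In case it is useful, this is roughly what I plan to collect the next time it hangs (assuming perf is installed and I have root; <pid> is a placeholder for one of the stuck processes):

  perf record -a -g -- sleep 30        # sample all CPUs with call graphs for 30 seconds
  perf report --stdio | head -n 60     # check whether _raw_spin_lock_irqsave / isolate_freepages dominate
  cat /proc/<pid>/stack                # kernel stack of a stuck task, if the kernel exposes it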
See also: http://forums.fedoraforum.org/showthread.php?t=285246
One additional piece of information not included above: on my machine, this bug only triggers after the machine uptime is at least a few hours. For the first little while after a reboot, everything is fine.
The 'sync ; echo 3 >/proc/sys/vm/drop_caches ; sync' didn't help for me, but I think I found what is causing this in my case. I also found https://lkml.org/lkml/2012/6/27/565:

> 64.86%  [k] _raw_spin_lock_irqsave
>         |
>         |--97.94%-- isolate_freepages
>         |           compaction_alloc
>         |           unmap_and_move
>         |           migrate_pages
>         |           compact_zone
>         |           |
>         |           |--99.56%-- try_to_compact_pages

According to this, isolate_freepages tries to defragment the RAM to create contiguous memory for the transparent hugepages. After

  echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
  echo never > /sys/kernel/mm/transparent_hugepage/enabled

I don't see this problem on my system anymore. Could someone double-check, please?
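For anyone double-checking, the settings can be read back like this; the value in brackets is the active one (paths as on these 3.x kernels, and the khugepaged file is a plain 0/1):

  cat /sys/kernel/mm/transparent_hugepage/enabled             # e.g. always madvise [never]
  cat /sys/kernel/mm/transparent_hugepage/defrag              # e.g. always madvise [never]
  cat /sys/kernel/mm/transparent_hugepage/khugepaged/defrag   # 0 after the echo above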
Yes, as per http://forums.fedoraforum.org/showthread.php?t=285246, the following is a successful work-around for the issue:

  echo never > /sys/kernel/mm/transparent_hugepage/defrag
Disabled defrag and added this to root's crontab:

  @reboot echo never | tee /sys/kernel/mm/transparent_hugepage/defrag

Haven't seen the problem reoccur since.
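An alternative I have not tried here would be to put the same line into /etc/rc.d/rc.local (this assumes systemd's rc-local.service is available on F17, which only runs the file if it is executable):

  # create the file with a shebang if it doesn't exist yet, then append the workaround
  [ -f /etc/rc.d/rc.local ] || echo '#!/bin/sh' > /etc/rc.d/rc.local
  echo 'echo never > /sys/kernel/mm/transparent_hugepage/defrag' >> /etc/rc.d/rc.local
  chmod +x /etc/rc.d/rc.local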
I am seeing this on a 64-core AMD system with 128GB of RAM while running a bunch of not-so-memory-hungry programs (load average hovering around 55 and plenty of free memory available). In my case setting defrag to never did not have any effect (at least not immediately; I waited a couple of minutes and saw no change), but setting drop_caches to 3 did. So maybe there are two different problems here?
It would seem this bug has also been "backported" to RHEL6 kernels? On RHEL6, I have now observed several instances of java (with several hundred active threads) hang repeatedly for several minutes with near 100% CPU utilization over all cores on the system. Most of the time seems to be spent in system/kernel mode (according to top). The workaround on RHEL6 seems to be very similar:

  # echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag

I just observed a java instance that was using ~3000% of CPU time (according to top) recover almost instantaneously after issuing the above command. Btw, it seems this is much more likely to happen on systems with a "large" number of cores (>8) and memory (>16GB), and it happens even with very low memory utilization.
This message is a reminder that Fedora 17 is nearing its end of life. Approximately 4 (four) weeks from now Fedora will stop maintaining and issuing updates for Fedora 17. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '17'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 17's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may not be able to fix it before Fedora 17 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version prior to Fedora 17's end of life.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
It would seem that this bug has been relegated to the ignore pile...
Yes, it seems so. However, with the last 3, maybe 4-5 versions of the Fedora kernel I haven't encountered the issue again. So perhaps it has been a silent - or accidental - fix?
Agreed, I have not seen it yet on 3.8.13-100.fc17.x86_64 (I did see it on 3.6.10, on a 48-core, 128GB RAM box used for varied scientific simulations by several users).
Glad to hear that it seems fixed. I hadn't checked in a while because I couldn't be bothered to have to check every time a new kernel came out. I was expecting an update on the ticket to let us know that the issue had been addressed. Given that there is no indication in the comments or status (still NEW) that anyone ever actually looked at the bug, we have no idea whether the issue was intentionally or inadvertently fixed.
(In reply to Danny Ciarniello from comment #22)
> Glad to hear that it seems fixed. I hadn't checked in a while because I
> couldn't be bothered to have to check every time a new kernel came out. I
> was expecting an update on the ticket to let us know that the issue had
> been addressed. Given that there is no indication in the comments or status
> (still NEW) that anyone ever actually looked at the bug, we have no idea
> whether the issue was intentionally or inadvertently fixed.

To be perfectly honest, sometimes neither do we.

The rate of change in the upstream kernel is ridiculously fast and our bug count is high, which means the 3 maintainers we have are pretty overloaded. I can understand your frustration and expectations, but we simply aren't in a position to track down every possible bug fix in new kernel releases. We humbly ask the reporters to test the new versions and participate in the process of getting the bugs resolved. We rebase to newer kernel versions to pick up all of the fixes that come with them, and sometimes that means bugs get fixed without us knowing exactly what the fix was.
Seems legit :)
(In reply to Josh Boyer from comment #23)
> To be perfectly honest, sometimes neither do we.
>
> The rate of change in the upstream kernel is ridiculously fast and our bug
> count is high, which means the 3 maintainers we have are pretty overloaded.
> I can understand your frustration and expectations, but we simply aren't in
> a position to track down every possible bug fix in new kernel releases. We
> humbly ask the reporters to test the new versions and participate in the
> process of getting the bugs resolved. We rebase to newer kernel versions to
> pick up all of the fixes that come with them, and sometimes that means bugs
> get fixed without us knowing exactly what the fix was.

Fair enough. It _is_ frustrating when it looks like bugs one is interested in are being ignored (unfortunately, I've seen more than one reach "end of life" without being fixed). In the future I will keep in mind what you have said here.
Fedora 17 changed to end-of-life (EOL) status on 2013-07-30. Fedora 17 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.