Bug 879801 - khugepaged eating 100%CPU
Summary: khugepaged eating 100%CPU
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 17
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Duplicates: 888380 892212 (view as bug list)
Depends On:
Blocks:
 
Reported: 2012-11-24 14:43 UTC by H.J. Lu
Modified: 2013-08-01 02:20 UTC
CC: 17 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-08-01 02:20:29 UTC
Type: Bug
Embargoed:



Description H.J. Lu 2012-11-24 14:43:37 UTC
I am running 3.6.7-4.fc17.x86_64 on a 12-core/24-thread
Intel Xeon X5670 with 12GB RAM.  When under load, I saw

163 root      39  19     0    0    0 R 100.6  0.0 301:55.25 khugepaged

and the machine became very unresponsive.

Comment 1 H.J. Lu 2012-11-25 16:41:45 UTC
Downgrading to 3.6.7-3.fc17.x86_64 seems to fix the problem.

Comment 2 H.J. Lu 2012-11-25 20:43:12 UTC
3.6.7-3.fc17.x86_64 has the same problem.

Comment 3 Jonathan Hoser 2012-11-30 10:24:16 UTC
I am also seeing this on several multicore boxes (8-64 cores), but more regularly on boxes with lots of memory (>64GB).

Programs running there will suddenly start having a very high CPU usage (e.g. 1400% compared to 'normal' usage of 400-600%), and khugepaged will be running at 100%.

In such a state, most other processes on the system cannot finish:
e.g. a yum run is stuck after dependency checking,
plain ps works, but ps auxfw gets stuck (and cannot be killed or backgrounded with Ctrl-anything).

Killing (-9) the ultra-high-CPU processes makes khugepaged return to <1% CPU time, and the processes on hold (the stuck yum or ps auxfw, and most others) suddenly finish as intended.

I have seen that behavior on
3.6.7 (thought it would fix it),
3.5.5 (where I first saw it), and
3.6.6.

Currently we are testing a box with 3.3.4, hoping not to see the bug there.

The problem is, as always, reproducibility:
it seems to occur with memory-intensive applications (>10GB RAM), sometimes multi-core ones, but not necessarily.

Comment 4 Jonathan Hoser 2012-11-30 10:27:20 UTC
Btw, one program I have never seen affected by 'being stuck' is top.
Is there anything I could do to help solve this bug the next time I see the issue?
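
For anyone who can reproduce it, a minimal diagnostic sketch, assuming the kernel exposes compaction/THP counters in /proc/vmstat (run as root; nothing here is specific to this bug):

# current THP configuration
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag

# compaction and THP collapse counters; compact_stall/compact_fail rising
# while khugepaged spins would point at memory compaction
grep -E 'compact|thp_' /proc/vmstat

# dump blocked (D-state) tasks to the kernel log so the stuck call chains are visible
echo w > /proc/sysrq-trigger
dmesg | tail -n 100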

Comment 5 Jonathan Hoser 2012-12-03 07:26:32 UTC
Additional info, because I just stumbled upon it:

On such a stuck system, with khugepaged at 100% and one or two big jobs stuck for the above reasons, running the following

sync && echo 3 > /proc/sys/vm/drop_caches

will make khugepaged happy, i.e. go away.
All jobs stuck so far will resume/finish,
and follow-up behaviour is as usual/expected. Interesting!
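
For reference, the value written selects what gets dropped (per Documentation/sysctl/vm.txt); the workaround above uses 3:

sync                                  # write out dirty pages first
# echo 1 > /proc/sys/vm/drop_caches   # 1 = drop the page cache
# echo 2 > /proc/sys/vm/drop_caches   # 2 = drop reclaimable slab objects (dentries, inodes)
echo 3 > /proc/sys/vm/drop_caches     # 3 = drop both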

Comment 6 r3obh 2012-12-19 14:43:26 UTC
We're seeing pretty much the same thing as Jonathan described.  See bug 888380 (https://bugzilla.redhat.com/show_bug.cgi?id=888380).
This is on a box with 108GB of RAM.

Comment 7 Josh Boyer 2013-01-03 18:33:00 UTC
*** Bug 888380 has been marked as a duplicate of this bug. ***

Comment 8 Josh Boyer 2013-01-07 13:51:26 UTC
*** Bug 892212 has been marked as a duplicate of this bug. ***

Comment 9 Jonathan Hoser 2013-01-15 22:31:48 UTC
Bug is still around in 3.6.11,
had a bunch of cases this week..

Comment 10 Sascha Zorn 2013-01-21 16:44:51 UTC
I reproducibly run into this error on my machine. We have a self-made build cluster that forks a lot of ccproxy instances. At some point the processes get stuck in "futex(0x3c58f00a10, FUTEX_WAKE_PRIVATE, 2147483647) = 0" and take forever to return from that situation.

In "perf top" I also see a high load on "_raw_spin_lock_irqsave".

My machine is a dual-processor Xeon(R) CPU X5650 (hexa-core) with 24GB of RAM. Switching off hyperthreading seemed to make things a little more stable (but this could be nonsense).

Java VM also triggers this behaviour if many threads are started: https://bbs.archlinux.org/viewtopic.php?id=155537
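
If it is useful, that kind of call chain can also be captured non-interactively while the stall is happening (standard perf usage, nothing specific to this bug):

# sample all CPUs with call graphs for 30 seconds during the stall
perf record -a -g -- sleep 30
# then look for _raw_spin_lock_irqsave / isolate_freepages / compaction_alloc near the top
perf report --stdio | head -n 60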

Comment 11 Raman Gupta 2013-01-21 16:47:07 UTC
See also:

http://forums.fedoraforum.org/showthread.php?t=285246

Comment 12 Raman Gupta 2013-01-21 16:54:37 UTC
One additional piece of information not included above: on my machine, this bug only triggers after the machine uptime is at least a few hours. For the first little while after a reboot, everything is fine.

Comment 13 Sascha Zorn 2013-01-21 18:04:36 UTC
The 'sync ; echo 3 >/proc/sys/vm/drop_caches ; sync' didn't help for me. But I think I found what is causing this for me:

I also found this:
https://lkml.org/lkml/2012/6/27/565

> 64.86%  [k] _raw_spin_lock_irqsave
>         |
>         |--97.94%-- isolate_freepages
>         |           compaction_alloc
>         |           unmap_and_move
>         |           migrate_pages
>         |           compact_zone
>         |           |
>         |           |--99.56%-- try_to_compact_pages

According to this, isolate_freepages tries to defragment RAM to create contiguous memory for transparent hugepages. After

echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled

I don't see this problem on my system anymore... could someone double-check, please?
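
To verify, the active mode is the bracketed entry when reading the same sysfs files back; the setting can also be applied at boot via the transparent_hugepage= kernel parameter. A sketch, assuming a GRUB2 setup:

# the value in [brackets] is the active mode
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag

# to persist across reboots, add transparent_hugepage=never to
# GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate grub.cfg:
grub2-mkconfig -o /boot/grub2/grub.cfg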

Comment 14 Raman Gupta 2013-01-21 18:24:31 UTC
Yes, as per http://forums.fedoraforum.org/showthread.php?t=285246, the following is a successful work-around for the issue:

echo never > /sys/kernel/mm/transparent_hugepage/defrag

Comment 15 r3obh 2013-01-21 18:29:41 UTC
Disabled defrag and added this to root's crontab:

    @reboot echo never | tee /sys/kernel/mm/transparent_hugepage/defrag

Haven't seen the problem reoccur since.
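
If you prefer not to use cron for this, a small systemd oneshot unit does the same job; a sketch (the unit name is just an example):

# /etc/systemd/system/disable-thp-defrag.service
[Unit]
Description=Disable transparent hugepage defrag

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/defrag'

[Install]
WantedBy=multi-user.target

Enable it once with 'systemctl enable disable-thp-defrag.service'.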

Comment 16 Jussi Eloranta 2013-02-13 18:22:54 UTC
I am seeing this on a 64-core AMD system with 128 GB of RAM while running a bunch of not especially memory-hungry programs (load average hovering around 55 and plenty of free memory available). In my case, setting defrag to never did not have any effect (at least not immediately; I waited a couple of minutes and saw no change), but setting drop_caches to 3 did. So maybe there are two different problems here?

Comment 17 Timo Kokkonen 2013-05-29 00:41:27 UTC
It would seem this bug has also been "backported" to RHEL6 kernels?  
On RHEL6, I've now observed several instances of java (with several hundred active threads) hang repeatedly for several minutes with near 100% CPU utilization over all cores on the system. It would seem most of the time is spent in system/kernel mode (according to top). 

Workaround on RHEL6 seems to be very similar:

# echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag

Just observed a java instance that was using ~3000% of CPU time (according to top) recover almost instantaneously after issuing the above command...


Btw, it would seem this is much more likely to happen on systems with "large" numbers of cores (>8) and amounts of memory (>16GB), and it happens even with very low memory utilization...
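
Since the sysfs path differs between RHEL6 (redhat_transparent_hugepage) and Fedora/upstream (transparent_hugepage) kernels, a small guard covers both; a sketch:

for d in /sys/kernel/mm/redhat_transparent_hugepage /sys/kernel/mm/transparent_hugepage; do
    [ -w "$d/defrag" ] && echo never > "$d/defrag"
done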

Comment 18 Fedora End Of Life 2013-07-03 23:33:26 UTC
This message is a reminder that Fedora 17 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 17. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '17'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 17's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that
we may not be able to fix it before Fedora 17 is end of life. If you
would still like to see this bug fixed and are able to reproduce it
against a later version of Fedora, you are encouraged to change the
'version' to a later Fedora version prior to Fedora 17's end of life.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 19 Danny Ciarniello 2013-07-04 00:43:39 UTC
It would seem that this bug has been relegated to the ignore pile...

Comment 20 Jonathan Hoser 2013-07-04 05:56:29 UTC
Yes, it seems so.
However, with the last 3, maybe 4-5, versions of Fedora kernels I haven't encountered that issue again.
So perhaps it has been a silent - or accidental - fix?

Comment 21 Sam Tygier 2013-07-04 07:31:23 UTC
Agreed, I have not seen it yet on 3.8.13-100.fc17.x86_64 (I did see it on 3.6.10, on a 48-core, 128GB RAM box used for varied scientific simulations by several users).

Comment 22 Danny Ciarniello 2013-07-04 15:14:30 UTC
Glad to hear that it seems fixed.  I hadn't checked in a while because I couldn't be bothered to have to check every time a new kernel came out.  I was expecting an update on the ticket to let us know that the issue had been addressed.  Given that there is no indication in the comments or status (still NEW) that anyone ever actually looked at the bug, we have no idea whether the issue was intentionally or inadvertently fixed.

Comment 23 Josh Boyer 2013-07-05 12:39:22 UTC
(In reply to Danny Ciarniello from comment #22)
> Glad to hear that it seems fixed.  I hadn't checked in a while because I
> couldn't be bothered to have to check every time a new kernel came out.  I
> was expecting an update on the ticket to let us know that the issue had been
> addressed.  Given that there is no indication in the comments or status
> (still NEW) that anyone ever actually looked at the bug, we have no idea
> whether the issue was intentionally or inadvertently fixed.

To be perfectly honest, sometimes neither do we.

The rate of change in the upstream kernel is ridiculously fast and our bug count is high, which means the 3 maintainers we have are pretty overloaded.  I can understand your frustration and expectations, but we simply aren't in a position to track down every possible bug fix in new kernel releases.  We humbly ask the reporters to test the new versions and participate in the process of getting the bugs resolved.  We rebase to newer kernel versions to pick up all of the fixes that come with them, and sometimes that means bugs get fixed without us knowing exactly what the fix was.

Comment 24 Sascha Zorn 2013-07-05 13:31:11 UTC
Seems legit :)

Comment 25 Danny Ciarniello 2013-07-06 00:53:12 UTC
(In reply to Josh Boyer from comment #23)
> 
> To be perfectly honest, sometimes neither do we.
> 
> The rate of change in the upstream kernel is ridiculously fast and our bug
> count is high, which means the 3 maintainers we have are pretty overloaded. 
> I can understand your frustration and expectations, but we simply aren't in
> a position to track down every possible bug fix in new kernel releases.  We
> humbly ask the reporters to test the new versions and participate in the
> process of getting the bugs resolved.  We rebase to newer kernel versions to
> pick up all of the fixes that come with them, and sometimes that means bugs
> get fixed without us knowing exactly what the fix was.

Fair enough.  

It _is_ frustrating when bugs that one is interested in look like they are being ignored (unfortunately, I've seen more than one reach "end of life" without being fixed).  In the future I will keep in mind what you have said here.

Comment 26 Fedora End Of Life 2013-08-01 02:20:36 UTC
Fedora 17 changed to end-of-life (EOL) status on 2013-07-30. Fedora 17 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.

