Bug 498510 - don't OOM kill task during fresh huge page allocation
Summary: don't OOM kill task during fresh huge page allocation
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Version: 5.3
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Larry Woodman
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks: 526775 533192
 
Reported: 2009-04-30 19:50 UTC by Doug Chapman
Modified: 2010-03-30 07:20 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-03-30 07:20:59 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2010:0178 0 normal SHIPPED_LIVE Important: Red Hat Enterprise Linux 5.5 kernel security and bug fix update 2010-03-29 12:18:21 UTC

Description Doug Chapman 2009-04-30 19:50:10 UTC
Description of problem:


Currently, RHEL5.3 will oom kill a task attempting to increase the number
of huge pages in the huge page free pool when it can't allocate memory from
a given node.  This is not really an out of memory condition, and it is not
under the user/administrator's control, as the kernel will unconditionally
attempt to distribute allocations across all on-line nodes, whether or not
the node has sufficient memory.

This can be avoided by having the huge page free pool allocator,
alloc_fresh_huge_page_node(), pass a new flag, __GFP_NO_OOM, to the call
to alloc_pages_thisnode(), which is used only for huge page allocations.
With this flag, __alloc_pages() will not call out_of_memory() before
restarting the allocation loop.

However, since we don't call out_of_memory(), the task will not have the
TIF_MEMDIE flag set.  So, we need to indicate that it WOULD have oom-killed
the task, so that we don't loop forever attempting to allocate huge pages.

Finally, pass the __GFP_NOMEMALLOC flag, as we don't want to dip into the
reserves when filling the huge page free pool.
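
For illustration, here is a minimal sketch of what the patched allocation
path could look like.  The __GFP_NO_OOM flag and alloc_pages_thisnode()
helper are the ones named above; htlb_alloc_mask and HUGETLB_PAGE_ORDER
follow the upstream 2.6.18 hugetlb code.  This is a sketch of the idea,
not the actual RHEL5 patch:

static struct page *alloc_fresh_huge_page_node(int nid)
{
	struct page *page;

	/*
	 * __GFP_NO_OOM (proposed): __alloc_pages() skips the call to
	 * out_of_memory() before restarting its allocation loop, but
	 * must still fail the allocation eventually so that the huge
	 * page pool fill does not retry forever.
	 *
	 * __GFP_NOMEMALLOC: never dip into the emergency reserves
	 * just to grow the huge page free pool.
	 */
	page = alloc_pages_thisnode(nid,
			htlb_alloc_mask | __GFP_COMP | __GFP_NOWARN |
			__GFP_NO_OOM | __GFP_NOMEMALLOC,
			HUGETLB_PAGE_ORDER);
	return page;
}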

Note:  mainline has undergone significant rework in this area and does not
suffer from this symptom.  This seems to be the minimal patch to avoid the
problem in RHEL5.x.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.
  
Actual results:


Expected results:


Additional info:

Comment 2 RHEL Program Management 2009-05-12 17:39:17 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.

Comment 3 RHEL Program Management 2009-09-25 17:40:11 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.

Comment 5 Jan Tluka 2009-10-14 10:12:12 UTC
Hello Doug, Larry seems busy, so I'm asking for any reproducer steps to verify this bugfix. 

Thanks.

Comment 6 Doug Chapman 2009-10-14 12:28:10 UTC
This is in needinfo from me but I don't see a question.  I assume the question is in a private comment which I cannot see.  Please make sure any comments that you expect me to see are not private.

Comment 7 Jan Tluka 2009-10-14 12:34:20 UTC
Sorry Doug, see comment#5.

Comment 8 Doug Chapman 2009-10-14 13:12:19 UTC
Lee, do you know of a good way to reproduce this issue?

Comment 9 Doug Chapman 2009-11-09 15:29:39 UTC
Lee replied to me via email:

On an 8640, for example, in 100%CLM mode, all you need to do is try to
allocate more huge pages than nodes.  When it tries to allocate from
node 4--the interleaved node--it will fail and OOM kill the task that is
modifying the vm.nr_hugepages sysctl--your shell or sysctl.

On x86[_64] I believe you can reproduce it by allocating sufficient
memory on one node so that you can't allocate a huge page there, then
try to allocate more huge pages than nodes so that it will try to
allocate pages from each node.  The "fresh huge page allocation" will
fail on the node with no huge pages available and, I think, cause an OOM
kill there as well.  I don't recall for certain whether I tested this, but
I vaguely remember doing so.  It was a while back that I reported this.
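
For the x86[_64] scenario, here is a rough reproducer sketch along the
lines Lee describes.  It assumes libnuma (build with: gcc -o repro repro.c
-lnuma); the node number and the sizes are placeholders to be tuned to the
machine:

#include <stdio.h>
#include <string.h>
#include <numa.h>

int main(void)
{
	/* Placeholders: tune to the target machine. */
	long fill_bytes = 4L << 30;	/* roughly the target node's free memory */
	int target_node = 2;		/* the node to exhaust */
	char *buf;
	FILE *f;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	/* Exhaust one node so that no fresh huge page can be built
	 * there; touch every byte to force real allocation. */
	buf = numa_alloc_onnode(fill_bytes, target_node);
	if (!buf) {
		perror("numa_alloc_onnode");
		return 1;
	}
	memset(buf, 1, fill_bytes);

	/* Ask for more huge pages than there are nodes; the kernel's
	 * round-robin pool fill then hits the exhausted node.  On an
	 * affected kernel this write can get the task OOM killed. */
	f = fopen("/proc/sys/vm/nr_hugepages", "w");
	if (!f) {
		perror("/proc/sys/vm/nr_hugepages");
		return 1;
	}
	fprintf(f, "%d\n", 64);		/* placeholder: > number of nodes */
	fclose(f);
	return 0;
}

In practice it may be cleaner to run the memory hog and the nr_hugepages
write as two separate processes, so that an OOM kill of the sysctl writer
is unambiguous.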

Comment 10 Don Zickus 2009-11-17 21:55:49 UTC
in kernel-2.6.18-174.el5
You can download this test kernel from http://people.redhat.com/dzickus/el5

Please do NOT transition this bugzilla state to VERIFIED until our QE team
has sent specific instructions indicating when to do so.  However, feel free
to provide a comment indicating that this fix has been verified.

Comment 12 Chris Ward 2010-02-11 10:29:31 UTC
~~ Attention Customers and Partners - RHEL 5.5 Beta is now available on RHN ~~

RHEL 5.5 Beta has been released! There should be a fix present in this 
release that addresses your request. Please test and report back results 
here by March 3rd, 2010 (2010-03-03), or sooner.

Upon successful verification of this request, post your results and update 
the Verified field in Bugzilla with the appropriate value.

If you encounter any issues while testing, please describe them and set 
this bug into NEED_INFO. If you encounter new defects or have additional 
patch(es) to request for inclusion, please clone this bug for each request
and escalate through your support representative.

Comment 14 Doug Chapman 2010-03-03 15:20:18 UTC
It appears someone asked me a question in a private comment and put this in "needinfo" state.  I cannot see private BZ comments, so please either email me the question or open up the comment.

thanks,

- Doug

Comment 15 Igor Zhang 2010-03-11 07:53:26 UTC
Hi, can anyone tell me why the following steps can't reproduce this bug? Thanks.

First, a program was run to eat system memory quickly; then I did the following:
[root@intel-sunriseridge-01 bz498510]# uname -rm
2.6.18-128.el5 x86_64
[root@intel-sunriseridge-01 bz498510]# while true; do numactl --hardware|grep free; sleep 2; echo; done
...
node 0 free: 7550 MB
node 1 free: 4002 MB
node 2 free: 8 MB
node 3 free: 2999 MB
...

Seeing that node 2 had only 8 MB free, I then increased the number of huge pages:
[root@intel-sunriseridge-01 bz498510]# echo 40 > /proc/sys/vm/nr_hugepages
[root@intel-sunriseridge-01 bz498510]# n=0; while [ $n -le 3 ]; do cat /sys/devices/system/node/node$n/meminfo |grep  HugePages_Total; let n++; done
Node 0 HugePages_Total:    13
Node 1 HugePages_Total:    13
Node 2 HugePages_Total:     2
Node 3 HugePages_Total:    12

But no OOM kill occurred on node 2. BTW, the huge page size was 2048 kB.

Comment 18 errata-xmlrpc 2010-03-30 07:20:59 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2010-0178.html

