Description of problem:
Currently, RHEL 5.3 will OOM-kill a task attempting to increase the number of huge pages in the huge page free pool when it cannot allocate memory from a given node. This is not really an out-of-memory condition, and it is not under the user/administrator's control, because the kernel will unconditionally attempt to distribute allocations across all online nodes, whether or not a node has sufficient memory.

This can be avoided by having the huge page free pool allocator, alloc_fresh_huge_page_node(), pass a new flag, __GFP_NO_OOM, to the call to alloc_pages_thisnode(); the flag is used only for huge page allocations. With this flag, __alloc_pages() will not call out_of_memory() before restarting the allocation loop. However, since we do not call out_of_memory(), the task will not have the TIF_MEMDIE flag set, so we need to indicate that the kernel WOULD have OOM-killed the task; otherwise we would loop forever attempting to allocate huge pages. Finally, pass the __GFP_NOMEMALLOC flag, as we do not want to dip into the reserves when filling the huge page free pool.

Note: mainline has undergone significant rework in this area and does not suffer this symptom. This appears to be the minimal patch that avoids the problem in RHEL 5.x.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux maintenance release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux Update release for currently deployed products. This request is not yet committed for inclusion in an Update release.
Hello Doug, Larry seems busy, so I'm asking for any reproducer steps to verify this bugfix. Thanks.
This is in needinfo from me but I don't see a question. I assume the question is in a private comment which I cannot see. Please make sure any comments that you expect me to see are not private.
Sorry Doug, see comment#5.
Lee, do you know of a good way to reproduce this issue?
Lee replied to me via email:

On an 8640, for example, in 100% CLM mode, all you need to do is try to allocate more huge pages than there are nodes. When it tries to allocate from node 4 (the interleaved node), the allocation will fail and OOM-kill the task that is modifying the vm.nr_hugepages sysctl: your shell or sysctl.

On x86[_64], I believe you can reproduce it by allocating enough memory on one node that a huge page cannot be allocated there, then trying to allocate more huge pages than there are nodes, so that the kernel attempts to allocate pages from each node. The "fresh huge page allocation" will fail on the node with no huge pages available and, I think, cause an OOM kill there as well. I don't recall whether I tested this, but I vaguely recall doing so. It was a while back that I reported this.
Fixed in kernel-2.6.18-174.el5. You can download this test kernel from http://people.redhat.com/dzickus/el5

Please do NOT transition this bugzilla state to VERIFIED until our QE team has sent specific instructions indicating when to do so. However, feel free to provide a comment indicating that this fix has been verified.
~~ Attention Customers and Partners - RHEL 5.5 Beta is now available on RHN ~~

RHEL 5.5 Beta has been released! There should be a fix present in this release that addresses your request. Please test and report back results here by March 3rd, 2010 (2010-03-03) or sooner.

Upon successful verification of this request, post your results and update the Verified field in Bugzilla with the appropriate value. If you encounter any issues while testing, please describe them and set this bug into NEED_INFO. If you encounter new defects or have additional patch(es) to request for inclusion, please clone this bug per each request and escalate through your support representative.
It appears someone asked me a question in a private comment and put this in "needinfo" state. I cannot see private BZ comments so please either email me the question or open up the comment. thanks, - Doug
Hi, can anyone tell me why the following operations can't reproduce this bug? Thanks.

First, one program was eating system memory quickly; then I did the following:

[root@intel-sunriseridge-01 bz498510]# uname -rm
2.6.18-128.el5 x86_64
[root@intel-sunriseridge-01 bz498510]# while true; do numactl --hardware | grep free; sleep 2; echo; done
...
node 0 free: 7550 MB
node 1 free: 4002 MB
node 2 free: 8 MB
node 3 free: 2999 MB
...

Seeing that node 2 had only 8 MB free, I then increased the number of huge pages:

[root@intel-sunriseridge-01 bz498510]# echo 40 > /proc/sys/vm/nr_hugepages
[root@intel-sunriseridge-01 bz498510]# n=0; while [ $n -le 3 ]; do cat /sys/devices/system/node/node$n/meminfo | grep HugePages_Total; let n++; done
Node 0 HugePages_Total: 13
Node 1 HugePages_Total: 13
Node 2 HugePages_Total: 2
Node 3 HugePages_Total: 12

But the OOM killer did not fire on node 2. BTW, the huge page size was 2048 kB.
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you. http://rhn.redhat.com/errata/RHSA-2010-0178.html