Red Hat Bugzilla – Bug 495442
vmscan: bail out of direct reclaim after swap_cluster_max pages
Last modified: 2010-10-23 04:55:11 EDT
Created attachment 339296 [details]
backport of upstream patches to make vmscan bail out of direct reclaim after SWAP_CLUSTER_MAX pages have been reclaimed
Description of problem:
The RHEL 5 VM suffers from a problem the upstream kernel has had for a while: under some workloads, the pageout code will deplete the page cache and then suddenly the VM hits a wall.
Upstream has a potential fix for this issue, though the fix upstream was tested on top of the split LRU code.
From the commit message of a79311c14eae4bb946a97af25f3e1b17d625985d:
When the VM is under pressure, it can happen that several direct reclaim
processes are in the pageout code simultaneously. It also happens that
the reclaiming processes run into mostly referenced, mapped and dirty
pages in the first round.
This results in multiple direct reclaim processes having a lower
pageout priority, which corresponds to a higher target of pages to
scan.
This in turn can result in each direct reclaim process freeing
many pages. Together, they can end up freeing way too many pages.
This kicks useful data out of memory (in some cases more than half
of all memory is swapped out). It also impacts performance by
keeping tasks stuck in the pageout code for too long.
A 30% improvement in hackbench has been observed with this patch.
The fix is relatively simple: in shrink_zone() we can check how many
pages we have already freed; direct reclaim tasks break out of the
scanning loop if they have already freed enough pages and have reached
a lower priority level.
Version-Release number of selected component (if applicable):
All current RHEL 5 kernels.
Steps to Reproduce:
1. echo 40 > /proc/sys/vm/swappiness (or any other reasonable desktop value)
2. slowly run out of memory
3. start up a new process
4. wait for the system to thrash to a crawl
5. when the system comes back, see between 1/4 and 1/2 of memory free
With the patch applied, the VM frees just what it needs, resulting in much lower application latencies.
I developed the patch upstream that fixes the issue there. I am currently compiling a test RPM with the attached patch to see if it fixes the problem on RHEL 5.
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release. Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products. This request is not yet committed for inclusion in an Update
release.
Posted after 3 days of testing and careful review of all the surrounding code.
You can download this test kernel from http://people.redhat.com/dzickus/el5
Please do NOT transition this bugzilla state to VERIFIED until our QE team
has sent specific instructions indicating when to do so. However, feel
free to provide a comment indicating that this fix has been verified.
~~ Attention - RHEL 5.4 Beta Released! ~~
RHEL 5.4 Beta has been released! There should be a fix present in the Beta release that addresses this particular request. Please test and report back results here, at your earliest convenience. RHEL 5.4 General Availability release is just around the corner!
If you encounter any issues while testing Beta, please describe the issues you have encountered and set the bug into NEED_INFO. If you encounter new issues, please clone this bug to open a new issue and request it be reviewed for inclusion in RHEL 5.4 or a later update, if it is not of urgent severity.
Please do not flip the bug status to VERIFIED. Only post your verification results, and if available, update Verified field with the appropriate value.
Questions can be posted to this bug or your customer or partner representative.
Verified that the patch to this bug is included in kernel-2.6.18-162.el5
Configuring the test environment is difficult for me, so I will do a code review first and test it a few days later.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.