Bug 495442

Summary: vmscan: bail out of direct reclaim after swap_cluster_max pages
Product: Red Hat Enterprise Linux 5
Reporter: Rik van Riel <riel>
Component: kernel
Assignee: Rik van Riel <riel>
Status: CLOSED ERRATA
QA Contact: Red Hat Kernel QE team <kernel-qe>
Severity: high
Docs Contact:
Priority: high
Version: 5.3
CC: bmr, cward, czhang, dzickus, kernel-mgr, lwoodman, pzijlstr, tao
Target Milestone: rc
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2009-09-02 08:35:47 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: backport of upstream patches to make vmscan bail out of direct reclaim after SWAP_CLUSTER_MAX pages have been reclaimed (flags: none)

Description Rik van Riel 2009-04-13 07:56:38 UTC
Created attachment 339296 [details]
backport of upstream patches to make vmscan bail out of direct reclaim after SWAP_CLUSTER_MAX pages have been reclaimed

Description of problem:

The RHEL 5 VM suffers from a problem the upstream kernel has had for a while: under some workloads, the pageout code will deplete the page cache and then suddenly the VM hits a wall.

Upstream has a potential fix for this issue, though the fix upstream was tested on top of the split LRU code.

From the commit message of a79311c14eae4bb946a97af25f3e1b17d625985d:

    When the VM is under pressure, it can happen that several direct reclaim
    processes are in the pageout code simultaneously.  It also happens that
    the reclaiming processes run into mostly referenced, mapped and dirty
    pages in the first round.
    
    This results in multiple direct reclaim processes having a lower
    pageout priority, which corresponds to a higher target of pages to
    scan.
    
    This in turn can result in each direct reclaim process freeing
    many pages.  Together, they can end up freeing way too many pages.
    
    This kicks useful data out of memory (in some cases more than half
    of all memory is swapped out).  It also impacts performance by
    keeping tasks stuck in the pageout code for too long.
    
    A 30% improvement in hackbench has been observed with this patch.
    
    The fix is relatively simple: in shrink_zone() we can check how many
    pages we have already freed, direct reclaim tasks break out of the
    scanning loop if they have already freed enough pages and have reached
    a lower priority level.


Version-Release number of selected component (if applicable):

All current RHEL 5 kernels.

Steps to Reproduce:
1. echo 40 > /proc/sys/vm/swappiness  (or any other reasonable desktop value)
2. slowly run out of memory
3. start up a new process
4. wait for the system to thrash to a crawl
5. when the system comes back, see between 1/4 and 1/2 of memory free
  
Expected results:

The VM frees only what it needs, resulting in much lower application latencies.

Additional info:

I developed the patch upstream that fixes the issue there.  I am currently compiling a test RPM with the attached patch to see if it fixes the problem on RHEL 5.

Comment 1 RHEL Program Management 2009-04-13 14:49:35 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.

Comment 2 Rik van Riel 2009-04-16 15:05:32 UTC
Posted after 3 days of testing and careful review of all the surrounding code.

Comment 3 Don Zickus 2009-05-06 17:17:46 UTC
in kernel-2.6.18-144.el5
You can download this test kernel from http://people.redhat.com/dzickus/el5

Please do NOT transition this bugzilla state to VERIFIED until our QE team
has sent specific instructions indicating when to do so.  However feel free
to provide a comment indicating that this fix has been verified.

Comment 5 Chris Ward 2009-07-03 18:41:29 UTC
~~ Attention - RHEL 5.4 Beta Released! ~~

RHEL 5.4 Beta has been released! There should be a fix present in the Beta release that addresses this particular request. Please test and report back results here, at your earliest convenience. RHEL 5.4 General Availability release is just around the corner!

If you encounter any issues while testing Beta, please describe the issues you have encountered and set the bug into NEED_INFO. If you encounter new issues, please clone this bug to open a new issue and request it be reviewed for inclusion in RHEL 5.4 or a later update, if it is not of urgent severity.

Please do not flip the bug status to VERIFIED. Only post your verification results, and if available, update Verified field with the appropriate value.

Questions can be posted to this bug or your customer or partner representative.

Comment 6 Caspar Zhang 2009-08-10 10:04:02 UTC
Verified that the patch to this bug is included in kernel-2.6.18-162.el5

Configuring the test environment is difficult for me, so I did a code review first and will run the test a few days later.

Comment 8 errata-xmlrpc 2009-09-02 08:35:47 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2009-1243.html