Bug 536734 - LowFree exhausted
Summary: LowFree exhausted
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Version: 5.4
Hardware: i686
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Danny Feng
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-11-11 06:54 UTC by Yury Stankevich
Modified: 2017-09-18 11:20 UTC (History)
1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-06-02 13:17:26 UTC
Target Upstream Version:
Embargoed:
Flags: urykhy: needinfo-


Attachments
zone info (3.19 KB, text/plain)
2009-11-11 06:54 UTC, Yury Stankevich

Description Yury Stankevich 2009-11-11 06:54:44 UTC
Created attachment 368993
zone info

Description of problem:
After the box has been up for some time, LowFree memory drops to a few MB (e.g. 8 MB).


Version-Release number of selected component (if applicable):
2.6.18-164.el5 (the same with the -128 kernel)


How reproducible:
It looks like it is caused by updatedb when directories contain a lot of files (we have a few folders with > 50K files each).
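
(A plausible mechanism: updatedb walking directories with tens of thousands of entries inflates the dentry and inode slab caches, which on an i686 kernel live entirely in low memory; that would also explain why the drop_caches workaround below, which discards those caches, recovers LowFree.)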

  
Actual results:
$ cat /proc/meminfo | grep Low
LowTotal:       875312 kB
LowFree:          9312 kB


Expected results:
I expect LowFree not to drop below ~100 MB.


Additional info:
The box has 4 GB of RAM and runs a 32-bit kernel.
vm.lowmem_reserve_ratio = 256   256     32
filesystem - ext3

Impact:
Exhausted LowFree causes problems when using the kernel's PACKET_MMAP feature (see the sketch at the end of this description):
 - OOM messages in dmesg
 - a small ring buffer is allocated for PACKET_MMAP -> poor performance

Known workaround:
echo 3 > /proc/sys/vm/drop_caches
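
To make the failure path concrete, here is a minimal PACKET_RX_RING sketch (illustrative parameters, not the reporter's actual program; assumes a 4 KB page size and root privileges). The kernel allocates each tp_block_size block of the ring as one physically contiguous chunk, so page-sized blocks keep every allocation at order 0, whereas 128 KB blocks need the kind of order:5 allocations that fail in the traces below.

/*
 * Minimal PACKET_MMAP receive-ring sketch (parameters are illustrative).
 * Build: gcc -o ring ring.c ; run as root.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>        /* htons */
#include <sys/socket.h>       /* socket, setsockopt, SOL_PACKET */
#include <sys/mman.h>         /* mmap */
#include <linux/if_packet.h>  /* struct tpacket_req, PACKET_RX_RING */
#include <linux/if_ether.h>   /* ETH_P_ALL */

int main(void)
{
    struct tpacket_req req;
    size_t len;
    void *ring;
    int fd;

    fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&req, 0, sizeof(req));
    /* One page per block => order-0 allocations inside packet_set_ring().
     * A tp_block_size of 131072 (128 KB) would instead force one order:5
     * allocation per block, as in the dmesg trace below. */
    req.tp_block_size = 4096;  /* must be a multiple of the page size  */
    req.tp_frame_size = 2048;  /* multiple of 16, big enough for the
                                  tpacket header plus a frame           */
    req.tp_block_nr   = 256;   /* total ring: 256 * 4 KB = 1 MB        */
    req.tp_frame_nr   = req.tp_block_nr *
                        (req.tp_block_size / req.tp_frame_size);

    if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req)) < 0) {
        perror("setsockopt(PACKET_RX_RING)"); /* packet_set_ring() runs here */
        close(fd);
        return 1;
    }

    len  = (size_t)req.tp_block_size * req.tp_block_nr;
    ring = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ring == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* ... poll() the socket and walk struct tpacket_hdr frames here ... */

    munmap(ring, len);
    close(fd);
    return 0;
}

The trade-off is more blocks for the same total ring size, which only costs a slightly larger block table in the kernel.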

Comment 1 Danny Feng 2009-12-29 09:27:31 UTC
Since you're using a 32-bit kernel, no matter how much RAM you have, the kernel will use less than 1 GB as low memory.

When low memory is exhausted, it doesn't matter how much high memory is
available, the oom-killer will begin whacking processes to keep the
server alive.

There are a couple of solutions to this problem:
1) Upgrade to a 64-bit kernel. This is the best solution, because all memory becomes low memory.

2) If limited to a 32-bit kernel, the best option is to run the hugemem
kernel. This kernel splits low/high memory differently, and in most
cases should provide enough low memory to map high memory.

3) If running the 32-bit hugemem kernel isn't an option either, you can try
setting /proc/sys/vm/lower_zone_protection to a value of 250 or more.
This makes the kernel more aggressive about defending the low zone from
allocations that could be satisfied from the high memory zone.

You can check & set this value on the fly via:
  # cat /proc/sys/vm/lower_zone_protection
  # echo "250" > /proc/sys/vm/lower_zone_protection

Comment 2 Danny Feng 2009-12-29 09:53:09 UTC
(In reply to comment #1)
> 2) If limited to a 32-bit kernel, the best option is to run the hugemem
> kernel. This kernel splits low/high memory differently, and in most
> cases should provide enough low memory to map high memory.

Sorry, for RHEL5 this should be the 32-bit PAE kernel.

> You can check & set this value on the fly via:
>   # cat /proc/sys/vm/lower_zone_protection
>   # echo "250" > /proc/sys/vm/lower_zone_protection

Sorry again, for RHEL5 the tunable is /proc/sys/vm/lowmem_reserve_ratio:
echo "256 256 250" > /proc/sys/vm/lowmem_reserve_ratio

Comment 3 Yury Stankevich 2010-01-13 06:14:35 UTC
$ cat /proc/sys/vm/lowmem_reserve_ratio
256     256     250

$ cat /proc/meminfo | grep Low
LowTotal:       875312 kB
LowFree:          8812 kB

It doesn't really help.
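
(A possible reason this didn't help: with lowmem_reserve_ratio, a larger value means a *smaller* reserve. If I read the 2.6.18 accounting right, the Normal zone's protection against HighMem-capable allocations is roughly HighMem pages / ratio, so with the 950272 HighMem pages shown in comment #6, a ratio of 250 reserves about 950272 / 250 = 3801 pages (~15 MB), while the default 32 reserves about 29696 pages (~116 MB). The "lowmem_reserve[]: 0 0 0 3801" line in comment #6 matches the 250 setting. Note also that the reserve only steers allocations that could go to HighMem; dentry/inode slab is GFP_KERNEL and lands in low memory regardless.)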

Comment 4 Yury Stankevich 2010-01-13 06:33:44 UTC
After the program starts:

LowFree:        612692 kB

but there are a few OOM messages in dmesg for `packet_set_ring+0xd4/0x2f9`,
so I think the kernel is trying to get memory but can't find a contiguous
region for the order:5 allocation.

Comment 5 Danny Feng 2010-01-13 06:38:32 UTC
(In reply to comment #4)
> After the program starts:
> 
> LowFree:        612692 kB
> 
> but there are a few OOM messages in dmesg for `packet_set_ring+0xd4/0x2f9`

Could you please show me those messages?

> so I think the kernel is trying to get memory but can't find a contiguous
> region for the order:5 allocation.

Comment 6 Yury Stankevich 2010-01-13 06:50:40 UTC
Jan 13 09:30:25 dev kernel: XXX: page allocation failure. order:5, mode:0xc0d0
Jan 13 09:30:25 dev kernel:  [<c0459b7f>] __alloc_pages+0x283/0x297
Jan 13 09:30:25 dev kernel:  [<c0459bb8>] __get_free_pages+0x25/0x31
Jan 13 09:30:25 dev kernel:  [<c0611acc>] packet_set_ring+0xd4/0x2f9
Jan 13 09:30:25 dev kernel:  [<c0612b81>] packet_setsockopt+0x246/0x2d0
Jan 13 09:30:25 dev kernel:  [<c04832d9>] do_ioctl+0x1c/0x5d
Jan 13 09:30:25 dev kernel:  [<c05b14f4>] sys_setsockopt+0x76/0x95
Jan 13 09:30:25 dev kernel:  [<c05b27de>] sys_socketcall+0x15c/0x19e
Jan 13 09:30:25 dev kernel:  [<c0404f17>] syscall_call+0x7/0xb
Jan 13 09:30:25 dev kernel:  =======================
Jan 13 09:30:25 dev kernel: Mem-info:
Jan 13 09:30:25 dev kernel: DMA per-cpu:
Jan 13 09:30:25 dev kernel: cpu 0 hot: high 0, batch 1 used:0
Jan 13 09:30:25 dev kernel: cpu 0 cold: high 0, batch 1 used:0
Jan 13 09:30:25 dev kernel: cpu 1 hot: high 0, batch 1 used:0
Jan 13 09:30:25 dev kernel: cpu 1 cold: high 0, batch 1 used:0
Jan 13 09:30:25 dev kernel: cpu 2 hot: high 0, batch 1 used:0
Jan 13 09:30:25 dev kernel: cpu 2 cold: high 0, batch 1 used:0
Jan 13 09:30:25 dev kernel: cpu 3 hot: high 0, batch 1 used:0
Jan 13 09:30:25 dev kernel: cpu 3 cold: high 0, batch 1 used:0
Jan 13 09:30:25 dev kernel: DMA32 per-cpu: empty
Jan 13 09:30:25 dev kernel: Normal per-cpu:
Jan 13 09:30:25 dev kernel: cpu 0 hot: high 186, batch 31 used:20
Jan 13 09:30:25 dev kernel: cpu 0 cold: high 62, batch 15 used:55
Jan 13 09:30:25 dev kernel: cpu 1 hot: high 186, batch 31 used:89
Jan 13 09:30:25 dev kernel: cpu 1 cold: high 62, batch 15 used:52
Jan 13 09:30:25 dev kernel: cpu 2 hot: high 186, batch 31 used:4
Jan 13 09:30:25 dev kernel: cpu 2 cold: high 62, batch 15 used:60
Jan 13 09:30:25 dev kernel: cpu 3 hot: high 186, batch 31 used:11
Jan 13 09:30:25 dev kernel: cpu 3 cold: high 62, batch 15 used:47
Jan 13 09:30:25 dev kernel: HighMem per-cpu:
Jan 13 09:30:25 dev kernel: cpu 0 hot: high 186, batch 31 used:28
Jan 13 09:30:25 dev kernel: cpu 0 cold: high 62, batch 15 used:11
Jan 13 09:30:25 dev kernel: cpu 1 hot: high 186, batch 31 used:30
Jan 13 09:30:25 dev kernel: cpu 1 cold: high 62, batch 15 used:0
Jan 13 09:30:25 dev kernel: cpu 2 hot: high 186, batch 31 used:7
Jan 13 09:30:25 dev kernel: cpu 2 cold: high 62, batch 15 used:8
Jan 13 09:30:25 dev kernel: cpu 3 hot: high 186, batch 31 used:145
Jan 13 09:30:25 dev kernel: cpu 3 cold: high 62, batch 15 used:11
Jan 13 09:30:25 dev kernel: Free pages:     2299384kB (1608328kB HighMem)
Jan 13 09:30:25 dev kernel: …slab:16353 mapped-file:30602 mapped-anon:355463 pagetables:6519
Jan 13 09:30:25 dev kernel: DMA … present:16384kB pages_scanned:0 all_unreclaimable? yes
Jan 13 09:30:25 dev kernel: lowmem_reserve[]: 0 0 880 4592
Jan 13 09:30:25 dev kernel: DMA32 … pages_scanned:0 all_unreclaimable? no
Jan 13 09:30:25 dev kernel: lowmem_reserve[]: 0 0 880 4592
Jan 13 09:30:25 dev kernel: Normal … inactive:40456kB present:901120kB pages_scanned:0 all_unreclaimable? no
Jan 13 09:30:25 dev kernel: lowmem_reserve[]: 0 0 0 3801
Jan 13 09:30:25 dev kernel: HighMem … inactive:96980kB present:3801088kB pages_scanned:0 all_unreclaimable? no
Jan 13 09:30:25 dev kernel: lowmem_reserve[]: 0 0 0 0
Jan 13 09:30:25 dev kernel: DMA: … 0*2048kB 0*4096kB = 7044kB
Jan 13 09:30:25 dev kernel: DMA32: empty
Jan 13 09:30:25 dev kernel: Normal: …*512kB 0*1024kB 0*2048kB 0*4096kB = 684012kB
Jan 13 09:30:25 dev kernel: HighMem: … 0*512kB 0*1024kB 0*2048kB 1*4096kB = 1608328kB
Jan 13 09:30:25 dev kernel: 74408 pagecache pages
Jan 13 09:30:25 dev kernel: Swap cache: add 31, delete 31, find 0/0, race 0+0
Jan 13 09:30:25 dev kernel: Free swap  = 4192816kB
Jan 13 09:30:25 dev kernel: Total swap = 4192924kB
Jan 13 09:30:25 dev kernel: Free swap:       4192816kB
Jan 13 09:30:25 dev kernel: 1179648 pages of RAM
Jan 13 09:30:25 dev kernel: 950272 pages of HIGHMEM
Jan 13 09:30:25 dev kernel: 141652 reserved pages
Jan 13 09:30:25 dev kernel: 479871 pages shared
Jan 13 09:30:25 dev kernel: 0 pages swap cached
Jan 13 09:30:25 dev kernel: 631 pages dirty
Jan 13 09:30:25 dev kernel: 18 pages writeback
Jan 13 09:30:25 dev kernel: 30602 pages mapped
Jan 13 09:30:25 dev kernel: 16353 pages slab
Jan 13 09:30:25 dev kernel: 6519 pages pagetables
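
(For scale: order:5 means 2^5 = 32 contiguous pages, i.e. one physically contiguous 128 KB chunk, and mode 0xc0d0 decodes on 2.6.18 to GFP_KERNEL | __GFP_COMP | __GFP_ZERO, a low-memory-only allocation. That fits packet_set_ring() allocating each ring block with __get_free_pages(), and it suggests the application asked for tp_block_size = 131072. LowFree was ~600 MB here, so this looks like fragmentation of the Normal zone rather than exhaustion, which supports the theory in comment #4; smaller blocks, as in the sketch in the description, avoid the high-order requirement.)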

Comment 7 RHEL Program Management 2014-03-07 13:42:50 UTC
This bug/component is not included in the scope of RHEL-5.11.0, which is the last RHEL 5 minor release. This Bugzilla bug will soon be CLOSED as WONTFIX, at the end of the RHEL 5.11 development phase (Apr 22, 2014). Please contact your account manager or support representative in case you need to escalate this bug.

Comment 8 RHEL Program Management 2014-06-02 13:17:26 UTC
Thank you for submitting this request for inclusion in Red Hat Enterprise Linux 5. We've carefully evaluated the request, but are unable to include it in the RHEL 5 stream. If the issue is critical for your business, please provide additional business justification through the appropriate support channels (https://access.redhat.com/site/support).

