Bug 150945 - extreme slab memory use
Summary: extreme slab memory use
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: rawhide
Hardware: i686
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Dave Jones
QA Contact: Brian Brock
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2005-03-12 11:27 UTC by Todd Mokros
Modified: 2015-01-04 22:17 UTC
CC: 4 users

Fixed In Version: kernel-2.6.11-1.1225_FC4
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2005-04-03 12:24:04 UTC
Type: ---
Embargoed:



Description Todd Mokros 2005-03-12 11:27:49 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.6) Gecko/20050311 Firefox/1.0.1 Fedora/1.0.1-5

Description of problem:
I'm seeing an issue where any process that walks a large directory tree (du, updatedb) causes extreme slab memory use.  Here's a snapshot of slabtop:

 Active / Total Objects (% used)    : 239664 / 263056 (91.1%)
 Active / Total Slabs (% used)      : 104598 / 104644 (100.0%)
 Active / Total Caches (% used)     : 83 / 122 (68.0%)
 Active / Total Size (% used)       : 416779.61K / 418294.85K (99.6%)
 Minimum / Average / Maximum Object : 0.02K / 1.59K / 128.00K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME                   
112728 104720  92%    0.04K   1281       88      5124K size-32
 48897  48884  99%    4.00K  48897        1    195588K ext3_inode_cache
 35288  35263  99%    4.00K  35288        1    141152K dentry_cache
 19764  10362  52%    0.06K    324       61      1296K buffer_head
 13299  12052  90%    0.10K    341       39      1364K vm_area_struct
  5183   5183 100%    4.00K   5183        1     20732K radix_tree_node
  4576   3810  83%    0.04K     52       88       208K anon_vma
  4250   4250 100%    4.00K   4250        1     17000K filp
  3321   3263  98%    0.05K     41       81       164K sysfs_dir_cache
  2674   2669  99%    4.00K   2674        1     10696K size-128
  2548   1168  45%    0.07K     49       52       196K size-64
  1170    109   9%    0.06K     18       65        72K journal_head
   910    910 100%    4.00K    910        1      3640K inode_cache
   637    637 100%    4.00K    637        1      2548K shmem_inode_cache
   478    476  99%    4.00K    478        1      1912K size-512
   463    463 100%    4.00K    463        1      1852K sock_inode_cache


After the process that causes the problem completes, the slab usage slowly drops over time.  But this eats up 2/3 of my physical memory, and kswapd starts eating up CPU.  A general system slowdown is also noticeable, with increased sys CPU time.  I started seeing this issue with the Fedora development 2.6.10 kernels.  If I boot into kernel 2.6.9-1.1047_FC4, the issue goes away.
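
To watch this happen without an interactive tool, a minimal sketch (standard 2.6 /proc interfaces; the awk one-liner assumes the slabinfo 2.x column order of name, active_objs, num_objs, objsize):

  du -s / > /dev/null &                 # walk the tree to fill the dentry/inode caches
  grep Slab /proc/meminfo               # total slab memory, sample before and after
  # top slab consumers, num_objs * objsize in KB:
  awk 'NR > 2 { printf "%-24s %8.0fK\n", $1, $3 * $4 / 1024 }' \
      /proc/slabinfo | sort -k2 -rn | head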

Version-Release number of selected component (if applicable):
kernel-2.6.11-1.1177_FC4

How reproducible:
Always

Steps to Reproduce:
1. Boot into any Fedora development kernel, 2.6.10 through 2.6.11-1.1177_FC4
2. Run either updatedb, or du -s /

Actual Results:  Available memory drops in top, and slab usage skyrockets in slabtop.
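
The same run can be captured non-interactively with the stock procps tools (sample counts here are arbitrary):

  vmstat 5 12                  # the "free" column falls and "sy" CPU climbs
  top -b -n 1 | grep kswapd    # kswapd accumulating CPU time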

Additional info:

P3 550, 640 MB RAM.

Comment 1 Matthias Hensler 2005-03-27 18:32:19 UTC
I have a similar problem with all Fedora kernels (>=2.6.11). While all 2.6.10
kernels worked fine, every newer kernel (from Fedora development, as well as the
2.6.11-1.7_FC3 from FC3 Testing) causes my memory to go away.

It is enough to boot the kernel to runlevel 3, without starting X or running any
commands (note: no updatedb is running at that moment), and nearly 1/3 to half
of my memory is gone.

With 2.6.10 and below my slab usage is around 15-20 MB. slabtop lists all
objects with a usage between 80-90%.

Running any 2.6.11 kernel, the slab usage goes up to 200 MB, with dentry_cache,
ext3_inode_cache and size-128 each using nearly 60 MB. The usage is then also
shown as 100%.

Output from slabtop running 2.6.10-1.770_FC3 with X/WindowMaker:
 Active / Total Objects (% used)    : 77404 / 91105 (85.0%)
 Active / Total Slabs (% used)      : 4734 / 4734 (100.0%)
 Active / Total Caches (% used)     : 90 / 125 (72.0%)
 Active / Total Size (% used)       : 16229.79K / 18259.95K (88.9%)
 Minimum / Average / Maximum Object : 0.01K / 0.20K / 128.00K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME                   
 17025  15977  93%    0.05K    227       75       908K buffer_head
 13144  10273  78%    0.12K    424       31      1696K size-128
 12168   8304  68%    0.16K    507       24      2028K dentry_cache
  6730   6448  95%    0.68K   1346        5      5384K ext3_inode_cache
  6468   5268  81%    0.27K    462       14      1848K radix_tree_node
  6016   5954  98%    0.08K    128       47       512K vm_area_struct
  5795   5674  97%    0.06K     95       61       380K size-64
  5593   5473  97%    0.03K     47      119       188K size-32
  1785   1605  89%    0.03K     15      119        60K anon_vma
  1720   1660  96%    0.19K     86       20       344K filp
  1356   1293  95%    0.02K      6      226        24K dm_io
  1356   1293  95%    0.02K      6      226        24K dm_tio
  1143    864  75%    0.43K    127        9       508K inode_cache
   750    150  20%    0.05K     10       75        40K avc_node
   665    661  99%    0.54K     95        7       380K shmem_inode_cache
   536    329  61%    0.44K     67        8       268K proc_inode_cache

Output from slabtop running 2.6.11-1.7_FC3 with X:
 Active / Total Objects (% used)    : 116007 / 118956 (97.5%)
 Active / Total Slabs (% used)      : 49070 / 49080 (100.0%)
 Active / Total Caches (% used)     : 85 / 120 (70.8%)
 Active / Total Size (% used)       : 196048.48K / 196237.89K (99.9%)
 Minimum / Average / Maximum Object : 0.02K / 1.65K / 128.00K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
 52008  51936  99%    0.04K    591       88      2364K size-32
 14946  14946 100%    4.00K  14946        1     59784K size-128
 14051  14051 100%    4.00K  14051        1     56204K dentry_cache
  9897   9896  99%    4.00K   9897        1     39588K ext3_inode_cache
  3978   3834  96%    0.10K    102       39       408K vm_area_struct
  3050   3007  98%    0.06K     50       61       200K buffer_head
  2997   2842  94%    0.05K     37       81       148K sysfs_dir_cache
  2550   2549  99%    4.00K   2550        1     10200K radix_tree_node
  2288   2108  92%    0.07K     44       52       176K size-64
  2041   2040  99%    4.00K   2041        1      8164K inode_cache
  1080   1030  95%    0.03K      8      135        32K dm_io
  1080   1030  95%    0.03K      8      135        32K dm_tio
  1056    768  72%    0.04K     12       88        48K anon_vma
   837    837 100%    4.00K    837        1      3348K shmem_inode_cache
   820    820 100%    4.00K    820        1      3280K filp
   431    431 100%    4.00K    431        1      1724K proc_inode_cache

All this is running on an Asus M5678NWP notebook with Intel Centrino and 512 MB RAM.
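
A detail worth flagging in both 2.6.11 dumps above: every cache with objects of roughly 128 bytes and up reports OBJ SIZE 4.00K with 1 OBJ/SLAB, i.e. one object per page, where the 2.6.10 dump packs, for example, five 0.68K ext3 inodes into each slab. That page-per-object layout is what slab debugging options can produce, which would fit the fix noted in the next comment; this is a reading of the numbers, not something the reporters state. Whether a given build has such options enabled can be checked against the shipped config (the /boot path is the Fedora convention; both symbols are real 2.6 kconfig options):

  grep -E 'CONFIG_DEBUG_PAGEALLOC|CONFIG_DEBUG_SLAB' /boot/config-$(uname -r)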

Comment 2 Todd Mokros 2005-04-03 12:24:04 UTC
After seeing the following changelog in kernel-2.6.11-1.1225_FC4 I thought I'd
give it a try to see if the severe system performance issues were fixed.

* Thu Mar 31 2005 Rik van Riel <riel>
[snip] 
- for performance reasons, disable CONFIG_DEBUG_PAGEALLOC for FC4t2

My initial experience is that the performance issues I was seeing have been
resolved.  Slab memory also now seems to be reclaimed quickly when needed.  This
bug appears to be resolved from my point of view.
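
One plausible way to verify this on the updated kernel (the config file name follows Fedora's naming; per the changelog entry above the option should now be off):

  grep CONFIG_DEBUG_PAGEALLOC /boot/config-2.6.11-1.1225_FC4
  # expect: # CONFIG_DEBUG_PAGEALLOC is not set
  grep Slab /proc/meminfo   # should stay near 2.6.10 levels after a du -s /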


Comment 3 Matthias Hensler 2005-04-03 13:10:49 UTC
Thanks, with 2.6.11-1.1225_FC4 the problem is solved for me too.

Comment 4 Damian Menscher 2005-05-30 06:27:31 UTC
I'm seeing this under FC3... I guess it needs to be opened as another bug?

