Bug 119758 - ext3_inode_cache hogs memory
Status: CLOSED DUPLICATE of bug 100666
Product: Fedora
Classification: Fedora
Component: kernel
Version: rawhide
Hardware: x86_64 Linux
Priority: medium
Severity: medium
Assigned To: Arjan van de Ven
QA Contact: Brian Brock
Depends On:
Blocks: FC2Target
Reported: 2004-04-01 19:40 EST by Arun Sharma
Modified: 2007-11-30 17:10 EST

Doc Type: Bug Fix
Last Closed: 2006-02-21 14:02:20 EST

Attachments: None
Description Arun Sharma 2004-04-01 19:40:57 EST
Description of problem:

When a cron job kicks off "updatedb", the kernel seems to allocate a
lot of inodes that never get freed. This results in the system becoming
very slow and unresponsive.

Version-Release number of selected component (if applicable):

2.6.3-2.1.253.2.1
2.6.4-1.300

How reproducible:

Run updatedb.
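
A minimal way to watch the cache grow, as a sketch (run as root;
assumes the 2.6-era /proc/slabinfo layout shown below):

grep ext3_inode_cache /proc/slabinfo   # note num_objs before
updatedb                               # walk the whole filesystem
grep ext3_inode_cache /proc/slabinfo   # num_objs climbs and never drops back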

 
Actual results:

On a 512MB machine:

$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0  1   1772  11368   1180  54572    0    0    15     9   29   273  8  1 60 31
 0  1   1772  11552    980  54548    0    0     4     0 1044   324  1  0  0 99
 0  1   1772  11552    980  54544    0    0     0     0 1039   409  0  0  0 100

# cat /proc/slabinfo | sed 's/:.*//' | awk '{print $0, $3 * $4}' | sort +6rn | head -10
ext3_inode_cache  159575 159642   1128    7    2  180076176
dentry_cache      137420 138924    408    9    1  56680992
size-256          162326 162386    280   14    1  45468080
size-64           265608 265869     88   43    1  23396472
pte_chain          10841  12600    128   30    1  1612800
size-4096            305    305   4096    1    1  1249280
radix_tree_node      897   2177    544    7    1  1184288
inode_cache         1350   1350    848    9    2  1144800
biovec-BIO_MAX_PAGES    256    256   4096    1    1  1048576
vm_area_struct      6459   6475    152   25    1  984200
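
The appended last column is num_objs * objsize (fields 3 and 4 of each
slabinfo line), i.e. the bytes pinned per cache, so ext3_inode_cache
alone holds about 172 MB. The same report with modern sort syntax
("sort +6rn" is the obsolete pre-POSIX form), a sketch that assumes the
two header lines of the 2.6 slabinfo format:

# tail -n +3 /proc/slabinfo | awk '{print $1, $3 * $4}' | sort -k2,2rn | head -10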

Alt+SysRq+T shows:

Apr  1 12:18:49 arun-desktop kernel: updatedb      D 00000100175339f4     0  5839   5813                     (NOTLB)
Apr  1 12:18:49 arun-desktop kernel: 0000010017533a50 0000000000000006 000000501c378920 0000010017533a34
Apr  1 12:18:49 arun-desktop kernel:        0000010017d25240 000000000001d9a1 00004c4d31079f84 ffffffff803d9860
Apr  1 12:18:49 arun-desktop kernel:        0000010017533b90 0000000000000246
Apr  1 12:18:49 arun-desktop kernel: Call Trace:<ffffffff80143e13>{schedule_timeout+216} <ffffffff80143d36>{process_timeout+0}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff801362c3>{io_schedule_timeout+15} <ffffffff80256877>{blk_congestion_wait+125}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff80136e57>{autoremove_wake_function+0} <ffffffff80136e57>{autoremove_wake_function+0}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff8016312f>{__alloc_pages+724} <ffffffff80163197>{__get_free_pages+31}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff80167835>{cache_grow+479} <ffffffff8016802c>{cache_alloc_refill+1101}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff801685b7>{kmem_cache_alloc+75} <ffffffffa0056b71>{:ext3:ext3_alloc_inode+19}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff801a7571>{alloc_inode+21} <ffffffff801a89d3>{get_new_inode_fast+21}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffffa0053e95>{:ext3:ext3_lookup+90} <ffffffff801978f8>{real_lookup+111}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff80197d11>{do_lookup+84} <ffffffff80198aae>{link_path_walk+3429}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff80197237>{getname+31} <ffffffff80199024>{path_lookup+359}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff801991ae>{__user_walk+47} <ffffffff8019297e>{vfs_lstat+21}
Apr  1 12:18:49 arun-desktop kernel:        <ffffffff80125a87>{sys32_lstat64+17} <ffffffff80125317>{sysenter_do_call+27}

Additional info:

It looks like the ext3_inode_cache object size (1128 bytes here) is
much bigger than in the base kernels. Regardless of that, the number of
objects seems to be monotonically increasing, and the trace above shows
updatedb blocked waiting for page allocation while trying to grow the
cache even further.
Comment 1 Arjan van de Ven 2004-04-02 02:49:11 EST
Did any oopses happen?
Comment 2 Arun Sharma 2004-04-02 13:09:10 EST
No oopses. Only slow behavior due to a large amount of memory locked
up in the ext3_inode_cache slab.
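
(As a quick check that the memory is merely cached rather than truly
leaked: kernels 2.6.16 and later can force dentry/inode reclaim through
/proc/sys/vm/drop_caches; a sketch, not applicable to the 2.6.3/2.6.4
kernels above:)

sync                                  # flush dirty data first
echo 2 > /proc/sys/vm/drop_caches     # reclaim dentries and inodes (as root)
grep ext3_inode_cache /proc/slabinfo  # object count collapses if reclaimable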
Comment 3 Arun Sharma 2004-04-08 02:01:45 EDT
BTW, I was using the 32-bit updatedb. It's possible that the leak is
in the 32-bit syscall layer, but I'm not sure.
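
(The sys32_lstat64/sysenter frames in the trace above are consistent
with a 32-bit binary. One way to confirm its word size, assuming
updatedb is on $PATH:)

file "$(which updatedb)"   # "ELF 32-bit LSB executable" confirms a 32-bit build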
Comment 4 Dave Jones 2004-05-25 12:28:14 EDT
This should be fixed in the final FC2 kernel.
Comment 5 Dave Jones 2004-06-14 20:13:33 EDT

*** This bug has been marked as a duplicate of 100666 ***
Comment 6 Red Hat Bugzilla 2006-02-21 14:02:20 EST
Changed to 'CLOSED' state since 'RESOLVED' has been deprecated.
