Bug 231690

Summary: LSPP: system hangs under audit stress testing
Product: Red Hat Enterprise Linux 5
Component: kernel
Version: 5.0
Hardware: x86_64
OS: Linux
Status: CLOSED DUPLICATE
Severity: high
Priority: medium
Reporter: Loulwa Salem <loulwa>
Assignee: Eric Paris <eparis>
QA Contact: Martin Jenner <mjenner>
CC: eparis, iboverma, krisw, linda.knippers, sgrubb
Doc Type: Bug Fix
Bug Blocks: 224041
Last Closed: 2007-03-20 19:57:15 UTC

Description Loulwa Salem 2007-03-09 23:28:04 UTC
Description of problem:
Running audit stress hangs the system

Version-Release number of selected component (if applicable):
audit-libs-1.3.1-1.el5
2.6.18-6.el5.lspp.64

How reproducible:
always

Steps to Reproduce:
1. Add the following audit rules
   auditctl -a entry,always -S mkdir
   auditctl -a entry,always -S chmod
   auditctl -a entry,always -S rmdir
   auditctl -a exit,always -S mkdir
   auditctl -a exit,always -S chmod
   auditctl -a exit,always -S rmdir

2. Stress the audit system using the following scenario:
   As three different programs (so the directory name is unique per program)
running as different users, loop over the following three statements 6,000,000
times (a minimal C sketch of one such program follows the statements below):
   syscall(__NR_mkdir,dirname,mode);
   syscall(__NR_chmod,dirname,mode);
   syscall(__NR_rmdir,dirname);
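
   A minimal sketch of one such stress program (the directory name, mode, and
   build command here are illustrative assumptions; the original test programs
   are not attached to this bug):

   /* audit_stress.c -- build with: gcc -o audit_stress audit_stress.c
    * Each of the three instances would use its own directory name and
    * run as a different user. */
   #include <sys/syscall.h>
   #include <unistd.h>

   int main(void)
   {
       const char *dirname = "/tmp/audit_stress_1";  /* hypothetical, unique per program */
       long i;

       for (i = 0; i < 6000000; i++) {
           syscall(__NR_mkdir, dirname, 0755);   /* audited at entry and exit */
           syscall(__NR_chmod, dirname, 0700);
           syscall(__NR_rmdir, dirname);
       }
       return 0;
   }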
  
Actual results:
The system hangs; numerous oom-killer messages appear in /var/log/messages

Expected results:
Test should complete


Additional info:
Some early examples of the oom-killer messages:
Mar  8 07:20:51 3455racer1 kernel: klogd invoked oom-killer: gfp_mask=0x200d2, 
order=0, oomkilladj=0
Mar  8 07:20:51 3455racer1 kernel:
Mar  8 07:20:51 3455racer1 kernel: Call Trace:
Mar  8 07:20:52 3455racer1 kernel:  [<ffffffff800ba488>] 
out_of_memory+0x53/0x267
Mar  8 07:20:52 3455racer1 kernel:  [<ffffffff8000f012>] 
__alloc_pages+0x229/0x2b2
Mar  8 07:20:52 3455racer1 kernel:  [<ffffffff80031f4b>] 
read_swap_cache_async+0x45/0xd8
Mar  8 07:20:52 3455racer1 kernel:  [<ffffffff800bf61f>] 
swapin_readahead+0x60/0xd3
Mar  8 07:20:52 3455racer1 kernel:  [<ffffffff80008f3a>] 
__handle_mm_fault+0x952/0xdf2
Mar  8 07:20:52 3455racer1 kernel:  [<ffffffff8013b330>] __next_cpu+0x19/0x28
Mar  8 07:20:52 3455racer1 kernel:  [<ffffffff800857f7>] 
find_busiest_group+0x20d/0x621
Mar  8 07:20:52 3455racer1 kernel:  [<ffffffff800645a5>] 
do_page_fault+0x4b8/0x81d
Mar  8 07:20:52 3455racer1 kernel:  [<ffffffff80060ab8>] thread_return+0x0/0xea
Mar  8 07:20:54 3455racer1 kernel:  [<ffffffff8005be1d>] error_exit+0x0/0x84
Mar  8 07:20:56 3455racer1 kernel:  [<ffffffff800c3eae>] 
swap_unplug_io_fn+0x0/0xa4
Mar  8 07:20:59 3455racer1 kernel:  [<ffffffff8008bfb3>] do_syslog+0x173/0x3ae
Mar  8 07:21:03 3455racer1 kernel:  [<ffffffff8008bf81>] do_syslog+0x141/0x3ae
Mar  8 07:21:06 3455racer1 kernel:  [<ffffffff8009b666>] 
autoremove_wake_function+0x0/0x2e
Mar  8 07:21:06 3455racer1 kernel:  [<ffffffff800f663f>] kmsg_read+0x3a/0x44
Mar  8 07:21:09 3455racer1 kernel:  [<ffffffff8000b212>] vfs_read+0xcb/0x171
Mar  8 07:21:11 3455racer1 kernel:  [<ffffffff8001145c>] sys_read+0x45/0x6e
Mar  8 07:21:12 3455racer1 kernel:  [<ffffffff8005b2c1>] tracesys+0xd1/0xdc


Mar  8 07:33:00 3455racer1 kernel: auditd invoked oom-killer: 
gfp_mask=0x200d2, order=0, oomkilladj=-17
Mar  8 07:33:03 3455racer1 kernel:
Mar  8 07:33:04 3455racer1 kernel: Call Trace:
Mar  8 07:33:06 3455racer1 kernel:  [<ffffffff800ba488>] 
out_of_memory+0x53/0x267
Mar  8 07:33:12 3455racer1 kernel:  [<ffffffff8000f012>] 
__alloc_pages+0x229/0x2b2
Mar  8 07:33:15 3455racer1 kernel:  [<ffffffff80031f4b>] 
read_swap_cache_async+0x45/0xd8
Mar  8 07:33:16 3455racer1 kernel:  [<ffffffff8009b694>] 
wake_bit_function+0x0/0x23
Mar  8 07:33:20 3455racer1 kernel:  [<ffffffff800bf61f>] 
swapin_readahead+0x60/0xd3
Mar  8 07:33:22 3455racer1 kernel:  [<ffffffff80008f3a>] 
__handle_mm_fault+0x952/0xdf2
Mar  8 07:33:24 3455racer1 kernel:  [<ffffffff800645a5>] 
do_page_fault+0x4b8/0x81d
Mar  8 07:33:28 3455racer1 kernel:  [<ffffffff8005be1d>] error_exit+0x0/0x84


Mem-info associated with the very last oom-killer message:
Mar  8 13:08:51 3455racer1 kernel: Mem-info:
Mar  8 13:08:51 3455racer1 kernel: Node 0 DMA per-cpu:
Mar  8 13:08:51 3455racer1 kernel: cpu 0 hot: high 0, batch 1 used:0
Mar  8 13:08:51 3455racer1 kernel: cpu 0 cold: high 0, batch 1 used:0
Mar  8 13:08:51 3455racer1 kernel: cpu 1 hot: high 0, batch 1 used:0
Mar  8 13:08:54 3455racer1 kernel: cpu 1 cold: high 0, batch 1 used:0
Mar  8 13:08:54 3455racer1 kernel: cpu 2 hot: high 0, batch 1 used:0
Mar  8 13:08:56 3455racer1 kernel: cpu 2 cold: high 0, batch 1 used:0
Mar  8 13:08:57 3455racer1 kernel: cpu 3 hot: high 0, batch 1 used:0
Mar  8 13:08:59 3455racer1 kernel: cpu 3 cold: high 0, batch 1 used:0
Mar  8 13:09:01 3455racer1 kernel: Node 0 l_unreclaimable? yes
Mar  8 13:09:03 3455racer1 kernel: lowmem_reserve[]: 0 2003 2003 2003
Mar  8 13:09:04 3455racer1 kernel: Node 0 DMA32 free:5704kB min:5712kB 
low:7140kB high:8568kB active:0kB inactive:2684kB present:2051312kB 
pages_scanned:3096 all_unreclaimable? yes
Mar  8 13:09:04 3455racer1 kernel: lowmem_reserve[]: 0 0 0 0
Mar  8 13:09:05 3455racer1 kernel: Node 0 Normal free:0kB min:0kB low:0kB 
high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 
all_unreclaimable? no
Mar  8 13:09:07 3455racer1 kernel: lowmem_reserve[]: 0 0 0 0
Mar  8 13:09:07 3455racer1 kernel: Node 0 HighMem free:0kB min:128kB low:128kB 
high:128kB active:0kB inactive:0kB present:0kB pages_scanned:0 
all_unreclaimable? no
Mar  8 13:09:08 3455racer1 kernel: lowmem_reserve[]: 0 0 0 0
Mar  8 13:09:08 3455racer1 kernel: Node 0 DMA: 0*4kB 1*8kB 0*16kB 1*32kB 
1*64kB 0*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 1*4096kB = 8040kB
Mar  8 13:09:08 3455racer1 kernel: Node 0 DMA32: 16*4kB 7*8kB 63*16kB 11*32kB 
2*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 5704kB
Mar  8 13:09:12 3455racer1 kernel: Node 0 Normal: empty
Mar  8 13:09:12 3455racer1 kernel: Node 0 HighMem: empty
Mar  8 13:09:13 3455racer1 kernel: Swap cache: add 1905856, delete 1905490, 
find 284772/646913, race 29+90
Mar  8 13:09:17 3455racer1 kernel: Free swap  = 208476kB
Mar  8 13:09:19 3455racer1 kernel: Total swap = 524280kB
Mar  8 13:09:19 3455racer1 kernel: Free swap:       208476kB
Mar  8 13:09:22 3455racer1 kernel: 524032 pages of RAM
Mar  8 13:09:23 3455racer1 kernel: 9311 reserved pages
Mar  8 13:09:25 3455racer1 kernel: 73 pages shared
Mar  8 13:09:25 3455racer1 kernel: 366 pages swap cached

Comment 1 Alexander Viro 2007-03-12 13:45:25 UTC
I'd like to see the contents of /proc/slab_allocators over the course of that
run... It gives the breakdown of allocated objects by allocator, so the
fast-growing line in there will point to the leak.
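
(For reference, a hypothetical helper for watching that file is sketched below.
It assumes the line format quoted in comment 5 and a kernel built with
CONFIG_DEBUG_SLAB_LEAK, which is what provides /proc/slab_allocators; it prints
the ten entries with the largest object counts, so running it repeatedly during
the test shows which line is growing.)

   /* slab_top.c -- hypothetical helper: show the ten largest entries in
    * /proc/slab_allocators, sorted by object count. */
   #include <stdio.h>
   #include <stdlib.h>
   #include <string.h>

   #define MAX_ENTRIES 65536

   struct entry {
       long count;
       char line[256];
   };

   static int by_count_desc(const void *a, const void *b)
   {
       const struct entry *x = a, *y = b;
       return (y->count > x->count) - (y->count < x->count);
   }

   int main(void)
   {
       static struct entry entries[MAX_ENTRIES];   /* zero-initialized */
       char buf[256], cache[64];
       long count;
       size_t n = 0, i;
       FILE *f = fopen("/proc/slab_allocators", "r");

       if (!f) {
           perror("/proc/slab_allocators");
           return 1;
       }
       while (n < MAX_ENTRIES && fgets(buf, sizeof(buf), f)) {
           /* each line looks like: "size-64: 322317 audit_log_task_context+0x46/0xad" */
           if (sscanf(buf, "%63s %ld", cache, &count) != 2)
               continue;
           entries[n].count = count;
           strncpy(entries[n].line, buf, sizeof(entries[n].line) - 1);
           n++;
       }
       fclose(f);
       qsort(entries, n, sizeof(entries[0]), by_count_desc);
       for (i = 0; i < n && i < 10; i++)
           fputs(entries[i].line, stdout);
       return 0;
   }

Comparing two or three snapshots taken a few minutes apart during the stress
run should make the leaking allocation site stand out, which is effectively
what comment 5 does by hand.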

Comment 2 Loulwa Salem 2007-03-12 18:28:35 UTC
This bug was found by Tom, and I opened the report for him, so I'll do my best
to answer questions regarding it.

While running the test, I tried to look for /proc/slaballocators. There is no
such file there; the only slab file is /proc/slabinfo, which doesn't change as
the tests are being run.

Is there somewhere else I can look to find this information?

Comment 3 Issue Tracker 2007-03-12 18:42:38 UTC
IBM,

The file you should be seeing is actually /proc/slab_allocators, with an
underscore ("_").

Thanks
Jeremy West


This event sent from IssueTracker by jwest 
 issue 115885

Comment 4 Issue Tracker 2007-03-12 18:47:04 UTC
I'm attaching the slab_allocators output from a test that I'm running. 
At this point in my test I'm seeing the amount of free memory continually
decrease as the script continues to run.

I'll update this BZ with a slab_allocators output closer to the end of the
test.

Thanks
Jeremy West


This event sent from IssueTracker by jwest 
 issue 115885
it_file 85392

Comment 5 Eric Paris 2007-03-12 19:50:51 UTC
Two big allocators:
size-64: 322317 audit_log_task_context+0x46/0xad
avtab_node: 165759 avtab_insert_node+0x28/0x89

Something tells me it isn't in the AVC or we would have found it long ago.
Since Al's recent patch to audit_log_task_context does away with this allocation
completely, let's test on .68 and find out what we get.

Comment 6 Steve Grubb 2007-03-12 19:55:35 UTC
This appears to be a kernel memory leak fixed by a patch submitted by Al today.
Transferring bug report to kernel team.

Comment 7 Eric Paris 2007-03-12 20:22:05 UTC
Comment #6 is referring to BZ 228409.

We may mark this as a duplicate once we have testing results.

Comment 10 Loulwa Salem 2007-03-19 15:03:00 UTC
Is there a kernel update that we should try and verify this bug on?

Comment 11 Eric Paris 2007-03-19 15:08:35 UTC
Sorry, this should be fixed in .68.

Comment 12 Loulwa Salem 2007-03-20 18:57:43 UTC
Thanks, I stress tested with the .68 kernel and it seems to be working fine. 

This bug can be closed.

Comment 13 Eric Paris 2007-03-20 19:57:15 UTC
Closing as a duplicate of 228409, since that patch also fixed this problem.

*** This bug has been marked as a duplicate of 228409 ***