Red Hat Bugzilla – Bug 251160
[RHEL 5.1] Memory leak in audit tree watch code
Last modified: 2007-11-30 17:07:46 EST
I've got a script which basically just does:

    insert 50 directory watches

While it runs, you can watch the size-32, size-64, and size-256 slabs in
/proc/slabinfo fly out the window and never come back:
watch "grep size /proc/slabinfo | grep -v DMA"
Eventually the box ran out of memory and started OOM-killing.
Created attachment 160817 [details]
script to stress audit tree code
Part of /proc/slabinfo after only a few moments:
size-256   135255  135255   256   15  1 : tunables  120  60  8 : slabdata  9017  9017  60
size-64     26074   26137    64   59  1 : tunables  120  60  8 : slabdata   443   443  60
size-128   111552  111570   128   30  1 : tunables  120  60  8 : slabdata  3719  3719  60
size-32    270592  270704    32  112  1 : tunables  120  60  8 : slabdata  2417  2417  60
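A quick back-of-the-envelope check on the figures above (active objects times object size per cache) shows why the box hits OOM so fast:

```shell
# Rough leaked footprint per cache, from the slabinfo numbers above:
echo "size-256: $((135255 * 256)) bytes"   # ~33 MB
echo "size-32:  $((270592 * 32)) bytes"    # ~8.3 MB
```

That is tens of megabytes leaked after "only a few moments", so a sustained run of the stress script exhausts memory quickly.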
size-256: 231129 alloc_chunk+0x1f/0x7f
size-256: 22316 audit_make_tree+0x70/0xf0
size-64: 44618 audit_unpack_string+0x3b/0x6f
size-128: 208813 audit_make_tree+0x70/0xf0
size-32: 417496 audit_unpack_string+0x3b/0x6f
At least some of the audit_unpack_string() leaks are from alloc_tree(), where we
copy the unpacked string into the tree, but most of the audit code just does
tree->pathname = s; and frees it when the object gets cleaned up.
Eh? tree->pathname = s won't compile (it's an array). But yeah,
we should free the result of audit_unpack_string() unconditionally
there. That still doesn't account for the audit_make_tree() leak or
the alloc_chunk() one. The latter is a consequence of the former,
so the question is WTF we're leaking the audit_tree there.
Created attachment 160863 [details]
You can download this test kernel from http://people.redhat.com/dzickus/el5
Confirmed the fix is in the -43 kernel; tested with the attached audit stress script.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.