Red Hat Bugzilla – Bug 173283
memory leak in bash
Last modified: 2009-11-12 04:11:33 EST
Description of problem:
There is a memory leak in bash. It can be triggered by putting a read command in a loop where the read fails because no input is available (a pipe on stdin).
Steps to Reproduce:
1. Described above
Actual Results: the OOM killer zaps random processes, and top shows memory consumption growing.
Expected Results: memory usage steady in top, and no OOM killer activity.
I will attach a patch addressing this.
Created attachment 121097 [details]
Patch fixing memory leaks
Please apply to rawhide.
Steve, this is how you said to reproduce the problem:
If you want to see the bug, install the latest rawhide audit package.
In /etc/auditd.conf, add this line:
dispatcher = /home/homedir/autest
Then save this script to that location:
while [ 1 ]
do
    read -t 5 line 2>/dev/null
    if [ ! -z "$line" ] ; then
        tmp=`echo $line | grep $msg_type`
        if [ x"$tmp" != "x" ] ; then
            echo "Test script AVC: $tmp"
        fi
    fi
done
Restart the audit daemon. Using top, you should see the memory usage climbing.
This is quite a long-winded method! Is there a simpler way to trigger it,
perhaps by just running that script on some large input file you can provide?
Ideally I'd like an RHTS test case for this bug, so we can see what other
releases are affected and make sure we don't regress on it in the future.
A file being piped in won't trigger it since the file is likely to be readable.
The bug/leak is triggered when the input source is not ready to send data and
the reader (bash) determines it to be an error. I'll see if there's another way
to trigger it.
My guess is that some prior versions of bash 3 are affected by this.
This patch doesn't look necessary to me, and here's why:
/* This *must* be the top unwind-protect on the stack, so the manipulation
of the unwind-protect stack after the realloc() works right. */
add_unwind_protect (xfree, input_string);
I think this line already takes care of freeing the allocated string.
I tried tickling two of the three bugs:
declare -r foo
echo a | read foo
sleep 2 | read -t 1
under valgrind, and could not see any resulting memory leaks. I couldn't quite
work out how to tickle the third case.
Of course, your original method does indeed show that there is a leak. I'd just
like to be able to pin it down a bit more.
OK, it's good to see that you can reproduce the bug. I was going to attach a
valgrind trace to prove the bug really is there. The code in auditd is simply
creating a pipe, forking, dup'ing the pipe to stdin, and then exec'ing the bash
script. It occasionally writes to the pipe. Not sure how this differs from the
tests you tried.
Ah, it might be to do with O_NONBLOCK, looking at the strace results.
Okay, I've managed to reproduce it from a simplified test case.
Created attachment 121988 [details]
ulimit -v 10240
./nonblock bash -c 'while :; do read -t 1 line; done'
if [ "$?" -eq 0 ]; then
Created attachment 121989 [details]
Here's the fix for this particular case. The 'find_or_make_array_variable'
case in your patch does look correct, but I can't quite work out how to
demonstrate that leak. I'll mention it upstream.
The read-from-non-blocking-fd leak has been fixed in bash-3.0-40.
However, does auditd really need to give a non-blocking file descriptor to the
dispatcher process? It means that bash will busy-loop when reading -- the shell
generally has no reason to use select(). I've filed bug #175267 for this.
Leaving this bug report open until I can get upstream feedback on the other leak.
Tim, thanks for patching bash.
The audit daemon absolutely has to guarantee logging to disk. Dispatching events
is done on a best-effort basis, hence the non-blocking descriptor. This is required
for meeting certification. The bash bug was discovered when I was doing some
troubleshooting since the audit dispatcher is still under development. I just
needed a quick and dirty script to detect something.
Still no feedback from upstream regarding this patch, so keeping this open.
I've prodded upstream about this again.