From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.8) Gecko/20050512 Red Hat/1.0.4-1.4.1 Firefox/1.0.4
Description of problem:
Bash versions 3.0 and 2.05b use alloca() to store the linked-list structure that holds pointers to the malloc()ed strings of filenames matching a globbing pattern. Since alloca() makes an unchecked allocation of stack memory, this can cause a stack overflow. Version 3.0 does include a null check on the returned pointer, but that check cannot detect a failed allocation: alloca() does not return null even when it overflows the stack. Both versions return the expected "Argument list too long" error for large globs, but for extremely large globs they overflow the stack and segfault before reaching that check. The size of glob required to trigger the overflow is version-dependent, and probably architecture-dependent due to how alloca() is compiled. Adjusting the stack size from the ulimit default of 10 MB changes the number of glob matches needed to trigger a segfault. In my testing on RHEL 4 x86, the number is around 380,000; on RHEL 3 x86 (bash 2.05b) it is around 600,000-700,000.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. for i in `seq 1 400000`; do touch $i; done
2. rm *
Actual Results: Bash exited on signal 11.
Expected Results: Bash should have said "Argument list too long."
This bug probably affects all recent versions of GNU Bash on all architectures and distributions. It does not seem to affect ksh, tcsh, or zsh.
The bug could be mitigated by calculating the command-line length during the matching process, but switching to a more intelligent memory-allocation scheme would probably be no more work. While alloca() is ostensibly faster, the overhead it introduces, and the inherent wastefulness of allocating a linked list one node at a time in a manner guaranteed to be sequential in memory, probably eliminate any such benefit in the cases that take long enough for performance to be noticeable.
It's worth noting that setting 'ulimit -s unlimited' makes this issue go away.
I do admit this probably isn't proper behavior, though.
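The workaround above in full, for the current shell (the setting is per-process and inherited by child shells; raising the soft limit to unlimited assumes the hard limit allows it, which is the usual default on Linux):

```shell
# Remove the stack-size limit before triggering the huge glob.
ulimit -s unlimited
ulimit -s          # now reports "unlimited"
```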
*** This bug has been marked as a duplicate of 221381 ***