Red Hat Bugzilla – Bug 171931
Having more than 400 files in a directory causes a segmentation fault during mmap
Last modified: 2007-11-30 17:07:08 EST
Description of problem:
We are working on the certification of Oracle 10g on GFS 6.0/RHEL3. The Oracle
tests create directories with around 1100 files. However, a segmentation fault
occurs when more than 400 files exist in an EXT3 or GFS filesystem directory.
The problem happens exactly when the file count reaches 401; it does not happen
with 400 or fewer files.
The EXT3 file system is using the cciss driver. The GFS filesystem is using pool.
The system is a DL585 with AMD processors and 16G of memory.
Version-Release number of selected component (if applicable):
The kernel version is: 2.4.21-34.ELsmp
The output from uname -a is:
Linux spa65 2.4.21-34.ELsmp #1 SMP Thu Jul 28 23:28:35 EDT 2005 x86_64 x86_64
Steps to Reproduce:
Create a directory with more than 400 files on an EXT3 or GFS filesystem using
the x86_64 version of RHEL3.
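The reproduction can be sketched as a small shell script (the directory location and file names here are illustrative, not taken from the report; on the affected system the directory would live on the EXT3 or GFS mount):

```shell
# Illustrative reproducer: build a directory with 401 files and list it.
# On the affected system, ls was reported to segfault during mmap()
# once the count reaches 401; with 400 files it completes normally.
dir=$(mktemp -d)            # substitute a directory on the EXT3/GFS mount
for i in $(seq 1 401); do
    touch "$dir/file$i"
done
ls "$dir" > /dev/null
```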
Actual results:
An ls on a directory with 401 or more files causes a segmentation fault. The
seg fault happens during mmap().
Expected results:
ls should return the file names.
Additional info:
I collected an strace when the seg fault happens with 401 files and when it
doesn't happen with 400 files. See attachments for strace data.
Created attachment 120489
strace during segmentation fault
Created attachment 120490
strace with 400 files (does not cause a seg fault)
If an "ls" command segfaults, it is not likely to be a kernel problem.
But I'll assign this to PeterS for investigation first (in case the
kernel is providing bogus data with the getdents64() syscall), and he
can bounce it to the appropriate component if he finds that it's indeed
a bug somewhere else.
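One way to separate an ls bug from bogus getdents64() data is to trace just the directory-reading and mapping syscalls while listing a large directory. A sketch (the directory, file count, and output path are illustrative, and it assumes strace is installed):

```shell
# Sketch: trace the syscalls ls makes while reading a 401-file directory,
# to see whether getdents64() returns sane entries before any mmap() crash.
dir=$(mktemp -d)            # illustrative; substitute the affected directory
for i in $(seq 1 401); do touch "$dir/f$i"; done
if command -v strace >/dev/null; then
    strace -e trace=getdents64,mmap,munmap -o /tmp/ls-trace.txt ls "$dir" >/dev/null
    head -3 /tmp/ls-trace.txt
fi
```

If the traced getdents64() calls return well-formed entries and ls still crashes in mmap(), that points away from the kernel's directory-reading path.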
I tried creating directories of 676 files, with file names varying from 2 bytes
up to 202 bytes. I don't see a failure. Is there a reproducer for this?
We ran up2date on the system, which fixed the problem on EXT3. We will also
test GFS to see whether the problem went away there as well.
I am going to close this, but if it occurs again or more information
becomes available, please reopen this report and I will look at it then.