Red Hat Bugzilla – Bug 51741
Apache is unable to handle more than 1024 file descriptors.
Last modified: 2007-03-26 23:48:01 EDT
Description of problem:
Apache is compiled with an FD_SIZE of 1024. If you have many virtual hosts
(510+) with separate log files, all file descriptors are used up by the log
files. (ulimit -H -n is 8000, so the kernel is not the problem.)
Apache can request more fds from the kernel, and for the log files this does
work. However, when allocating the sockets Apache checks FD_SIZE and dies.
Steps to Reproduce:
1. Create, let's say, 100 virtual hosts with separate log files.
2. Try to start Apache.
Actual Results: No open sockets, Apache dead.
Expected Results: Apache available.
This problem of course occurs only for larger ISPs.
Since ulimit protects the system perfectly, IMHO the FD_SIZE limit can be
removed, or at least enlarged to a higher value, or set dynamically according
to the actual ulimit/max_files settings of the kernel.
mmm, typo in the original bug form.
Actually it is FD_SETSIZE that is offending.
Comes from /usr/include/linux/posix_types.h __FD_SET_SIZE
Thanks for the report. This bug is no longer present in the Apache
httpd 2.0 packages in Red Hat Enterprise Linux and Fedora Core.