Hi, I was previously running Red Hat 5.2 on our server, and we recently
upgraded to Red Hat 7.0. I have already applied all the patches available
on Red Hat's website. However, I'm encountering the following error
message after the system has been running for about two days or more:
"Too many open files in system"
I suspect that Red Hat 7 may not be closing unused files, resulting in
too many open files.
What could be causing it? I'm running the following applications on my
server:
1) proftpd 1.2.0pre2
2) mysql 3.23.26-beta
3) qmail 1.03
Can anyone please provide some advice on how to fix this?
btw... ProFTPD 1.2.0pre2 has massive security holes. Upgrade to 1.2.0pre10 or
current CVS code (best).
Sorry, I'm actually running ProFTPD 1.2.0rc2
I can think of a few things here.
First, if you have registered for the Red Hat Network, see the errata on
this and update the daemon in question, as it has a file handle leak.
Second, look in /proc/[0-9]*/fd and you'll see who has how many handles
open. A culprit will show up clearly enough.
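A quick way to do that survey (a sketch; assumes a Linux /proc and root
privileges so every process's fd directory is readable):

```shell
# List the ten processes holding the most open file descriptors.
# Output format: "<fd count> <pid>", highest first.
for pid in /proc/[0-9]*; do
    n=$(ls "$pid/fd" 2>/dev/null | wc -l)
    echo "$n ${pid##*/}"
done | sort -rn | head -10
```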
Finally, you can increase the number of handles if needed (i.e. it really is
using that many) via the /proc/sys interface or with tools like PowerTweak.
Will the following increase the number of handles?
echo 32768 > /proc/sys/fs/file-max
echo 65536 > /proc/sys/fs/inode-max
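For reference, /proc also shows how close the system actually gets to the
limit, which tells you whether raising it helps or merely delays the error
(a sketch; assumes the Linux /proc interface):

```shell
# /proc/sys/fs/file-nr reports three numbers: handles allocated,
# handles free, and the system-wide maximum (file-max).
# If the first number climbs steadily toward the third, something is leaking.
cat /proc/sys/fs/file-nr
```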
I've increased file-max to 32768 and inode-max to 65536. However, I'm still
encountering the "Too many open files in system" error. I've checked
/proc/[0-9]*/fd as you mentioned and found that a lot of sockets are open in
those directories. Any idea what's causing this?
Below is a capture of a small part of the fd directory:
lrwx------ 1 root root 64 Nov 2 18:11 890 -> socket:
lrwx------ 1 root root 64 Nov 2 18:11 891 -> socket:
lrwx------ 1 root root 64 Nov 2 18:11 892 -> socket:
lrwx------ 1 root root 64 Nov 2 18:11 893 -> socket:
lrwx------ 1 root root 64 Nov 2 18:11 894 -> socket:
lrwx------ 1 root root 64 Nov 2 18:11 895 -> socket:
lrwx------ 1 root root 64 Nov 2 18:11 896 -> socket:
lrwx------ 1 root root 64 Nov 2 18:11 897 -> socket:
lrwx------ 1 root root 64 Nov 2 18:11 898 -> socket:
lrwx------ 1 root root 64 Nov 2 18:11 899 -> socket:
lrwx------ 1 root root 64 Nov 2 18:11 9 -> socket:
lrwx------ 1 root root 64 Nov 2 18:11 90 -> socket:
Which process has all the sockets (i.e. which directory is the one with
thousands of fds), and what process is that pid?
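Once the bloated fd directory identifies a pid, /proc/<pid>/cmdline names the
process (a sketch; it uses the current shell's pid as a stand-in, since the
actual offending pid varies):

```shell
# Substitute the pid whose fd directory is full of sockets for $$.
pid=$$
echo "pid $pid has $(ls /proc/$pid/fd | wc -l) open fds"
# cmdline is NUL-separated; tr makes it readable.
echo "pid $pid is: $(tr '\0' ' ' < /proc/$pid/cmdline)"
```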
It's the Apache 1.3.14 server, which I compiled with PHP 4.03pl1, mod_ssl
2.7.1-1.3.14, and OpenSSL 0.9.6.
There are thousands of such sockets open for each Apache process. Is this
normal? Any advice on how to fix this?
Each Apache process should not have thousands of sockets. That sounds like
something in your Apache/PHP/SSL setup is leaking file descriptors, which
would indicate an error in the Apache build or a bug in that Apache
configuration.
I think I'm experiencing the same problem; I don't know yet if it has the same
cause. I'm running ProFTPD (CVS from a month or so ago), Apache 1.3.14-3, MySQL
3.23.24, and the IMAP server from RH. This is basically a stock RH 7 with
updates; the only additions are ProFTPD and XTRadius. The only update I saw
that said anything about file descriptors is up2date, which isn't even
installed on the box in question. I'm having to reboot once a week or so: I get
weird errors, and I can tell they are file related. Reboot, and I'm good to go
for a bit. Looking at lsof, I'm going to watch Apache. It's the rpm out of RH
updates...
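Periodic snapshots make a leak obvious as a steadily climbing count. Besides
lsof, /proc alone is enough; a minimal sketch ("httpd" is the stock Red Hat
Apache process name, so adjust it if yours differs):

```shell
# Print the open-fd count for a given pid.
fd_count() {
    ls "/proc/$1/fd" 2>/dev/null | wc -l
}

# Run this from cron or under "watch" and compare counts over time;
# a leaking process shows a count that only ever grows.
for pid in $(pidof httpd 2>/dev/null); do
    echo "httpd pid $pid: $(fd_count "$pid") open fds"
done
```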
Are these problems still occurring?