Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 77781

Summary: xinetd stops serving its services because of "Too many open files"
Product: [Retired] Red Hat Linux
Component: xinetd
Version: 8.0
Hardware: i386
OS: Linux
Status: CLOSED ERRATA
Severity: medium
Priority: high
Keywords: Security
Reporter: Roger Pena-Escobio <orkcu>
Assignee: Jay Fenlason <fenlason>
QA Contact: Brock Organ <borgan>
CC: benl, chris.ricker, jfeeney
Doc Type: Bug Fix
Last Closed: 2003-05-13 17:14:12 UTC

Description Roger Pena-Escobio 2002-11-13 15:29:44 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (compatible; Konqueror/3; Linux 2.4.18-14; i686; en_US, en)
 
Description of problem:
When the time service is enabled in xinetd, after a few hours of requests it starts logging:
service <services>, accept: Too many open files (errno= 24)
where <services> is the name of the service it tried to start.

The first log entries in the sequence are:
 
Nov  6 16:25:01 ns1 xinetd[2033]: warning: cannot open /etc/hosts.allow: Too many open files
Nov  6 16:25:01 ns1 xinetd[2033]: warning: cannot open /etc/hosts.deny: Too many open files
Nov  6 16:25:01 ns1 xinetd[2033]: service time-stream, accept: Too many open files (errno= 24)
 
 
Version-Release number of selected component (if applicable): 
xinetd-2.3.9-0.72 
 
How reproducible: 
Didn't try 
 
Steps to Reproduce:
1. Use the default configuration.
2. Enable the time service.
3. Wait long enough under some time-service load.
 
Actual Results: After a few hours (depending on how many requests the server received), all of the services xinetd serves stop responding as they should: xinetd opens the connection but closes it again after a few seconds.
 
Additional info:

I'm running the latest xinetd update for RH 7.2 (xinetd-2.3.9-0.72).

There are enough file descriptors available the whole time; I checked /proc/sys/fs/file-nr.

If I restart xinetd, everything works fine until xinetd again reaches its open-file limit (the system-wide file descriptor counts are fine even at that moment). If I disable the time service, everything works fine. I guess the same applies to every internal service, but I didn't check.
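
Since the system-wide table in /proc/sys/fs/file-nr looks healthy, the limit being hit is almost certainly the per-process one (RLIMIT_NOFILE; errno 24 is EMFILE on Linux). A rough way to compare the two views, with $$ used only as a stand-in PID:

```shell
# System-wide view: allocated handles, free handles, and the global maximum.
cat /proc/sys/fs/file-nr

# Per-process view: this is the limit xinetd actually hits (RLIMIT_NOFILE).
# $$ is only a placeholder here; on the affected box use $(pidof xinetd).
pid=$$
echo "process ${pid} has $(ls /proc/${pid}/fd | wc -l) open fds (soft limit: $(ulimit -n))"
```

If the per-process count keeps climbing toward the `ulimit -n` value while file-nr stays flat, the daemon itself is leaking descriptors.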
 
Workaround: I fixed the problem by rebuilding the xinetd package that shipped with Red Hat 8.0 (xinetd-2.3.7-2). After downgrading to that version, and under "heavy" load (a request every minute), xinetd didn't crash.

I think this should be fixed as soon as possible, because it can be used as a DoS.

This bug looks very similar to bug 16729 (for inetd).

Comment 1 Bill Nottingham 2002-12-02 20:37:32 UTC
An erratum has been issued which should help with the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2002-196.html


Comment 2 Michael Redinger 2002-12-08 20:15:43 UTC
Erratum http://rhn.redhat.com/errata/RHSA-2002-196.html does not seem to fix
this; reopening (and changing the version to 7.3):

Today I updated some servers to 2.3.7-4.7x (this is on a 7.3 box).
Some hours later I saw exactly the symptoms described above (and had to
downgrade xinetd again, which "fixes" the problem).

This is Red Hat Linux 7.3 with all updates applied.

The main task for xinetd on these machines is to answer time requests. The behavior
is almost identical to what's described above: the time service, and "Too many open
files" when trying to access hosts.{allow,deny}.

It's easy to reproduce:

#!/bin/sh
# Hammer the time service hard enough to exhaust xinetd's descriptors.
while true ; do
        i=1
        # be sure to stay below cps (connections per second, default 25)
        while [ "${i}" -lt 25 ] ; do
                rdate <myserver> >/dev/null
                i=$((i+1))
        done
        # keep cps happy ...
        sleep 2
done

When running "netstat -a | grep CLOSE_WAIT | wc -l" on the server, you see
that the number is going up until you get "Too many open files".
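Each of those CLOSE_WAIT sockets is a connection the client has already closed but that xinetd never close()d, so every one of them pins an open file descriptor. As a sketch that assumes a Linux /proc (CLOSE_WAIT is hex state code 08 in /proc/net/tcp), the leak can also be counted without netstat:

```shell
# Count CLOSE_WAIT sockets straight from /proc: column 4 ("st") holds the
# TCP state, and 08 is CLOSE_WAIT. On a leaking box this number only grows.
awk '$4 == "08"' /proc/net/tcp | wc -l
```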


Comment 3 Michael Redinger 2002-12-21 15:05:47 UTC
Forgot to mention that this does not happen with time-udp (rdate -u).

I have now verified this on many 7.3 and also 8.0 systems, and am therefore changing
the version to 8.0.



Comment 4 Michael Redinger 2003-01-12 11:42:13 UTC
Just found this in the Changelog for xinetd 2.3.10 (http://www.xinetd.org/#changes):

"Close the service descriptors on fork. This only matters for internal forking
services, since anything that calls exec() will get those closed automagically.
This will help reduce the file discriptors used by the daemon when using some
internal services."

Related?
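
That changelog entry matches the symptom here: an internal service like time is handled by fork() without a subsequent exec(), so the child inherits every descriptor the parent holds open and nothing closes them. A minimal sketch of that inheritance, using a shell subshell as the "forked child" and fd 3 as an arbitrary example descriptor:

```shell
# Parent opens fd 3; any forked child inherits it automatically.
exec 3</dev/null
(
  # In the child: the inherited descriptor is still open ...
  [ -e /proc/self/fd/3 ] && echo "child inherited fd 3"
  # ... until it is closed explicitly, which is what the 2.3.10 fix
  # adds for internal forking services.
  exec 3<&-
  [ -e /proc/self/fd/3 ] || echo "child closed fd 3"
)
exec 3<&-
```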



Comment 5 Mark J. Cox 2003-04-23 11:01:12 UTC
An erratum for xinetd (updating it to 2.3.11) is in progress.

Comment 6 Mark J. Cox 2003-05-13 17:14:12 UTC
An erratum has been issued which should help with the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2003-160.html