Bug 77724

Summary: xinetd connections to tftpd denied by tcpwrappers cause problems
Product: [Retired] Red Hat Linux
Reporter: David Mathog <mathog>
Component: xinetd
Assignee: Jay Fenlason <fenlason>
Status: CLOSED ERRATA
QA Contact: Brock Organ <borgan>
Severity: medium
Priority: high
Version: 7.3
CC: jfeeney, k.georgiou, stephan.guilloux
Keywords: Security
Hardware: athlon
OS: Linux
Doc Type: Bug Fix
Last Closed: 2003-10-02 11:33:57 UTC

Description David Mathog 2002-11-12 16:59:39 UTC
Description of Problem:

(I rated this as a security issue because it indicates some problem with
tcpwrappers access control for tftpd - I have no evidence that an actual
security hole exists.)

When a remote host that is denied access to tftpd by
/etc/hosts.allow, /etc/hosts.deny AND the /etc/xinetd.d/tftp
file attempts to tftp a file off the server, hundreds of
messages like this appear in the server's /var/log/secure:

Nov 12 08:37:00 safserver xinetd[15800]: START:
    tftp pid=23419 from=xxx.xxx.xxx.xxx
Nov 12 08:37:00 safserver xinetd[23419]: FAIL:
    tftp address from=xxx.xxx.xxx.xxx

Eventually xinetd kills the tftpd server and restarts it several seconds
later.

None of this should happen - tftpd access is denied so xinetd should
never even try to start tftpd.  At worst a single "connection rejected"
should show up somewhere.  Instead it seems to try to start it hundreds
of times and only then does it catch the tcpwrappers limit.

Also, in some instances, this throws xinetd into an odd state where it
uses about 3% of the CPU.  However, it must be generating interrupts or
something, because the server machine slows to a crawl.  Restarting xinetd
fixes this.

Connections from machines allowed by tcpwrappers work properly and do
not seem to throw xinetd for a loop.

Version-Release number of selected component (if applicable):
xinetd-2.3.9-0.73
RH 7.3
Kernel  2.4.18-10smp
Tyan 2468UGN dual Athlon motherboard

How Reproducible:
The hundreds of messages and tftpd kill/restart: 100%
The CPU crawling problem:  not very,  maybe 5% of the time


Steps to Reproduce:
1. Set up /etc/xinetd.d/tftp like this:

service tftp
{
        disable = no
        socket_type             = dgram
        protocol                = udp
        wait                    = yes
        user                    = root
        server                  = /usr/sbin/in.tftpd
        server_args             = -s /tftpboot
        only_from               = 192.168.1.0
        log_on_success          += HOST
        log_on_failure          += HOST
        per_source              = 11
        cps                     = 100 2
}


2. echo "blah blah" > /tftpboot/message.txt
3. deny all access to a test node (for everything) in /etc/hosts.allow,
   /etc/hosts.deny.  Simplest case:
   % echo "ALL:ALL" >/etc/hosts.deny
   % echo "#empty"  >/etc/hosts.allow
4. from test node use tftp to connect to server and issue command:
   get message.txt

Actual Results:
File is not retrieved.  (correct).  On server  a zillion log messages
appear in /var/log/secure and xinetd may go into the high CPU usage
state (not correct).


Expected Results:
File not retrieved, one "rejected connection" type of log message.


Additional Information:

Comment 1 Leif Nixon 2002-11-18 12:09:14 UTC
I second that; this morning I found multiple megabytes of log entries in
/var/log/secure and /var/log/messages on one of our cluster frontends,
apparently resulting from a single connection from somewhere in Russia.

This is reproducible; any tftp connection attempt from a disallowed address
throws xinetd into a forking loop.


Comment 2 Mark J. Cox 2003-04-23 11:02:32 UTC
An erratum for xinetd (to version 2.3.11) is in progress.

Comment 3 Mark J. Cox 2003-05-30 08:40:49 UTC
An erratum for xinetd taking it to version 2.3.11 is available
http://rhn.redhat.com/errata/RHSA-2003-161.html

Does this fix this issue?

Comment 4 David Mathog 2003-05-30 16:45:59 UTC
The 2.3.11 update does NOT fix the issue.

And how hard would it have been for Redhat to test this???
It took all of 10 seconds to install the rpm
and then from a forbidden machine (blocked by /etc/hosts.allow)
do:

% tftp linuxbox
>tftp get /tmp/foobar

(a file which doesn't  exist)

and then tail linuxbox's /var/log/messages:


May 30 09:36:25 linuxbox xinetd[32052]: xinetd Version 2.3.11 started with
libwrap loadavg options compiled in.
May 30 09:36:25 linuxbox xinetd[32052]: Started working: 4 available services
May 30 09:36:28 linuxbox xinetd: xinetd startup succeeded
May 30 09:37:27 linuxbox xinetd[32052]: Deactivating service tftp due to
excessive incoming connections.  Restarting in 5 seconds.
May 30 09:37:32 linuxbox xinetd[32052]: Activating service tftp
May 30 09:37:37 linuxbox xinetd[32052]: Deactivating service tftp due to
excessive incoming connections.  Restarting in 5 seconds.
May 30 09:37:42 linuxbox xinetd[32052]: Activating service tftp

Comment 5 Stephan Guilloux 2003-06-04 17:47:08 UTC
Maybe this will help. The problem is that, for all UDP-based protocols,
xinetd uses something like
  recvfrom(..., MSG_PEEK, ...)

In this case, the datagram is never removed from the socket queue. The child
checks /etc/hosts.allow and /etc/hosts.deny for hosts allowed to use TFTP
and dies, but the datagram is not removed from the UDP socket queue. The
select() then finds the TFTP datagram again and loops forever.

One solution would be something like a
  recvfrom(..., 0, ...)
just before the child dies when libwrap denies the connection.

Note: the Hardware field should be 'all', not only 'athlon'.


Comment 6 Jay Fenlason 2003-08-11 17:33:45 UTC
xinetd-2.3.12 appears to address this issue, after the upstream maintainers 
spent a long time discussing possible solutions on the mailing list. 
 
I don't think the xinetd-2.3.12-1.10.0 RPM in Raw Hide will work on a Red Hat
Linux 7.3 system, but you can download the SRPM and do an rpmbuild --rebuild
on it.  Let me know if it solves the problem for you.