Bug 20453
Summary: | xinetd does not work properly | ||
---|---|---|---|
Product: | [Retired] Red Hat Linux | Reporter: | Łukasz Trąbiński <lukasz> |
Component: | xinetd | Assignee: | Trond Eivind Glomsrød <teg> |
Status: | CLOSED NOTABUG | QA Contact: | David Lawrence <dkl> |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 7.0 | CC: | bbraun, kas |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | i386 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | Bug Fix | |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2000-11-15 18:37:07 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Łukasz Trąbiński
2000-11-07 02:53:42 UTC
We don't compile in support for IPv6 in xinetd... Anyway, can you take a look at the "-loop" argument and see if increasing it helps? How many connects per second do you have? Is it just one service, or all services?

I have a similar problem. It seems that xinetd calls the hosts_access() function (TCP wrappers) from the main process instead of the child processes. Thus any DNS or ident timeout of _one_ client can make the whole xinetd inoperable for a few seconds or even minutes. This occurs on my newly-upgraded RH7.0 system, which runs an FTP server and a Qmail-based listserver (both started from xinetd), with about 10000 FTP connections and 2000 SMTP ones per day. Migrating back to inetd (the RPM from RH6.2) solved the problem.

I have tried the -loop 50 and -loop 100 options and xinetd still had problems opening new connections. I will try the newest version, xinetd 2.1.8.9pre13.

After removing these lines from /etc/hosts.allow:

    in.telnetd: ALL: RFC931: ALLOW
    in.fingerd: ALL: RFC931: ALLOW
    in.ftpd: ALL: RFC931: ALLOW
    ipop3d: ALL: RFC931: ALLOW

xinetd works properly! Anyway, I'm sorry for this mistake.

To improve performance, use the internal access control (only_from etc.) instead of tcp_wrappers.

The things you can do to improve performance on xinetd are:

1) Don't do ident lookups. This includes doing the ident lookup in hosts.{allow,deny}. Using USERID in xinetd is much better than doing it in hosts.{allow,deny}, because the lookup is done in the child for a successful connection. For a failed connection, the ident lookup is done in the parent (painful).

2) Don't use hostnames in access control. This requires DNS lookups. If you must use hostnames in access control, at least run a name server locally and leave out the "nameserver" tag in resolv.conf. This forces the resolver to use a local socket rather than an AF_INET socket to the localhost. Faster.

3) Avoid using hosts.{allow,deny}. Move these files aside, don't just empty them. Use xinetd's internal access control.
4) You can up some of the numbers in xinetd/defs.h and you may get improved performance. This is of marginal benefit compared to the improvements listed above.

I am also looking at separating the "heavy-weight" access control from the "light-weight" access control. This way, hopefully, the "light-weight" checks can stay in the parent, and the "heavy-weight" access control can go into the child. This adds significant complexity, but should increase performance under heavy load.

: For a failed
: connection, the ident lookup is done in the parent (painful).
: 2) Don't use hostnames in access control. This requires
: DNS lookups.

I consider this a bug in xinetd, not a feature. The access control should be done in the child instead of the parent process even when using hosts_access(). I think we all prefer the centralized and general access control system of TCP wrappers to a non-universal solution of a particular daemon.

This is really a philosophical difference. On the one hand, there is the hardline approach of saying we will not give any resources to an unknown entity until we have verified its authenticity to the best of our ability. Allowing the remote host to consume a process, do a fork, and take up an extra 200K of memory before verifying that the remote host meets our access control criteria is a bug. If the host performing the access control cannot keep up, it is safer to prevent incoming connections than to blindly allow them to consume resources. On the other hand, it seems wrong that access control should force everything to fall behind and deny connections: first and foremost comes the availability of the service, and you are willing to give up some system resources to ensure that availability.

From the first point of view, xinetd's handling of the situation is not a bug; it is behaving correctly and securely. From the second point of view, xinetd is large, slow, and inefficient, and its handling of the situation is a bug.
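Tips (1) and (2) above translate into the service's own configuration rather than hosts.{allow,deny}. A sketch of what such an entry might look like (the service name, server path, and addresses here are illustrative, not taken from the report):

```
service ftp
{
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/in.ftpd
        # built-in access control with numeric addresses: no DNS lookups
        only_from       = 192.168.1.0/24 127.0.0.1
        # ident (USERID) lookup is done in the child, only for
        # connections that were actually accepted
        log_on_success  += USERID
}
```

With a fragment like this in place, hosts.{allow,deny} can be moved aside entirely, per tip (3).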
As I said in my previous explanation, I am looking into a compromise where the parent does the easier tasks (checking the time, how many instances of the service are running, etc.) while handing off the hosts_access() call and the address matching (which includes name lookups) to the child process. This has some tradeoffs that are not philosophical in nature, such as increasing the complexity of the already complicated access control system.

Rob

What about the following approach:

- xinetd would be able to create some number (let's say 10; it could be configurable) of children on accept, unconditionally.
- The child would then verify the access control using TCP wrappers' hosts_access().
- When the verification is done (successfully or not), the child would notify the parent that it is either exiting or exec()ing the real daemon.

Then you would have at most 10 processes doing access control verification at any time, and the master would not block. The notification mechanism could be anything from realtime signals to SysV semaphores or an FD_CLOEXEC'd pipe from parent to child.

Let's move this discussion to private mail; it probably does not have much relevance to RH bugzilla.