Bug 59876 - Rsh makes an error after >~290 repeats
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Raw Hide
Classification: Retired
Component: rsh
Version: 1.0
Hardware: i386 Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Radek Vokal
QA Contact: Ben Levenson
URL:
Whiteboard:
Keywords:
Duplicates: 88644
Depends On:
Blocks:
 
Reported: 2002-02-14 12:38 UTC by takujiro katayama
Modified: 2007-04-18 16:40 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2004-09-14 10:56:37 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments

Description takujiro katayama 2002-02-14 12:38:45 UTC
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; Q312461)

Description of problem:
When I rsh a short script to a remote host repeatedly (more than around 290 
times), the following error occurs:

poll: protocol failure in circuit setup

or

rcmd: socket: All ports in use.


Once this error appears, no more rsh jobs can be invoked for a while.  
Strangely, after some period of time, the situation resolves itself.



Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Repeatedly run "rsh <hostname> ls" or another command that takes only a 
small amount of time
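The step above can be sketched as a shell loop (a dry-run sketch: REMOTE_CMD and the 300-iteration count are illustrative, not from the report; to actually reproduce, set REMOTE_CMD='rsh <hostname> ls' against a host that accepts rsh):

```shell
#!/bin/sh
# Reproduction sketch. REMOTE_CMD defaults to a no-op so the loop
# logic can be dry-run without a remote host.
REMOTE_CMD="${REMOTE_CMD:-true}"
i=0
while [ "$i" -lt 300 ]; do
    if ! $REMOTE_CMD >/dev/null 2>&1; then
        echo "failed after $i runs"
        break
    fi
    i=$((i + 1))
done
echo "completed $i runs"
```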


Actual Results:  An error occurs after around 290 rsh's:

poll: protocol failure in circuit setup

or

rcmd: socket: All ports in use


Expected Results:  No error occurs.


Additional info:

Comment 1 Phil Knirsch 2002-02-24 10:23:45 UTC
Hmmm... That sounds as if your box is running out of sockets on one end. Could
you verify that your rsh scripts don't take forever to complete? If that's the
case, your user limit for open file descriptors can easily run out. Try the
same thing as root, or use the ulimit command to raise the maximum number of
open file descriptors.

Read ya, Phil
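For reference, the suggestion above can be tried like this (4096 is only an illustrative value; raising the soft limit above the hard limit still requires root):

```shell
#!/bin/sh
# Show the current soft limit on open file descriptors.
ulimit -n
# Try to raise the soft limit for this shell and its children;
# this fails (harmlessly) if 4096 exceeds the hard limit.
ulimit -n 4096 2>/dev/null || echo "could not raise limit"
ulimit -n
```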

Comment 2 takujiro katayama 2002-03-08 09:11:37 UTC
I tried to change the maximum number of open file descriptors using the limit command, but I got the 
same error message as before. I think this error has another cause.

Comment 3 Phil Knirsch 2003-06-25 15:01:56 UTC
*** Bug 88644 has been marked as a duplicate of this bug. ***

Comment 4 Barry K. Nathan 2004-07-14 16:29:58 UTC
I've seen this bug before (on Red Hat 7.2). It turns out that this
xinetd erratum fixes it:
https://rhn.redhat.com/errata/RHSA-2003-160.html
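Background on why the limit is around 290 (my reading, not stated in the report): the rsh client allocates its local port with rresvport(), which only searches the reserved range below 1024, and each completed connection leaves its port in TIME_WAIT for a minute or two. A tight loop therefore exhausts the range after a few hundred runs ("All ports in use") and recovers once the TIME_WAIT timers expire; rsh also opens a second reserved port for its stderr channel, which is likely why the stderr-channel setup ("protocol failure in circuit setup") fails first. The lingering ports can be counted with something like:

```shell
# Count connections lingering in TIME_WAIT whose local port is in
# the reserved range (<1024) that rresvport() draws from.
netstat -tan 2>/dev/null | awk '$6 == "TIME_WAIT" {
    n = split($4, a, ":")
    if (a[n] + 0 < 1024) count++
} END { print count + 0 }'
```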

