Bug 125485

Summary: TCP RPC programs started by xinetd fail when "wait = no" is specified.
Product: Red Hat Enterprise Linux 3
Reporter: Linda Lee <linda.lee>
Component: glibc
Assignee: Jakub Jelinek <jakub>
Status: CLOSED WONTFIX
Severity: high
Priority: medium
Version: 3.0
CC: drepper, tao
Hardware: i386
OS: Linux
Doc Type: Bug Fix
Last Closed: 2005-07-25 22:45:23 UTC

Attachments:
tar file for toto_server/toto_client
sample rpc test programs

Description Linda Lee 2004-06-07 23:09:21 UTC
Description of problem:


Version-Release number of selected component (if applicable):

RHAS 3 FCS is installed on an Intel platform, with the new xinetd patch
from bug 123522 (xinetd-2.3.12-6.3E).

Linux test1 2.4.21-4.ELsmp #1 SMP Fri Oct 3 17:52:56 EDT 2003 i686
i686 i386 GNU/Linux


Configured an RPC service in /etc/xinetd.d.

# default: on
service toto_server
{
        disable = no
        type = RPC 
        socket_type = stream 
        wait = yes
        protocol = tcp
        rpc_version = 1
        rpc_number = 100145
        user = root
        server = /etc/init.d/in.toto_server
}

Ran the client program and it timed out on the first run, but it was
OK on subsequent runs.  I noticed that the server stays around forever
and does not go away, unlike the behaviour of inetd on Solaris.

Another thing is that if the server is killed manually and the client
program is run again, xinetd cannot start the server.  We need to
either reboot the machine or restart xinetd, which also is NOT the
same behaviour as inetd.



How reproducible:


Steps to Reproduce:
1. add toto_server config file in /etc/xinetd.d
2. cp in.toto_server to /etc/init.d/
3. add a line "toto_server     100145" to /etc/rpc.
4. service xinetd restart
5. ran client program (got timed out).
6. ran client program (ok this time).


  
Actual results:

The client request always gets TIMED out the first time.



Expected results:

The client request should NOT get TIMED out the first time.

Additional info:

Comment 1 Linda Lee 2004-06-07 23:12:12 UTC
Created attachment 100946 [details]
tar file for toto_server/toto_client

I have attached sources in the attachment.  To begin with, I did
"rpcgen -a toto.x".

Same attachment for bugId 124740.

Do "tar xvf toto.tar" to retrieve files.

Comment 3 Jay Fenlason 2004-06-25 03:01:55 UTC
The toto_server program was not written so that it could be called 
from xinetd.  Look at main(): at startup it does 
transp = svctcp_create(RPC_ANYSOCK, 0, 0); 
creating a new tcp listener socket.  It then registers the socket 
with 
if (!svc_register(transp, COUNTER, COUNTERVERS, counter_1, IPPROTO_TCP)) { 
This unregisters the service that xinetd had registered and 
registers itself instead. 
 
I'm attaching a tarball of my test programs (the jf and fj services) 
to this bug report.  Note that jf_svc.c's main is different.  It 
starts by doing 
        if (getsockname (0, (struct sockaddr *)&saddr, &asize) == 0) { 
                ... 
        } else { 
                ... 
                transp = svctcp_create(sock, 0, 0); 
                ... 
                if (!svc_register(transp, JF_PROG, JF_VERS, jf_prog_1, proto)) 
                        ... 
        } 
If it was started by xinetd, stdin (descriptor 0) is a socket, so 
the getsockname will succeed, and it will read the RPC information 
from stdin.  If stdin is not a socket, it was not started by xinetd, 
and it creates its own socket and registers it. 
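
A minimal sketch of that startup pattern, for illustration only
(untested; TOTO_PROG, TOTO_VERS and toto_prog_1() are stand-ins for
whatever rpcgen actually generated from toto.x):

    /* Untested sketch of the startup pattern described above. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <rpc/rpc.h>

    #define TOTO_PROG 100145        /* rpc_number from the xinetd config */
    #define TOTO_VERS 1

    extern void toto_prog_1 (struct svc_req *rqstp, SVCXPRT *transp);

    int
    main (void)
    {
        SVCXPRT *transp;
        struct sockaddr_in saddr;
        socklen_t asize = sizeof saddr;

        if (getsockname (0, (struct sockaddr *) &saddr, &asize) == 0) {
            /* fd 0 is a socket, so xinetd started us (wait = yes) and
               has already registered the program with the portmapper.
               Wrap the inherited listener and pass protocol 0 so the
               existing portmap entry is left alone. */
            transp = svctcp_create (0, 0, 0);
            if (transp == NULL
                || !svc_register (transp, TOTO_PROG, TOTO_VERS, toto_prog_1, 0)) {
                fprintf (stderr, "cannot set up tcp service on inherited socket\n");
                exit (1);
            }
        } else {
            /* Started from the command line: create and register our
               own listener, as the stock rpcgen output does. */
            pmap_unset (TOTO_PROG, TOTO_VERS);
            transp = svctcp_create (RPC_ANYSOCK, 0, 0);
            if (transp == NULL
                || !svc_register (transp, TOTO_PROG, TOTO_VERS, toto_prog_1,
                                  IPPROTO_TCP)) {
                fprintf (stderr, "cannot create tcp service\n");
                exit (1);
            }
        }

        svc_run ();                 /* never returns */
        return 1;
    }

The key points are wrapping fd 0 when it turns out to be a socket, and
passing protocol 0 to svc_register() in that case so the portmapper
entry xinetd already made is not overwritten.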
 
 
 

Comment 4 Jay Fenlason 2004-06-25 03:02:57 UTC
Created attachment 101394 [details]
sample rpc test programs

Comment 5 Linda Lee 2004-06-30 20:01:22 UTC
I have re-compiled my toto.x using the -I option and verified it works
ONLY for "wait=yes".  BUT it DOES NOT work for "wait=no".  According
to the man page, "wait=no" is for a multi-threaded rpc service, and a
tcp service expects the value to be "no".  Does RHAS 3 support
multi-threaded rpc services??

I also verified that your sample rpc test program only works for
"wait=yes".  If I changed it to "wait=no" and invoked r_jf on the
remote host, it complained:

RPC: Unable to receive; errno = Connection reset by peer

And jf_svc is not up on the remote host.  In /var/log/messages on the
remote host, this error message appeared:

Jun 30 12:56:55 remote_host jf[4779]: cannot create tcp service.


Comment 6 Jay Fenlason 2004-07-27 15:30:05 UTC
Recall that to xinetd "multi-threaded" means "xinetd does the 
accept()", and "single-threaded" means "the server does the 
accept()".  I don't see anything in the xinetd source code that 
would prevent multi-threaded rpc servers from working.  However, I 
don't see any support in glibc's sunrpc/ directory for a TCP RPC 
server where the super-server (xinetd) did the accept() and passed 
the remote file descriptor as stdin.  So if there is a bug here, 
it's in glibc's sunrpc code.  I've changed the summary of the bug to 
more accurately represent the problem, and I'm reassigning this to 
the glibc maintainer. 

Comment 7 Ulrich Drepper 2004-10-01 07:18:55 UTC
What does everybody expect?  We are not doing any development of RPC.
 It is as it is.  We might fix bugs, but that's about it.  If there is
functionality missing, it remains this way.

If you want a better RPC implementation, use one of the separate
packages.  I don't think RHEL3 contains one, but they exist.

You can probably hand-code some server which uses the RPC functions
and works the way you want.  But rpcgen as we have it does not.

So, say explicitly what you think needs to be done (not a simple
"make this work", since as said, we will not extend RPC's
functionality).  If there is a little change in an interface or a very
small addition, we can talk about it.
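
For the "wait = no" case, xinetd has already accepted the connection
and fd 0 is the connected socket, so svctcp_create() (which, as far as
I can tell, wants a socket it can listen() on) is presumably the wrong
call.  A hand-coded server of the kind mentioned above would probably
have to wrap the descriptor with svcfd_create() instead, roughly like
the untested sketch below; the names are again stand-ins for the
rpcgen-generated ones, and whether this actually behaves well with the
RHEL 3 sunrpc code is exactly what this bug is about.

    /* Untested sketch: xinetd with wait = no has already done the
       accept() and hands us the connected socket as fd 0, so wrap
       that descriptor directly instead of creating a listener. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <rpc/rpc.h>

    #define TOTO_PROG 100145
    #define TOTO_VERS 1

    extern void toto_prog_1 (struct svc_req *rqstp, SVCXPRT *transp);

    int
    main (void)
    {
        /* svcfd_create() builds a transport over an already-connected
           descriptor; protocol 0 keeps the portmap entry that xinetd
           created from being overwritten. */
        SVCXPRT *transp = svcfd_create (0, 0, 0);

        if (transp == NULL
            || !svc_register (transp, TOTO_PROG, TOTO_VERS, toto_prog_1, 0)) {
            fprintf (stderr, "cannot set up service on inherited connection\n");
            exit (1);
        }

        /* svc_run() does not return; a real server would still need
           some way to exit once this one connection goes away (for
           example an idle timeout), which this sketch leaves out. */
        svc_run ();
        return 1;
    }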

Comment 8 Linda Lee 2004-10-01 21:57:30 UTC
What I would like is for this bug to be fixed so that if "wait=no" is
specified, xinetd can start up the rpc server.  I don't expect you
to extend the RPC functionality.  (Of course, I would be very happy
to hear it if you do.)  Thanks!

Comment 11 Ulrich Drepper 2005-07-25 22:45:23 UTC
I'm closing this bug.  The RPC implementation we have is what it is.  Blame the
original RPC authors for it (incidentally, Sun).  There is nothing we can do
without disturbing the existing code base and users.