Not able to nfs mount:

Server:

[root@becedge02 root]# rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100011    1   udp    983  rquotad
    100011    2   udp    983  rquotad
    100011    1   tcp    986  rquotad
    100011    2   tcp    986  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100021    1   udp   1033  nlockmgr
    100021    3   udp   1033  nlockmgr
    100021    4   udp   1033  nlockmgr
    100021    1   tcp  32779  nlockmgr
    100021    3   tcp  32779  nlockmgr
    100021    4   tcp  32779  nlockmgr
    100005    1   udp   1001  mountd
    100005    1   tcp   1004  mountd
    100005    2   udp   1001  mountd
    100005    2   tcp   1004  mountd
    100005    3   udp   1001  mountd
    100005    3   tcp   1004  mountd
[root@becedge02 root]# more /etc/exports
/home/oracle *(rw,sync,all_squash)
[root@becedge02 root]# exportfs
/home/oracle    <world>

/etc/hosts.allow and /etc/hosts.deny: no entries.

Client:

[root@tgdb1 root]# rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  20000  status
    100024    1   tcp  32768  status
    391002    2   tcp  32769  sgi_fam
    100011    1   udp    615  rquotad
    100011    2   udp    615  rquotad
    100011    1   tcp    618  rquotad
    100011    2   tcp    618  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100021    1   udp  20006  nlockmgr
    100021    3   udp  20006  nlockmgr
    100021    4   udp  20006  nlockmgr
    100021    1   tcp  32813  nlockmgr
    100021    3   tcp  32813  nlockmgr
    100021    4   tcp  32813  nlockmgr
    100005    1   udp    634  mountd
    100005    1   tcp    637  mountd
    100005    2   udp    634  mountd
    100005    2   tcp    637  mountd
    100005    3   udp    634  mountd
    100005    3   tcp    637  mountd
[root@tgdb1 root]# showmount -e becedge02
Export list for becedge02:
/home/oracle *
[root@tgdb1 root]# mount becedge02:/home/oracle /suman
mount: RPC: Unable to receive; errno = Connection refused
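As a sanity check, the portmapper registrations above can be filtered down to just the services a mount needs. A minimal sketch, using a few sample lines pasted from the server's output above (in practice you would pipe `rpcinfo -p becedge02` from the client instead of a pasted sample):

```shell
# Three sample lines taken from the server's `rpcinfo -p` output above.
sample='100000 2 tcp 111 portmapper
100005 3 udp 1001 mountd
100003 3 tcp 2049 nfs'

# A mount needs portmapper, mountd, and nfs all registered; print what we find.
echo "$sample" | awk '$5=="portmapper" || $5=="mountd" || $5=="nfs" {print $5 "/" $3 " on port " $4}'
# -> portmapper/tcp on port 111
# -> mountd/udp on port 1001
# -> nfs/tcp on port 2049
```

If any of the three is missing from the live `rpcinfo -p <server>` output, the mount will fail before any data is transferred.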
Is it possible a firewall is getting in the way?
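One quick way to rule that out on both machines (a sketch using standard RHEL 3 service names, assuming iptables is the packet filter in use):

```shell
# Run on both server and client:
service iptables status   # is the packet filter running at all?
iptables -L -n            # look for DROP/REJECT rules covering ports 111,
                          # 2049, and the dynamic mountd/nlockmgr ports
service iptables stop     # temporarily disable it for a test, if acceptable
```

Note that even with ports 111 and 2049 open, mountd and nlockmgr sit on dynamic ports (1001/1004 and 1033/32779 in the rpcinfo output above), so a port-based filter can still block them.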
Firewall disabled. Able to telnet & ftp without any problem.
Ok... could you post an ethereal trace, and the output of mount -v becedge02:/home/oracle /suman?
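The trace could be captured along these lines (the capture file name is illustrative; tethereal is the command-line companion to ethereal):

```shell
# On the client: capture all traffic to/from the server, reproduce the
# failing mount, then stop the capture and compress it for attachment.
tethereal -w /tmp/data.pcap host becedge02 &
mount -v becedge02:/home/oracle /suman
kill %1
bzip2 /tmp/data.pcap      # attach /tmp/data.pcap.bz2 to the bug
```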
[root@becedge02 root]# ifconfig -a
eth0    Link encap:Ethernet  HWaddr 00:06:5B:8C:48:1F
        inet addr:10.24.200.145  Bcast:10.24.200.255  Mask:255.255.255.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:167647 errors:0 dropped:0 overruns:0 frame:0
        TX packets:4027 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:18743412 (17.8 Mb)  TX bytes:483538 (472.2 Kb)
        Interrupt:28

eth1    Link encap:Ethernet  HWaddr 00:06:5B:8C:48:20
        inet addr:10.24.200.110  Bcast:10.24.200.255  Mask:255.255.255.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:217506 errors:0 dropped:0 overruns:0 frame:0
        TX packets:66330 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:28310691 (26.9 Mb)  TX bytes:23136513 (22.0 Mb)
        Interrupt:29

lo      Link encap:Local Loopback
        inet addr:127.0.0.1  Mask:255.0.0.0
        UP LOOPBACK RUNNING  MTU:16436  Metric:1
        RX packets:178993 errors:0 dropped:0 overruns:0 frame:0
        TX packets:178993 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:5815684 (5.5 Mb)  TX bytes:5815684 (5.5 Mb)

[root@tgdb1 root]# ifconfig -a
eth0    Link encap:Ethernet  HWaddr 00:11:43:37:E1:93
        inet addr:10.24.200.136  Bcast:10.24.200.255  Mask:255.255.255.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:1177471 errors:0 dropped:0 overruns:0 frame:0
        TX packets:537408 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:1304374492 (1243.9 Mb)  TX bytes:136315826 (130.0 Mb)
        Interrupt:48 Base address:0xecc0 Memory:dfae0000-dfb00000

eth1    Link encap:Ethernet  HWaddr 00:11:43:37:E1:94
        BROADCAST MULTICAST  MTU:1500  Metric:1
        RX packets:0 errors:0 dropped:0 overruns:0 frame:0
        TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
        Interrupt:49 Base address:0xdcc0 Memory:df8e0000-df900000

lo      Link encap:Local Loopback
        inet addr:127.0.0.1  Mask:255.0.0.0
        UP LOOPBACK RUNNING  MTU:16436  Metric:1
        RX packets:2298581 errors:0 dropped:0 overruns:0 frame:0
        TX packets:2298581 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:225348864 (214.9 Mb)  TX bytes:225348864 (214.9 Mb)

Server:
[root@becedge02 root]# showmount -e
Export list for becedge02.bec.gxs.com:
/home/oracle *

Client:
[root@tgdb1 root]# showmount -e becedge02
Export list for becedge02:
/home/oracle *

Here is the exact output I am getting:

[root@tgdb1 root]# mount -v becedge02:/home/oracle /suman
mount: no type was given - I'll assume nfs because of the colon
mount: RPC: Remote system error - No route to host
[root@tgdb1 root]# mount -v becedge02:/home/oracle /suman
mount: no type was given - I'll assume nfs because of the colon
mount: RPC: Unable to receive; errno = Connection refused
[root@tgdb1 root]# showmount -e becedge02
mount clntudp_create: RPC: Port mapper failure - RPC: Unable to receive
[root@tgdb1 root]# showmount -e becedge02
rpc mount export: RPC: Unable to receive; errno = Connection refused
[root@tgdb1 root]#

Once I execute the mount command from the client, showmount stops producing output on both the server and the client. If I then restart the NFS services on both machines, showmount produces output again.
Here is exactly what I am seeing. After the failed mount, on the server:

[root@becedge02 root]# showmount -e
rpc mount export: RPC: Unable to receive; errno = Connection refused
[root@becedge02 root]# service nfs start
Starting NFS services:  [ OK ]
Starting NFS quotas:    [ OK ]
Starting NFS daemon:    [ OK ]
Starting NFS mountd:    [ OK ]
[root@becedge02 root]# showmount -e
Export list for becedge02.bec.gxs.com:
/home/oracle *
[root@becedge02 root]#

And on the client:

[root@tgdb1 root]# showmount -e becedge02
mount clntudp_create: RPC: Port mapper failure - RPC: Unable to receive
[root@tgdb1 root]# service nfs start
Starting NFS services:  [ OK ]
Starting NFS quotas:    [ OK ]
Starting NFS daemon:    [ OK ]
Starting NFS mountd:    [ OK ]
[root@tgdb1 root]# showmount -e becedge02
Export list for becedge02:
/home/oracle *
[root@tgdb1 root]#
I guess I'm missing the point. This report is about mount returning "Connection refused", which goes away after the server is started. That's exactly what is supposed to happen. So what is the bug here?
I will be glad to give any further information you may require.
I am seeing similar behavior while trying to install RHEL AS v4 over NFS.
(In reply to comment #5)
> I guess I'm missing the point. This report is about mount
> returning "Connection refused" which goes away after the
> server is started. That's exactly what is supposed to happen.
> So what is the bug here?

The NFS server should not be needed for client access. That's the bug: the client cannot mount NFS from a remote server without also running the nfs service locally.
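For reference, on the client side only the portmapper (plus rpc.statd/rpc.lockd for file locking) should have to be running; the nfs service proper is server-side only. A sketch of verifying that on RHEL 3 (service names as shipped with that release):

```shell
# On the client: portmap must be up for any RPC service to work,
# and nfslock provides rpc.statd/rpc.lockd for NFS file locking.
# 'service nfs' itself should NOT be required here.
service portmap status
service nfslock status
rpcinfo -p | egrep 'portmapper|status|nlockmgr'
```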
Ok.. I understand now... but this is a bit bizarre, since the NFS server is definitely not needed for the client to work... so there must be something else going on here.

In general:

> mount: RPC: Remote system error - No route to host

means the IP gateway is not set (i.e. netstat -rn will show you that).

> mount: RPC: Unable to receive; errno = Connection refused

means there was no listener for the connection, which generally means the server is not up.

> mount clntudp_create: RPC: Port mapper failure - RPC: Unable to receive

I'm really not sure what this exactly means, but the mount did not get the server's port for rpc.mountd.

> rpc mount export: RPC: Unable to receive; errno = Connection refused

Again, this means there was no listener.

Adding these all together, it really appears to be a network issue. So to look at that, we'll need a bzip2'ed tethereal trace of these failures and the successes. So start a 'tethereal -w /tmp/data.pcap host <server>' on the client, then reproduce the errors, and then start up the server.
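The distinction between the error messages above can be demonstrated in isolation. A small illustration (run anywhere, not on the original hosts): "Connection refused" means the peer answered with a TCP reset, i.e. the machine is reachable but nothing is listening on that port. Port 9 on localhost is assumed to be closed here.

```shell
# Try to open a TCP connection to a port with no listener, using bash's
# built-in /dev/tcp pseudo-device; capture and inspect the error.
if bash -c 'exec 3<>/dev/tcp/127.0.0.1/9' 2>/tmp/refused.err; then
    echo "connected (unexpected -- something is listening on port 9)"
else
    grep -m1 -o 'Connection refused' /tmp/refused.err
fi
```

"No route to host", by contrast, never reaches the peer at all; the error comes from the local routing layer, which is why netstat -rn is the thing to check for that one.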
This bug is filed against RHEL 3, which is in maintenance phase. During the maintenance phase, only security errata and select mission-critical bug fixes will be released for enterprise products. Since this bug does not meet that criteria, it is now being closed.

For more information on the RHEL errata support policy, please visit: http://www.redhat.com/security/updates/errata/

If you feel this bug is indeed mission critical, please contact your support representative. You may be asked to provide detailed information on how this bug is affecting you.