Bug 69329
Summary: nfs server is not responding

| Field | Value | Field | Value |
|---|---|---|---|
| Product: | [Retired] Red Hat Public Beta | Reporter: | Joachim Kunze <joachim> |
| Component: | kernel | Assignee: | Pete Zaitcev <zaitcev> |
| Status: | CLOSED WORKSFORME | QA Contact: | Ben Levenson <benl> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | limbo | | |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | i386 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2002-08-14 14:33:32 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 67217 | | |
| Attachments: | | | |
Description
Joachim Kunze
2002-07-21 09:38:55 UTC
Can you try nfs-utils-1.0.1-0?

I tried it with nfs-utils-1.0.1-1, but the result is still the same. I don't receive the message any longer, but the connection hangs and I can't access the share. When I try to shut down the server, the system hangs.

This is pretty ugly, Bob. Suggest you rope in additional assistance.

Please attach nfsd, mountd and statd messages from the server logs. Also, I'd be very interested to know what network card is installed on the server. When you say that the server can't be rebooted because it couldn't kill the processes, what do you mean? Does it hang on shutdown? If so, what messages are generated? Can you kill the processes by hand with kill(1)?

I recreated this setup (Limbo server with kernel-2.4.18-5.74, nfs-utils-1.0.1) and a 7.3 client. I did not have any problem mounting, reading or writing to NFS directories on the client. The machine also reboots cleanly. Also, why are you trying to NFS mount 192.168.10.12 = webapp-1-f2.rwc-colo.redhat.com?

I hate to reopen a wound here, but I'm seeing the exact behavior joachim described. This may (or may not) be related to the issues in bugs #70069 and #70321. In my case I have a limbo2 system with all updates as the server, and a valhalla system with (I think) all updates as the client.

The server has the following components:

kernel-2.4.18-7.93
nfs-utils-1.0.1-2

The client:

kernel-2.4.18-5
nfs-utils-0.3.3-5
mount-2.11n-12.7.3

The server has a D-Link DE-570T NIC, and the client has a Netgear FA310TX. I'll attach a file with sample configs and log messages. What's really odd is that the file system gets hung on the server as well. I ran a "watch ls -la" on the exported FS on the server, and it hung a few seconds after the mount. I'll include some ps output in the attachment. I'm going to keep poking at this problem. I'll add in any useful information I find.

Created attachment 71224 [details]
Sample commands and log/ps output
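The diagnostics requested in the thread (nfsd/mountd/statd messages from the server log, plus which processes are stuck and whether kill(1) can reach them) can be sketched as a small shell helper. This is my own sketch, not from the bug: the function name `nfs_diag` and the default log path `/var/log/messages` are assumptions, and a process in state D (uninterruptible sleep) will ignore even SIGKILL until the kernel unblocks it, which is why the hung `ls` could not be killed by hand.

```shell
#!/bin/sh
# Hypothetical helper (not from the bug report): collect the NFS-related
# syslog lines and list processes in uninterruptible sleep (state D).
nfs_diag() {
    log="${1:-/var/log/messages}"   # assumed syslog location; pass another path to override

    echo "=== NFS-related syslog messages ==="
    if [ -r "$log" ]; then
        # nfsd, mountd and statd are the daemons the developer asked about
        grep -hE 'nfsd|mountd|statd' "$log" || true
    else
        echo "(no readable log at $log)"
    fi

    echo "=== Processes in uninterruptible sleep (state D) ==="
    # These are blocked inside the kernel (here, most likely on the hung
    # export) and cannot be removed with kill(1), not even kill -9.
    ps -eo pid,stat,comm | awk '$2 ~ /^D/'
}

nfs_diag "$@"
```

Run it as root on the server while the hang is in progress; any `ls` or `nfsd` processes it lists in state D are the ones that keep the shutdown scripts waiting forever.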
On the server, it seems to affect only the portion of the FS that was NFS mounted. E.g., on dynamic229 all of /home was exported. After the mount I can still descend into /home, /home/warehouse, etc. As soon as I touch /home/landfill, though, my "ls" goes into state D. An attempt to reboot the server produces:

Shutting down NFS daemon: [FAILED]

and an indefinite hang at

Shutting down NFS services:

The system is already trying to shut down, so it's impossible to resolve this with anything other than a hard reset.

Installing the updated kernel from 7.3 (2.4.18-5) on the limbo system seems to make things work. I guess this is a kernel issue after all.

After setting up the beta on another machine (completely different hardware), it works quite well there. So I've also set up the first machine from scratch with kernel 2.4.18-10.99, and now the server functionality also works well.

I just installed the latest beta (7.3.94, (null), kernel 2.4.18-11) on the beta box (server), and all appears happy. I've got my tunes playing right now, and I'll chuck some ISOs back and forth just to be sure.
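For anyone trying to reproduce the setup described above (all of /home exported, a client mounting it and then touching the export), a minimal sketch follows. The hostname `server`, the subnet, and the mount point are examples of mine, not values taken from this bug; the commands are left commented out because running them mounts and exports real file systems.

```shell
# Hedged reproduction sketch -- hostnames, subnet and paths are hypothetical.

# --- on the server: export all of /home in /etc/exports ---
#   /home   192.168.10.0/24(rw,sync)
# then reload the export table:
#   exportfs -ra

# --- on the client: mount the export and exercise it ---
#   mount -t nfs server:/home /mnt/home
#   ls -la /mnt/home
# In the failing configuration, the ls above (and, per the report, an ls on
# the exported directory on the server itself) hangs in state D shortly
# after the mount, matching the behavior described in this bug.
```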