Bug 130165
Summary: | nfs mounts "disappear" randomly | |
---|---|---|---
Product: | [Fedora] Fedora | Reporter: | Shahms E. King <shahms>
Component: | kernel | Assignee: | Dave Jones <davej>
Status: | CLOSED NEXTRELEASE | QA Contact: |
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | 2 | CC: | nhorman, pfrields, rdieter, redhat, steven, wtogami
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | i686 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2005-04-16 04:50:30 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Shahms E. King
2004-08-17 17:03:42 UTC
> What OS(es) are the NFS clients?

Both client and server are Fedora Core 2, both completely updated. More information: if the client is an updated FC1 machine, the error gets reported as "Stale NFS file handle", but the same workaround solves the problem.

> If you use the touch command on the server in the directory that you are trying to access via nfs (i.e. run touch .) in the affected directory, can the clients see it again? Are you by any chance running a program on your server that might reset the modification time on various directories of your server (rsync with the -a option is an example of this)?

Yes, running touch on the affected directory allows clients to see it again briefly (a scripted version of this workaround appears after this thread). There are some scripts running which may reset modification times, but they aren't running in the affected directories. Additionally, the mtime is unchanged between touching the directory and the next time it fails (a sketch for checking this also follows the thread).

Unchanged modification times are commonly what lead to these problems. If the contents of the directory were updated but the modification time was set to a previous value, the NFS client can be "fooled" into thinking its cached file handles are still good. A similar problem was fixed for RHEL3 in BZ 113636. What version of NFS are you using: 2, 3, or 4? I believe the RHEL3 problem did not occur if NFSv2 was used.

With kernel 2.6.5-1.358 (the original kernel) the file ownerships are correct with NFSv4; any kernel after that has the root:bin problem.

> So I take it then that you were using NFSv3 when the stale file handle issues came up?

Assuming the nfs(5) man page is correct that v2 is the default, it occurs with both NFSv2 and NFSv3. It happens both with nfsvers=3 and with whatever the default is.

> IIRC, v3 is actually the current default. Can you try explicitly mounting with v2?

Even explicitly mounting as v2 the problem persists; only the error message changes to "Stale NFS file handle" (example mount invocations appear after this thread).

I may be having a similar problem: kernel 2.6.9-1.3_FC2smp, a Nexsan ATAbeast carved up with LVM2 and formatted with reiserfs, then shared with NFS. Processes will start complaining about stale file handles; generally the filesystems can be unmounted and remounted manually. In at least one case a user ran 'ls' on a directory twice and received a "stale" error, then on the third try the filesystem came back. It happens both with manual mounts and autofs mounts. I'm thinking something screwy in lockd? Not sure how to go about documenting this; it happens maybe once a day. Logging? What is available?

> Are you still having this problem with the 2.6.10 updates?

I'm going to tentatively say it has been fixed. I haven't experienced any problems today ;-) Given the random nature of the problem, I can't quite give an unreserved "this bug is fixed", but so far, it appears to be.

Fedora Core 2 has now reached end of life, and no further updates will be provided by Red Hat. The Fedora Legacy project will be producing further kernel updates for security problems only. If this bug has not been fixed in the latest Fedora Core 2 update kernel, please try to reproduce it under Fedora Core 3, and reopen if necessary, changing the product version accordingly. Thank you.
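The touch workaround described in the thread is easy to script on the server. This is a minimal sketch, assuming a hypothetical export at /srv/share with the affected directory under it; updating a directory's mtime makes NFS clients revalidate their cached attributes:

```sh
# Hypothetical paths; substitute the real export and affected directory.
# Bumping the directory's mtime on the server forces clients to revalidate
# their attribute cache, so the "disappeared" contents show up again.
touch /srv/share/affected-dir

# Stopgap for recurring failures: refresh every directory in the export
# (e.g. from cron) until a fixed kernel is installed.
find /srv/share -type d -exec touch {} +
```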
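To verify the reporter's observation that the mtime is unchanged across a failure, the directory's timestamp can be recorded on the server and compared later; the path is the same hypothetical one as above:

```sh
# GNU stat: print modification time and name. Run once after touching the
# directory and again when the mount goes stale; identical mtimes support
# the stale-attribute-cache theory discussed in the thread.
stat -c '%y %n' /srv/share/affected-dir
```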
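The explicit-version mounts discussed in the thread look roughly like this on an FC2-era client; the server name fileserver and both paths are placeholders:

```sh
# nfsvers= forces the NFS protocol version; without it the client default
# (v3 on these kernels) is used.
mount -t nfs -o nfsvers=2 fileserver:/srv/share /mnt/share
mount -t nfs -o nfsvers=3 fileserver:/srv/share /mnt/share
```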