Red Hat Bugzilla – Bug 199937
util-linux-2.13-0.20.4 has problem with NFS mounts
Last modified: 2008-05-06 12:09:44 EDT
Description of problem:
NFS mounts don't seem to work consistently with util-linux-2.13-0.20.4
An automounted home directory that mounts with util-linux-2.13-0.20.1 has
problems with util-linux-2.13-0.20.4.
Version-Release number of selected component (if applicable):
util-linux-2.13-0.20.4
How reproducible:
Very, with correct parameters.
Steps to Reproduce:
1. Upgrade to util-linux-2.13-0.20.4
2. Restart autofs
3. Attempt access to nfs automounted directory
[root@dunamis ~]# ls ~kennethr
ls: /home/poppy24/kennethr: No such file or directory
[root@dunamis ~]# ls ~kennethr
007-4644-003.pdf employee_faq.pdf README fedora-core.repo
tests archives Desktop kdb.mm
tmp bin dev Mail
Trash Downloads nsswitch.conf wa
I was able to successfully perform the following on dunamis
[root@dunamis log]# mount -o vers=3,proto=tcp poppy:/export/home/poppy24 /mnt/poppy
[root@dunamis log]# umount /mnt/poppy
This mount failed...
[root@dunamis log]# mount -o vers=3,proto=udp poppy:/export/home/poppy24 /mnt/poppy
proto=udp failed to mount with either nfs vers=2 or vers=3 on dunamis.
The NFS server poppy is an IRIX machine.
A mount from a SLES9 NFS server seemed to work.
Could you please post a bzip2-compressed binary tethereal trace, captured via
tethereal -w /tmp/data.pcap host <server> ; bzip2 /tmp/data.pcap
Created attachment 133022 [details]
tethereal -w /tmp/data.pcap host poppy
here is the requested trace
The trace seems to show an oddity that is hard to explain... When a client
mounts an NFS v3/v2 filesystem, it sends the server a series of requests to
the portmapper (asking for the mountd and nfs ports), an NFS ping to ensure
the server is up, and then finally a request to mountd to get the root
filehandle of the exported filesystem.
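The same handshake steps can be probed by hand with standard RPC utilities (a sketch; "poppy" is the server name from this report, and showmount only queries mountd's export list rather than fetching the root filehandle itself):

```shell
# Step 1: ask the server's portmapper which ports mountd and nfs are on
rpcinfo -p poppy
# Step 2: the "NFS ping" - call the nfs program's NULL procedure over TCP
rpcinfo -t poppy nfs 3
# Step 3: talk to mountd (here just listing exports, the same daemon
# that hands out the root filehandle during a real mount)
showmount -e poppy
```

If step 2 hangs or times out while step 1 succeeds, that matches the one-sided NFS ping seen in the trace.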
Now the curious thing about this trace is that it only shows the server's
reply to the NFS ping, not the client's request... So I'm wondering if the
request is going out one network interface and coming back on
another one... Are there two network interfaces (i.e. an eth0 and an eth1)
on the client?
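To check for that, one could enumerate the client's interfaces and, if there is more than one, re-capture on all of them at once so both directions of the traffic show up (a sketch; the capture path and host name follow the earlier request):

```shell
# List the client's network interfaces (e.g. lo, eth0, eth1)
ls /sys/class/net
# Show the addresses assigned to each interface
ip -o addr show
# Capture on the "any" pseudo-interface so traffic leaving on one NIC
# and returning on another is still seen in a single trace
tethereal -i any -w /tmp/data.pcap host poppy
```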
Looking at the changes between 2.13-0.20.1 and 2.13-0.20.4, there
was a change to the mount code so that it uses UDP first and then
TCP when contacting the remote server (which greatly reduces the
number of TCP connections needed to do mounts, allowing more mounts
to happen at the same time). To eliminate this functionality, add the
'tcp' option to the autofs mount args... and then retry it...
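For an automounted home directory that might look like the following map entry (illustrative only; the map file name and key are assumptions, while the server and export path are taken from this report):

```
# /etc/auto.home -- hypothetical indirect map entry
# 'tcp' forces TCP for the mount RPCs, disabling the new UDP-first behavior
kennethr  -fstype=nfs,vers=3,tcp  poppy:/export/home/poppy24/kennethr
```

After editing the map, restart autofs and retry the access that failed before.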
Also please capture and post an ethereal trace of the retry.
Fedora apologizes that these issues have not been resolved yet. We're
sorry it's taken so long for your bug to be properly triaged and acted
on. We appreciate the time you took to report this issue and want to
make sure no important bugs slip through the cracks.
If you're currently running a version of Fedora Core between 1 and 6,
please note that Fedora no longer maintains these releases. We strongly
encourage you to upgrade to a current Fedora release. In order to
refocus our efforts as a project we are flagging all of the open bugs
for releases which are no longer maintained and closing them.
If this bug is still open against Fedora Core 1 through 6, thirty days
from now, it will be closed 'WONTFIX'. If you can reproduce this bug in
the latest Fedora version, please change the bug to that version. If
you are unable to do this, please add a comment to this bug requesting
that it be kept open.
Thanks for your help, and we apologize again that we haven't handled
these issues to this point.
The process we are following is outlined here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping to ensure this
doesn't happen again.
And if you'd like to join the bug triage team to help make things
better, check out http://fedoraproject.org/wiki/BugZappers
This bug is open for a Fedora version that is no longer maintained and
will not be fixed by Fedora. Therefore we are closing this bug.
If you can reproduce this bug against a currently maintained version of
Fedora, please feel free to reopen this bug against that version.
Thank you for reporting this bug and we are sorry it could not be fixed.