Bug 832748 - some nfs4 mounts fail at boot time
Summary: some nfs4 mounts fail at boot time
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: nfs-utils
Version: 17
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Steve Dickson
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-06-17 06:05 UTC by Ed Greshko
Modified: 2019-09-12 07:41 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-08-01 11:05:33 UTC
Type: Bug
Embargoed:



Description Ed Greshko 2012-06-17 06:05:18 UTC
Description of problem: Not all nfs4 mounts specified in the fstab get mounted at boot time.

Version-Release number of selected component (if applicable): 

nfs-utils-1.2.6-0.fc17.i686
kmod-7-2.fc17.i686


How reproducible: Define 2 nfs4 mounts in /etc/fstab and reboot


Steps to Reproduce:
1. Define 2 nfs4 mounts in /etc/fstab (example entries below)
2. reboot
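
For example, a pair of entries along these lines (the server name and export paths here are just placeholders):

server:/export1     /mnt/export1     nfs4    defaults    0 0
server:/export2     /mnt/export2     nfs4    defaults    0 0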
  
Actual results: Only one of the two mounts completes, and it seems random as to which
one succeeds.  After booting, the failed mount can be done manually.


Expected results: All mounts should succeed


Additional info: The information recorded in /var/log/messages is...

Jun 17 13:30:55 f17 kernel: [   16.487683] RPC: Registered udp transport module.
Jun 17 13:30:55 f17 kernel: [   16.487684] RPC: Registered tcp transport module.
Jun 17 13:30:55 f17 kernel: [   16.487685] RPC: Registered tcp NFSv4.1 backchannel transport module.
Jun 17 13:30:55 f17 rpc.statd[831]: Version 1.2.6 starting
Jun 17 13:30:55 f17 sm-notify[832]: Version 1.2.6 starting
Jun 17 13:30:55 f17 kernel: [   17.024327] FS-Cache: Loaded
Jun 17 13:30:56 f17 kernel: [   17.169280] NFS: Registering the id_resolver key type
Jun 17 13:30:56 f17 kernel: [   17.169295] FS-Cache: Netfs 'nfs' registered for caching
Jun 17 13:30:56 f17 mount[842]: mount.nfs4: No such device
Jun 17 13:30:56 f17 systemd[1]: syntegra.mount mount process exited, code=exited status=32
Jun 17 13:30:56 f17 systemd[1]: Job remote-fs.target/start failed with result 'dependency'.
Jun 17 13:30:56 f17 systemd[1]: Unit syntegra.mount entered failed state.

Could this somehow be related to bugzilla 806333?

Comment 1 Ed Greshko 2012-06-24 06:23:54 UTC
I forgot to add that there is a workaround.

Creating a /etc/rc.d/rc.local file with a "/bin/mount -a" will result in all the file systems being mounted.
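
The file looks something like this (note that it must be executable, e.g. "chmod +x /etc/rc.d/rc.local", or systemd's rc-local.service won't run it):

#!/bin/sh
# late-boot workaround: retry all the fstab entries
/bin/mount -a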

However, that isn't as it should be....

I wonder if anyone has tried to duplicate this bugzilla?

Comment 2 J. Bruce Fields 2012-06-25 11:48:43 UTC
"mount.nfs4: No such device"

Huh.  I think ENODEV from the mount system call means it doesn't have the right module loaded.

If you mount it like "-t nfs -o nfsvers=4", does it work?
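
Spelled out in full, that would be a command along the lines of (server, export, and mount point are placeholders):

mount -t nfs -o nfsvers=4 <server>:/<export> /<mountpoint>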

Comment 3 Ed Greshko 2012-06-25 12:18:40 UTC
My original fstab had the following entries....

misty:/syntegra /syntegra                nfs4    defaults        0 0
misty:/myhome /home/egreshko/misty       nfs4    defaults        0 0

I changed them to 

misty:/syntegra /syntegra                nfs    nfsvers=4        0 0
misty:/myhome /home/egreshko/misty       nfs    nfsvers=4        0 0

which is what I think you are asking. 

After rebooting only one mount was successful....

misty:/myhome/             366919872 195061600 153236256  57% /home/egreshko/misty

The error recorded changed....

Jun 25 20:12:50 f17 mount[898]: mount.nfs: No such device
Jun 25 20:12:50 f17 systemd[1]: syntegra.mount mount process exited, code=exited status=32
Jun 25 20:12:50 f17 systemd[1]: Job remote-fs.target/start failed with result 'dependency'.
Jun 25 20:12:50 f17 systemd[1]: Unit syntegra.mount entered failed state.

I then did a "mount -a" and the mount succeeded....

misty:/myhome/             366919872 195061600 153236256  57% /home/egreshko/misty
misty:/syntegra/            80634688  49308448  27230240  65% /syntegra

Comment 4 J. Bruce Fields 2012-06-25 12:51:27 UTC
(In reply to comment #3)
> The error recorded changed....
> 
> Jun 25 20:12:50 f17 mount[898]: mount.nfs: No such device

Well, but this looks like the original failure, and it's the same in both cases.

Probably the one that fails each time is the first one that's attempted (and probably there's some sort of parallelism that makes it random which one is attempted first).

And probably the nfs module isn't getting loaded for the first mount attempt, but is loaded by the time the second one runs.

And that first failure is weird--the module-loading should be totally automatic.

If there was a way to insert a command *before* systemd starts trying to mount remote filesystems, it would be interesting to try inserting a "modprobe nfs" and seeing if that solves the problem.

That's not the real *solution*, but it would confirm that this is all caused by some sort of failure to autoload the nfs module when it should be.
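
One untested sketch of what that could look like as a systemd unit (the unit name is made up and I'm not sure of the exact dependencies, so treat it only as a starting point):

# /etc/systemd/system/preload-nfs.service  (hypothetical)
[Unit]
Description=Preload the nfs module before remote filesystems are mounted
Wants=remote-fs-pre.target
Before=remote-fs-pre.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/modprobe nfs

[Install]
WantedBy=multi-user.target

It would then be enabled with "systemctl enable preload-nfs.service".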

> Jun 25 20:12:50 f17 systemd[1]: syntegra.mount mount process exited,
> code=exited status=32
> Jun 25 20:12:50 f17 systemd[1]: Job remote-fs.target/start failed with
> result 'dependency'.
> Jun 25 20:12:50 f17 systemd[1]: Unit syntegra.mount entered failed state.

Comment 5 Ed Greshko 2012-06-25 13:34:08 UTC
Well, if you know a way to insert a command *before* systemd starts trying to mount remote filesystems, let me know and I'll give it a go.  I can't think of a way at the moment.  It is late in the day here.

Have you looked at bugzilla 806333?  They sound "similar".

Comment 6 J. Bruce Fields 2012-06-25 13:50:13 UTC
(In reply to comment #5)
> Well, if you know a way to " insert a command *before* systemd starts trying
> to mount remote" let me know and I'll give it a go.  I can't think of a way
> at the moment.  It is late in the day here.

Yeah, I don't know either, I guess I'd start by reading through the systemd documentation.

> Have you looked at bugzilla 806333?  They sound "similar".

The problem there as I understand it is specifically with -t nfs4 mounts.  That's why I suggested trying "-t nfs -o nfsvers=4".  Since that doesn't work either, I believe you're seeing a different problem.

Comment 7 Ed Greshko 2012-06-25 13:58:29 UTC
OK....  Here is what I did...

I saw there was a systemd-modules-load.service.  So, I researched that and created a file called my-nfs.conf in /usr/lib/modules-load.d.  The contents of that file are....

# preload nfs
nfs

I rebooted and both file systems came up mounted.  I tested it with the new and the old fstab settings.

So, it would seem the problem is that the nfs module is not getting autoloaded as it should be.
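
Two side notes: a locally added file like this would normally go in /etc/modules-load.d rather than /usr/lib/modules-load.d (systemd-modules-load.service reads both), and after a reboot the preload can be checked with something like:

lsmod | grep ^nfs
systemctl status systemd-modules-load.service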

Comment 8 J. Bruce Fields 2012-06-25 15:37:57 UTC
OK, that makes sense.  But I'm not sure how that could happen, or how to debug it.

On mount, if the kernel finds it doesn't know about the filesystem type "nfs", it calls request_module("nfs"), which should run "modprobe nfs" and not return until that module is loaded; at that point it looks up the filesystem type "nfs" again and should find it.
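
A rough user-space analogue of that sequence, purely as an illustration (the kernel does all of this internally, so these commands are just a stand-in):

# is the "nfs" filesystem type registered yet?
grep -qw nfs /proc/filesystems || modprobe nfs
# with the module in place, the same mount should now succeed
mount -t nfs misty:/syntegra /syntegra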

So why isn't it working that way?

Comment 9 J. Bruce Fields 2012-06-25 19:56:03 UTC
It might be interesting to know what would happen if you *remove* both entries from your fstab, boot, and then mount them by hand.

Can you still get the first mount to fail?

Comment 10 Ed Greshko 2012-06-25 22:39:11 UTC
I removed the entries from the fstab and made sure the nfs module was not loaded.

[egreshko@f17 ~]$ lsmod | grep nfs
[egreshko@f17 ~]$

[root@f17 ~]# mount -t nfs4 misty:/syntegra /syntegra
[root@f17 ~]# 

[root@f17 ~]# lsmod | grep nfs
nfs                   358901  1 
nfs_acl                12653  1 nfs
auth_rpcgss            34810  1 nfs
fscache                48610  1 nfs
lockd                  78001  1 nfs
sunrpc                215303  9 nfs,auth_rpcgss,lockd,nfs_acl

So, it worked just fine the first time on a manual mount.

Comment 11 Pete Zaitcev 2012-09-18 19:38:55 UTC
See also bug 771285.

Comment 12 Fedora End Of Life 2013-07-04 03:29:23 UTC
This message is a reminder that Fedora 17 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 17. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '17'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 17's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that
we may not be able to fix it before Fedora 17 is end of life. If you
would still like to see this bug fixed and are able to reproduce it
against a later version of Fedora, you are encouraged to change the
'version' to a later Fedora version prior to Fedora 17's end of life.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 13 Fedora End Of Life 2013-08-01 11:05:42 UTC
Fedora 17 changed to end-of-life (EOL) status on 2013-07-30. Fedora 17 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.

