Bug 771285 - mount fails with 2 XFS filesystems
Product: Fedora
Classification: Fedora
Component: kmod
Assigned To: kmod development team / Kay Sievers
Status: Reopened
Duplicates: 790238
Reported: 2012-01-03 00:48 EST by Pete Zaitcev
Modified: 2013-01-08 14:46 EST
CC: 20 users

Fixed In Version: kmod-7-1.fc17
Doc Type: Bug Fix
Last Closed: 2013-01-08 14:46:06 EST

Attachments
console capture 1 (20.48 KB, image/png), 2012-01-03 00:49 EST, Pete Zaitcev
/etc/fstab (872 bytes, text/plain), 2012-01-03 09:35 EST, Pete Zaitcev
dmesg (33.40 KB, text/plain), 2012-01-03 09:57 EST, Pete Zaitcev

Description Pete Zaitcev 2012-01-03 00:48:08 EST
Description of problem:

After a boot, system comes to the repair prompt due to failure to mount.

Version-Release number of selected component (if applicable):


How reproducible:

Unknown... Seems 100% reproducible now, but it somehow worked before.

Steps to Reproduce:
1. configure 2 xfs filesystems
2. reboot
Actual results:

Stuck at "Give root password for maintenance"

Expected results:

Normal boot as usual

Additional info:

No idea what I broke. This definitely worked before Christmas vacation.
I shut down the VMs, turned the box off, and turned it on today.

Please see attached console capture.

In it, the "first" filesystem fails (vdb), but the second filesystem (vdc)
mounts just fine. When I log in through the maintenance prompt, it's mounted.
Exactly the same parameters; the filesystems are completely identical!
Comment 1 Pete Zaitcev 2012-01-03 00:49:22 EST
Created attachment 550363 [details]
console capture 1
Comment 2 Pete Zaitcev 2012-01-03 00:57:11 EST
The problem may have something to do with XFS and an unclean shutdown.
I ran xfs_check on both filesystems, and the VM now boots normally.
There were no messages about any filesystem errors, but presumably
xfs_check sets a superblock flag.
Comment 3 Michal Schmidt 2012-01-03 09:03:08 EST
Could you attach your /etc/fstab?

systemd spawned "/bin/mount /src/node/vdb", but the mount failed with an error:
mount: unknown filesystem type 'xfs'

I don't see what systemd did wrong here. Reassigning to util-linux.
Comment 4 Pete Zaitcev 2012-01-03 09:35:50 EST
Created attachment 550439 [details]
/etc/fstab
Comment 5 Karel Zak 2012-01-03 09:37:35 EST

 * check dmesg output

 * try "strace -o ~/log mount /src/node/vdb" and send me the ~/log file
Comment 6 Pete Zaitcev 2012-01-03 09:52:24 EST
You do realize that mount under strace is going to succeed, don't you?
I suppose I could create a wrapper that traces _all_ mount invocations.
Comment 7 Pete Zaitcev 2012-01-03 09:57:29 EST
Created attachment 550445 [details]

This dmesg is captured at the maintenance prompt after failure.
Comment 8 Karel Zak 2012-01-03 10:47:41 EST
(In reply to comment #6)
> You do realize that mount under strace is going to succeed, don't you?
> I suppose I could create a wrapper that traces _all_ mount invocations.

I thought that you were able to call mount(8) manually from the command line. It seems that you can disable (comment out) the /src/node/* entries in your fstab to boot successfully.
Comment 9 Karel Zak 2012-01-03 10:48:21 EST
(In reply to comment #8)
> you can disable (comment out) the /src/node/* entries

or add "noauto" there
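
For illustration, hypothetical fstab entries for these mounts (device names assumed from the console capture) with "noauto" added, so systemd skips them at boot and they can be mounted manually afterwards:

```
/dev/vdb  /src/node/vdb  xfs  defaults,noauto  0 0
/dev/vdc  /src/node/vdc  xfs  defaults,noauto  0 0
```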
Comment 10 Pete Zaitcev 2012-01-03 14:14:50 EST
The bug only occurs when two mounts are run simultaneously by systemd.
If they run sequentially, or only one is run, they succeed. It's something
about the way mount detects the presence of the module before mounting.
Comment 11 Karel Zak 2012-01-03 15:07:35 EST
(In reply to comment #10)
> The bug only occurs when two mounts are run simultaneously by systemd.
> If they run sequentially, or only one is run, they succeed. It's something
> about the way mount detects the presence of the module before mounting.

It sounds like a kernel problem; mount(8) does not care about modules, that's the kernel's job...

mount(8) prints the "unknown filesystem type" message only if the mount(2) syscall returns ENODEV and the FS type is not found in /proc/filesystems.
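
A sketch (not the util-linux source) of that decision: the "unknown filesystem type" message appears only when the type is absent from /proc/filesystems. The lookup is reproduced against an arbitrary file so it can be tried without a failing mount; the sample file below is a fabricated stand-in.

```shell
fs_known() {
    # $1 = filesystem type, $2 = path to a /proc/filesystems-style list
    # (second column is "nodev", or the first column is the type itself)
    awk -v fs="$1" '$NF == fs { found = 1 } END { exit !found }' "$2"
}

# Fabricated stand-in for /proc/filesystems:
printf 'nodev\tproc\n\text4\n' > /tmp/filesystems.sample

fs_known ext4 /tmp/filesystems.sample && echo "ext4: known"
fs_known xfs  /tmp/filesystems.sample || echo "mount: unknown filesystem type 'xfs'"
```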

  udevd[293]: segfault at 24 ip 00007f13dbd01992 sp 00007fff6dc53fa0 error 6 in udevd[7f13dbcfd000+21000]

looks strange.
Comment 12 Michal Schmidt 2012-02-14 06:06:48 EST
*** Bug 790238 has been marked as a duplicate of this bug. ***
Comment 13 Kay Sievers 2012-02-14 09:24:56 EST
Usually the mount() syscall triggers the in-kernel modprobe loader to insert
the module for an unknown, not-already-loaded filesystem. This call blocks
until the module is properly linked into the kernel.

One possible explanation could be that two competing mount() syscalls
for the same filesystem module race against each other and one of them does
not block for some reason.

The problem might be new; before systemd, we certainly did almost everything
fully serialized in userspace.

It can be that the modprobe binary returns too early, or that the kernel does
not call the second modprobe at all.

Can someone who can reproduce the problem possibly add some printk() debugging
to get a clue here? Thanks!
Comment 14 Andrew Walker 2012-02-15 20:58:56 EST
I added printk() into get_fs_type() as suggested and here's what I saw:

[   18.947397] #####-----> get_fs_type() entered with name=xfs
[   18.965933] #####-----> get_fs_type() entered with name=xfs
[   19.214892] #####-----> get_fs_type() for name=xfs returned with   (null)
[   19.216575] SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled
[   19.218279] systemd[1]: mnt-whatever.mount mount process exited, code=exited status=32
[   19.219521] mount[472]: mount: unknown filesystem type 'xfs'
[   19.222075] SGI XFS Quota Management subsystem
[   19.223593] #####-----> get_fs_type() for name=xfs returned with f7ff57e0
[   19.225218] XFS (sdb2): Mounting Filesystem
[   19.230243] systemd[1]: Job fedora-autorelabel-mark.service/start failed with result 'dependency'.
[   19.232221] systemd[1]: Job fedora-autorelabel.service/start failed with result 'dependency'.
[   19.233151] systemd[1]: Job local-fs.target/start failed with result 'dependency'.
[   19.233985] systemd[1]: Triggering OnFailure= dependencies of local-fs.target.
[   19.234828] systemd[1]: Unit mnt-whatever.mount entered failed state.
[   19.365065] XFS (sdb2): Ending clean mount

You can see from the above that one of the invocations of get_fs_type() returns with (null) while the other succeeds later.

Hope this helps!
Comment 15 Pete Zaitcev 2012-02-23 14:53:39 EST
For now, I worked around this as follows:

cat <<EOF >/etc/rc.modules
modprobe xfs
EOF
chmod 755 /etc/rc.modules
Comment 16 Kay Sievers 2012-02-23 19:17:05 EST
A possible explanation is that two modprobe calls are issued by the kernel.
The first one links the module into the kernel, and the second one bails out
too early because it finds the module in /sys/module/ but it is not fully
initialized at that moment, so the second call does not block long enough
and fails.

Taking over the bug until we find out if that's the case. I'm trying to fix
modprobe now.
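
A minimal sketch of the wait described above (not the actual kmod patch; paths and the fake sysfs tree are assumptions for illustration): instead of returning as soon as /sys/module/<name> exists, poll its initstate file until it reads "live", so the second caller blocks until the first modprobe finishes initializing the module.

```shell
wait_module_live() {
    # $1 = module name, $2 = sysfs root (parameterized so the loop can
    # be exercised against a fake tree instead of the real /sys)
    i=0
    while [ $i -lt 50 ]; do
        state=$(cat "$2/module/$1/initstate" 2>/dev/null)
        [ "$state" = "live" ] && return 0
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}

# Demonstrate against a fabricated sysfs tree:
mkdir -p /tmp/fakesys/module/xfs
echo live > /tmp/fakesys/module/xfs/initstate
wait_module_live xfs /tmp/fakesys && echo "xfs ready"
```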
Comment 17 Kay Sievers 2012-02-23 19:58:57 EST
New kmod package on the way, which might block the second modprobe for a
longer time:
Comment 18 Fedora Update System 2012-02-24 05:01:15 EST
kmod-5-8.fc17 has been submitted as an update for Fedora 17.
Comment 19 Fedora Update System 2012-02-24 17:32:04 EST
Package kmod-5-8.fc17:
* should fix your issue,
* was pushed to the Fedora 17 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing kmod-5-8.fc17'
as soon as you are able to.
Please go to the following url:
then log in and leave karma (feedback).
Comment 20 Andrew Walker 2012-02-24 20:01:44 EST
Will this fix be back-ported to Fedora 16?
Comment 21 Fedora Update System 2012-03-04 10:47:09 EST
kmod-6-1.fc17 has been submitted as an update for Fedora 17.
Comment 22 Fedora Update System 2012-03-19 10:50:03 EDT
kmod-7-1.fc17 has been submitted as an update for Fedora 17.
Comment 23 Fedora Update System 2012-04-11 23:21:25 EDT
kmod-7-1.fc17 has been pushed to the Fedora 17 stable repository.  If problems still persist, please make note of it in this bug report.
Comment 24 Pete Zaitcev 2012-06-17 12:10:56 EDT
Still a problem on F17 with kmod-7-2.fc17 but whatever. The workaround
is still effective.
Comment 25 Jeremy Uchitel 2012-07-30 21:40:22 EDT
I think I am also seeing this problem on F16 with kernel-3.4.6-1 and module-init-tools-3.16-5. Mounting the two XFS filesystems worked when I originally configured the system with F15, but stopped after my upgrade to F16. Hoping to replace this with a new F17 install in the near future, but I have copied Pete's pre-loading of the xfs module as a fix for now (it works). Just a side observation, but it seems there are a few cases where the systemd init is more susceptible to race conditions than the old one.
Comment 26 Kay Sievers 2012-07-31 09:39:49 EDT
Seems we are still missing the loop in kmod that blocks the second modprobe
until the first modprobe returns and the module state has turned from
loading to ready.
Comment 27 Gerardo Exequiel Pozzi 2012-09-11 12:21:50 EDT
https://bugs.freedesktop.org/show_bug.cgi?id=53665 [mount fails when fstab has more than one entry for unloaded fs module]
Comment 28 Josh Boyer 2012-09-14 08:39:17 EDT
Rusty has submitted a patch to the kernel module loader to fix this issue:


That should resolve things as soon as it gets into Fedora.
Comment 29 Pete Zaitcev 2013-01-08 14:46:06 EST
Fixed in kernel-3.7.0-6.fc19 (it turned out to require a kernel fix after all;
the workarounds in kmod were insufficient). Closing.
