Bug 708574 - Boot fails: Starting /home aborted because dependency failed
Summary: Boot fails: Starting /home aborted because dependency failed
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 15
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-05-28 05:48 UTC by Kriton Kyrimis
Modified: 2013-01-22 23:45 UTC
CC List: 26 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-06-06 13:19:16 UTC
Type: ---
Embargoed:


Attachments: none

Description Kriton Kyrimis 2011-05-28 05:48:41 UTC
Description of problem:
After upgrading from Fedora 14 to Fedora 15, the machine frequently (but not always) fails to boot. The boot progress slows to a crawl until something times out, at which point I am prompted for the root password to enter debug mode. At first I thought this was caused by bug 698209, as the message on the console was the "incomplete write" error mentioned there.

However, the error has now changed to the following:

Starting /home aborted because a dependency failed.
Starting Mark the need to relabel after reboot aborted because a dependency failed.
Starting Relabel all filesystems, if necessary aborted because a dependency failed.

("Aborted" appears in red.)

At this point, I am prompted for the root password, to enter debug mode.

Looking at the state of the system in debug mode, /home is not mounted, but / and swap, which are on the same volume group, are. If I try to mount /home by hand, I get the message:

Special device /dev/mapper/vg_kriton_lv_home does not exist.

However, if I run "lvm vgchange -ay", the device appears, and I can mount /home.

Some things I tried which may or may not be relevant, as the boot failures may be random:

* Hit the hardware reset switch. The same error occurs.

* Boot into windows (I have a dual boot machine), then reboot into Linux. I tried it once and I was able to boot Fedora.

* Mount /home, then exit debug mode. The machine hung.

* "lvm vgchange -ay", fsck the home partition, mount it, then issue a "reboot" command. The machine booted fine.

When booting after a hard reset, the modification time of / was in the future, but by less than a day. So was the modification time of /home when I fsck'ed it by hand. I don't know whether this was caused by the hard reset or whether it is related to the problem.


Version-Release number of selected component (if applicable):
2.6.38.6-27.fc15.x86_64

How reproducible:
Very frequently, but not always.

Steps to Reproduce:
1. Turn the computer on.
  
Actual results:
The machine does not boot.

Expected results:
The machine should boot.

Additional info:
My disk is a BIOS RAID.

I have selinux disabled in /etc/selinux/config.

Comment 1 jjy 2011-05-29 07:17:11 UTC
Maybe you should check your /etc/fstab: does any file system fail to mount before /home? If so, comment it out of fstab; the boot may then succeed.

Comment 2 Kriton Kyrimis 2011-05-29 10:32:12 UTC
No, that's not it. Everything else, both before and after /home in /etc/fstab mounts successfully.

Besides, the problem is not that mounting /home fails, but, as I mentioned above, that the entire device associated with /home is not present, even though the devices associated with the other volumes in the same volume group are.

If it were a case of some file system not mounting, I would expect the problem to be consistent, but it is not. Sometimes I have a hard time booting; sometimes the machine boots without any problem whatsoever.
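
From the emergency shell, a quick way to see whether the logical volume simply was not activated (suggested commands, for anyone else hitting this):

ls /dev/mapper     # shows which device-mapper nodes actually exist
lvm lvscan         # lists each logical volume as ACTIVE or inactive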

Comment 3 Kriton Kyrimis 2011-05-30 19:43:40 UTC
I installed a vanilla 2.6.39 kernel, and the problem seems to have gone away,
as I have rebooted several times without a problem.

Hoping that the problem had been fixed upstream, I installed the 2.6.39-1.fc16
kernel from rawhide, but the problem reappeared as soon as I tried to boot my
computer with that kernel. It looks like the problem is fedora-specific.

I will be keeping 2.6.38.6-27 around, so if there's anything I can do to help
debug this, do let me know.

Comment 4 Kriton Kyrimis 2011-06-08 06:14:41 UTC
The problem persists with the new 2.6.38.7-30.fc15.x86_64 kernel.

Comment 5 Julian Tosh 2011-06-10 14:48:31 UTC
I'm experiencing very similar issues. Same kernel (2.6.38.7-30.fc15.x86_64) but I have SELinux enabled. OS is a guest installed under a Windows 7 Virtualbox host.

Problem entries in my /etc/fstab are:
/dev/ram0  /mnt/ram    tmpfs defaults 0 0
Shared     /mnt/shared vboxsf uid=500,gid=500 0 0

I've tried manually relabeling the file systems with 'fixlabel relabel /mnt/shared' and 'fixlabel relabel /mnt/ram' with no luck.

The only way to get the system to boot consistently is to remark out the problem entries in /etc/fstab.

Errors I see when problem persists:
Starting /mnt/shared failed, see 'systemctl status mnt-shared.mount' for details
Starting relabel all filesystems, if necessary aborted because a dependency failed
Starting mark the need to relabel after reboot aborted because a dependency failed
Starting /boot aborted because a dependency failed
Starting File System Check on /dev/disk/by-uuid/* aborted because a dependency failed
Starting Cryptography setup for luks* aborted because a dependency failed

'systemctl status mnt-shared.mount' reveals
mnt-shared.mount - /mnt/shared
 loaded: error
 active: inactive (dead)
 where: /mnt/shared
 CGroup: name=systemd:/system/mnt-shared.mount

Comment 6 Andy 2011-06-11 23:41:10 UTC
(In reply to comment #4)
> The problem persists with the new 2.6.38.7-30.fc15.x86_64 kernel.

Confirmed. Is someone working on a fix?

Comment 7 Kriton Kyrimis 2011-06-12 05:41:16 UTC
I just tried the following hack, which DID NOT work:

I added the "nofail" option for /home in /etc/fstab, and then added code in /etc/rc.local to mount it at the end of the boot process, if it has not been mounted.

After the "dependency failed" messages, the computer hangs, instead of prompting for the root password. I guess that this is similar to entering debug mode, mounting /home by hand, then exiting debug mode, where the computer also hangs, as I mentioned in comment #1.

Back to using a vanilla kernel!

Comment 8 Andy 2011-06-12 17:13:01 UTC
(In reply to comment #7)
> Back to using a vanilla kernel!

Could you please post instructions on getting and installing with yum the "vanilla kernel!" which works for you? I would appreciate it very much.

Comment 9 Kriton Kyrimis 2011-06-12 18:01:05 UTC
> Could you please post instructions on getting and installing with yum the
> "vanilla kernel!" which works for you? I would appreciate it very much.

I'm afraid that you don't install vanilla kernels with yum; you build them from source: download the kernel source from www.kernel.org and follow the instructions at http://kernelnewbies.org/KernelBuild

Comment 10 Andy 2011-06-12 18:42:45 UTC
(In reply to comment #9)
> > Could you please post instructions on getting and installing with yum the
> > "vanilla kernel!" which works for you? I would appreciate it very much.
> 
> I'm afraid that you don't install vanilla kernels with yum; you build them from
> source: download the kernel source from www.kernel.org and follow the
> instructions at http://kernelnewbies.org/KernelBuild

Thanks for your reply! This would be a hard thing for me to do, since I also need the NVIDIA kernel patches provided by "kmod".

Did it work for you with any FC15 kernel? Maybe I should try to downgrade the kernel? I used preupgrade from FC14 and, unfortunately, removed the working FC14 kernel, going directly to the current, broken 2.6.38.7-30.fc15.x86_64 kernel. Can an FC14 kernel work in FC15?

Comment 11 Andy 2011-06-15 17:05:57 UTC
Is anyone working on a fix, please? I am still hoping to see it fixed in the next kernel RPM. I have not been able to do any work since performing the FC15 upgrade, but if a fix is coming rather soon, I would wait rather than doing a fresh install.

Here are some more details on my setup: 64-bit, updated FC14; I followed the instructions for preupgrade. The preupgrade went fine and installed the 2.6.38.7-30.fc15.x86_64 kernel. The first (hard) reboot gives the error exactly as described in this bug.

I have tried an FC14 kernel - it does not boot in FC15. Both released FC15 kernels are broken. I cannot easily compile the kernel myself, since I need nvidia-kmod kernel patches. 

I can log into the system as "single" if I comment out the /home partition in /etc/fstab.

Please help!

Comment 12 Kriton Kyrimis 2011-06-15 18:36:30 UTC
> I cannot do any work since performing the FC15 upgrade.

Try the trick I mentioned in the original description:

After booting fails, give the root password, to enter debug mode, then issue the following commands:

lvm vgchange -ay
fsck /home
mount /home
reboot

Repeat until your computer boots; this is obviously a magic incantation that shouldn't make any difference, but for me it usually works with the first try!

Comment 13 Andy 2011-06-15 21:10:35 UTC
Kriton, YES! Many thanks!

After installing lvm by 

yum install lvm2

I successfully ran

lvm vgchange -ay

The next command proposed, 

fsck /home

gave me an error, something about a subdirectory. 
So, I just put /home back in /etc/fstab and rebooted. And I could now soft reboot (tested twice) without any problems and without doing anything special! Evidently, "lvm vgchange -ay" did the trick.

I do not want to test the hard reboot at this point.

Comment 14 Kriton Kyrimis 2011-06-16 15:12:49 UTC
I installed the new kernel 2.6.38.8-32.fc15.x86_64 today and, so far, I've booted my computer four times (two cold boots, two soft reboots) successfully with this kernel. As the bug was intermittent, I'd wait a couple of days before closing it, but it does look like it has been fixed.

Comment 15 Kriton Kyrimis 2011-06-17 10:43:45 UTC
Regarding the problems with VirtualBox in comment #5, I've had a similar problem, but it was not caused by this bug:

I have a dual boot system where, on the Windows side, I have configured a VirtualBox VM to boot from the Linux disk, so that I can have access to the Linux system without rebooting, should the need arise. When physically booting Linux, I want access to my Windows disk, so I've put an appropriate entry in /etc/fstab. When booting through VirtualBox, mounting the running Windows partition is a no-no, so I have not made the disk available to the VM. Up to Fedora 14 there was no problem with this approach, as mounting the Windows disk would fail silently and booting would continue. Under Fedora 15, however, failed mounts are treated as fatal, and booting stops. Adding the "nofail" option for the Windows disk in /etc/fstab allowed me to boot Fedora 15 as before, without needing to comment out the entry.

Thus, if the file systems that cause the boot failure in comment #5 don't exist, adding the "nofail" option in /etc/fstab may allow you to boot.
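
As a hedged illustration, such an entry might look like this (device, mount point, and filesystem type are made up; adjust them for your system):

/dev/sda2   /mnt/windows   ntfs-3g   defaults,nofail   0 0

With "nofail", systemd treats the mount as optional, so an absent device no longer drops the boot into emergency mode.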

Comment 16 Andrew Lau 2011-06-19 19:37:13 UTC
Using the same kernel version as in comment 14, but having a similar problem. My /home is on dmraid, but it hangs in the same way.

Comment 17 Kriton Kyrimis 2011-06-20 07:42:07 UTC
Regarding comment #16, could it be that you need to run fsck manually, as in comment #13? It seems to have helped in that case.

Comment 18 Andrew Lau 2011-06-20 17:56:39 UTC
@kriton, I tried running fsck manually, but it didn't find any errors, so that doesn't seem to be it.

Comment 19 Chuck Ebbert 2011-06-27 07:36:09 UTC
Does kernel-2.6.38.8-32 fix this or not? There is one report of success, have others tried it?

Comment 20 Kriton Kyrimis 2011-06-27 08:55:07 UTC
Well, it fixed it for me, the original poster, but, according to comment #16, it doesn't fix it for everyone.

Perhaps comment #16 refers to a different problem that prevents /home from getting mounted, producing the same error as in my case.

Comment 21 Andrew Lau 2011-06-27 13:29:34 UTC
2.6.38... did not fix it for me. Instead I've commented the mount out of fstab and mount it by hand after the system boots, because otherwise it hangs.

Comment 22 Andrew Lau 2011-07-13 02:00:51 UTC
Taking a more detailed look at the console when booting, it eventually comes back and says:

Starting /home aborted because a dependency failed.
Starting Mark the need to relabel after reboot aborted because a dependency failed.
Job dev-mapper-via_dbjbjj....device/start timed out.
Job fedora-autorelabel-mark.service/start failed with result 'dependency'.
Starting Relabel all filesystems, if necessary aborted because a dependency failed.
Job fedora-autorelabel.service/start failed with result 'dependency'
Job local-fs.target/start failed with result 'dependency'.
Triggering OnFailure= dependencies of local-fs.target.
Job home.mount/start failed with result 'dependency'.
Job dev-mapper-via_dbbjj....device/start failed with result 'timeout'.
Welcome to emergency mode. ........

Comment 23 Andrew Haveland-Robinson 2011-07-15 03:05:16 UTC
I arrived here in my quest to solve a possibly related problem.
I just installed fc15 on a 2600K processor, 4 raid drives, /boot (raid1), swap (raid10), / as ext4 (raid10), no lvm. Boots fine.
Now I add a 1TB drive and create /dev/md3 with it as a mirror with a missing member (its partner will arrive later), and put reiserfs on it because it will be holding deep trees of small files.
I updated mdadm.conf (contents shown below) and added /archive to fstab using its UUID, as with the others. Boot fails with this error:
Starting /archive aborted because a dependency failed.
Starting Relabel... etc as above,
Welcome to emergency mode...

I tried creating /dev/md3 again using mdadm v1.1 instead of v1.2, and got the same error message.

I "fixed" it in the following way:
just commented this out:
UUID=04d54553-15da-8b89-3715-9c72a0df1127 /archive reiserfs noatime 1 2

and replaced it with this:
/dev/md3                                  /archive reiserfs noatime 1 2

Baffling. These should do exactly the same thing, but the former causes the abort and the drop to emergency mode.
The other mdadm UUID mounts are fine.
It's just that reiserfs doesn't seem to like being mounted by UUID.

fstab:
UUID=e40a3317-5983-4046-8f29-82dbdc22c12d /                       ext4     noatime,nodiratime,defaults        1 1
UUID=8d6841c1-3958-45eb-88b4-d17cd33c164d /boot                   ext4     noatime,nodiratime,defaults        1 2
UUID=9a7e4584-5db5-4b78-ba0b-713cccdce507 swap                    swap     defaults                           0 0
#UUID=04d54553-15da-8b89-3715-9c72a0df1127 /archive                reiserfs noatime,nodiratime,defaults        1 2
/dev/md3                /archive                                  reiserfs noatime,nodiratime,defaults        1 2


mdadm.conf:
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md0 level=raid1 num-devices=4 UUID=b9db7fba:6677f4cd:abcdd88e:cbf22d5a
ARRAY /dev/md1 level=raid10 num-devices=4 UUID=243bcf2b:24e20580:6275ded1:30cccbd1
ARRAY /dev/md2 level=raid10 num-devices=4 UUID=feb8573b:c363ebd3:11f297de:ae0f2351
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=04d54553:15da8b89:37159c72:a0df1127

Hope this helps.
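
One thing worth double-checking here (just an observation from the listings above, not something verified): the UUID in the commented-out fstab line matches the mdadm array UUID from mdadm.conf, but fstab's UUID= refers to the filesystem UUID, which may differ and can be read with blkid:

blkid /dev/md3     # shows the filesystem UUID that UUID= in fstab should match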

Comment 24 Andrew Haveland-Robinson 2011-07-15 03:09:15 UTC
Should also add:
selinux disabled
uname -a
2.6.38.8-35.fc15.x86_64 #1 SMP Wed Jul 6 13:58:54 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
Not using BIOS's fake raid. Standalone ACPI mode.

Comment 25 John Albright 2011-08-05 01:48:14 UTC
I seem to be encountering this bug as well. I installed Fedora 15 on my wife's HP Mini 210, then did a "yum update" and it of course installed kernel 2.6.40-4 (3.0 in disguise). Rebooted and it gave the following messages:

Job fedora-autorelabel.service/start failed with result 'dependency'.
job local-fs.target/start failed with result 'dependency'.
Triggering OnFailure= dependencies of local-fs.target.
Job home.mount/start failed with result 'dependency'.
Job dev-mapper-vg_lilblue\x2dlv_home.device/start failed with result 'timeout'.
Starting Mark the need to relabel after reboot aborted because a dependency failed.
Starting Relabel all filesystems, if necessary aborted because a dependency failed.

Then it goes into emergency mode. This seems to happen on every boot-up; I haven't gotten it to successfully boot once after doing the full update, and I've tried many times. I also want to mention that I didn't do anything else to the system. Just installed Fedora and ran the full update; that's it.

Comment 26 Tom Moran 2011-08-09 15:27:33 UTC
I have the same problem, from a clean install of Fedora 15. I previously had Fedora 14 and did a new install with Fedora 15 rather than an upgrade. As with some others above, I can't boot the system at all.

Does anyone know if a solution is in progress for this bug? The status is still "NEW"!

The first error/aborted message I see is:

"Starting Cryptography setup for luks-[UID] aborted because a dependency failed."

This is followed by several other "dependency failed" messages.

For what it's worth, some info on my system:
 - it is a single-disk (old IDE) machine
 - most partitions are encrypted
 - the only mounts that are successful after I enter emergency mode are:

/boot (ext4, not encrypted),
/ (ext4, encrypted)
/media (ext4, encrypted)

I have tried commenting out some of the mount points in fstab but, on boot up, it still fails.

It is probably irrelevant, but one thing I noticed that seems to be different from Fedora 14 is that, after I enter the global encryption password at boot, Fedora 14 used to print the UUID of each entry in the crypttab, while Fedora 15 only prints the first entry.

This is the first time since I started with Fedora Core 1 that I can't get a release to boot at all! It's a superb project; keep up all the good work!

Comment 27 Julian Tosh 2011-08-09 16:41:49 UTC
I was having the problem under Virtualbox and found that an incantation of commenting out shared folder entries in fstab and removing Guest Additions prior to kernel updates helped.

Comment 28 Steve 2011-09-05 09:04:06 UTC
I can confirm that the bug exists on kernel-2.6.40-3(4).fc15. I have to boot my server about 20 times for one success.

-----fstab-----
/dev/sdc1               /mnt/xxx                ext3    noatime         1 2
/dev/sde1               /mnt/XXX                ext4    noatime         1 2
/dev/sdd1               /mnt/xXx                ext4    noatime         1 2
/dev/sda1               /mnt/XxX                ext4    noatime         1 2
/dev/sdb1               /mnt/Xxx                ext4    noatime         1 2
-----fstab-----

Comment 29 Steve 2011-09-06 08:15:50 UTC
I have booted successfully three times since formatting all drives as ext4:

-----fstab-----
/dev/sdc1               /mnt/xxx                ext4    noatime         1 2
/dev/sde1               /mnt/XXX                ext4    noatime         1 2
/dev/sdd1               /mnt/xXx                ext4    noatime         1 2
/dev/sda1               /mnt/XxX                ext4    noatime         1 2
/dev/sdb1               /mnt/Xxx                ext4    noatime         1 2
-----fstab-----

Comment 30 Anthony Horton 2011-10-05 06:18:37 UTC
I can confirm the same bug exists with 2.6.40.4-5.fc15 on both x86_64 and i686 platforms. I get intermittent boot failures on two different laptops, both with clean installs of Fedora 15. Sometimes the laptops boot cleanly; most times I end up in emergency mode with the 'aborted because a dependency failed' errors described by the OP. I have found that pressing Ctrl-D to continue allows the boot process to eventually complete.

Comment 31 Ron Gonzalez 2011-10-17 15:19:42 UTC
I am having this same problem mounting a volume group that exists on an iSCSI device at boot time.

The service manager reports that the dependency fails, and this is why I get these error messages:



Oct 17 10:27:33 talon systemd[1]: Job remote-fs.target/start failed with result 'dependency'.
Oct 17 10:27:33 talon systemd[1]: Job home-iscsi_media.mount/start failed with result 'dependency'.


I have to go in manually and run vgscan and then vgchange -ay.
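
One thing that might help for iSCSI-backed mounts (a suggestion only, not something verified in this bug): mark the fstab entry with _netdev so systemd orders it after the network/iSCSI services instead of treating it as a local filesystem, e.g. (device name assumed):

/dev/vg_iscsi/lv_media   /home/iscsi_media   ext4   defaults,_netdev   0 0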

Please fix this.

Also see bug # 743740

Comment 32 Red1 2011-10-26 21:15:04 UTC

Hi All 

I am facing the same issue. I upgraded my Fedora 13 to Fedora 15 today, and it fails to mount /home and /var; only / is mounted....

my kernel version is 2.6.40.6-0.fc15.x86_64

Please fix this bug; I cannot sleep without having my system running.

Regards
Red1

Comment 33 Charles 2011-10-28 01:58:57 UTC
Hi All,

I also experienced these exact symptoms after upgrading F14 to F15. I think there might be a couple of different situations that can trigger the same results. I didn't have to do anything with LVM / vgchange et al.; my LVs were all already active.

In my case I had about 4 volumes that would stall the boot and drop it into emergency mode. I could mount them by hand fine, and I could move them from /etc/fstab to an rc.local workaround, but I couldn't rest with that solution.

So after several hours of head bashing (and reading this page top to bottom about 10 times), I discovered that the solution in my case was just to rebuild my initrd. (An idea triggered by Kriton building a vanilla kernel, which means a fresh initrd.) The RPM upgrade should have triggered a rebuild, but perhaps it didn't due to the upgrade path from F14...

For anyone who doesn't know how to do that, you can try this (as root):

(Make a backup first...):
   mv /boot/initramfs-`uname -r`.img /boot/initramfs-`uname -r`.img.bak

(Build new initrd...):
   mkinitrd /boot/initramfs-`uname -r`.img `uname -r`

Or alternatively it might be easier to try using the Plymouth theme switcher to do it if you have plymouth installed...

(Determine your current theme):
   plymouth-set-default-theme
(Set that theme again, but ask for initrd to be rebuilt...)
   plymouth-set-default-theme [current theme: solar/charge/etc] --rebuild-initrd
In my case:
   plymouth-set-default-theme solar --rebuild-initrd
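
If mkinitrd gives you trouble, rebuilding the image with dracut directly should be roughly equivalent on these releases (mkinitrd is essentially a compatibility wrapper around dracut here):

   dracut -f /boot/initramfs-`uname -r`.img `uname -r`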

I'd be interested to know if this fixes it for anyone else...

I still don't know what in the original initrd was causing systemd to time out.... and now by fixing it, I've destroyed my evidence...

Cheers

Comment 34 David Nadle 2011-10-29 06:30:33 UTC
(In reply to comment #27)
> I was having the problem under Virtualbox and found that an incantation of
> commenting out shared folder entries in fstab and removing Guest Additions
> prior to kernel updates helped.

Thank you Julian, I was having this problem with VirtualBox after an update, and commenting out my shared vboxfs folder fixed the boot.

Comment 35 xic1971 2011-11-03 05:49:00 UTC
I believe this is caused by /run. It is supposed to be mounted (as tmpfs) when the system boots up. At the end of the initrd, there is a line moving the initrd's /run to that of the root. So when systemd starts up, it assumes that /run is available; otherwise, a lot of things go wrong. Old initrds do not have that line, so all the weird things happen.

A rebuild of initrd should solve the problem. I'm using a customized initrd based on Ubuntu. The addition of the line
    mount -t tmpfs -o mode=0755,nodev,noexec,nosuid tmpfs ${rootmnt}/run
at the end of the script "local" solves the problem.
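
Once booted, a quick hedged check that /run really ended up as a tmpfs:

    findmnt /run

It should report a tmpfs filesystem mounted on /run.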

Comment 36 Chris Ward 2011-11-14 13:22:01 UTC
I just upgraded from F15 to F16 and hit this problem too, or something similar.

After banging my head on the table for a couple of hours, it eventually clicked that the message I was getting on boot indicated pretty clearly that the volume being mounted had errors on it and I needed to run fsck manually to clean up. Perhaps there were other things there too... I did a few things before that... but ultimately, once I fsck'ed the busted home volume, everything started working again!

So... if you're hitting this, try manually unlocking the volume in emergency mode with cryptsetup luksOpen, mounting it somewhere and then running fsck on it. Clean up the issues it suggests and reboot.
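
Roughly, the sequence from emergency mode looks like this (a sketch only; the device name is assumed, and back up first, as noted below):

cryptsetup luksOpen /dev/sda5 home_check   # unlock the encrypted volume (device name assumed)
fsck /dev/mapper/home_check                # check and repair it while it is still unmounted
reboot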

But be careful... back up first, of course!

Comment 37 Thierry Leconte 2011-11-19 17:06:08 UTC
Did a fresh Fedora 15 install today, and I have the same problem with
2.6.41.1-1.fc15.x86_64.

The trick in comment 12 works (activate the LV, mount /home, and run "systemctl default").

My blkid :
/dev/sda1: LABEL="PQSERVICE" UUID="D02CCD9416D9B5D2" TYPE="ntfs" 
/dev/sda2: LABEL="ACER" UUID="A2C0CC14C0CBED1B" TYPE="ntfs" 
/dev/sda3: UUID="9418a2f5-5646-481a-b636-17cd95e284a1" TYPE="ext4" 
/dev/sda5: UUID="dBRI68-Znuq-KPRH-Wj6I-VNC5-A9Sl-P6Uk6D" TYPE="LVM2_member" 
/dev/mapper/vg_amoi-lv_swap: UUID="e8f45e9d-de92-4d4e-99c8-6082f72e4791" TYPE="swap" 
/dev/mapper/vg_amoi-lv_root: UUID="204536ef-f567-42fb-8b28-ba198e29a120" TYPE="ext4" 
/dev/mapper/vg_amoi-lv_home: UUID="28fbe49c-a05d-4d83-b099-135573decf1f" TYPE="ext4" 

It seems that this problem arises for configurations with "not usual" partitioning.
In my case, the first 2 partitions are not Linux ...

Comment 38 Thierry Leconte 2011-11-20 16:33:01 UTC
OK, I solved my boot problem by disabling USB 2.0 in the BIOS!

Two bugs seem to be at work here:

1) There is a bug with EHCI scanning in the Fedora 15 kernel on my machine.
Lots of nonexistent ports are scanned, with messages like:
usb 1-10: new high speed USB device number 18 using ehci_hcd
usb 1-10: device descriptor read/64, error -110

2) It seems that the slow USB scanning disrupts the systemd boot process
for a reason unknown to me.

By disabling USB 2.0 (but keeping USB 1.1) in the BIOS, everything boots quickly and without problems!

Comment 39 Kev 'Kyrian' Green 2012-01-30 23:04:40 UTC
I've just been fighting with something similar myself and, to my slight surprise, actually had a successful resolution.

The problem seems to be the assumption (which doesn't work for everyone) that non-RAID, non-LVM, non-encrypted, in short just 'non-complicated' filesystems should be initiated first by systemd.

So if for example you've got a RAID /home/ and sub-mounts under there of 'simple' filesystems for user directories, the mounts will fail with a dependency issue as observed by some people.

Now, I'm not saying these solution(s) will work for everyone, but basically the gist was to re-order the dependencies for systemd so it worked.

I tried various things, which might be suitable for some people or might not (some might even do nothing at all), but this is the process I followed, and my system now boots repeatably (although I'll grant you I've only tested it a little bit):

- Change your /etc/fstab entries to add "comment=systemd.mount" for non-complicated devices/filesystems. Possibly try it with 'complicated' ones too in the first instance.

- Then if that doesn't work, make sure you *don't* do the above with the complicated filesystems; instead try making sure that 'auto' is included in the options in /etc/fstab for those (it should be covered by the 'defaults' macro, but making it explicit shouldn't be harmful even if it doesn't help).

- Try referencing filesystems by UUID=[whatever] as well as by the raw device paths, e.g. /dev/md0.

- First try at poking systemd's priorities:

mkdir -p /lib/systemd/system/local-fs-pre.target.wants/
cd /lib/systemd/system/
ln -s ../fedora-storage-init.service \
  /lib/systemd/system/local-fs-pre.target.wants/fedora-storage-init.service

- Second try:

vi /lib/systemd/system/fedora-storage-init.service

Then add in "local-fs-pre.target" to the line starting "Before=", following the space-delimited syntax.
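
After that edit, the "Before=" line in the unit's [Unit] section should read something like the following (keeping whatever entries were already on the line), and systemd can be told to re-read the unit without rebooting:

Before=local-fs-pre.target ...existing entries...

systemctl daemon-reload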

All of that done, somehow it started working. I think it was only the last step that actually had any real effect, but I could be wrong. I shall leave comment on that to those more learned in systemd than I am.

I found myself wondering at one point why systemd doesn't follow the last field of /etc/fstab for the order in which filesystems should be fsck-ed, as that might make more sense than adding another layer of abstraction, but that did not seem to be the case.

Comment 40 Kev 'Kyrian' Green 2012-01-30 23:06:26 UTC
I was also rather surprised that simply installing the 'upstart' version of init was not a supported pathway out of this situation, but hell, as they say, any solution that works is better than any solution that does not, at the end of the day.

Comment 41 collura 2012-04-19 12:19:46 UTC
seem to be getting similar errors with kernel-3.3.2-1.fc16.x86_64 
  'Dependency failed. Aborted start of /dev/disk/by-uuid'
though system boots fine with kernel-3.3.1-5.fc16.x86_64

Comment 42 Josh Boyer 2012-04-19 13:59:18 UTC
(In reply to comment #41)
> seem to be getting similar errors with kernel-3.3.2-1.fc16.x86_64 
>   'Dependency failed. Aborted start of /dev/disk/by-uuid'
> though system boots fine with kernel-3.3.1-5.fc16.x86_64

The kernel doesn't print stuff like that.  It's likely something new that was included in the initramfs when you updated the kernel.

Comment 43 collura 2012-04-20 10:30:56 UTC
As suggested in comment #42, my issue didn't seem to be kernel related, since any kernel I reinstalled had the same issue even though it had worked previously.

I had tried the mkinitrd rebuild from comment #33; it seemed to fail both before and after downgrading dracut
 (dracut-013-18.fc16.x86_64 <-> dracut-013-22.fc16.x86_64).

I think I might have tripped over
 https://bugzilla.redhat.com/show_bug.cgi?id=809123
with some odd timing of NetworkManager releases,
as noticed while meandering along this up/downgrade path:
 NetworkManager-1:0.9.2-1.fc16 upgraded to ->
 NetworkManager-1:0.9.4-2.git20120403.fc16 downgraded to ->
 NetworkManager-1:0.9.1.90-5 upgraded to ->
 NetworkManager-1:0.9.4-2.git20120403.fc16

I had up/downgraded various packages in stages, including:

NetworkManager, gnome-applets, PackageKit, coreutils, yum, rpm, dracut

(I don't know if it is related, but I did notice that after upgrading back to PackageKit-0.6.22-1.fc16.x86_64, it seemed to require the admin password to install the next kernel update.)

I wasn't able to reproduce the failure after the eventual fix, so I am not sure of the cause, other than it not being part of this kernel bug, lol.

Comment 44 redhat_bugzilla 2012-05-14 16:23:48 UTC
I had the same error message displayed at boot and only manually mounting the device or commenting it out in /etc/fstab allowed me to boot.

I fixed the problem, which was a UUID mismatch between the device and the entry in /etc/fstab. I wrote up the details at:

https://plus.google.com/u/0/108730899326892173401/posts/DnYCUaHcBzs
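
A generic way to spot this kind of mismatch (suggested commands, not taken from the write-up above):

blkid                     # the UUIDs the devices actually carry
grep UUID= /etc/fstab     # the UUIDs the boot configuration expects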

Comment 45 Josh Boyer 2012-06-06 13:19:16 UTC
This bug is kind of a trainwreck.  Since F15 is going EOL in less than a month, I'm going to close it out.

If you are still seeing problems on F16 or newer, please open a new bug describing exactly what the issue is and what kernel version(s) are involved.

