Created attachment 687333 [details]
Logs (journalctl -b and dmesg)

Description of problem:
As described in Bug #902292, I was upgrading my Fedora 17 64-bit KDE installation to Fedora 18 using fedup and an ISO of the Fedora 18 DVD. Fedup made some changes and then rebooted. I then hit a first bug (see Bug #902292) and, after applying a patch, hit a second one:

[ TIME ] Timed out waiting for device dev-mapper-vg_fs\x2dhome.device.
[DEPEND] Dependency failed for /home.
[DEPEND] Dependency failed for Local File Systems.

I had (for historical reasons; I had been playing with Btrfs on Fedora 16 before) an LVM2 volume group called vg_fs with 2 logical volumes: root and home. Home is a btrfs partition (I guess that's how anaconda set it up when I installed F16). I have tried to investigate, but I could not find out what was confusing systemd.

Version-Release number of selected component (if applicable):
Latest F17 components as of 19.01.2013

How reproducible:
I tried to reboot several times, and the bug was always there.

Steps to Reproduce:
1. Install F17 with a dedicated btrfs partition for /home on top of an LVM2 VG
2. Upgrade F17 to the latest version
3. Upgrade to F18 using the ISO DVD option of fedup
4. After the reboot and the patch correction from Bug #902292, you will have the problem.

Actual results:
System does not boot properly and maintenance mode is forced.

Expected results:
Upgrade is successful and I can see the login screen.

Additional info:
See attachments.
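An editorial note on reading that timeout message: systemd derives the .device unit name by escaping the device path, so "dev-mapper-vg_fs\x2dhome.device" is simply /dev/mapper/vg_fs-home. A rough sketch of the escaping rule for simple paths like this one (strip the leading slash, escape literal '-' as \x2d, turn each remaining '/' into '-'); the sed pipeline is illustrative only, and newer systemd ships systemd-escape for the real thing:

```shell
# Illustrative sketch of systemd's device-unit naming for simple paths.
# Strip the leading '/', escape literal '-' as \x2d, then map '/' to '-'.
path='/dev/mapper/vg_fs-home'
unit=$(printf '%s' "${path#/}" | sed -e 's/-/\\x2d/g' -e 's|/|-|g').device
printf '%s\n' "$unit"   # dev-mapper-vg_fs\x2dhome.device
```

This is why the unit name in the log looks mangled even though the logical volume name only contains one hyphen.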
I have just encountered this exact problem with fedup-0.7.3-4, as have several other people responding on the Installation forum. Here's the output from "fdisk -l" showing my partition structure:

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00082e40

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048  1953519615   976246784   8e  Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00082e40

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048     1026047      512000   83  Linux
/dev/sdb2         1026048  1953519615   976246784   8e  Linux LVM

Disk /dev/md125: 1000.2 GB, 1000202043392 bytes
2 heads, 4 sectors/track, 244189952 cylinders, total 1953519616 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00082e40

     Device Boot      Start         End      Blocks   Id  System
/dev/md125p1   *        2048     1026047      512000   83  Linux
/dev/md125p2         1026048  1953519615   976246784   8e  Linux LVM

Disk /dev/mapper/vg_clowder-lv_swap: 4227 MB, 4227858432 bytes
255 heads, 63 sectors/track, 514 cylinders, total 8257536 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/vg_clowder-lv_root: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/vg_clowder-lv_home: 941.7 GB, 941738688512 bytes
255 heads, 63 sectors/track, 114493 cylinders, total 1839333376 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

When running FC17, /dev/mapper contains:

control
vg_clowder-lv_home -> ../dm-2
vg_clowder-lv_root -> ../dm-1
vg_clowder-lv_swap -> ../dm-0

When the System Upgrade boot drops into Emergency Mode, /dev/mapper only contains:

control
vg_clowder-lv_root -> ../dm-1
vg_clowder-lv_swap -> ../dm-0

and /dev/dm-2 is also missing.

Given that this problem was reported in January and it is now May, I have to ask whether this bug has been investigated at all. Is any more information needed? I cannot complete my upgrade until there is at least a workaround. I'd rather avoid a clean install because I can't afford the time at the moment. Alternatively, what's the best way to back out of this failed upgrade attempt?
This isn't likely to be a fedup problem per se, since fedup itself doesn't do the initial mounting. It is more likely a problem with dracut/systemd not knowing how to handle your weird filesystem layout without special instructions.

Does the upgrade actually start/run/finish?
Is the "System Upgrade (fedup)" item still in the boot configuration?
What kernel(s) are available? Can you choose an older kernel to boot?
If you edit /etc/fstab and comment out /home temporarily, does the system start?
Is /home encrypted?
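An editorial aside on the fstab suggestion above: the edit can be rehearsed on a scratch copy before touching the real /etc/fstab (which should be backed up with cp first). The entries and the sed pattern below are made up for illustration; real fstab lines will differ:

```shell
# Rehearse the edit on a scratch copy; the entries are invented for the demo.
printf '%s\n' \
  '/dev/mapper/vg_fs-root /     btrfs defaults 1 1' \
  '/dev/mapper/vg_fs-home /home btrfs defaults 0 0' > /tmp/fstab.demo

# Comment out any uncommented line whose mount point is /home.
sed -i 's|^\([^#].*[[:space:]]/home[[:space:]]\)|#\1|' /tmp/fstab.demo

grep '/home' /tmp/fstab.demo   # the /home line should now start with '#'
```

Matching on the mount-point field rather than the device name avoids touching other lines that happen to mention "home" in a device path.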
The "weird" filesystem was the result of choosing the "Use all" default partitioning when I installed FC15 with a clean install. The installer ended up partitioning the logical volume into /root and /home anyway; it wasn't a conscious choice on my part.

Other answers in order:

Does the upgrade actually start/run/finish?
No

Is the "System Upgrade (fedup)" item still in the boot configuration?
Yes

What kernel(s) are available?
The boot menu has:
System Upgrade (fedup)
Fedora (3.8.8-100.fc17.x86_64)
Fedora (3.8.4-102.fc17.x86_64)
Fedora (3.8.3-103.fc17.x86_64)

Can you choose an older kernel to boot?
Yes. I am running 3.8.8-100.

If you edit /etc/fstab and comment out /home temporarily, does the system start?
At this point I'm somewhat reluctant to experiment and risk ending up needing to re-install. I realize that an upgrade is always a risk, but I encountered a problem with RAID on an earlier upgrade to FC15 (see Bug #736386) that left me dead in the water for a week.

Is /home encrypted?
No, no partition is encrypted. Also, SELinux is in permissive mode.
(In reply to comment #3)
> The "weird" filesystem was the result of choosing a "Use all" default
> partition when I installed FC15 with a clean install. The installer ended up
> partitioning the logical volume into /root and /home anyway; it wasn't by
> conscious choice on my part.

So you don't have a btrfs filesystem for /home? Because that's the "weird filesystem" that the original bug report was about. You said you had the "exact same problem", so I assumed that was also the case.

Actually, it looks like you have LVM on top of mdraid - something like: sda+sdb -> md125 (/boot, LVM PV). The failure condition is the same (can't mount a filesystem, drop to the emergency shell), but the cause is actually very different.

So: I've filed bug 959576 for this - please continue the discussion there, unless you've got btrfs-on-LVM.

For the record, though:

> Does the upgrade actually start/run/finish?
> No
> Is the "System Upgrade (fedup)" item still in the boot configuration?
> Yes

If the upgrade didn't start, then there is no "failed upgrade attempt" to back out of - no upgrade has been attempted, and your system is basically untouched. You can use 'fedup --resetbootloader' or 'fedup --clean' if you want to remove the boot entry and/or remove the cached packages and boot images.
This works for me (btrfs-on-LVM), *unless* you're using fedup-0.7.3-4.fc17. Which makes it probably a duplicate of bug 958586. *** This bug has been marked as a duplicate of bug 958586 ***