Bug 903998
Summary:     Second Logical Volume fails to mount during upgrade (btrfs-on-LVM)
Product:     [Fedora] Fedora
Component:   fedup
Version:     17
Hardware:    x86_64
OS:          Linux
Status:      CLOSED DUPLICATE
Severity:    medium
Priority:    unspecified
Reporter:    Jean-Christophe Berthon <huygens>
Assignee:    Will Woods <wwoods>
QA Contact:  Fedora Extras Quality Assurance <extras-qa>
CC:          apollidoro, ollmtm, sergio, tflink, wwoods
Type:        Bug
Doc Type:    Bug Fix
Clones:      959576 (view as bug list)
Bug Blocks:  959576
Last Closed: 2013-05-16 21:18:55 UTC
Description  Jean-Christophe Berthon  2013-01-25 09:12:01 UTC
I have just encountered this exact problem with fedup-0.7.3-4, as have several other people responding on the Installation forum. Here's the output from "fdisk -l" showing my partition structure:

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00082e40

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048  1953519615   976246784   8e  Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00082e40

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048     1026047      512000   83  Linux
/dev/sdb2         1026048  1953519615   976246784   8e  Linux LVM

Disk /dev/md125: 1000.2 GB, 1000202043392 bytes
2 heads, 4 sectors/track, 244189952 cylinders, total 1953519616 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00082e40

      Device Boot      Start         End      Blocks   Id  System
/dev/md125p1   *        2048     1026047      512000   83  Linux
/dev/md125p2         1026048  1953519615   976246784   8e  Linux LVM

Disk /dev/mapper/vg_clowder-lv_swap: 4227 MB, 4227858432 bytes
255 heads, 63 sectors/track, 514 cylinders, total 8257536 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/vg_clowder-lv_root: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/vg_clowder-lv_home: 941.7 GB, 941738688512 bytes
255 heads, 63 sectors/track, 114493 cylinders, total 1839333376 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

When running FC17, /dev/mapper contains:

control
vg_clowder-lv_home -> ../dm-2
vg_clowder-lv_root -> ../dm-1
vg_clowder-lv_swap -> ../dm-0

When the System Upgrade boot drops into Emergency Mode, /dev/mapper contains only:

control
vg_clowder-lv_root -> ../dm-1
vg_clowder-lv_swap -> ../dm-0

and /dev/dm-2 is also missing.

Given that this problem was reported in January and it is now May, I have to ask whether this bug has been investigated at all. Is any more information needed? I cannot complete my upgrade until there is at least a workaround. I'd rather avoid a clean install if that can be avoided at all, because I can't afford the time at the moment. Alternatively, what's the best way to back out of this failed upgrade attempt?

---

This isn't likely to be a fedup problem per se, since fedup itself doesn't do the initial mounting. It is more likely a problem with dracut/systemd not knowing how to handle your unusual filesystem layout without special instructions.

Does the upgrade actually start/run/finish?
Is the "System Upgrade (fedup)" item still in the boot configuration?
What kernel(s) are available? Can you choose an older kernel to boot?
If you edit /etc/fstab and comment out /home temporarily, does the system start?
Is /home encrypted?

---

The "weird" filesystem was the result of choosing a "Use all" default partition when I installed FC15 with a clean install. The installer ended up partitioning the logical volume into /root and /home anyway; it wasn't by conscious choice on my part.

Other answers in order:

> Does the upgrade actually start/run/finish?
No

> Is the "System Upgrade (fedup)" item still in the boot configuration?
Yes

> What kernel(s) are available?
The boot menu has:

System Upgrade (fedup)
Fedora (3.8.8-100.fc17.x86_64)
Fedora (3.8.4-102.fc17.x86_64)
Fedora (3.8.3-103.fc17.x86_64)

> Can you choose an older kernel to boot?
Yes. I am running 3.8.8-100.

> If you edit /etc/fstab and comment out /home temporarily, does the system start?
At this point I'm somewhat reluctant to experiment and risk ending up needing to re-install. I realize that an upgrade is always a risk, but I encountered a problem with RAID on an earlier upgrade to FC15 (see Bug #736386) that left me dead in the water for a week.

> Is /home encrypted?
No, no partition is encrypted. SELinux is also in permissive mode.

---

(In reply to comment #3)
> The "weird" filesystem was the result of choosing a "Use all" default
> partition when I installed FC15 with a clean install. The installer ended up
> partitioning the logical volume into /root and /home anyway; it wasn't by
> conscious choice on my part.

So you don't have a btrfs filesystem for /home? That's the "weird filesystem" the original bug report was about. You said you had the "exact same problem", so I assumed that was also the case.

Actually it looks like you have LVM on top of mdraid, something like: sda+sdb -> md125 (/boot, LVM PV). The failure condition is the same (can't mount a filesystem, drop to the emergency shell) but the cause is actually very different. So I've filed bug 959576 for this; please continue the discussion there, unless you've got btrfs-on-LVM.

For the record, though:

> Does the upgrade actually start/run/finish?
> No
> Is the "System Upgrade (fedup)" item still in the boot configuration?
> Yes

If the upgrade didn't start, then there is no "failed upgrade attempt" to back out of: no upgrade has been attempted, and your system is basically untouched. You can use 'fedup --resetbootloader' or 'fedup --clean' if you want to remove the boot entry and/or remove the cached packages and boot images.
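For anyone wary of hand-editing /etc/fstab as suggested above, the "comment out /home temporarily" step can be staged on a copy and reviewed before the real file is touched. A minimal sketch, assuming the vg_clowder-lv_home device name from this report; the filesystem type, mount options, and /tmp path are placeholders for illustration, not taken from the reporter's actual fstab:

```shell
# Stage the change on a sample fstab line instead of editing /etc/fstab
# directly; on a real system you would back up /etc/fstab, apply the same
# sed to it, and then reboot into the upgrade entry.
printf '/dev/mapper/vg_clowder-lv_home /home ext4 defaults 1 2\n' > /tmp/fstab.sample

# Prefix the /home entry with '#' so systemd/dracut would skip it at boot.
sed -i 's|^/dev/mapper/vg_clowder-lv_home|#&|' /tmp/fstab.sample

cat /tmp/fstab.sample   # the line should now start with '#'
```

Deleting the leading '#' afterwards restores the mount unchanged.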
This works for me (btrfs-on-LVM), *unless* you're using fedup-0.7.3-4.fc17, which makes it probably a duplicate of bug 958586.

*** This bug has been marked as a duplicate of bug 958586 ***
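A note for readers who land in the same emergency shell with a missing logical-volume mapping: activating the volume group by hand is a common first recovery step. This is only a hedged sketch using standard LVM commands (vgscan, vgchange -ay); whether the fedup/dracut upgrade initramfs ships the lvm binary is an assumption, and the vg_clowder name is taken from this report.

```shell
# From the emergency shell: scan for volume groups, then activate them so
# the missing /dev/mapper/vg_clowder-lv_home link (dm-2) can reappear.
lvm vgscan
lvm vgchange -ay vg_clowder

# Verify the mapping is back, then retry the mount from /etc/fstab.
ls -l /dev/mapper
mount /home
```

If the mapping appears but the mount still fails, that points back at the filesystem rather than LVM activation.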