Bug 242334
Summary: | FC6->F7 upgrade w/ mdadm.conf created new mdadm.conf w/o all devices listed | |
---|---|---|---
Product: | [Fedora] Fedora | Reporter: | Matt Domsch <matt_domsch>
Component: | anaconda | Assignee: | Peter Jones <pjones>
Status: | CLOSED RAWHIDE | QA Contact: |
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | 7 | CC: | edwinh, greno, jburgess777, linux-bugs, olle, orion, paul, pza, redhat, vchepkov
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | All | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2008-04-24 12:33:52 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description (Matt Domsch, 2007-06-03 12:52:22 UTC)
The same happened to me twice, and it doesn't matter whether I used upgrade mode or yum. I ended up with a non-bootable system.

Me too. Let me know if you need any logs.

Me too. I have LVM-on-RAID1, and in an FC-6 to F-7 upgrade my /etc/mdadm.conf was replaced with one containing only:

```
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=a6023eda:5dd9ef69:a77f13f3:6e25e139
```

Booting the DVD in rescue mode, I used "mdadm --detail --scan" to recover the missing information and create a new /etc/mdadm.conf:

```
# mdadm.conf written out by anaconda and edited by Paul
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=a6023eda:5dd9ef69:a77f13f3:6e25e139
ARRAY /dev/md1 level=raid1 num-devices=2 uuid=451ff0fc:fb610ea3:d05d0076:442ef352
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=9ea76464:ea298b64:4dd98395:c2064a2b
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=78e55309:7dba3918:1f3e29d4:75f5d52e
ARRAY /dev/md4 level=raid1 num-devices=2 UUID=fb599c79:d8f72cc9:0fb29f9f:d716c262
ARRAY /dev/md5 level=raid1 num-devices=2 UUID=29034664:e2924612:bc076052:789a4a40
```

After re-making the initrd I was then able to boot. I have another machine with multiple RAID1 devices on which I did a fresh install of F7, and there was no problem with that.

bah. This happened to me too, and I didn't find this entry from searching until after I submitted my own bugzilla. I can't see how to mark it a duplicate myself, but it is the same as bug 246081.

*** Bug 246081 has been marked as a duplicate of this bug. ***

pjones, is this likely to have been fixed in anaconda for F8?

I don't see how it wouldn't be -- we now unconditionally do our fsset writeout, which also includes writing an mdadm.conf. The only way I could see it not happening is if we don't detect all of the raid arrays on upgrade.

This still exists in FC8 when upgrading from FC7. See bug 383641 (which is probably a duplicate of this). Also when upgrading FC6->F8.
The backup made before the upgrade confirms mdadm.conf had both md0 and md1, but only md0 afterwards.

On comment 7: the root file system was on md1, including the updated /etc/mdadm.conf, so I don't think the problem is that the volume wasn't detected.

This should all be better for F9. We use mdadm to write the configuration now. Please reopen if you see the same behavior.

*** Bug 383641 has been marked as a duplicate of this bug. ***
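The manual recovery described in this thread (boot rescue mode, run `mdadm --detail --scan`, add back the missing ARRAY lines) boils down to diffing the live scan against what anaconda wrote. A minimal sketch of that comparison, not part of the original report: the helper name `missing_arrays` is invented, and file contents are passed in as strings so the example is self-contained (on a real system you would read /etc/mdadm.conf and capture the scan output).

```python
def missing_arrays(conf_text: str, scan_text: str) -> list[str]:
    """Return ARRAY lines present in the scan output but absent from the
    mdadm.conf text, matched on the UUID= tag (keyword case-insensitive)."""
    def uuids(text):
        found = {}
        for line in text.splitlines():
            line = line.strip()
            if not line.upper().startswith("ARRAY"):
                continue
            for token in line.split():
                key, _, value = token.partition("=")
                if key.upper() == "UUID":
                    found[value] = line
        return found

    conf = uuids(conf_text)
    scan = uuids(scan_text)
    return [line for uuid, line in scan.items() if uuid not in conf]


# Truncated to two arrays for illustration (UUIDs taken from the report).
conf = """\
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=a6023eda:5dd9ef69:a77f13f3:6e25e139
"""

scan = """\
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=a6023eda:5dd9ef69:a77f13f3:6e25e139
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=451ff0fc:fb610ea3:d05d0076:442ef352
"""

for line in missing_arrays(conf, scan):
    print("missing:", line)
```

Run against the conf and scan above, this flags the /dev/md1 array as missing, which is exactly the kind of omission that left these systems unbootable until the initrd was rebuilt with a complete mdadm.conf.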