Red Hat Bugzilla – Bug 478751
Anaconda crashes when an LVM PV on top of a Software RAID is requested
Last modified: 2009-06-18 14:17:17 EDT
Description of problem:
Installation of a new system from a Fedora 10 Live CD.
Two SATA hard disks.
I want to partition them as follows:
sda --+-> sda1 --> software RAID partition for mirror
      +-> sda2 --> software RAID partition for mirror
sdb --+-> sdb1 --> software RAID partition for mirror
      +-> sdb2 --> software RAID partition for mirror
On top of this, create the software RAID mirrors, one for /boot (unsure
whether this will work, but I'm not that far yet) and one for an LVM PV:

sda1 -->+
        +---> md0 ---> /boot
sdb1 -->+

sda2 -->+
        +---> md1 ---> PV ---> VolGroupAleph --+-> swap
sdb2 -->+
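For reference, this is roughly the manual equivalent of what I am asking
anaconda to do (a sketch only; the mdadm/LVM defaults and the swap size are
assumptions, VolGroupAleph is the volume group name from the layout above):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

mkfs.ext3 /dev/md0                       # /boot

pvcreate /dev/md1                        # the step anaconda fails at
vgcreate VolGroupAleph /dev/md1
lvcreate -L 2G -n swap VolGroupAleph     # size is only an example
mkswap /dev/VolGroupAleph/swap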
Anaconda does not like this layout:
PVCreateError: pvcreate of pv "/dev/md1" failed
Running... ['lvm', 'pvcreate', '-ff', '-y', '-v', '/dev/md1']
Device /dev/md1 not found (or ignored by filtering).
It is likely that /dev/md1 has not yet been created when anaconda
tries to set up the PV.
On the shell:
[root@localhost ~]# mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
[root@localhost ~]# mdadm --detail /dev/md1
mdadm: md device /dev/md1 does not appear to be active.
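If the arrays really have not been created or started yet, something along
these lines might get md1 active so that pvcreate can proceed (a sketch only,
not what anaconda actually runs; it assumes anaconda has at least written the
member superblocks):

cat /proc/mdstat            # which md devices does the kernel know about?
mdadm --examine --scan      # list arrays described by member superblocks
mdadm --assemble --scan     # assemble and start any arrays found that way
mdadm --run /dev/md1        # or force-start a partially assembled array
pvcreate -v /dev/md1        # should work once md1 is active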
Version-Release number of selected component (if applicable):
Unsure, will try again.
Created attachment 328139
Anaconda log from /tmp/anaconda.log
Created attachment 328140
Anaconda dump from /tmp/anacdmp.txt
The second try fails with the same problem. The arrays are definitely not ready:
[root@localhost ~]# mdadm --query /dev/md0
/dev/md0: is an md device which is not active
[root@localhost ~]# mdadm --query /dev/md1
/dev/md1: is an md device which is not active
[root@localhost ~]# mdadm --examine /dev/sda1
mdadm: No md superblock detected on /dev/sda1.
[root@localhost ~]# mdadm --examine /dev/sda2
mdadm: No md superblock detected on /dev/sda2.
[root@localhost ~]# mdadm --examine /dev/sdb1
mdadm: No md superblock detected on /dev/sdb1.
[root@localhost ~]# mdadm --examine /dev/sdb2
Magic : a92b4efc
Version : 0.90.00
UUID : 420aac0b:446aba32:44b4b681:b9617688
Creation Time : Thu Jan 1 02:35:25 2009
Raid Level : raid1
Used Dev Size : 156087424 (148.86 GiB 159.83 GB)
Array Size : 156087424 (148.86 GiB 159.83 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Update Time : Thu Jan 1 02:59:52 2009
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : d2f2f4d2 - correct
Events : 17
      Number   Major   Minor   RaidDevice State
this     1       8       18        1      active sync   /dev/sdb2

   0     0       8        2        0      active sync   /dev/sda2
   1     1       8       18        1      active sync   /dev/sdb2
This problem seems to be more general: it occurs whenever two software RAID devices are requested.
In addition to the above ('md0+ext3' and 'md1+LVM'), which fails as described, I have also tried:
  'md0+ext3' and 'md1+ext3' -- which fails similarly
  'md0+ext3' and 'ext3'     -- which works
  'ext3' and 'md1+LVM'      -- which works for formatting, but then fails, apparently
                               at "mke2fs /dev/sdb1 -t ext3"
Trying this manually yields
"/dev/sdb1 is apparently in use by the system; will not make a filesystem here!"
Right, so anaconda won't format my /dev/sdb1. This seems to be a different problem from the one described in this report, so I will open another bug.
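For diagnosing the "apparently in use by the system" message, checks along
these lines might help (a sketch; it only looks at whether the kernel still
counts sdb1 as an md member or a device-mapper target):

cat /proc/mdstat             # is sdb1 listed as a member of a running md array?
mdadm --examine /dev/sdb1    # does the partition still carry an md superblock?
dmsetup table                # is it claimed by a device-mapper mapping?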
"In the end, we will have a successful installation ... SO SAY WE ALL!"
The problem with the unformattable /dev/sdb1 has been solved; it was due to leftover software RAID metadata at the end of the partition.
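For anyone who hits the same thing: with version 0.90 metadata (as shown in
the --examine output above) the superblock lives near the end of the
partition, and it can be wiped with something like this (a sketch; only do it
if the partition is not part of an array you want to keep):

mdadm --examine /dev/sdb1            # confirm stale RAID metadata is present
mdadm --zero-superblock /dev/sdb1    # wipe it so the partition can be reused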
We have made extensive changes to the partitioning code for F11 beta, such that it is very difficult to tell whether your bug is still relevant or not. Please test with either the latest rawhide you have access to or F11 and let us know whether you are still seeing this problem. Thanks for the bug report.