Red Hat Bugzilla – Bug 151653
anaconda fails to bring up raid device whose members have moved
Last modified: 2007-11-30 17:11:02 EST
Although anaconda finds all raid members correctly, it uses a deprecated ioctl
to start raid arrays. It appears that this deprecated ioctl is very dumb, in
that it takes a single member name, and uses information from the superblock to
locate the other members, without verification. As a result, it may bring up
incomplete arrays, fail to bring them up at all, or even bring up unrelated arrays.
As an example, I had two raid 1 arrays, one with two members (say md2), one with
a single member (say md1). This is just a simplified scenario; I ran into the
error with multiple 2+-component devices. md2 had say hda5 and sda5 as
components; md1 had say sdb5 as the only component. mdadm --examine confirmed
the array memberships.
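For reference, this is roughly how the memberships can be verified (device names are the ones from the example above; mdadm reads each partition's RAID superblock directly):

```shell
# Print the RAID superblock of each candidate member. The array UUID
# and the "this" device slot line show which array a partition belongs
# to, regardless of what the kernel currently calls the disk.
mdadm --examine /dev/hda5 /dev/sda5   # both should report md2's array UUID
mdadm --examine /dev/sdb5             # should report md1's array UUID
```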
As it turned out, I recabled the box such that sda became sdb and vice-versa.
From that point on, anaconda refused to re-install the box because one of the
arrays was degraded. When I issued `raidstart /dev/md2' from rescue mode,
starting without any running arrays, it brought up not only a degraded md2 with
hda5 only, but also md1, which was very puzzling.
The reason appears to be that the kernel (or anaconda C-level isys; I haven't
completed my investigation) almost-blindly follows the information it finds in
the superblock of the named member to locate the other members. So, when the
dev nodes for partitions change, the kernel doesn't notice the change; it just
goes ahead bringing up all the arrays whose members are listed in the named
block device. Neat, huh?
I suppose that's why this ioctl is said to be deprecated. Could we perhaps
change anaconda to use mdadm to bring up raid devices, instead of duplicating
this functionality incorrectly in its own code? I may try to code this up if
you agree with the general idea.
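As a sketch of what I have in mind (these exact invocations are a suggestion, not anaconda's current code), assembly by array UUID makes mdadm scan superblocks instead of trusting the stale device list in one member:

```shell
# Deprecated approach: trusts the member list recorded in one
# superblock, which breaks when device nodes are renamed.
#   raidstart /dev/md2
#
# mdadm approach: identify the array by UUID and scan all block
# devices for matching members, so hda5 is found even if sda/sdb
# have swapped names. <uuid-of-md2> is a placeholder.
mdadm --assemble /dev/md2 --uuid=<uuid-of-md2> --scan

# Or simply assemble every array whose members can be found:
mdadm --assemble --scan
```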
The idea doesn't sound awful. If you're volunteering to do it, go right ahead ;)
This report targets the FC3 or FC4 products, which have now been EOL'd.
Could you please check that it still applies to a current Fedora release, and
either update the target product or close it?
Fedora Core 3 and Fedora Core 4 are no longer supported. If you could retest
this issue on a current release or on the latest development / test version, we
would appreciate that. Otherwise, this bug will be marked as CANTFIX one month
from now. Thanks for your help and for your patience.
This sounds very much like what I am seeing when I try to install Fedora 7
test 3 on my home system. I have a number of software RAID devices spread
across 4 IDE drives. From Fedora Core 6 to Fedora 7 test 2, these drives
have been renamed as follows:
hde --> sda
hdg --> sdb
hdi --> sdc
hdk --> sdd
As a result, anaconda is unable to see any of my software RAID devices, and
I have been unable to install any Fedora 7 test release. This is going to
bite anyone with software RAID devices on IDE drives very hard.
Seems like it is still an issue.
*** Bug 238926 has been marked as a duplicate of this bug. ***
Does it work when you manually start the MD device?
I can start the devices with the --uuid= and --scan options. Trying to start
them by specifying the partitions gives a "no devices found" error.
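To make the distinction concrete, here is roughly what works and what fails on my system (array UUIDs and partition names are illustrative placeholders):

```shell
# Works: assemble by UUID with --scan, so mdadm locates the members
# itself by reading superblocks from all block devices.
mdadm --assemble /dev/md0 --uuid=<array-uuid> --scan

# Fails with "no devices found": naming the member partitions
# explicitly, presumably because the recorded device names no longer
# match the renamed hdX -> sdX drives.
mdadm --assemble /dev/md0 /dev/sda5 /dev/sdb5
```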
This should be better now that we've switched to using mdadm for
starting/stopping arrays.
(In reply to comment #9)
> This should be better now that we've switched to using mdadm for
> starting/stopping arrays
Is there installation media that can be used to test this?
I just tried installing today's Rawhide. Some progress has been made, but
this is still not completely fixed.
* anaconda is able to use mdadm to start the software RAID devices.
* All of the software RAID devices are listed in Disk Druid, and their sizes
are shown correctly.
* Software RAID devices that are LVM PVs are correctly identified as such,
and the correct VG is listed in the "Mount Point/RAID/Volume" column.
What doesn't work:
* RAID members are not listed.
* Software RAID devices with an ext3 filesystem on them are not correctly
identified as such. Instead they are listed as "software RAID" in the
"Type" column. Disk Druid will not allow me to assign a mount point to
one of these devices unless I also choose to format it.
Since my RAID-1 /boot device is shared between multiple installations, a
reformat is obviously unacceptable. Net result, I'm still unable to install
on this system.
I disagree with closing this bug.
Should be fixed in anaconda-220.127.116.11-1 .
Sorry, .64-1 .
*** Bug 240952 has been marked as a duplicate of this bug. ***
bug 240952 was regarding an error placed in /etc/mdadm.conf by anaconda during a
fresh install of F7T4, not an upgrade. so not 100% certain it really is a dup.
can someone confirm? - i'm a bit lost in the comments on this bug and bug 238926!
Hal-ee-loo-yah! This appears to be truly fixed in the RC. Disk Druid
correctly detected ext3 filesystems on pre-existing software RAID devices
and allows me to use them without a reformat.
The component partitions are still not shown for software RAID devices, but
that's certainly not a blocker.
(In reply to comment #15)
> bug 240952 was regarding an error placed in /etc/mdadm.conf by anaconda
> during a fresh install of F7T4, not an upgrade. so not 100% certain it
> really is a dup.
Creating mdadm.conf files incorrectly is a different problem than this. If we're
still creating mdadm.conf files incorrectly then bug 240952 should be reopened,
but this particular bug is fixed.