Red Hat Bugzilla – Bug 227474
When installing from boot.iso, software raid / ext partitions not recognized
Last modified: 2007-11-30 17:11:56 EST
Description of problem:
I am trying to install using today's boot.iso file (post 2.6.20 kernel in
rawhide) and when I try to use a hard disk install, it can't mount any of my
partitions. These partitions are all ext3 on software raid. While I can't get a
shell prompt at that point to try mounting these manually, I do see errors about
not being able to mount FAT partitions. I would expect that if you were going to
guess one file system format for a Linux raid partition, it would be ext2,
not vfat. It is probably possible to figure out what partition type it is
without guessing, though that might be more work. As long as the partition is
being mounted read-only, the software raid can be ignored without trashing the array.
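The partition type can in fact be determined without guessing: an ext2/3 filesystem carries the magic number 0xEF53 at byte offset 1080 (offset 56 into the superblock, which starts at 1024). A minimal sketch of such a probe, using a scratch image file in place of a real partition (the image and its contents are fabricated here for illustration):

```shell
# Create a 4 KiB scratch image standing in for a partition.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=4 2>/dev/null

# Write the ext magic 0xEF53 (bytes 0x53 0xEF, little-endian; octal
# \123 \357) at offset 1024 + 56 = 1080, as mke2fs would.
printf '\123\357' | dd of="$img" bs=1 seek=1080 conv=notrunc 2>/dev/null

# Probe: read the two magic bytes back and compare.
magic=$(dd if="$img" bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1 | tr -d ' ')
if [ "$magic" = "53ef" ]; then
    echo "ext2/3 superblock magic found"
else
    echo "no ext magic; try other probes (vfat, etc.)"
fi
rm -f "$img"
```

The same read could be done against /dev/sdXN directly; mounting read-only after a successful probe leaves the raid metadata untouched.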
Version-Release number of selected component (if applicable):
How reproducible:
I tried it a couple of times and got the same result.
Steps to Reproduce:
1. Boot off boot.iso
2. Select hard drive for iso image location
3. Try to use a partition marked as a software raid partition
Actual results:
It was unable to find any directories on the selected partitions. Error messages
in one of the other windows suggested that the partition was mounted as a vfat
device and that the mount failed.

Expected results:
That it would find the specified directory.
I tried using rescue mode and I was able to mount partitions manually.
However rescue mode did not recognize any previous Fedora installations (of
which there were really two). I noticed that /dev/md* names had been created, but
I was unable to mount any of the md devices.
I was able to start a more normal install by putting rawhide core on the DVD with
a fudged .discinfo file. When I did this and got to the partition layout
selection page, all of my raid partitions were detected as being part of md0
when there were actually 7 different raid 1 arrays (though 2 only had one
element). Except for the arrays with one element, the FC6 install disk was able
to see them properly.
The arrays are on ide drives and the partitions were set up as hd devices,
though now they are sd devices. I wouldn't expect that to be a problem as I
thought array elements were assembled based on UUIDs, not device names, but just
in case, I thought I'd mention it.
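For reference, md arrays are indeed matched by the UUID stored in each member's raid superblock, not by device name, so the hd-to-sd rename should be harmless. An illustrative /etc/mdadm.conf fragment (the UUID below is made up, not taken from this system); real lines can be generated with `mdadm --examine --scan`:

```
# Members are matched by superblock UUID, so they can move freely
# between hda*/sda* names.  (UUID is illustrative only.)
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=6b8b4567:327b23c6:643c9869:66334873
```

In rescue mode, `mdadm --assemble --scan` should then activate every array it can find, regardless of the current device names.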
The version of anaconda is: 126.96.36.199-1
I retested this with today's rawhide (anaconda-188.8.131.52-1) and saw all of the array
elements mashed into the md0 array on the partition layout page. In rescue mode
the /dev/md* device nodes were built, but none of them were mounted. mdadm -D
/dev/md0 said that the device wasn't active.
I wasn't really expecting a change, as there haven't been any Changelog entries
suggesting a fix, but I wanted to get a jump on testing (effectively) TEST2
during the freeze (while there was available bandwidth).
We don't really support using "advanced" types as the source for a hard drive
install (be it LVM, RAID, etc) as that would then require having all of the code
to activate and scan them in the first stage. Avoiding that complication is
one reason the installer is split into two stages.
I don't think you understand the problem: I couldn't do any kind of install
(i.e. the problem wasn't limited to hard drive installs, that's just what I was
doing at that time) because the software raid partitions were not properly
grouped. This used to work.
Unfortunately I can't retest this now, because there is another bug preventing
me from seeing my hard drives at all.
I'll leave this closed for now, since I can't verify that the problem is still
occurring and will reopen it if I still see the problem once anaconda/kudzu
starts loading the correct module for my hard drive controller.
Actually, now that I reread this, I see that this report really covered two
different problems. For the original issue, the WONTFIX resolution is reasonable. The
potential other problem was mentioned in a followup comment and really should
have been reported separately. If I do find it still happening when I can test
it again, I'll open a new bugzilla for it.
I finally was able to retest the raid array devices and don't see a problem. So
I was either imagining things or it got fixed.