Red Hat Bugzilla – Bug 505872
Fedora 11 failed to install on system with fake RAID in striping mode
Last modified: 2009-10-05 09:33:34 EDT
Description of problem: Failed to install
Version-Release number of selected component (if applicable):
How reproducible: Always reproducible
Steps to Reproduce:
1. Install Fedora 11 via NFS on a computer with a fake RAID controller, two identical SATA disks, and the RAID set to 'Striping' mode
Actual results: Exception in installer
Expected results: Complete install
Created attachment 347819
Created attachment 347820
There are two separate problems here: one is RAID-specific and the other is a general partitioning issue. The general partitioning bug may be the same as another report (505887), so I will ask dlehman to look at it when he's back from vacation, and if they're the same problem, use that bug for that problem.
OK, I upgraded from Fedora 10 through the automated process on an HP ML310 (fake RAID, set to RAID 0). I also noticed that USB failed, possibly because of the Bluetooth USB adapter; Bluetooth also failed.
I had a similar problem, when installing Fedora 11 from the bootable CD-ROM onto a system with 6 drives: a pair of S-ATA drives in a RAID 0 array, two IDE drives as single-drive stripes connected to a separate PCI card, and two more IDE drives connected to the motherboard IDE controller.
The two single-drive stripes sound similar to the described situation; the difference in my case is that I was able to install Fedora onto the RAID 0 array, but then not able to mount the other four drives.
Fedora 11 recognized the RAID 0 array connected to the Intel ICH9R southbridge chip on the Gigabyte GA-P35-DS4 motherboard, consisting of two striped 320 GB Samsung HD321K SATA drives. I installed onto this array.
However, LVM (the logical volume manager) did not properly handle a pair of IDE drives connected to a separate Promise FastTrak TX2000 PCI card, a RAID controller that can control up to four drives. Two 120 GB ATA/133 Maxtor DiamondMax Plus 9 drives were connected to it on separate cables, both set as master and each configured as a stripe consisting of a single drive.
What I had actually wanted was just an additional plain old IDE controller, but the PCI card insists on configuring the drives as RAID drives, leaving me no choice other than to create two single-drive stripes if I want to mount them separately.
LVM added entries for these drives to /dev/mapper, but it was not possible to mount them. The /dev directory had entries for the drives, but not for their partitions. Running "fdisk -l" showed the drive information as expected, and running "partprobe" added the partition entries to /dev; however, they still could not be mounted afterwards. Any attempt to mount or reformat the drives resulted in an error message that the device was busy. Yet neither "lsof" nor "fuser" indicated any user, so presumably LVM (or device-mapper) was holding the devices open and blocking any mounting.
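The diagnostic sequence above can be sketched as a few commands. This is a minimal illustration, not taken from the report itself; the device name /dev/sdc and the mapping name in the last step are hypothetical examples, and the dmsetup/ls steps assume the "device busy" error comes from a stale device-mapper (dmraid/LVM) mapping holding the disk.

```shell
# Show the partition table the kernel read from the disk
# (/dev/sdc is a hypothetical example device):
fdisk -l /dev/sdc

# Ask the kernel to re-read the partition table so /dev/sdc1 etc.
# appear under /dev:
partprobe /dev/sdc

# If mounting still fails with "device is busy" and lsof/fuser show
# no user, check whether device-mapper has claimed the whole disk:
dmsetup ls
ls /sys/block/sdc/holders/

# If a stale dmraid/LVM mapping is listed, removing it frees the disk
# (the mapping name below is purely an example):
# dmsetup remove pdc_example_stripe
```

A non-empty holders/ directory for the whole disk is the usual sign that device-mapper, rather than a mounted filesystem, is what keeps the partitions busy.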
The problem with LVM also prevented the two other drives from being properly seen or mounted. Another pair of Maxtor drives was installed in removable bays, connected to the IDE controller on the motherboard. As with the drives connected to the PCI card, the partitions of one or both of them were not shown in /dev. At one point during multiple installation attempts, one of them was mountable, but never both. Removing the PCI card solved the problem, and the drives in the removable bays could then be mounted.
*** This bug has been marked as a duplicate of bug 504829 ***