Red Hat Bugzilla – Bug 237456
Existing LVM, Software Raid, and SATA disks not detected
Last modified: 2007-11-30 17:12:02 EST
Description of problem: Trying to do a new fresh install from rawhide of
20070422 via network to a pre-existing software raid array. Selected manual
partitioning and saw that existing LVs, software raid arrays and promise sata
drives not detected or were improperly detected. The installation was attempted
on a machine used for testing and had a currently up-to-date rawhide
installation in one of the LVs in which the new /dev/sd* scheme works fine and
all raids and LVs are seen properly. Attaching LSPCI, PVSCAN, LVSCAN and MDADM
output from the working rawhide installation and also LSPCI, PVSCAN and LVSCAN
from the attempted new install of rawhide. MDADM reported no arrays.
In addition, the Promise SATA drives looked like they were detected as a
BIOS-enabled RAID array; however, they are not. They are configured as
individual IDE devices.
Version-Release number of selected component (if applicable): rawhide 20070422
Steps to Reproduce:
1. boot boot.iso cd
2. select new installation
3. select mirror (duke)
4. proceed to setting up the partitions manually
5. observe that existing RAID arrays and LVs are not detected
6. quit and BZ
Actual results: aborted installation
Expected results: normal Fedora installation on pre-existing RAID array
Created attachment 153271 [details]
LSPCI, PVSCAN, LVSCAN and MDADM output of working rawhide installation
Created attachment 153273 [details]
LSPCI, PVSCAN and LVSCAN from attempted install of rawhide of 20070422
Created attachment 153286 [details]
/proc/partitions of working rawhide installation
Forgot to mention that /proc/partitions is very different between the working,
established rawhide installation and the attempted new installation of rawhide
20070422. The attached file is from the existing working rawhide, and attachment
153273 already has /proc/partitions of the attempted install of 20070422.
I was going to attempt to get the raid arrays working with rawhide 20070423 by
copying the working installation mdadm.conf to a location accessible to the
installer and then doing a mdadm --assemble --scan --config=/wheremdadm.confis;
however, with the differences in /proc/partitions, there is a good chance I
would render the arrays unusable for existing installations.
You mentioned on fedora-test-list:
"Strange thing is, I have a rawhide installation on the same machine that
I have kept up-to-date since FC6 final days and the disk detection and
libata work fine."
That makes it sound to me like it isn't a kernel bug, but possibly a bug in
anaconda, lvm, or dmraid.
Created attachment 153305 [details]
dmesg of failed rawhide install
Apologies for not doing this with rawhide of 20070422. Tried rawhide of
20070423 and hit the same issues as before, but dmesg is attached. Seems suggestive.
Also note that some raid arrays were created, but the partitioner indicated
that the file system was foreign. I have seen this before when devices from
working pre-existing arrays are mismatched. dmesg of working rawhide
installation does not show any errors.
The lockdep errors are already filed in another bug, but that's not what's
preventing your install. This seems to be confusion over which members of the
raidset go where, judging by the chaos that happens when you start the array.
What does your /etc/mdadm.conf look like?
Created attachment 153329 [details]
mdadm.conf from working rawhide installation
Per request in #6, /etc/mdadm.conf attached.
Update with net install attempt of rawhide of 20070424.
Using nompath parameter allows the promise sata devices to be seen individually
instead of as a bios assembled array. However, only one LV was seen and not all
software raid arrays were seen. The ones seen still show the filesystem as foreign.
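For anyone reproducing this, nompath is passed as a boot option at the installer prompt (a sketch; the exact prompt label varies by boot image):

```
boot: linux nompath
```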
So, pressed back until opening install screen was up, went to console 2, mounted
a drive where I had put a valid mdadm.conf and did a mdadm --assemble --scan
--config=/temp/mdadm.conf and all arrays came up. Then did an lvm vgchange -ay
and all LVs came up. Back to console 6 and the partitioner now saw all LVs and
all software raid arrays and NO foreign file systems. I did not continue with
the install since I had to "kludge" things to this point and I didn't know if
damage would be done. On rebooting to the working rawhide installation, I
noticed that some of the 3rd devices in some of the raid5 arrays were missing,
but they re-added ok. Hope this helps.
For the mdadm.conf file I changed DEVICE partitions to DEV /dev/sd*. No other
changes from the attachment in #7.
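The console-2 workaround above can be written out as a short shell sketch. The path /temp/mdadm.conf is illustrative (it stands in for wherever the known-good config was mounted), and the commands are built as strings and echoed as a dry run rather than executed, since assembling arrays with the wrong members can damage existing installations; pipe the output to sh to run for real.

```shell
# Sketch of the tty2 workaround described above.
# /temp/mdadm.conf stands in for wherever the known-good config was mounted.
CONF=/temp/mdadm.conf

# Build the commands as strings; echo them as a dry run because assembling
# arrays with a wrong config can render them unusable.
ASSEMBLE="mdadm --assemble --scan --config=$CONF"
ACTIVATE="lvm vgchange -ay"

echo "$ASSEMBLE"   # brings up every array listed in the config
echo "$ACTIVATE"   # activates all LVs so the partitioner can see them
```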
Created attachment 155311 [details]
anaconda dump file from HTTP install of rawhide of 20070523
Based on W. Wood's list message that mdadm is now used, tried an HTTP install and
encountered an anaconda exception when it began looking for existing
installations. Reran with the nompath parm, but it still failed. This is actually a
regression over FC7T4, where you could at least work around the software raid
problems, but now the install fails outright.
Also, don't know why this isn't a blocker.
Is this still broken in today's rawhide? This appears to be a different symptom
of the bug fixed in bug 151653.
Haven't found an updated anaconda yet. No msg in development list indicating
rawhide has been updated. Fedora development dir at redhat has .63 version.
Will try as soon as rawhide is updated.
I really want to wait for a rawhide update msg, but did find a mirror that had
the .64 version of anaconda listed. The images dir and files are dated 23 May,
so don't know if they include the new version of anaconda. Anyway, tried an
HTTP install and got an anaconda traceback. How do you determine the version of
anaconda during an install? I tried -v and --version at console 2, but they
aren't valid parms. Looked at the log, but didn't see any version information.
Looked at the traceback file, but didn't see any version information.
FC7RC2 DVD tested, and with the nompath parm added all pre-existing software RAID
and LVs are seen. Without nompath, the Promise raid controller (which is *NOT*
configured as RAID in the BIOS) is treated as a multipath controller and not
all software raid devices are correctly detected. I don't know what is involved
in determining the BIOS setting for these controllers--probably no standard across
the many BIOSes?--but maybe an option should be given to allow the user to select?
Also, I changed the mdadm.conf file to DEV /dev/sd* and used UUIDs instead of
major-minor numbers. I find UUIDs more precise, and they are consistent across
reboots.
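A minimal sketch of what such an mdadm.conf can look like (the md device names and UUIDs below are made-up placeholders; the real ARRAY lines come from running mdadm --examine --scan on the working system):

```
DEVICE /dev/sd*
ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
ARRAY /dev/md1 UUID=84788b68:1bb79088:9a73ebcc:2ab9363e
```

Identifying arrays by UUID sidesteps the device renaming that the new /dev/sd* scheme introduces.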
Nice job tho, install went fine after nompath. I consider this BZ closed, but
will leave final determination up to y'all.
I think we'll consider this bug closed - if the Promise RAID stuff is a change
in behavior from FC6, you might file that as a separate bug.
Thanks for your help and patience.