Bug 227474
| Summary: | When installing from boot.iso, software raid / ext partitions not recognized | | |
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | Bruno Wolff III <bruno> |
| Component: | anaconda | Assignee: | Anaconda Maintenance Team <anaconda-maint-list> |
| Status: | CLOSED WONTFIX | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | rawhide | | |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2007-02-26 19:19:01 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Bruno Wolff III 2007-02-06 08:45:58 UTC
I tried using rescue mode and was able to mount partitions manually. However, rescue mode did not recognize any previous Fedora installations (of which there were really two). I noticed that /dev/md* names had been created, but I was unable to mount any of the md devices.

I was able to start a more normal install by putting rawhide core on the DVD with a fudged .discinfo file. When I did this and got to the partition layout selection page, all of my raid partitions were detected as being part of md0, when there were actually 7 different raid 1 arrays (though 2 only had one element). Except for the arrays with one element, the FC6 install disk was able to see them properly. The arrays are on IDE drives and the partitions were set up as hd devices, though now they are sd devices. I wouldn't expect that to be a problem, as I thought array elements were assembled based on UUIDs, not device names, but I thought I'd mention it just in case. The version of anaconda is 11.2.0.19-1.

---

I retested with today's rawhide (anaconda-11.2.0.26-1) and saw all of the array elements mashed into the md0 array on the partition layout page. In rescue mode the /dev/md* device nodes were built, but none of them were mounted, and mdadm -D /dev/md0 said that the device wasn't active. I wasn't really expecting a change, as there haven't been any changelog entries suggesting a fix, but I wanted to get a jump on testing (effectively) TEST2 during the freeze (while there was available bandwidth).

---

We don't really support using "advanced" types (be it LVM, RAID, etc.) as the source for a hard drive install, as that would require having all of the code to activate them, scan them, etc. in the first stage. That complication is one part of why the installer is split into two stages.

---

I don't think you understand the problem. I couldn't do any kind of install (i.e. the problem wasn't limited to hard drive installs; that's just what I was doing at the time) because the software raid partitions were not properly grouped. This used to work. Unfortunately I can't retest this now, because there is another bug preventing me from seeing my hard drives at all. I'll leave this closed for now, since I can't verify that the problem is still occurring, and will reopen it if I still see the problem once anaconda/kudzu starts loading the correct module for my hard drive controller.

---

Actually, now that I reread this, I see that this report really covered two different problems. For the original issue, the WONTFIX is reasonable. The potential other problem was mentioned in a follow-up comment and really should have been reported separately. If I do find it still happening when I can test it again, I'll open a new bugzilla for it.

---

I finally was able to retest the raid array devices and don't see a problem. So I was either imagining things or it got fixed.
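
---

For reference on the hd-to-sd renaming question above: md member partitions carry an "Array UUID" in their on-disk superblocks, so assembly is indeed keyed by UUID rather than by device name. A minimal sketch of checking this from a shell (the member device name /dev/sda1 is a hypothetical example):

```sh
# Print the RAID superblock of one member partition; the "Array UUID"
# field identifies which array it belongs to, regardless of hd*/sd* naming.
mdadm --examine /dev/sda1

# Summarize every detected array as an ARRAY line keyed by UUID,
# in the format used by /etc/mdadm.conf.
mdadm --examine --scan
```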
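And for the inactive md0 seen in rescue mode: a hedged sketch of how the arrays could have been reassembled by hand from the rescue shell (assuming a stock mdadm; this is a manual workaround, not something the installer did automatically):

```sh
# mdadm -D reported md0 as not active, so stop that half-assembled array first.
mdadm --stop /dev/md0

# Re-scan the member superblocks and assemble each array under its own md device.
mdadm --assemble --scan

# The members should now be grouped by Array UUID; verify before mounting.
mdadm --detail /dev/md0
mount /dev/md0 /mnt
```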