Description of problem:
Background: I set up two identical SATA drives as a RAID 1 set using my onboard SiI 3112 RAID controller. I wiped out the existing non-LVM Linux partition with anaconda via the F7test3 LiveCD installer. I selected the defaults all the way through (wipe the existing Linux partition, set up the default partition layout), and when I rebooted I found that it had not rewritten the boot sector, so the machine failed to boot. I have an unusual boot setup (NTLDR on the MBR booting a grub image on the Windows partition, which in turn boots Linux), and I didn't really expect the installer to handle it. Since I wanted a fresh start, I went back into the LiveCD to reinstall, this time enabling the option to install grub.

Problem: However, when re-entering anaconda, it can no longer mount my mirrored set and gives "Error opening /dev/mapper/sil_ahadejcacefa: No such device or address" right after the language selection. I backed up and went forward again, and it just listed my two drives individually. I selected both for the install, chose sda as the boot device, and anaconda hit an unhandled exception (see attached anaconda_exception.txt).

Version-Release number of selected component (if applicable):
F7test3 LiveCD

How reproducible:
I cannot restart the installation to reset my system partitions.

Steps to Reproduce:
1. Set up a RAID 1 set with the SiI 3112 controller.
2. Install F7 using the LiveCD, choosing the default LVM partitioning (the RAID set is detected and used properly).
3. Try to reinstall F7 using the LiveCD.

Actual results:
The reinstall fails to see the RAID set the second time, and the installation is blocked.

Expected results:
The RAID set should be detected properly again and the installation should proceed normally.

Additional info:
I noticed that this time, on booting the LiveCD, there was an issue with LVM reporting duplicate IDs on sda2 and sdb2. I am not sure why the LiveCD was even concerned with the LVM volumes on the hard drives, and I cannot find the error in dmesg or /var/log/messages.
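For reference, this is roughly how I check whether the BIOS RAID set has been picked up, from a root terminal on the LiveCD. These are standard dmraid/device-mapper commands; the set name (sil_ahadejcacefa on my box) will differ on other hardware:

  # list the block devices that carry BIOS RAID metadata
  dmraid -r
  # show the RAID sets dmraid has discovered
  dmraid -s
  # activated sets should show up as device-mapper nodes here
  ls /dev/mapper/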
Created attachment 151364: Anaconda unhandled exception
I blew away the Linux partition using parted, and the install was able to proceed normally again. I reinstalled the default LVM layout, but this time I made sure the option to install grub on the RAID set was selected ... it didn't, and the existing boot loader was still being used. I will open another bug to cover this issue (Bugzilla Bug 234724).

The exact LVM error that occurs when the LiveCD is booted after the install is:

"Setting up Logical Volume Management: Found duplicate PV {big long ID}: using /dev/sdb3 not /dev/sda3
2 logical volume(s) in volume group "VolGroup00" now active"

Is LVM not RAID-aware? If not, then anaconda should not be using an LVM layout on RAID devices. Also, the LiveCD shouldn't be detecting and messing with existing LVM volumes on the hard disk.
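For anyone hitting the same wall, this is roughly the parted cleanup I used. The device and partition number are only examples from my layout and need to be adjusted, and of course removing the partition destroys its contents:

  # inspect the partition table on the first mirrored disk
  parted /dev/sda print
  # remove the Linux/LVM partition (partition 3 here is only an example)
  parted /dev/sda rm 3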
If I activate the RAID set manually using 'dmraid -ay', anaconda will detect the set properly again after language selection. So somehow the presence of an LVM filesystem on the existing system is blocking the activation of the dmraid set when anaconda starts from the LiveCD. On an initial installation onto blank disks, or when there is only a non-LVM partition layout on the disks, activation of the dmraid set is not blocked.
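In practice the workaround looks something like this, run as root from a terminal on the LiveCD before restarting anaconda (the set name is the one from my system):

  # activate all BIOS RAID sets that dmraid can find
  dmraid -ay
  # confirm the mirror's device-mapper node now exists
  ls -l /dev/mapper/sil_ahadejcacefa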
Hrmmm... this has worked for me previously. I need to beat my dmraid box into working again this afternoon to get to the bottom of this.
To be clear, anaconda run from the LiveCD *does* detect the dmraid RAID 1 set to install to, provided that there are no LVM partitions on it. There is the error on LiveCD boot noted in Comment #2 that isn't present when there are no LVM partitions. That may indicate that LVM detection knows naught of dmraided drives and somehow messes up anaconda's activation of the set. On the first installation, if I don't accept the default LVM partitions and instead put the old-school boot/root/swap on normal partitions, anaconda re-detects and activates the dmraid set just fine, no matter how many subsequent installations I try. Of course, grub doesn't boot it either, but that feedback is left for my other bugs re: booting dmraid with grub.
This is still present in the F7 test 4 LiveCD. The 'dmraid -ay' workaround still works.
I can't reproduce this on my dmraid box (which is ichraid, but should be the same as far as how the low-level bits work). Since you have a workaround and it hasn't been otherwise reported, dropping to target.
I can reproduce this on a Dell PE 650 with a Promise SATA RAID controller and two SATA drives. I let the FC 7 (final) Live CD install the default LVM partition layout on a hardware-mirrored array (RAID 1). On boot, the Live CD detected the array. During formatting an LVM error occurred and anaconda hung. Upon restart, the RAID set wasn't detected (no /dev/mapper entries). I finally needed to run lvm and remove all LVs, PVs, and VGs to get the RAID set detected. I then did the live install, and on boot the system panics, unable to find the root=LABEL=/1 partition. It doesn't look like the initrd is detecting the array.
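For the record, the LVM cleanup amounted to something like the following. VolGroup00 is the volume group name from the default install mentioned earlier; the partition numbers are only examples and must match the actual PVs on your disks, and this wipes the LVM metadata:

  # remove the logical volumes in the default install's volume group
  lvm lvremove VolGroup00
  # remove the volume group itself
  lvm vgremove VolGroup00
  # clear the LVM labels from the underlying partitions
  lvm pvremove /dev/sda2 /dev/sdb2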
Based on the date this bug was created, it appears to have been reported against rawhide during the development of a Fedora release that is no longer maintained. In order to refocus our efforts as a project, we are flagging all of the open bugs for releases which are no longer maintained. If this bug remains in NEEDINFO thirty (30) days from now, we will automatically close it.

If you can reproduce this bug in a maintained Fedora version (7, 8, or rawhide), please change this bug to the respective version and change the status to ASSIGNED. (If you're unable to change the bug's version or status, add a comment to the bug and someone will change it for you.)

Thanks for your help, and we apologize again that we haven't handled these issues to this point. The process we're following is outlined here: http://fedoraproject.org/wiki/BugZappers/F9CleanUp

We will be following the process here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping to ensure this doesn't happen again.
This bug has been in NEEDINFO for more than 30 days since feedback was first requested. As a result we are closing it. If you can reproduce this bug in the future against a maintained Fedora version please feel free to reopen it against that version. The process we're following is outlined here: http://fedoraproject.org/wiki/BugZappers/F9CleanUp