From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.4.1)
Description of problem:
Ran the Fedora test3 anaconda installer, selected fresh install, on a
system with four hard drives. The hard drives had been used before
and contained two pre-existing software RAID 0 setups (plus some other
partitions). When I clicked the "RAID" button in Disk Druid, I got the
following:
Exception Occured: An unhandled exception has occurred. This is most
likely a bug. Please copy the full text of this exception and file a
detailed bug report against anaconda at
Traceback (most recent call last):
File "/usr/lib/anaconda/lw/partition_gui.py", line 1210, in makeraidCB
File "/usr/lib/anaconda/partitions.py", line 425, in
ValueError: list.remove(x): x not in list
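For context, the ValueError at the bottom of the traceback is standard Python behavior: list.remove(x) raises it whenever x is not present in the list. A minimal sketch of the failure mode (the variable names here are illustrative, not anaconda's actual ones):

```python
# list.remove(x) raises ValueError when x is absent -- the same
# failure anaconda hit when a RAID member partition it expected
# to track was not in its internal list.
raid_members = ["sda3", "sdb3"]
raid_members.remove("sda3")        # fine: element is present

try:
    raid_members.remove("hdc1")    # not in the list
except ValueError as err:
    print(err)                     # -> list.remove(x): x not in list

# A defensive form that avoids the exception:
if "hdc1" in raid_members:
    raid_members.remove("hdc1")
```

The fix on anaconda's side presumably needs to handle partitions that belong to a stale or conflicting RAID set rather than just guarding the remove() call, but the guard shows why the traceback ends where it does.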
- - -
(Note also: incorrect / inconsistent spelling of Occurred)
Version-Release number of selected component (if applicable):
Fedora Core test3
How reproducible: Didn't try; expect it would occur every time.
Steps to Reproduce:
1. Install Fedora Core test 3 and configure a software RAID.
2. Add two more hard drives, which already contain a software RAID of
their own, to the system.
3. Start reinstalling Fedora Core test 3 on the system.
4. Select manual partitioning with Disk Druid, click the RAID button.
Actual Results: Exception as noted above
Expected Results: No exception
What were the software raid devices preconfigured on the system?
Hello. Here's a more detailed explanation:
From the first installation of Fedora Core on this machine (before I
added the second two disks), I had a RAID 0 configured as /dev/md0.
This was on two 36 GB SATA disks. Each of the two SATA disks had a
swap partition and a regular partition in addition to the raid partition.
Then I added the two more disks. These were PATA disks, transferred
from my old computer. They each had a single partition, and on the
old computer they were also configured as /dev/md0.
So, when I started to reinstall Fedora Core on the resulting system,
it autodetected the two previous RAID0 collections. However, it
seemed to get confused, since obviously there were four partitions
involved, and they couldn't all be from the same RAID0...
I worked around the problem by doing a Ctrl-Alt-F1 from the installer
to get to a command prompt, and then wiped the partition tables on the
SATA disks using:
"dd if=/dev/zero of=/dev/sda bs=512 count=1"
"dd if=/dev/zero of=/dev/sdb bs=512 count=1"
Then I rebooted and restarted the install... it worked fine that time
around since there was only the one pre-existing RAID setup to find.
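The dd commands above zero only the first 512-byte sector of each disk, which holds the MBR partition table; once the partitions are gone, the installer no longer sees the stale RAID member partitions. A non-destructive illustration of the same command against a scratch file image rather than a real disk (disk.img is a stand-in for /dev/sda; conv=notrunc is needed here only because the target is a regular file, not a block device):

```shell
# Build a 1 MiB scratch image filled with non-zero data,
# standing in for a used disk.
dd if=/dev/urandom of=disk.img bs=512 count=2048 2>/dev/null

# Zero only the first 512-byte sector, as in the workaround.
dd if=/dev/zero of=disk.img bs=512 count=1 conv=notrunc 2>/dev/null

# Verify: the first sector is now all zero bytes.
cmp -s <(head -c 512 disk.img) <(head -c 512 /dev/zero) \
  && echo "first sector zeroed"
```

A more targeted alternative, if mdadm is available from the rescue prompt, would be mdadm --zero-superblock on the individual member partitions, which erases only the RAID metadata and leaves the partition table intact.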
My suggestion would be:
- Have anaconda detect invalid RAID (or LVM?) configurations before
entering the Disk Druid installer
- If something strange shows up in the existing partition tables, give
the user an option to ignore or wipe out existing partitions, on a
Hope that helps.
Okay, that matches what I was guessing. Fixed in CVS.