Bug 194508 - anaconda stops a kickstart installation with software raid1
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: anaconda
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Assigned To: Chris Lumens
Mike McLean
Depends On:
Reported: 2006-06-08 12:39 EDT by Ivan Kondov
Modified: 2007-11-30 17:07 EST

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2006-06-20 11:01:06 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
anaconda dump file (702.13 KB, text/plain)
2006-06-08 12:39 EDT, Ivan Kondov

Description Ivan Kondov 2006-06-08 12:39:55 EDT
Description of problem:

Version-Release number of selected component (if applicable): anaconda-, with parted-1.6.19

How reproducible: Always

Steps to Reproduce:
1. Have two hard disks and add the following lines to the kickstart file:

clearpart --all
part raid.3 --size=100 --ondisk=sda --asprimary
part raid.4 --size=100 --ondisk=sdb --asprimary
part raid.10 --size=15000 --ondisk=sdb
part raid.9 --size=15000 --ondisk=sda
part raid.13 --size=5000 --ondisk=sdb
part raid.12 --size=5000 --ondisk=sda
part raid.17 --size=3000 --ondisk=sdb
part raid.15 --size=3000 --ondisk=sda
part swap --size=1000 --ondisk=sdb
part swap --size=1000 --ondisk=sda
# part raid.19 --size=120000 --ondisk=sdb --grow
# part raid.18 --size=120000 --ondisk=sda --grow
raid /boot --fstype ext3 --level=RAID1 raid.3 raid.4
raid / --fstype ext3 --level=RAID1 raid.9 raid.10
raid /tmp --fstype ext3 --level=RAID1 raid.12 raid.13
raid /var --fstype ext3 --level=RAID1 raid.15 raid.17
# raid /home --fstype ext3 --level=RAID1 raid.18 raid.19

2. Boot with PXE and load the KS from the network (should work from a diskette as well).

3. Anaconda stops with the error message in the bottom line (see attached dump
file): ValueError: md3 is already in the mdList
Actual results: A dialog window whose only options are to save the logs and reboot

Expected results: Continue installation process

Additional info: If we uncomment the three commented lines above, the
installation is successful. The completely uncommented version works with
Comment 1 Ivan Kondov 2006-06-08 12:39:55 EDT
Created attachment 130763 [details]
anaconda dump file
Comment 2 Chris Lumens 2006-06-20 11:01:06 EDT
This will be fixed in the next release of RHEL, and should also be fixed in FC5
and beyond.  Please test one of those if at all possible.  If you need this fix
in an update release of RHEL, please talk to your support person who will raise
it through the appropriate channels.

I had to very slightly modify your kickstart file to get it to work with the
kickstart rewrite that's present in FC5 and later, but you can use ksvalidator
to tell you what's wrong (or just try to install - the error messages are
actually helpful now).
Comment 3 Curtis Doty 2007-01-08 14:48:53 EST
I've been having this bug since forever. And just test-ran the same kickstart
with an md array on fc6 and it still occurs. I have always had to use two ugly
workarounds:

#1 adjust the size of the physical partitions so anaconda doesn't accidentally
find the old md magic

#2 dd if=/dev/zero over the disk in order to scramble all signs of the existing raid
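Workaround #2 can be demonstrated safely on a scratch file instead of a real disk. The sketch below is my own illustration, not the reporter's exact command: it zeroes the first and last MiB of a stand-in "device", which covers the regions where md 1.x superblocks (near the start) and 0.90 superblocks (near the end) live. On real hardware the target would be the actual /dev/sdX, or `mdadm --zero-superblock` per partition.

```shell
# Scratch file standing in for a disk such as /dev/sdb (hypothetical target).
disk=$(mktemp)
dd if=/dev/urandom of="$disk" bs=1M count=4 status=none

# md 1.1/1.2 superblocks sit near the start of the device, 0.90 near the
# end, so zero the first and the last MiB of the 4 MiB stand-in.
dd if=/dev/zero of="$disk" bs=1M count=1 conv=notrunc status=none
dd if=/dev/zero of="$disk" bs=1M count=1 seek=3 conv=notrunc status=none

# Verify: the first MiB now contains only zero bytes.
nonzero=$(head -c 1048576 "$disk" | tr -d '\0' | wc -c)
if [ "$nonzero" -eq 0 ]; then result=zeroed; else result=dirty; fi
echo "$result"
rm -f "$disk"
```

Zeroing only these regions is much faster than wiping the whole disk while still removing the magic that anaconda's probe would otherwise rediscover.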

Adjusting partition types and other stuff never works. I presume that after
anaconda has written out the "new" partition table, it accidentally finds an old
md superblock in exactly the right place and loads it. And thus, md0 or whatever
is already in mdList by the time anaconda tries to create it afresh.
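The "exactly the right place" is predictable: the md 0.90 format stores its superblock at the last 64 KiB-aligned offset of the device, minus 64 KiB. The sketch below (my own illustration, not anaconda code) computes that offset; two partitions of the same size, such as the 15000 MB pair in the kickstart above, probe the identical offset, which is why stale metadata there is re-detected.

```shell
# Byte offset of an md 0.90 superblock: round the device size down to a
# 64 KiB boundary, then step back one 64 KiB reservation.
md090_offset() {
  echo $(( ($1 / 65536) * 65536 - 65536 ))
}

# A 15000 MB partition, as in the kickstart above (size in bytes).
size=$((15000 * 1024 * 1024))
md090_offset "$size"
```

So unless that region is wiped, recreating a partition of the same size puts the new array's superblock slot right on top of the old one.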

I cannot re-open this bug. But I can confirm that it was not fixed in fc5 or
fc6. Please re-open it, fix, and....TEST. The whole point of anaconda/kickstart
is that it makes such work almost effortless.
