Bug 231453

Summary: kickstart RAID install fails with ValueError: md2 is already in the mdList
Product: Red Hat Enterprise Linux 4
Reporter: Dave Botsch <botsch>
Component: anaconda
Assignee: Anaconda Maintenance Team <anaconda-maint-list>
Status: CLOSED DUPLICATE
Severity: high
Priority: medium
Version: 4.4
Hardware: i686
OS: Linux
Doc Type: Bug Fix
Last Closed: 2007-03-28 18:53:17 UTC

Description Dave Botsch 2007-03-08 14:01:48 UTC
Description of problem:

A kickstart installation with Linux software RAID partitions fails when using the
anaconda-ks.cfg generated by the original by-hand install. The error given is
ValueError: md2 is already in the mdList. It occurs during the
partition-formatting stage of the install.

Version-Release number of selected component (if applicable):
rhel4u4

How reproducible:
100% on test system

Steps to Reproduce:
1. Do a normal by-hand install. Make several (say 4) RAID 1 partitions.
2. Grab the generated anaconda-ks.cfg and uncomment the partition sections.
3. Try to reinstall using this ks file. No joy.
  
Actual results:

The error message: ValueError: md2 is already in the mdList

Expected results:

The system installs successfully.

Additional info:

Partition info from the anaconda-ks.cfg file:
clearpart --all
part raid.8 --size=1024 --ondisk=sdb --asprimary
part raid.20 --size=1024 --ondisk=sdc --asprimary
part raid.21 --size=10000 --ondisk=sdc
part raid.9 --size=10000 --ondisk=sdb
part raid.22 --size=1024 --ondisk=sdc
part raid.11 --size=1024 --ondisk=sdb
part swap --size=100 --grow --ondisk=sdc --asprimary
part swap --size=100 --grow --ondisk=sdb --asprimary
part raid.35 --size=100 --grow --ondisk=sde
part raid.32 --size=100 --grow --ondisk=sdd
part /backup --fstype ext3 --size=100 --grow --ondisk=sda
part /vicepa --fstype ext3 --size=100 --ondisk=sda --asprimary
part raid.23 --size=100 --grow --ondisk=sdc
part raid.13 --size=100 --grow --ondisk=sdb
raid /boot --fstype ext3 --level=RAID1 raid.8 raid.20
raid /scratch --fstype ext3 --level=RAID1 raid.32 raid.35
raid /var --fstype ext3 --level=RAID1 raid.9 raid.21
raid /cache --fstype ext3 --level=RAID1 raid.11 raid.22
raid / --fstype ext3 --level=RAID1 raid.13 raid.23

Comment 1 Dave Botsch 2007-03-13 20:50:23 UTC
Kickstart seems to be using its own numbering scheme for the md devices, and
somehow renumbers devices it has already numbered, or that I have numbered
explicitly.

For example, if I specify --device=mdX in each of the raid kickstart commands,
I still end up with a system that doesn't install: partitions that belong to
/dev/md4 (according to /proc/mdstat) get renumbered after the fact as
/dev/md1. Clearly this can't work.
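
For illustration, explicit device assignment on the raid lines from the layout
above would look like the following. The md numbers here are hypothetical,
chosen only to show the syntax; this is the variant that still failed:

raid /boot --fstype ext3 --device=md0 --level=RAID1 raid.8 raid.20
raid /scratch --fstype ext3 --device=md4 --level=RAID1 raid.32 raid.35
raid /var --fstype ext3 --device=md2 --level=RAID1 raid.9 raid.21
raid /cache --fstype ext3 --device=md3 --level=RAID1 raid.11 raid.22
raid / --fstype ext3 --device=md1 --level=RAID1 raid.13 raid.23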

Comment 2 Dave Botsch 2007-03-13 21:23:04 UTC
I seem to have found the solution...

Before attempting to reinstall with kickstart, boot into rescue mode and use
fdisk to clear out all partitions.

So the clearpart command in the kickstart file does not seem to do the right
thing when software RAID partitions are already present on the disk.
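
A sketch of automating the same wipe from the kickstart file itself, assuming
the RAID disks from the layout above (sdb through sde) and that zeroing the
first sector of each disk is enough to make the stale RAID partitions
invisible to clearpart, just as deleting them with fdisk does:

%pre
# Hypothetical pre-install wipe: zero the partition table on each disk
# that carried software RAID members, so the installer starts clean.
for disk in sdb sdc sdd sde ; do
    dd if=/dev/zero of=/dev/$disk bs=512 count=1
done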

Comment 3 Chris Lumens 2007-03-28 18:53:17 UTC

*** This bug has been marked as a duplicate of 172648 ***