Bug 231453 - kickstart raid install fails with ValueError: md2 is already in the mdList
Status: CLOSED DUPLICATE of bug 172648
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: anaconda (Show other bugs)
Version: 4.4
Hardware: i686 Linux
Priority: medium  Severity: high
Assigned To: Anaconda Maintenance Team
Depends On:
Blocks:
Reported: 2007-03-08 09:01 EST by Dave Botsch
Modified: 2007-11-16 20:14 EST
CC List: 0 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2007-03-28 14:53:17 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Dave Botsch 2007-03-08 09:01:48 EST
Description of problem:

A kickstart installation with Linux software raid partitions fails when using the
anaconda-ks.cfg generated by the original by-hand install. The error given is
ValueError: md2 is already in the mdList. It occurs during the
partition-formatting stage of the install.

Version-Release number of selected component (if applicable):
rhel4u4

How reproducible:
100% on test system

Steps to Reproduce:
1. Do a normal by-hand install. Create several (say 4) RAID 1 partitions.
2. Grab the generated anaconda-ks.cfg and uncomment the partition sections.
3. Try to reinstall using this ks file. No joy.
  
Actual results:

The error message: ValueError: md2 is already in the mdList

Expected results:

The system installs successfully.

Additional info:

Partition info from the anaconda-ks.cfg file:
clearpart --all
part raid.8 --size=1024 --ondisk=sdb --asprimary
part raid.20 --size=1024 --ondisk=sdc --asprimary
part raid.21 --size=10000 --ondisk=sdc
part raid.9 --size=10000 --ondisk=sdb
part raid.22 --size=1024 --ondisk=sdc
part raid.11 --size=1024 --ondisk=sdb
part swap --size=100 --grow --ondisk=sdc --asprimary
part swap --size=100 --grow --ondisk=sdb --asprimary
part raid.35 --size=100 --grow --ondisk=sde
part raid.32 --size=100 --grow --ondisk=sdd
part /backup --fstype ext3 --size=100 --grow --ondisk=sda
part /vicepa --fstype ext3 --size=100 --ondisk=sda --asprimary
part raid.23 --size=100 --grow --ondisk=sdc
part raid.13 --size=100 --grow --ondisk=sdb
raid /boot --fstype ext3 --level=RAID1 raid.8 raid.20
raid /scratch --fstype ext3 --level=RAID1 raid.32 raid.35
raid /var --fstype ext3 --level=RAID1 raid.9 raid.21
raid /cache --fstype ext3 --level=RAID1 raid.11 raid.22
raid / --fstype ext3 --level=RAID1 raid.13 raid.23
Comment 1 Dave Botsch 2007-03-13 16:50:23 EDT
Kickstart seems to be applying its own md numbering scheme and somehow
renumbering devices that it has already numbered, or that I have numbered
myself.

For example, if I specify --device=mdx in each of the raid kickstart commands,
I end up with a system that doesn't install: the partitions belonging to
/dev/md4 (which are /dev/md4 according to /proc/mdstat) may get renumbered
after the fact as /dev/md1 ... clearly this can't work.
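For reference, explicit devices in the raid lines from the config above would
look roughly like this (the md0-md4 assignments here are arbitrary choices for
illustration, not the numbering anaconda actually produced):

raid /boot    --fstype ext3 --level=RAID1 --device=md0 raid.8  raid.20
raid /        --fstype ext3 --level=RAID1 --device=md1 raid.13 raid.23
raid /var     --fstype ext3 --level=RAID1 --device=md2 raid.9  raid.21
raid /cache   --fstype ext3 --level=RAID1 --device=md3 raid.11 raid.22
raid /scratch --fstype ext3 --level=RAID1 --device=md4 raid.32 raid.35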
Comment 2 Dave Botsch 2007-03-13 17:23:04 EDT
I seem to have found the solution...

Before attempting to reinstall with kickstart, boot into rescue mode and use
fdisk to delete all of the existing partitions.

So the clearpart command in the kickstart file does not seem to do the right
thing when software raid partitions are already present on the disks.
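If booting into rescue mode by hand is impractical, the same clearing could
presumably be scripted in a %pre section. A rough sketch, assuming dd is
available in the installer environment and that sdb through sde are the disks
whose old raid partitions need to go (adjust to the real hardware):

%pre
# Zero the MBR/partition table on each former RAID member disk so that no
# stale software-raid partitions are visible when clearpart and part run.
# (In RHEL4 kickstart the %pre section simply runs until the next section;
# there is no %end.)
for disk in sdb sdc sdd sde ; do
    dd if=/dev/zero of=/dev/$disk bs=512 count=1
done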
Comment 3 Chris Lumens 2007-03-28 14:53:17 EDT

*** This bug has been marked as a duplicate of 172648 ***
