Bug 172648 - anaconda crashes while trying to keep existing raid partition.
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: anaconda
Version: 4.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Chris Lumens
QA Contact: Mike McLean
Duplicates: 149796 231453
Depends On:
Blocks:
Reported: 2005-11-07 16:24 EST by Sean Dilda
Modified: 2010-10-21 23:39 EDT
CC: 3 users

Fixed In Version: RHBA-2007-0816
Doc Type: Bug Fix
Last Closed: 2007-11-15 11:34:57 EST

Attachments
fix use of device nodes to probe for raid sb (2.16 KB, patch), 2007-03-28 15:36 EDT, Chris Lumens
kickstart file (3.76 KB, text/plain), 2008-08-05 23:36 EDT, Dave Botsch
Description Sean Dilda 2005-11-07 16:24:12 EST
This crash happened while trying to install CentOS 4.1 (essentially RHEL 4u1)
onto a box.  The box previously had a raid0 across hda4 and hdc4.

I reinstalled (not upgraded), and wiped hda[1-3] and hdc[1-3].  I then created a
raid1 for / across hda1 and hdc1, and added swap space. I left the raid0 in
place and didn't give it a mount point.

After package selection, when it started formatting the disks, anaconda crashed.
 I then redid the install, this time deleting the hda4 and hdc4 partitions and
recreating the raid0 in disk druid, and the install worked.

Here is the crash dump.  Unfortunately, the floppy disk I used had some bad
sectors, so it's only a partial dump:

Traceback (most recent call last):
  File "/usr/bin/anaconda", line 1173, in ?
    intf.run(id, dispatch, configFileData)
  File "/var/tmp/anaconda-10.1.1.19//usr/lib/anaconda/text.py", line 510, in run
    dispatch.gotoNext()
  File "/var/tmp/anaconda-10.1.1.19//usr/lib/anaconda/dispatch.py", line 171, in gotoNext
    self.moveStep()
  File "/var/tmp/anaconda-10.1.1.19//usr/lib/anaconda/dispatch.py", line 239, in moveStep
    rc = apply(func, self.bindArgs(args))
  File "/var/tmp/anaconda-10.1.1.19//usr/lib/anaconda/packages.py", line 573, in turnOnFilesystems
    thefsset.makeFilesystems (instPath)
  File "/var/tmp/anaconda-10.1.1.19//usr/lib/anaconda/fsset.py", line 1448, in makeFilesystems
    self.formatEntry(entry, chroot)
  File "/var/tmp/anaconda-10.1.1.19//usr/lib/anaconda/fsset.py", line 1376, in formatEntry
    entry.fsystem.formatDevice(entry, self.progressWindow, chroot)
  File "/var/tmp/anaconda-10.1.1.19//usr/lib/anaconda/fsset.py", line 615, in formatDevice
    extFileSystem.formatDevice(self, entry, progress, chroot)
  File "/var/tmp/anaconda-10.1.1.19//usr/lib/anaconda/fsset.py", line 527, in formatDevice
    devicePath = entry.device.setupDevice(chroot)
  File "/var/tmp/anaconda-10.1.1.19//usr/lib/anaconda/fsset.py", line 1932, in setupDevice
    self.level, self.numDisks)
  File "/var/tmp/anaconda-10.1.1.19//usr/lib/anaconda/raid.py", line 186, in register_raid_device
    raise ValueError, "%s is already in the mdList!" % (mdname,)
ValueError: md0 is already in the mdList!

Local variables in innermost frame:
level: 1
numActive: 2
newlevel: 1
devices: ['hda4', 'hdc4']
mdname: md0
dev: md0
newdevices: ['hda1', 'hdc1']
newnumActive: 2
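
The locals above point at the failure mode: the leftover raid0 superblock on hda4/hdc4 and the newly created raid1 on hda1/hdc1 were both registered under the name md0, and the registration helper refuses duplicate names. A minimal sketch of that kind of guard (illustrative only; the function name and error message mirror the traceback, but the body and data layout are assumptions, not the real anaconda code):

```python
# Illustrative registry of RAID devices keyed by md name. Registering
# the same name twice raises the exact error from the traceback above.
mdList = []

def register_raid_device(mdname, devices, level, num_active):
    for name, devs, lvl, n in mdList:
        if name == mdname:
            # A stale superblock probed as md0 plus a new array also
            # named md0 lands here, aborting the install.
            raise ValueError("%s is already in the mdList!" % (mdname,))
    mdList.append((mdname, devices, level, num_active))
```

With the old raid0 already probed as md0, a second registration under the same name reproduces the crash.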


Package Group selection status:
ISO8859-9 Support: 0
KDE Software Development: 0
Legacy Network Server: 0
Icelandic Support: 0
Office/Productivity: 0
German Support: 0
Compatibility Arch Development Support: 0
Arabic Support: 0
Danish Support: 0
Assamese Support: 0
KDE (K Desktop Environment): 0
Catalan Support: 0
Web Server: 1
Bengali Support: 0
X Software Development: 0
Legacy Software Development: 0
Spanish Support: 0
Mail Server: 0
Brazilian Support: 0
Workstation Common: 0
Hebrew Support: 0
Estonian Support: 0
French Support: 0
DNS Name Server: 0
Server Configuration Tools: 1
Dialup Networking Support: 1
Everything: 0
System Tools: 1
Sound and Video: 0
Network Servers: 0
PostgreSQL Database: 0
Portuguese Support: 0
Finnish Support: 0
Editors: 1
Norwegian Support: 0
Administration Tools: 1
Miscellaneous Included Packages: 0
Romanian Support: 0
Emacs: 0
Japanese Support: 0
News Server: 0
Windows File Server: 0
Serbian Support: 0
British Support: 0
Core: 0
Authoring and Publishing: 0
Cyrillic Support: 0
X Window System: 0
Chinese Support: 0
Printing Support: 0
Korean Support: 0
GNOME Desktop Environment: 0
GNOME Software Development: 0
Italian Support: 0
Slovak Support: 0
Base: 1
Slovenian Support: 0
Graphics: 0
FTP Server: 0
Ukrainian Support: 0
Turkish Support: 0
Ruby: 0
Development Libraries: 0
Czech Support: 0
GNOME: 0
MySQL Database: 0
Hungarian Support: 0
Swedish Support: 0
Polish Support: 0
XEmacs: 0
Development Tools: 1
Dutch Support: 0
Russian Support: 0
Engineering and Scientific: 1
Games and Entertainment: 0
KDE: 0
Greek Support: 0
Text-based Internet: 1
Graphical Internet: 0
Compatibility Arch Support: 0
Server: 0
ISO8859-2 Support: 0


Individual package selection status:
4Suite: 1, Canna: 0, Canna-devel: 0, Canna-libs: 0, ElectricFence: 0, FreeWnn:
0, FreeWnn-devel: 0, FreeWnn-libs: 0, GConf2: 1, GConf2-devel: 0, HelixPlayer:
0, ImageMagick: 0, ImageMagick-c++: 0, ImageMagick-c++-devel: 0,
ImageMagick-devel: 0, ImageMagick-perl: 0, MAKEDEV: 1, MyODBC: 0, MySQL-python:
0, NetworkManager: 1, NetworkManager-gnome: 0, ORBit: 0, ORBit-devel: 0, ORBit2:
1, ORBit2-devel: 0, Omni: 1, Omni-foomatic: 1, PyQt: 0, PyQt-devel: 0,
PyQt-examples: 0, PyXML: 1, Pyrex: 0, SDL: 0, SDL-devel: 0, SysVinit: 1, VFlib2:
1, VFlib2-VFjfm: 0, VFlib2-conf-ja: 0, VFlib2-devel: 0, Xaw3d: 1, Xaw3d-devel:
0, a2ps: 0, acl: 1, acpid: 1, alchemist: 1, alchemist-devel: 0, alsa-lib: 1, alsa-
Comment 1 Sean Dilda 2005-11-07 16:26:36 EST
I forgot to mention, in the old install, the raid0 was /dev/md0.  When I added
the raid1 in disk druid, it showed up as 'Raid device 1', which I'm assuming is
/dev/md1.
Comment 2 Chris Lumens 2007-03-28 14:53:22 EDT
*** Bug 231453 has been marked as a duplicate of this bug. ***
Comment 3 Chris Lumens 2007-03-28 15:30:51 EDT
I'm attaching a patch to this bug for future reference; it fixes the problem for
an update release of RHEL4.  We just need to be smarter with our use of device
nodes for probing the RAID superblocks.
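
As a rough sketch of that "be smarter" direction, a registry can treat re-probing an already-known array as a no-op and only reject genuine name collisions (hypothetical illustration only, not the attached patch):

```python
# Hypothetical sketch (not the attached patch): tolerate seeing the
# same array again via another device-node probe, and only raise when
# a *different* array claims an already-registered md name.
mdList = {}

def register_raid_device(mdname, devices, level, num_active):
    key = (tuple(sorted(devices)), level, num_active)
    if mdname in mdList:
        if mdList[mdname] == key:
            return  # same superblock found through another node: ignore
        raise ValueError("%s is already in the mdList!" % (mdname,))
    mdList[mdname] = key
```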
Comment 4 Chris Lumens 2007-03-28 15:36:05 EDT
Created attachment 151154 [details]
fix use of device nodes to probe for raid sb
Comment 6 RHEL Product and Program Management 2007-05-09 07:08:21 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.
Comment 11 errata-xmlrpc 2007-11-15 11:34:57 EST
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2007-0816.html
Comment 14 Dave Botsch 2008-04-22 15:07:27 EDT
Hi.

This is still broken in the anaconda distributed with 64-bit RHEL4.6
Comment 15 Dave Botsch 2008-04-22 15:10:29 EDT
sorry, would appear to be 4AS-7, judging from the
redhat-release-4AS-7.x86_64.rpm on the dvd.
Comment 16 Alexander Todorov 2008-05-05 07:11:24 EDT
Dave,
can you post your steps to reproduce? This issue has been fixed in the past
release. There are quite a few issues with the message "ValueError: md0 is
already in the mdList!" and you may be hitting a totally different one. 

Please provide steps to reproduce and the exact anaconda/RHEL version if possible. 

Thanks.
Comment 17 Alexander Todorov 2008-06-04 03:28:38 EDT
*** Bug 149796 has been marked as a duplicate of this bug. ***
Comment 18 Dave Botsch 2008-08-05 23:36:33 EDT
Created attachment 313522 [details]
kickstart file
Comment 19 Dave Botsch 2008-08-05 23:37:24 EDT
(In reply to comment #16)
> Dave,
> can you post your steps to reproduce? This issue has been fixed in the past
> release. There are quite a few issues with the message "ValueError: md0 is
> already in the mdList!" and you may be hitting a totally different one. 
> 
> Please provide steps to reproduce and the exact anaconda/RHEL version if possible. 
> 
> Thanks.
> 

Hi. Steps to reproduce are as follows:

1. Set up an NFS kickstart install - added a couple of my own rpms (not replacing any of the Red Hat ones) and rebuilt the comps.xml and hdlist
2. Use the attached kickstart file
3. Do the install by booting off of first CD/DVD, and giving an option of linux ks=nfs:nfsserver.domain:/install/rhel4_64/kickstart.ks
4. Decide to reinstall by rebooting the machine off of the CD/DVD and re-pointing it at the kickstart file as in step 3
5. During the partition formatting stage, the above error pops up

I can work around the error by rebooting into rescue mode from the install CD and doing a "dd if=/dev/zero of=/dev/sda bs=512 count=1" and then trying the install again, at which point the installer acts as if the disk is not formatted at all.

Red Hat 4 Release 7, 64-bit (I haven't tested the 32-bit installer on that version) with whatever anaconda is included on that DVD... I don't see how to get the anaconda version out of the miniroot.
