From Bugzilla Helper:
User-Agent: Mozilla/4.76 [de] (X11; U; Linux 2.4.2-0.1.29 i686)

anaconda fails with a traceback as soon as I click (GUI) on 'upgrade existing installation'.

Reproducible: Always

Steps to Reproduce:
1. Use one partition of an old, unused software RAID for your test installation.
2. Run the installer in GUI mode.
3. Click on 'upgrade existing installation'.

Actual Results: I'll attach the traceback.

Expected Results: A warning that the software RAID is corrupt or something like that, but not a complete failure.
Created attachment 13143: anaconda traceback
Changing product to Red Hat Linux Beta (internally) since the rest of the world doesn't need to know about our internal tree builds.
Could you send your /etc/raidtab and fdisk -l output for /dev/hda and /dev/hdc?
There is no /etc/raidtab. I once had one for testing, but I don't use software RAID anymore. There is only one partition of the former RAID remaining (fdisk output translated from German):

[karsten@kaarst karsten]$ sudo /sbin/fdisk -l /dev/hda

Disk /dev/hda: 255 heads, 63 sectors, 790 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hda1   *         1        17    136521   82  Linux swap
/dev/hda2            18       790   6209122+   5  Extended
/dev/hda5            18       170   1228941   83  Linux
/dev/hda6           171       425   2048256   83  Linux
/dev/hda7           426       578   1228941   83  Linux
/dev/hda8           579       790   1702858+  83  Linux

[karsten@kaarst karsten]$ sudo /sbin/fdisk -l /dev/hdc

Disk /dev/hdc: 16 heads, 63 sectors, 29795 cylinders
Units = cylinders of 1008 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hdc1             1       261    131512+  82  Linux swap
/dev/hdc2           262     29795  14885136    5  Extended
/dev/hdc5           262     20579  10240240+  83  Linux
/dev/hdc6         20580     24643   2048224+  83  Linux
/dev/hdc7         24644     29795   2596576+  fd  Linux raid autodetect

The bug report wasn't about anaconda being unable to configure my RAID, but about it bailing out with a traceback when it meets an unusual setup like mine.
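For context, /dev/hdc7 above is the telltale: a partition still typed 0xfd (Linux RAID autodetect) with no /etc/raidtab backing it. A minimal sketch of how such leftover members could be spotted before any assembly is attempted; the function name and the fdisk parsing are illustrative assumptions, not anaconda's actual code:

# Hypothetical sketch: find partitions still marked 0xfd (Linux RAID
# autodetect) even though /etc/raidtab is gone. Parsing fdisk -l output
# is illustrative only.
import os
import re
import subprocess

RAID_AUTODETECT_ID = "fd"

def stale_raid_members(device):
    """Return partitions on `device` typed as RAID autodetect."""
    out = subprocess.run(["/sbin/fdisk", "-l", device],
                         capture_output=True, text=True, check=True).stdout
    members = []
    for line in out.splitlines():
        # Rows look like: /dev/hdc7  24644  29795  2596576+  fd  Linux raid autodetect
        m = re.match(r"^(/dev/\S+)\s+\*?\s*\d+\s+\d+\s+\d+\+?\s+([0-9a-f]+)\s", line)
        if m and m.group(2) == RAID_AUTODETECT_ID:
            members.append(m.group(1))
    return members

if __name__ == "__main__":
    stale = stale_raid_members("/dev/hdc")
    if stale and not os.path.exists("/etc/raidtab"):
        print("RAID member(s) without raidtab:", ", ".join(stale))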
I understand now - I will make a note to handle this situation more gracefully in the next release. It is too late to change the behavior for this release. Thanks for catching this case, btw.
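To illustrate the kind of graceful handling meant here - a minimal sketch where the probe error is caught and turned into a warning so the upgrade search can continue; probe_software_raid and find_upgradeable_roots are hypothetical names, not anaconda's real API:

# Minimal sketch of "warn instead of traceback".
import logging

log = logging.getLogger("installer")

def probe_software_raid():
    # Stand-in for the real probe; a leftover RAID autodetect member
    # with no /etc/raidtab would raise here.
    raise RuntimeError("/dev/hdc7 is marked RAID autodetect, "
                       "but /etc/raidtab is missing")

def find_upgradeable_roots():
    roots = []
    try:
        probe_software_raid()
    except Exception as exc:
        # Degrade gracefully: surface a warning and skip the broken
        # array instead of letting the traceback kill the installer.
        log.warning("Ignoring inconsistent software RAID setup: %s", exc)
    # ... continue scanning plain partitions for existing installs ...
    return roots

if __name__ == "__main__":
    logging.basicConfig(level=logging.WARNING)
    find_upgradeable_roots()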
*** Bug 71679 has been marked as a duplicate of this bug. ***
This is handled better now