From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:0.9.5) Gecko/20011012
Description of problem:
When the partition table on a disk is corrupted, running Disk
Druid as part of the RH7.2 install locks up the machine completely.
RH6.2, however, handles the situation gracefully: it reports
that the partition table is corrupted and allows the user to
wipe the partition table (correctly warning that all data on
the disk will be lost) and start over.
Clearly, the RH6.2 version of Disk Druid is doing the right
thing, while something is wrong with the 7.2 version of
Disk Druid.
I have been able to reproduce this problem on two different
ATA disks (IBM DTTA and DJNA models) on an ASUS P2B motherboard.
The partition tables were corrupted during a botched
attempt to create a multi-boot machine. However, I think
the reason *why* the partition tables were corrupted is
irrelevant: the older RH6.2 version of Disk Druid was able
to recognize and deal with the corruption, while the newer
RH7.2 Disk Druid produced a hard lock-up (could _not_ switch
to any virtual terminals, etc.) with no errors or warnings.
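For what it's worth, a corrupted table of this kind can be simulated on a plain disk-image file rather than real hardware. A sketch (the offsets are the standard MBR layout; the helper names and the image handling are illustrative, not part of the installer):

```python
import os

SECTOR = 512
PART_TABLE_OFF = 446   # the four 16-byte partition entries start here
SIG_OFF = 510          # 2-byte boot signature 0x55 0xAA ends the sector

def make_image(path, size=1 << 20):
    """Create a blank 'disk' image whose first sector carries a valid
    boot signature and an empty partition table."""
    buf = bytearray(size)
    buf[SIG_OFF], buf[SIG_OFF + 1] = 0x55, 0xAA
    with open(path, "wb") as f:
        f.write(buf)

def corrupt_mbr(path):
    """Overwrite the partition-entry area with random bytes, leaving the
    0x55AA signature intact so the table looks present but contains
    garbage -- the case a partitioner has to reject gracefully."""
    with open(path, "r+b") as f:
        f.seek(PART_TABLE_OFF)
        f.write(os.urandom(SIG_OFF - PART_TABLE_OFF))
```

Pointing a partitioning tool at an image prepared this way should exercise the same parsing path as a real corrupted disk, without risking live data.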
Please look at the Disk Druid code that parses the partition table.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create a corrupted partition table on a hard disk
2. Run Disk Druid on RH7.2 as part of the install process
3. Observe a hard lock-up.
Actual Results: hard lock-up (could not switch virtual terminals)
Expected Results: Disk Druid should identify the corrupt partition table
(which the RH6.2 version of Disk Druid did nicely!) and give the
user the option to fix it.
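The graceful behavior described above amounts to validating the sector before acting on it. A minimal sketch of that kind of defensive check (not the actual Disk Druid code; the function name and the specific checks are illustrative):

```python
SECTOR = 512

def check_partition_table(mbr):
    """Return a list of human-readable problems found in a 512-byte MBR
    instead of crashing or hanging on bad input (empty list = looks sane)."""
    problems = []
    if len(mbr) != SECTOR:
        return ["short read: not a full 512-byte sector"]
    if mbr[510:512] != b"\x55\xaa":
        problems.append("missing 0x55AA boot signature")
    active = 0
    for i in range(4):
        entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
        boot_flag = entry[0]
        # a valid entry is either inactive (0x00) or active/bootable (0x80)
        if boot_flag not in (0x00, 0x80):
            problems.append("entry %d: bad boot flag 0x%02x" % (i, boot_flag))
        if boot_flag == 0x80:
            active += 1
    if active > 1:
        problems.append("more than one active partition")
    return problems
```

An installer that ran a check like this first could report the problems and offer to wipe the table, as the RH6.2 version did, rather than locking up.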
I can't reproduce any sort of hard hang-up with corrupted partition tables. How
are you corrupting the partition tables? What kind of hardware are you using?
Does the NumLock key still toggle the NumLock LED?
I wish I could re-do the install just to reproduce this problem (and mail you a
copy of the broken partition table) but the system is now in use and I can't
afford the down-time. Thank you for taking a look at this issue. Ed
We understand. Please reopen this issue if you see the problem resurface.