Red Hat Bugzilla – Bug 30953
fdisk / DiskDruid mis-handle large disks
Last modified: 2007-04-18 12:32:03 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.76 [en] (X11; U; Linux 2.4.0-0.99.11 i686)
I was installing Fisher in the second half of a 60 GB IDE disk and couldn't create
a new partition for it due to "not enough free space", even though the disk
was just over 50% used.
DiskDruid and fdisk both exhibit this behavior. DD reported the disk (in
anaconda) as follows:
total 58643 geom (c/h/s) 7476/255/63 used 32247 free 26396
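The reported figures are internally consistent if the totals are in MiB and sectors are the usual 512 bytes (an assumption, since the report does not state units). A quick check:

```python
# Sanity-check the geometry DiskDruid reported, assuming 512-byte
# sectors and MiB-denominated totals (neither is stated in the report).
cyl, heads, spt = 7476, 255, 63
total_sectors = cyl * heads * spt              # 120,101,940 sectors
total_mib = total_sectors * 512 // (1024 * 1024)
print(total_mib)  # 58643 -- matches the "total 58643" figure above
```

So the geometry itself is being read correctly; the failure is apparently in how the free-space figure is used when creating a partition.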
This is on a Pogo Linux Althura: 1 GHz Tbird, 256 MB RAM, 60 GB IBM ATA/100 drive.
Specs can be found at www.pogolinux.com.
I reused an existing partition, and the install went fine (nice beta, btw).
If you want me to test an update under this configuration, tell me where to
get the relevant files/executables and I'll be happy to do so.
Steps to Reproduce:
1. run the install
2. choose to partition manually w/ either DD or fdisk
Actual Results: see description of problem
Expected Results: see description of problem
We (Red Hat) should really try to fix this before next release.
Brent, please verify with a large disk (>64k logical cylinders) that recent trees
handle this correctly (Disk Druid and fdisk).
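As a hypothetical illustration of the suspected failure mode (the ">64k logical cylinders" note above): under an assumed 16-head/63-sector logical translation, a nominal 60 GB drive needs far more than 65,535 cylinders, so a cylinder count stored in a 16-bit field would wrap and the tools would see a much smaller disk. The disk size and geometry here are assumptions for the sketch, not values from the report:

```python
# Sketch of a 16-bit logical-cylinder overflow (assumed geometry).
SECTOR = 512
disk_bytes = 60 * 1000**3        # nominal 60 GB drive (assumption)
heads, spt = 16, 63              # assumed BIOS-style translation
cylinders = disk_bytes // (heads * spt * SECTOR)
print(cylinders)                 # 116257 logical cylinders, > 65535
print(cylinders & 0xFFFF)        # 50721 -- what a 16-bit field keeps
```

A wrapped cylinder count like this would make the usable disk appear far smaller than it is, which is consistent with "not enough free space" on a half-empty drive.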
Can you test with the Wolverine beta? There was quite a bit of improvement
between Fisher and Wolverine. I've got a feeling that this may have already
been fixed.
abrown is going to test this.
abrown confirmed this is fixed.