The following was filed automatically by anaconda:
anaconda 11.5.0.38 exception report
Traceback (most recent call first):
  File "/usr/lib/anaconda/storage/devices.py", line 1694, in _addLogVol
    raise DeviceError("new lv is too large to fit in free space")
  File "/usr/lib/anaconda/storage/devices.py", line 1869, in __init__
    self.vg._addLogVol(self)
  File "/usr/lib/anaconda/storage/__init__.py", line 604, in newLV
    return LVMLogicalVolumeDevice(name, vg, *args, **kwargs)
  File "/usr/lib/anaconda/storage/partitioning.py", line 145, in _scheduleLVs
    size=size)
  File "/usr/lib/anaconda/storage/partitioning.py", line 217, in doAutoPartition
    _scheduleLVs(anaconda, devs)
  File "/usr/lib/anaconda/dispatch.py", line 205, in moveStep
    rc = stepFunc(self.anaconda)
  File "/usr/lib/anaconda/dispatch.py", line 128, in gotoNext
    self.moveStep()
  File "/usr/lib/anaconda/gui.py", line 1317, in nextClicked
    self.anaconda.dispatch.gotoNext()
DeviceError: new lv is too large to fit in free space
Created attachment 339119 [details] Attached traceback automatically from anaconda.
Created attachment 339121 [details] Attached traceback automatically from anaconda.
*** Bug 495257 has been marked as a duplicate of this bug. ***
*** Bug 497618 has been marked as a duplicate of this bug. ***
I am seeing a very similar issue. Anaconda doesn't give me the option to file it automatically (perhaps because I'm doing a kickstart install), so I'm going to enclose a screenshot from my KVM.

Here's the disk section of my kickstart file, which works verbatim with F10. Maybe I'm doing something weird or broken there and previous versions never noticed.

zerombr yes
clearpart --all
part raid.00 --asprimary --ondisk=sda --size 256
part raid.01 --asprimary --ondisk=sdb --size 256
part raid.02 --asprimary --ondisk=sdc --size 256
part raid.03 --asprimary --ondisk=sdd --size 256
part raid.10 --asprimary --ondisk=sda --size=32768
part raid.11 --asprimary --ondisk=sdb --size=32768
part raid.12 --asprimary --ondisk=sdc --size=32768
part raid.13 --asprimary --ondisk=sdd --size=32768
part raid.20 --size 1000 --grow
part raid.21 --size 1000 --grow
part raid.22 --size 1000 --grow
part raid.23 --size 1000 --grow
raid /boot --level=1 --device=md0 raid.00 raid.01 raid.02 raid.03
raid swap --level=1 --device=md1 raid.10 raid.11 raid.12 raid.13
raid pv.0 --level=10 --device=md2 raid.20 raid.21 raid.22 raid.23
volgroup vg0 pv.0
logvol /usr --fstype ext3 --name=usr --vgname=vg0 --size=10000
logvol /scratch --fstype ext3 --name=scratch --vgname=vg0 --size=1024
logvol /tmp --fstype ext3 --name=tmp --vgname=vg0 --size=4096
logvol / --fstype ext3 --name=root --vgname=vg0 --size=1024
logvol /var --fstype ext3 --name=var --vgname=vg0 --size=8192

I will attach a screenshot with a backtrace. The last lines are:

  File "/usr/lib/anaconda/storage/devices.py", line 1933, in __init__
    self.gv._addLogVol(self)
  File "/usr/lib/anaconda/storage/devices.py", line 1753, in _addLogVol
    raise DeviceError("new lv is too large to fit in free space", self.path)
storage.errors.DeviceError: ('new lv is too large to fit in free space', '/dev/mapper/vg0')

Perhaps --grow has stopped working on part statements? I'll up the size and see what happens.
Created attachment 342911 [details] Screen capture of backtrace
OK, I changed to:

part raid.20 --size 100000 --grow
part raid.21 --size 100000 --grow
part raid.22 --size 100000 --grow
part raid.23 --size 100000 --grow

and things work. --grow does work, because the resulting vg ends up as 1.76 TB, but it looks like the un-grown size is used for the space check. I can live with that.
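The arithmetic in the comment above is consistent with that theory: a minimal sketch (illustrative only, not anaconda's actual code; `fits_in_vg` is a hypothetical helper) assuming the free-space check sums the requested logvol sizes against the pre-grow size of the RAID10 PV, where four members yield roughly twice one member's capacity:

```python
def fits_in_vg(member_size_mb, lv_sizes_mb, members=4):
    """Hypothetical check: does the summed logvol request fit in a
    RAID10 PV built from `members` equal-sized partitions?

    RAID10 stores two copies, so usable capacity is roughly
    members / 2 times one member's size.
    """
    pv_size_mb = member_size_mb * members // 2
    return sum(lv_sizes_mb) <= pv_size_mb

# logvol sizes from the kickstart: /usr, /scratch, /tmp, /, /var
lvs = [10000, 1024, 4096, 1024, 8192]  # totals 24336 MB

fits_in_vg(1000, lvs)    # False: pre-grow PV is only ~2000 MB
fits_in_vg(100000, lvs)  # True: ~200000 MB easily holds 24336 MB
```

This matches the observed behavior: with --size 1000 the check fails before --grow ever runs, while --size 100000 passes the check and --grow still expands the vg to its final 1.76 TB.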
This bug appears to have been reported against 'rawhide' during the Fedora 11 development cycle. Changing version to '11'. More information and reason for this action is here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping
*** Bug 500718 has been marked as a duplicate of this bug. ***
I am closing this bug since Fedora 12 was released some time ago. If you see a problem similar to the one described in this report, please open a new report against the current version of Fedora. Thanks.