Bug 626826 - Anaconda fails to grow a partition using a custom partition-include
Status: CLOSED NEXTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: doc-Migration_Guide
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: high
Target Milestone: rc
Target Release: ---
Assigned To: Laura Bailey
QA Contact: ecs-bugs
Whiteboard: Documentation
Reported: 2010-08-24 10:12 EDT by Mathieu Chouquet-Stringer
Modified: 2013-02-05 18:55 EST (History)

Doc Type: Bug Fix
Last Closed: 2010-08-24 18:30:51 EDT


Attachments: None
Description Mathieu Chouquet-Stringer 2010-08-24 10:12:08 EDT
Description of problem:
My custom kickstart server creates the following /tmp/partition-include:

clearpart --drives=sda,sdb --all --initlabel

part raid.01 --size=512 --ondisk=sda --asprimary
part raid.11 --size=512 --ondisk=sdb --asprimary

part raid.02 --size=40960 --ondisk=sda --asprimary
part raid.12 --size=40960 --ondisk=sdb --asprimary

part raid.03 --size=1 --grow --ondisk=sda --asprimary
part raid.13 --size=1 --grow --ondisk=sdb --asprimary

raid /boot      --fstype ext4 --level=RAID1 --device md0 raid.01 raid.11
raid pv.01      --level=RAID1 --device md1 raid.02 raid.12
raid pv.02      --level=RAID1 --device md2 raid.03 raid.13

volgroup vg_system pv.01
volgroup vg_local  pv.02

logvol /    --fstype ext4 --name=lv_root --vgname=vg_system --size=10240
logvol /tmp --fstype ext4 --name=lv_tmp  --vgname=vg_system --size=5120
logvol /var --fstype ext4 --name=lv_var  --vgname=vg_system --size=20480
logvol swap --fstype swap --name=lv_swap --vgname=vg_system --size=4096

logvol /export/home --fstype ext4 --vgname=vg_local --name=lv_export_home --size=2048
logvol /usr/product --fstype ext4 --vgname=vg_local --name=lv_usr_product  --size=5120
logvol /local/p0 --fstype ext4 --vgname=vg_local --name=lv_local_p0 --grow --size=1


Basically, partitions raid.03 and raid.13 should grow to fill both disks, but anaconda fails to do so and dies with an exception: DeviceError: ('new lv is too large to fit in free space', 'vg_local')

When I look at the logs, I see that vg_local is 0 MB...

The same setup works with RHEL 5...

Version-Release number of selected component (if applicable):
6.0 Beta 2

How reproducible:
Always

Please find anaconda debug info attached to this case.
Comment 2 Chris Lumens 2010-08-24 10:17:57 EDT
You can't use "--size=1 --grow" anymore.  You need to give those partitions a reasonable default size, given that you are building multiple layers of things on top of them.  If you change that, does the issue go away?
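A sketch of the suggested change against the partition lines from the original file. The 512 MB base size is an illustrative assumption, not a value given in this bug; any reasonable base size large enough for the stacked RAID/LVM layers should do:

```
# Before (fails on RHEL 6): growable RAID members with a 1 MB base size
#part raid.03 --size=1 --grow --ondisk=sda --asprimary
#part raid.13 --size=1 --grow --ondisk=sdb --asprimary

# After: give the growable members a reasonable base size
# (512 is an illustrative value, not one prescribed in this bug)
part raid.03 --size=512 --grow --ondisk=sda --asprimary
part raid.13 --size=512 --grow --ondisk=sdb --asprimary
```

The --grow flag still lets the partitions expand to fill the remaining disk space; only the starting size changes.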
Comment 3 Mathieu Chouquet-Stringer 2010-08-24 10:49:01 EDT
Duly noted: if I specify a reasonable default size, the partitions are grown accordingly...  I guess this should be in the docs...

So yes, the issue goes away.
Comment 4 Scott Radvan 2010-08-24 18:30:51 EDT
Added to the Migration Guide, in the kickstart section.
