Bug 677915

Summary: DeviceError: ('new lv is too large to fit in free space', 'sysvg1')
Product: Red Hat Enterprise Linux 6
Reporter: Red Hat Case Diagnostics <case-diagnostics>
Component: anaconda
Assignee: David Lehman <dlehman>
Status: CLOSED ERRATA
QA Contact: Release Test Team <release-test-team-automation>
Severity: medium
Docs Contact:
Priority: medium
Version: 6.0
CC: akozumpl, atodorov, dlehman, jjneely, jstodola, mmello, mvattakk, pasteur
Target Milestone: rc
Keywords: Reopened
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: anaconda_trace_hash:e981b7975b995d21f034ac262acd6fc513175b9deaa27d03a8fbcbf15136dd86
Fixed In Version: anaconda-13.21.102-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
683573 (view as bug list)
Environment:
Last Closed: 2011-05-19 12:37:46 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments (Description / Flags):
File: backtrace / none
anaconda crash logs / none

Description Red Hat Case Diagnostics 2011-02-16 09:33:11 UTC
The following was filed automatically by anaconda:
anaconda 13.21.82 exception report
Traceback (most recent call first):
  File "/usr/lib/anaconda/storage/devices.py", line 2058, in _addLogVol
    raise DeviceError("new lv is too large to fit in free space", self.name)
  File "/usr/lib/anaconda/storage/devices.py", line 2258, in __init__
    self.vg._addLogVol(self)
  File "/usr/lib/anaconda/storage/__init__.py", line 777, in newLV
    return LVMLogicalVolumeDevice(name, vg, *args, **kwargs)
  File "/usr/lib/anaconda/kickstart.py", line 516, in execute
    percent=self.percent)
  File "/usr/lib/anaconda/kickstart.py", line 1149, in execute
    obj.execute(self.anaconda)
  File "/usr/bin/anaconda", line 1102, in <module>
    ksdata.execute()
DeviceError: ('new lv is too large to fit in free space', 'sysvg1')

Comment 1 Red Hat Case Diagnostics 2011-02-16 09:33:35 UTC
Created attachment 479061 [details]
File: backtrace

Comment 4 David Lehman 2011-02-16 18:03:21 UTC
It looks like you are specifying software raid partitions of size 1MB that should grow as large as possible. First, specifying a size of 1MB is problematic -- try 500. Second, the use of growable raid partitions is problematic as we make no guarantees that the various raid partitions will end up with sizes that please mdadm when the time comes to create the array. Fixed sizes are strongly recommended for software raid partitions.
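
As a minimal illustration of that recommendation (hypothetical disk and volume names, with example sizes not taken from the reporter's kickstart), fixed-size RAID member partitions sized to cover the LVs they will hold might look like:

part raid.01 --size=20000 --ondisk=sda
part raid.02 --size=20000 --ondisk=sdb
raid pv.01 --device=md0 --level=RAID1 raid.01 raid.02
volgroup sysvg1 pv.01
# LV sizes total well under the 20000 MB member size
logvol /var --vgname=sysvg1 --size=8000 --name=var
logvol /export --vgname=sysvg1 --size=1000 --name=export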

Comment 6 David Cantrell 2011-02-17 17:50:22 UTC
Please attach the kickstart file you are using to this bug report.

Comment 7 David Lehman 2011-02-17 18:35:30 UTC
Please also include the new traceback from when you used --size=20000 and not --grow for the raid member partitions.
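
For reference, member partition lines matching that suggestion might look like the following (hypothetical partition names, assuming two disks sda and sdb):

part raid.14 --size=20000 --ondisk=sda
part raid.24 --size=20000 --ondisk=sdb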

Comment 9 David Cantrell 2011-02-23 19:18:34 UTC

*** This bug has been marked as a duplicate of bug 679073 ***

Comment 10 Ales Kozumplik 2011-03-01 18:11:55 UTC
Reopening, this is not a dupe of bug 679073.

Comment 13 Jeff Bastian 2011-03-02 16:42:19 UTC
Created attachment 481905 [details]
anaconda crash logs

More debug data from my own testing.

Comment 14 RHEL Program Management 2011-03-02 20:43:38 UTC
This request was evaluated by Red Hat Product Management for inclusion
in a Red Hat Enterprise Linux maintenance release. Product Management has 
requested further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed 
products. This request is not yet committed for inclusion in an Update release.

Comment 18 Jan Stodola 2011-04-06 13:53:35 UTC
Bug reproduced on RHEL6.0 and verified on RHEL6.1-20110330.2 (anaconda-13.21.108-1.el6) using the following kickstart commands:

clearpart --all --initlabel
zerombr

part /boot --fstype=ext2 --size=100 --asprimary --ondisk=sda
part /boot2 --fstype=ext2 --size=100 --asprimary --ondisk=sdb

part raid.11 --size=1000 --ondisk=sda
part raid.12 --size=8000 --ondisk=sda
part raid.13 --size=2000 --ondisk=sda
part raid.14 --size=1 --grow --ondisk=sda
#
part raid.21 --size=1000 --ondisk=sdb
part raid.22 --size=8000 --ondisk=sdb
part raid.23 --size=2000 --ondisk=sdb
part raid.24 --size=1 --grow --ondisk=sdb

raid / --fstype=ext3 --device=md5 --level=RAID1 raid.11 raid.21
raid /usr --fstype=ext3 --device=md2 --level=RAID1 raid.12 raid.22
# do NOT define fstype or you'll get a crash
raid swap --device=md3 --level=RAID1 raid.13 raid.23
raid pv.01 --fstype=ext3 --device=md6 --level=RAID1 raid.14 raid.24

# LVM configuration so that we can resize /var, /usr/local and /export later
volgroup sysvg1 pv.01
logvol /var --vgname=sysvg1 --size=8000 --name=var
logvol /usr/local --vgname=sysvg1 --size=8000 --name=usrlocal
logvol /export --vgname=sysvg1 --size=1000 --name=export


partitioning on the installed system after reboot:
[root@system1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md5              985M  191M  745M  21% /
tmpfs                 499M     0  499M   0% /dev/shm
/dev/sda1              97M   23M   70M  25% /boot
/dev/sdb1              97M  1.6M   91M   2% /boot2
/dev/mapper/sysvg1-export
                     1008M   34M  924M   4% /export
/dev/md2              7.7G  678M  6.7G  10% /usr
/dev/mapper/sysvg1-usrlocal
                      7.7G  146M  7.2G   2% /usr/local
/dev/mapper/sysvg1-var
                      7.7G  171M  7.2G   3% /var

Moving to VERIFIED

Comment 19 Jack Neely 2011-04-21 21:06:15 UTC
I was able to work around this bug by using a larger value for the size of my RAID partitions: a value larger than the sizes I had specified for the LVs I was creating.

# /boot
part raid.00 --size 1024 --ondisk sda
part raid.01 --size 1024 --ondisk sdb

# Physical volume
part raid.02 --size 51200 --grow --ondisk sda
part raid.03 --size 51200 --grow --ondisk sdb

# RAID 1 setup
raid /boot --fstype ext4 --level=1 --device md0 raid.00 raid.01
raid pv.00 --fstype LVM  --level=1 --device md1 raid.02 raid.03

volgroup Volume00 pv.00

# Volumes
logvol swap  --fstype swap --name=swap  --vgname=Volume00 --recommended
logvol /     --fstype ext4 --name=root  --vgname=Volume00 --size=10240
logvol /var  --fstype ext4 --name=var   --vgname=Volume00 --size=10240
logvol /tmp  --fstype ext4 --name=tmp   --vgname=Volume00 --size=4096
logvol /home --fstype ext4 --name=home  --vgname=Volume00 --size=4096 --grow

That incantation behaves as expected, whereas setting raid.02 and raid.03 to size=1 or even size=10240 did not.

Comment 20 errata-xmlrpc 2011-05-19 12:37:46 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0530.html