Bug 677915 - DeviceError: ('new lv is too large to fit in free space', 'sysvg1')
Summary: DeviceError: ('new lv is too large to fit in free space', 'sysvg1')
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: anaconda
Version: 6.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: David Lehman
QA Contact: Release Test Team
URL:
Whiteboard: anaconda_trace_hash:e981b7975b995d21f...
Depends On:
Blocks:
 
Reported: 2011-02-16 09:33 UTC by Red Hat Case Diagnostics
Modified: 2018-11-14 14:55 UTC
CC List: 8 users

Fixed In Version: anaconda-13.21.102-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 683573
Environment:
Last Closed: 2011-05-19 12:37:46 UTC
Target Upstream Version:
Embargoed:


Attachments
File: backtrace (197.29 KB, text/plain), 2011-02-16 09:33 UTC, Red Hat Case Diagnostics
anaconda crash logs (134.46 KB, application/xml), 2011-03-02 16:42 UTC, Jeff Bastian


Links
Red Hat Product Errata RHBA-2011:0530 (normal, SHIPPED_LIVE): anaconda bug fix and enhancement update. Last updated: 2011-05-18 17:44:52 UTC

Description Red Hat Case Diagnostics 2011-02-16 09:33:11 UTC
The following was filed automatically by anaconda:
anaconda 13.21.82 exception report
Traceback (most recent call first):
  File "/usr/lib/anaconda/storage/devices.py", line 2058, in _addLogVol
    raise DeviceError("new lv is too large to fit in free space", self.name)
  File "/usr/lib/anaconda/storage/devices.py", line 2258, in __init__
    self.vg._addLogVol(self)
  File "/usr/lib/anaconda/storage/__init__.py", line 777, in newLV
    return LVMLogicalVolumeDevice(name, vg, *args, **kwargs)
  File "/usr/lib/anaconda/kickstart.py", line 516, in execute
    percent=self.percent)
  File "/usr/lib/anaconda/kickstart.py", line 1149, in execute
    obj.execute(self.anaconda)
  File "/usr/bin/anaconda", line 1102, in <module>
    ksdata.execute()
DeviceError: ('new lv is too large to fit in free space', 'sysvg1')
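
For context, the check that raises this error lives in anaconda's storage code (storage/devices.py, in the volume group's _addLogVol method). The following is a minimal sketch of the failing condition, not the actual anaconda source, assuming a simplified volume group model with sizes in MB:

class DeviceError(Exception):
    pass

class VolumeGroup:
    def __init__(self, name, size_mb):
        self.name = name
        self.size = size_mb          # total usable VG size in MB
        self.lvs = []

    @property
    def free_space(self):
        # MB not yet claimed by existing logical volumes
        return self.size - sum(size for _, size in self.lvs)

    def add_log_vol(self, lv_name, lv_size_mb):
        # The condition behind the traceback: the requested LV is
        # larger than what is left in the volume group.
        if lv_size_mb > self.free_space:
            raise DeviceError("new lv is too large to fit in free space",
                              self.name)
        self.lvs.append((lv_name, lv_size_mb))

# With growable RAID members, the PV (and therefore the VG) can come out
# much smaller than the kickstart's logvol sizes assume:
vg = VolumeGroup("sysvg1", size_mb=1000)   # undersized PV backing the VG
try:
    vg.add_log_vol("var", 8000)            # exceeds free space
except DeviceError as exc:
    print("DeviceError:", exc.args)        # matches the report's message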

Comment 1 Red Hat Case Diagnostics 2011-02-16 09:33:35 UTC
Created attachment 479061 [details]
File: backtrace

Comment 4 David Lehman 2011-02-16 18:03:21 UTC
It looks like you are specifying software raid partitions of size 1MB that should grow as large as possible. First, specifying a size of 1MB is problematic -- try 500. Second, the use of growable raid partitions is problematic as we make no guarantees that the various raid partitions will end up with sizes that please mdadm when the time comes to create the array. Fixed sizes are strongly recommended for software raid partitions.
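
For example (hypothetical kickstart lines illustrating the advice, not taken from the reporter's file):

# problematic: a 1MB growable RAID member; the grown size is not
# guaranteed to satisfy the logvol sizes requested later
part raid.01 --size=1 --grow --ondisk=sda

# recommended: a fixed size large enough for the planned logical volumes
part raid.01 --size=20000 --ondisk=sda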

Comment 6 David Cantrell 2011-02-17 17:50:22 UTC
Please attach the kickstart file you are using to this bug report.

Comment 7 David Lehman 2011-02-17 18:35:30 UTC
Please also include the new traceback from when you used --size=20000 and not --grow for the raid member partitions.

Comment 9 David Cantrell 2011-02-23 19:18:34 UTC

*** This bug has been marked as a duplicate of bug 679073 ***

Comment 10 Ales Kozumplik 2011-03-01 18:11:55 UTC
Reopening, this is not a dupe of bug 679073.

Comment 13 Jeff Bastian 2011-03-02 16:42:19 UTC
Created attachment 481905 [details]
anaconda crash logs

More debug data from my own testing.

Comment 14 RHEL Program Management 2011-03-02 20:43:38 UTC
This request was evaluated by Red Hat Product Management for inclusion
in a Red Hat Enterprise Linux maintenance release. Product Management has 
requested further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed 
products. This request is not yet committed for inclusion in an Update release.

Comment 18 Jan Stodola 2011-04-06 13:53:35 UTC
Bug reproduced on RHEL6.0 and verified on RHEL6.1-20110330.2 (anaconda-13.21.108-1.el6) using the following kickstart commands:

clearpart --all --initlabel
zerombr

part /boot --fstype=ext2 --size=100 --asprimary --ondisk=sda
part /boot2 --fstype=ext2 --size=100 --asprimary --ondisk=sdb

part raid.11 --size=1000 --ondisk=sda
part raid.12 --size=8000 --ondisk=sda
part raid.13 --size=2000 --ondisk=sda
part raid.14 --size=1 --grow --ondisk=sda
#
part raid.21 --size=1000 --ondisk=sdb
part raid.22 --size=8000 --ondisk=sdb
part raid.23 --size=2000 --ondisk=sdb
part raid.24 --size=1 --grow --ondisk=sdb

raid / --fstype=ext3 --device=md5 --level=RAID1 raid.11 raid.21
raid /usr --fstype=ext3 --device=md2 --level=RAID1 raid.12 raid.22
# do NOT define fstype or you'll get a crash
raid swap --device=md3 --level=RAID1 raid.13 raid.23
raid pv.01 --fstype=ext3 --device=md6 --level=RAID1 raid.14 raid.24

# LVM configuration so that we can resize /var, /usr/local and /export later
volgroup sysvg1 pv.01
logvol /var --vgname=sysvg1 --size=8000 --name=var
logvol /usr/local --vgname=sysvg1 --size=8000 --name=usrlocal
logvol /export --vgname=sysvg1 --size=1000 --name=export


partitioning on the installed system after reboot:
[root@system1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md5              985M  191M  745M  21% /
tmpfs                 499M     0  499M   0% /dev/shm
/dev/sda1              97M   23M   70M  25% /boot
/dev/sdb1              97M  1.6M   91M   2% /boot2
/dev/mapper/sysvg1-export
                     1008M   34M  924M   4% /export
/dev/md2              7.7G  678M  6.7G  10% /usr
/dev/mapper/sysvg1-usrlocal
                      7.7G  146M  7.2G   2% /usr/local
/dev/mapper/sysvg1-var
                      7.7G  171M  7.2G   3% /var

Moving to VERIFIED

Comment 19 Jack Neely 2011-04-21 21:06:15 UTC
I was able to work around this bug by using a larger value for --size on my RAID partitions, a value larger than the sizes I had specified for the LVs I was creating.

# /boot
part raid.00 --size 1024 --ondisk sda
part raid.01 --size 1024 --ondisk sdb

# Physical volume
part raid.02 --size 51200 --grow --ondisk sda
part raid.03 --size 51200 --grow --ondisk sdb

# RAID 1 setup
raid /boot --fstype ext4 --level=1 --device md0 raid.00 raid.01
raid pv.00 --fstype LVM  --level=1 --device md1 raid.02 raid.03

volgroup Volume00 pv.00

# Volumes
logvol swap  --fstype swap --name=swap  --vgname=Volume00 --recommended
logvol /     --fstype ext4 --name=root  --vgname=Volume00 --size=10240
logvol /var  --fstype ext4 --name=var   --vgname=Volume00 --size=10240
logvol /tmp  --fstype ext4 --name=tmp   --vgname=Volume00 --size=4096
logvol /home --fstype ext4 --name=home  --vgname=Volume00 --size=4096 --grow

That incantation behaves as expected, whereas setting raid.02 and raid.03 to size=1 or even size=10240 did not.

Comment 20 errata-xmlrpc 2011-05-19 12:37:46 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0530.html

