Bug 677915
| Field | Value |
|---|---|
| Summary | DeviceError: ('new lv is too large to fit in free space', 'sysvg1') |
| Product | Red Hat Enterprise Linux 6 |
| Component | anaconda |
| Version | 6.0 |
| Status | CLOSED ERRATA |
| Severity | medium |
| Priority | medium |
| Reporter | Red Hat Case Diagnostics <case-diagnostics> |
| Assignee | David Lehman <dlehman> |
| QA Contact | Release Test Team <release-test-team-automation> |
| CC | akozumpl, atodorov, dlehman, jjneely, jstodola, mmello, mvattakk, pasteur |
| Target Milestone | rc |
| Keywords | Reopened |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | anaconda_trace_hash:e981b7975b995d21f034ac262acd6fc513175b9deaa27d03a8fbcbf15136dd86 |
| Fixed In Version | anaconda-13.21.102-1 |
| Doc Type | Bug Fix |
| Clones | 683573 (view as bug list) |
| Last Closed | 2011-05-19 12:37:46 UTC |
Description
Red Hat Case Diagnostics
2011-02-16 09:33:11 UTC
Created attachment 479061 [details]
File: backtrace
It looks like you are specifying software RAID partitions of size 1MB that should grow as large as possible. First, specifying a size of 1MB is problematic -- try 500. Second, the use of growable RAID partitions is problematic, as we make no guarantees that the various RAID partitions will end up with sizes that please mdadm when the time comes to create the array. Fixed sizes are strongly recommended for software RAID partitions (an illustrative fixed-size layout is sketched further below).

Please attach the kickstart file you are using to this bug report. Please also include the new traceback from when you used --size=20000 and not --grow for the RAID member partitions.

*** This bug has been marked as a duplicate of bug 679073 ***

Reopening, this is not a dupe of bug 679073.

Created attachment 481905 [details]
anaconda crash logs
More debug data from my own testing.
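For reference, the fixed-size layout recommended in the triage comment above might look like the following minimal sketch. The 20000 MB member size comes from that comment; the disk names, RAID device number, volume group name, and logical volume sizes are illustrative assumptions, not values taken from the reporter's kickstart.

    # Illustrative sketch: fixed-size RAID members, no --grow
    # (boot and root entries omitted from this sketch)
    part raid.11 --size=20000 --ondisk=sda
    part raid.21 --size=20000 --ondisk=sdb
    # RAID1 physical volume for LVM
    raid pv.01 --device=md1 --level=RAID1 raid.11 raid.21
    volgroup sysvg1 pv.01
    # Logical volumes sized to fit comfortably within the ~20000 MB PV
    logvol /var --vgname=sysvg1 --size=8000 --name=var
    logvol /export --vgname=sysvg1 --size=1000 --name=export

Because every member has a fixed size, mdadm is handed identically sized partitions and the volume group has a known capacity before the logical volumes are allocated.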
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux maintenance release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux Update release for currently deployed products. This request is not yet committed for inclusion in an Update release.

Bug reproduced on RHEL6.0 and verified on RHEL6.1-20110330.2 (anaconda-13.21.108-1.el6) using the following kickstart commands:

    clearpart --all --initlabel
    zerombr
    part /boot --fstype=ext2 --size=100 --asprimary --ondisk=sda
    part /boot2 --fstype=ext2 --size=100 --asprimary --ondisk=sdb
    part raid.11 --size=1000 --ondisk=sda
    part raid.12 --size=8000 --ondisk=sda
    part raid.13 --size=2000 --ondisk=sda
    part raid.14 --size=1 --grow --ondisk=sda
    #
    part raid.21 --size=1000 --ondisk=sdb
    part raid.22 --size=8000 --ondisk=sdb
    part raid.23 --size=2000 --ondisk=sdb
    part raid.24 --size=1 --grow --ondisk=sdb
    raid / --fstype=ext3 --device=md5 --level=RAID1 raid.11 raid.21
    raid /usr --fstype=ext3 --device=md2 --level=RAID1 raid.12 raid.22
    # do NOT define fstype or you'll get a crash
    raid swap --device=md3 --level=RAID1 raid.13 raid.23
    raid pv.01 --fstype=ext3 --device=md6 --level=RAID1 raid.14 raid.24
    # LVM configuration so that we can resize /var, /usr/local and /export later
    volgroup sysvg1 pv.01
    logvol /var --vgname=sysvg1 --size=8000 --name=var
    logvol /usr/local --vgname=sysvg1 --size=8000 --name=usrlocal
    logvol /export --vgname=sysvg1 --size=1000 --name=export

Partitioning on the installed system after reboot:

    [root@system1 ~]# df -h
    Filesystem                   Size  Used Avail Use% Mounted on
    /dev/md5                     985M  191M  745M  21% /
    tmpfs                        499M     0  499M   0% /dev/shm
    /dev/sda1                     97M   23M   70M  25% /boot
    /dev/sdb1                     97M  1.6M   91M   2% /boot2
    /dev/mapper/sysvg1-export   1008M   34M  924M   4% /export
    /dev/md2                     7.7G  678M  6.7G  10% /usr
    /dev/mapper/sysvg1-usrlocal  7.7G  146M  7.2G   2% /usr/local
    /dev/mapper/sysvg1-var       7.7G  171M  7.2G   3% /var

Moving to VERIFIED.

I was able to work around this bug by using a larger value for size for my RAID partitions -- a value larger than the sizes I had specified for the LVs I was creating (a rough size check appears at the end of this report):

    # /boot
    part raid.00 --size 1024 --ondisk sda
    part raid.01 --size 1024 --ondisk sdb
    # Physical volume
    part raid.02 --size 51200 --grow --ondisk sda
    part raid.03 --size 51200 --grow --ondisk sdb
    # RAID 1 setup
    raid /boot --fstype ext4 --level=1 --device md0 raid.00 raid.01
    raid pv.00 --fstype LVM --level=1 --device md1 raid.02 raid.03
    volgroup Volume00 pv.00
    # Volumes
    logvol swap --fstype swap --name=swap --vgname=Volume00 --recommended
    logvol / --fstype ext4 --name=root --vgname=Volume00 --size=10240
    logvol /var --fstype ext4 --name=var --vgname=Volume00 --size=10240
    logvol /tmp --fstype ext4 --name=tmp --vgname=Volume00 --size=4096
    logvol /home --fstype ext4 --name=home --vgname=Volume00 --size=4096 --grow

That incantation behaves as expected, whereas setting raid.02 and raid.03 to size=1 or even size=10240 did not.

An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0530.html
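As a rough size check on the workaround above (all figures come from the kickstart in that comment; the swap size produced by --recommended depends on installed RAM and is not stated there):

    10240 (/) + 10240 (/var) + 4096 (/tmp) + 4096 (/home) = 28672 MB, plus swap

A base member size of 51200 MB therefore already exceeds the total of the fixed logical volumes before --grow enlarges the members, while a base of 1 MB or 10240 MB does not, which matches the behavior reported.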