In a non-HA deployment of OpenStack on Red Hat Enterprise Linux 6.5 that uses LVM as the backing store for Block Storage, the disk partitioning creates a 500MB /boot partition and initially splits the remaining space equally between the physical volumes for root and cinder-volumes. The root PV is capped at 100GB, and any remaining space is allocated to cinder-volumes.
Description    Lars Kellogg-Stedman    2014-07-24 02:31:14 UTC
For a non-HA deployment, the installer selects the "LVM with cinder-volumes" partition layout, which looks like this:
#Dynamic
zerombr
clearpart --all --initlabel
part /boot --fstype ext3 --size=500 --ondisk=sda
part swap --size=1024 --ondisk=sda
part pv.01 --size=102400 --ondisk=sda
part pv.02 --size=1 --grow --ondisk=sda
volgroup vg_root pv.01
volgroup cinder-volumes pv.02
logvol / --vgname=vg_root --size=1 --grow --name=lv_root
Because pv.01 alone requires a fixed 100GB, attempting to deploy this on a disk with less than 100GB of free space makes the kickstart install fail with the unhelpful error "Unable to allocate requested partition scheme." The only way to identify the problem is to walk through the DEBUG messages in /tmp/storage.log on the target host until you find:
DEBUG blivet: not enough free space for primary -- trying logical
Arguably there is an Anaconda bug here (this error should be communicated to the user in a more obvious fashion), but really, because of the discovery process, we already know how big the disks are in all of our target systems.
The disk layout should either be dynamically sized (using a smaller size and --grow) or we should throw an error when attempting to assign a host to a role that requires more space than is available.
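Since the discovery process already knows each host's disk size, a pre-assignment check could catch this early. The following is a minimal sketch in Python, assuming the discovered disk size is available in MB; the function name and the minimum cinder-volumes allowance are hypothetical, not part of the installer:

# Minimal sketch of a pre-assignment disk-size check. The constants come
# from the fixed "LVM with cinder-volumes" layout above; the minimum
# cinder-volumes size is an assumed placeholder.
BOOT_MB = 500
SWAP_MB = 1024
ROOT_PV_MB = 102400        # pv.01 is a fixed 100GB
CINDER_PV_MIN_MB = 1024    # assumed minimum useful size for pv.02

def host_fits_layout(disk_size_mb):
    """Return True if the host's disk can hold the fixed layout."""
    required_mb = BOOT_MB + SWAP_MB + ROOT_PV_MB + CINDER_PV_MIN_MB
    return disk_size_mb >= required_mb

print(host_fits_layout(120 * 1024))  # True: a 120GB disk fits
print(host_fits_layout(80 * 1024))   # False: reject before the kickstart ever runs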
Comment 1    Lars Kellogg-Stedman    2014-07-24 16:51:52 UTC
If I set both pv.01 and pv.02 to --grow, and give them the same --size setting...
zerombr
clearpart --all --initlabel
part /boot --fstype ext3 --size=500 --ondisk=sda
part swap --size=1024 --ondisk=sda
part pv.01 --size=1 --grow --maxsize=102400 --ondisk=sda
part pv.02 --size=1 --grow --ondisk=sda
volgroup vg_root pv.01
volgroup cinder-volumes pv.02
logvol / --vgname=vg_root --size=1 --grow --name=lv_root
...then both partitions grow equally until pv.01 reaches its 100GB cap, after which pv.02 gets all the remaining space. This *only* works if both partitions have identical initial sizes, so we can do this:
part pv.01 --size=1024 --grow --maxsize=102400 --ondisk=sda
part pv.02 --size=1024 --grow --ondisk=sda
But not this:
part pv.01 --size=10240 --grow --maxsize=102400 --ondisk=sda
part pv.02 --size=1 --grow --ondisk=sda
In the latter case, Anaconda simply gets confused.
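To picture the growth behavior in the working case, here is a rough model in Python of the allocation observed above (both PVs grow equally until pv.01 hits its --maxsize, then pv.02 takes the rest). It is only an illustration of the described behavior, not Anaconda's actual allocation code:

# Rough model of the observed --grow behavior for two partitions that
# start at the same --size. This only illustrates the allocation seen
# above; it is not how Anaconda implements --grow.
def grow_two_equal(free_mb, initial_mb, pv1_max_mb):
    pv1 = pv2 = initial_mb
    remaining = free_mb - 2 * initial_mb
    # Both grow equally until pv.01 reaches its cap...
    pv1_growth = min(remaining // 2, pv1_max_mb - initial_mb)
    pv1 += pv1_growth
    # ...then pv.02 absorbs whatever is left.
    pv2 += remaining - pv1_growth
    return pv1, pv2

# With 300GB of free space, pv.01 caps at 100GB and
# cinder-volumes gets the remaining ~200GB.
print(grow_two_equal(free_mb=300 * 1024, initial_mb=1024, pv1_max_mb=102400))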
Comment 2    Lars Kellogg-Stedman    2014-07-24 17:04:46 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1090.html