Bug 1122753 - Installer should raise an error when deploying on a disk too small for requested partition scheme
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhel-osp-installer
Version: 5.0 (RHEL 7)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ga
Sub Component: Installer
Assignee: Dan Radez
QA Contact: nlevinki
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-07-24 02:31 UTC by Lars Kellogg-Stedman
Modified: 2014-08-21 18:06 UTC
CC: 9 users

Fixed In Version: rhel-osp-installer-0.1.6-2.el6ost
Doc Type: Release Note
Doc Text:
In a non-HA deployment of OpenStack on Red Hat Enterprise Linux 6.5 that uses LVM as the backing store for Block Storage, the disk partitioning creates a 500 MB /boot partition and initially splits the remaining space equally between the physical volumes for root and cinder-volumes. The root PV is capped at 100 GB, and the rest of the space is allocated to cinder-volumes.
Clone Of:
Environment:
Last Closed: 2014-08-21 18:06:13 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:1090 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform Enhancement Advisory 2014-08-22 15:28:08 UTC

Description Lars Kellogg-Stedman 2014-07-24 02:31:14 UTC
When deploying a non-HA deployment, the installer selects the "LVM with cinder-volumes" partition layout, which looks like this:

    #Dynamic
    zerombr
    clearpart --all --initlabel
    part /boot --fstype ext3 --size=500 --ondisk=sda
    part swap --size=1024 --ondisk=sda
    part pv.01 --size=102400 --ondisk=sda
    part pv.02 --size=1 --grow --ondisk=sda
    volgroup vg_root pv.01
    volgroup cinder-volumes pv.02
    logvol  /  --vgname=vg_root  --size=1 --grow --name=lv_root

If you attempt to deploy this on a disk with < 100GB of free space, the kickstart install will fail with the unhelpful error, "Unable to allocate requested partition scheme."  The only way to identify the problem is by walking through the DEBUG messages in /tmp/storage.log on the target host until you find:

    DEBUG blivet: not enough free space for primary -- trying logical

Arguably there is an Anaconda bug here (this error should be communicated to the user in a more obvious fashion), but really, because of the discovery process, we already know how big the disks are in all of our target systems.  

The disk layout should either be dynamically sized (using a smaller size and --grow) or we should throw an error when attempting to assign a host to a role that requires more space than is available.
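The error-on-assignment approach can be sketched as a simple pre-flight check against the disk size reported by discovery. This is an illustrative sketch only, not the installer's actual code; the function name, the size table, and the MiB units are assumptions drawn from the kickstart layout above.

```python
# Minimum sizes (in MiB) implied by the "LVM with cinder-volumes"
# kickstart layout quoted above (--grow partitions count at their
# initial --size).
LAYOUT_MIN_MIB = {
    "/boot": 500,
    "swap": 1024,
    "pv.01": 102400,  # vg_root physical volume
    "pv.02": 1,       # cinder-volumes PV; --grow takes the rest
}

def check_disk_fits(disk_size_mib, layout=LAYOUT_MIN_MIB):
    """Raise a clear error if the discovered disk is too small for
    the requested partition scheme, instead of letting Anaconda fail
    with "Unable to allocate requested partition scheme"."""
    required = sum(layout.values())
    if disk_size_mib < required:
        raise ValueError(
            "disk is %d MiB but the partition scheme needs at least "
            "%d MiB" % (disk_size_mib, required))
    return required
```

Because discovery already records each host's disk sizes, a check like this could run at role-assignment time and surface the failure before any kickstart is generated.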

Comment 1 Lars Kellogg-Stedman 2014-07-24 16:51:52 UTC
If I set both pv.01 and pv.02 to --grow, and give them the same --size setting...

zerombr
clearpart --all --initlabel
part /boot --fstype ext3 --size=500 --ondisk=sda
part swap --size=1024 --ondisk=sda
part pv.01 --size=1 --grow --maxsize=102400 --ondisk=sda
part pv.02 --size=1 --grow --ondisk=sda
volgroup vg_root pv.01
volgroup cinder-volumes pv.02
logvol  /  --vgname=vg_root  --size=1 --grow --name=lv_root

...then both partitions grow equally, until pv.01 reaches 100GB, then pv.02 will get all the remaining space.  This *only* works if both partitions have identical initial sizes, so we can do this:

part pv.01 --size=1024 --grow --maxsize=102400 --ondisk=sda
part pv.02 --size=1024 --grow --ondisk=sda

But not this:

part pv.01 --size=10240 --grow --maxsize=102400 --ondisk=sda
part pv.02 --size=1 --grow --ondisk=sda

In the latter case, Anaconda simply gets confused.
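The allocation behaviour observed above (equal growth until a partition hits its --maxsize, then the remainder going to the unconstrained partition) can be modelled with a short simulation. This is a simplified sketch of how --grow distributes space, not Anaconda's actual allocator, and the function name is hypothetical.

```python
def grow_partitions(free_mib, parts):
    """Simplified model of kickstart --grow: each growable partition
    starts at its --size, spare space is split equally, and any share
    a partition cannot take (because of --maxsize) is redistributed.

    `parts` maps name -> (initial size in MiB, maxsize in MiB or None).
    """
    alloc = {name: size for name, (size, _) in parts.items()}
    spare = free_mib - sum(alloc.values())
    growable = set(parts)
    while spare > 0 and growable:
        share = spare // len(growable)
        if share == 0:
            break
        progressed = False
        for name in sorted(growable):
            _, maxsize = parts[name]
            take = share
            if maxsize is not None:
                take = min(take, maxsize - alloc[name])
            if take > 0:
                alloc[name] += take
                spare -= take
                progressed = True
            if maxsize is not None and alloc[name] >= maxsize:
                growable.discard(name)  # capped; stop growing it
        if not progressed:
            break
    return alloc
```

On a 300 GB free region with pv.01 capped at 102400 MiB and pv.02 uncapped, this model gives pv.01 its full 100 GB and hands everything else to pv.02, matching the behaviour described in the comment above.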

Comment 2 Lars Kellogg-Stedman 2014-07-24 17:04:46 UTC
Submitted https://github.com/theforeman/foreman-installer-staypuft/pull/54

Comment 9 nlevinki 2014-08-07 08:03:44 UTC
Tested and verified: the installer sends an error message when the disk is smaller than 100G.

Comment 10 errata-xmlrpc 2014-08-21 18:06:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1090.html

