Bug 1083240 - Horizon permits invalid minimum size of new images
Summary: Horizon permits invalid minimum size of new images
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-django-horizon
Version: 4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: 5.0 (RHEL 7)
Assignee: RHOS Maint
QA Contact: Amit Ugol
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-04-01 18:23 UTC by Jason Callaway
Modified: 2019-09-09 16:23 UTC
CC: 7 users

Fixed In Version: python-django-horizon-2014.1-5.el7ost
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-07-08 15:41:56 UTC
Target Upstream Version:


Attachments (Terms of Use)
pic (25.53 KB, image/jpeg)
2014-06-18 10:09 UTC, Amit Ugol


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 775453 0 None None None Never
Red Hat Product Errata RHEA-2014:0855 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform Enhancement - Dashboard 2014-07-08 19:33:24 UTC

Description Jason Callaway 2014-04-01 18:23:03 UTC
Description of problem:

When creating a new image in the Horizon dashboard, no minimum size is automatically set.  Further, the dialog will permit an invalid minimum size, which results in a block device mapping error during instance creation.

Steps to Reproduce:
1. In Horizon in the Admin tab, create a new image from http://download.fedoraproject.org/pub/fedora/linux/releases/20/Images/x86_64/Fedora-x86_64-20-20131211.1-sda.qcow2
2. Set the minimum size to 1GB
3. Launch an instance from the new image

Actual results:

Creation of the instance will fail after the 'mapping block device' phase.

[root@rhelosp ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |  Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
| 9e854726-2ee6-4559-aa62-78c19f281973 |   error   |               |  1   |     None    |  false   |             |
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+

From /var/log/nova/compute.log:

40fdde0b68084dd599439b071f38305b bb29a94ac3fd4116b8888bf979bbc09a] [instance: db8cc644-a31a-4b61-acb5-6662ce5c6ea1] Instance failed block device setup
2014-03-27 08:39:46.232 9477 TRACE nova.compute.manager [instance: db8cc644-a31a-4b61-acb5-6662ce5c6ea1] Traceback (most recent call last):
2014-03-27 08:39:46.232 9477 TRACE nova.compute.manager [instance: db8cc644-a31a-4b61-acb5-6662ce5c6ea1]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1392, in _prep_block_device
2014-03-27 08:39:46.232 9477 TRACE nova.compute.manager [instance: db8cc644-a31a-4b61-acb5-6662ce5c6ea1]     self._await_block_device_map_created))
2014-03-27 08:39:46.232 9477 TRACE nova.compute.manager [instance: db8cc644-a31a-4b61-acb5-6662ce5c6ea1]   File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 283, in attach_block_devices
2014-03-27 08:39:46.232 9477 TRACE nova.compute.manager [instance: db8cc644-a31a-4b61-acb5-6662ce5c6ea1]     block_device_mapping)
2014-03-27 08:39:46.232 9477 TRACE nova.compute.manager [instance: db8cc644-a31a-4b61-acb5-6662ce5c6ea1]   File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 246, in attach
2014-03-27 08:39:46.232 9477 TRACE nova.compute.manager [instance: db8cc644-a31a-4b61-acb5-6662ce5c6ea1]     db_api)
2014-03-27 08:39:46.232 9477 TRACE nova.compute.manager [instance: db8cc644-a31a-4b61-acb5-6662ce5c6ea1]   File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 153, in attach
2014-03-27 08:39:46.232 9477 TRACE nova.compute.manager [instance: db8cc644-a31a-4b61-acb5-6662ce5c6ea1]     volume_api.check_attach(context, volume, instance=instance)
2014-03-27 08:39:46.232 9477 TRACE nova.compute.manager [instance: db8cc644-a31a-4b61-acb5-6662ce5c6ea1]   File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 231, in check_attach
2014-03-27 08:39:46.232 9477 TRACE nova.compute.manager [instance: db8cc644-a31a-4b61-acb5-6662ce5c6ea1]     raise exception.InvalidVolume(reason=msg)
2014-03-27 08:39:46.232 9477 TRACE nova.compute.manager [instance: db8cc644-a31a-4b61-acb5-6662ce5c6ea1] InvalidVolume: Invalid volume: status must be 'available'
2014-03-27 08:39:46.232 9477 TRACE nova.compute.manager [instance: db8cc644-a31a-4b61-acb5-6662ce5c6ea1] 
2014-03-27 08:39:47.076 9477 ERROR nova.openstack.common.rpc.amqp [req-47ec700d-6335-4166-acdb-344fa054d1db 40fdde0b68084dd599439b071f38305b bb29a94ac3fd4116b8888bf979bbc09a] Exception during message handling
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     **args)
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 90, in wrapped
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     payload)
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 73, in wrapped
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     return f(self, context, *args, **kw)
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 243, in decorated_function
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     pass
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 229, in decorated_function
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 294, in decorated_function
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     function(self, context, *args, **kwargs)
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 271, in decorated_function
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     e, sys.exc_info())
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 258, in decorated_function
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1630, in run_instance
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     do_run_instance()
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", line 246, in inner
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     return f(*args, **kwargs)
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1629, in do_run_instance
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     legacy_bdm_in_spec)
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 968, in _run_instance
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     notify("error", msg=unicode(e))  # notify that build failed
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 952, in _run_instance
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     instance, image_meta, legacy_bdm_in_spec)
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1081, in _build_instance
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     LOG.exception(msg, instance=instance)
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1034, in _build_instance
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     context, instance, bdms)
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1409, in _prep_block_device
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp     raise exception.InvalidBDM()
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp InvalidBDM: Block Device Mapping is Invalid.
2014-03-27 08:39:47.076 9477 TRACE nova.openstack.common.rpc.amqp 


If you look at the image, it requires at least 2GB:

[root@rhelosp ~(keystone_admin)]# qemu-img info Fedora-x86_64-20-20131211.1-sda.qcow2 
image: Fedora-x86_64-20-20131211.1-sda.qcow2
file format: qcow2
virtual size: 2.0G (2147483648 bytes)
disk size: 204M
cluster_size: 65536
[root@rhelosp ~(keystone_admin)]# glance image-show f20
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 51bc16b900bf0f814bb6c0c3dd8f0790     |
| container_format | bare                                 |
| created_at       | 2014-03-28T00:41:37                  |
| deleted          | False                                |
| disk_format      | qcow2                                |
| id               | 9f50caf4-108d-4206-a974-933d65805920 |
| is_public        | True                                 |
| min_disk         | 1                                    |
| min_ram          | 1                                    |
| name             | f20                                  |
| owner            | 5b55bc7b39394c1f9f3517448d4a9542     |
| protected        | False                                |
| size             | 214106112                            |
| status           | active                               |
| updated_at       | 2014-03-28T00:42:20                  |
+------------------+--------------------------------------+
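The mismatch is visible by comparing the two outputs above: glance's min_disk property is expressed in GB, so the image's virtual size has to be rounded up to whole gigabytes before comparing. A minimal sketch of that check (the helper name is illustrative, not Horizon code; the values come from the qemu-img and glance output above):

```python
import math

def required_min_disk_gb(virtual_size_bytes):
    """Round an image's virtual size up to whole GiB, the unit
    glance uses for the min_disk property."""
    return math.ceil(virtual_size_bytes / 1024 ** 3)

virtual_size = 2147483648   # qemu-img info: virtual size 2.0G
min_disk = 1                # glance image-show: min_disk

required = required_min_disk_gb(virtual_size)
print(required)             # 2
print(min_disk >= required) # False: the stored min_disk is too small
```

A 1 GB volume is therefore built from a 2 GB image, cinder puts the volume into the error state, and nova's check_attach raises InvalidVolume as seen in the traceback.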

Expected results:

Ideally the create image dialog should automatically populate the minimum size field.

At a minimum, the image creation process should fail, reporting that the specified minimum size is invalid.
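The kind of check being requested could look like the following (a hypothetical sketch, not the actual upstream fix; note that per comment 2 Horizon can only rely on data reported by glance, so the virtual size may simply not be available at creation time):

```python
def validate_min_disk(min_disk_gb, virtual_size_bytes=None):
    """Hypothetical validation helper: reject a min_disk smaller
    than the image's virtual size, when that size is known.

    Returns the accepted min_disk, or raises ValueError. A None
    virtual size (not reported by glance) is let through unchecked.
    """
    if virtual_size_bytes is None:
        return min_disk_gb
    # Ceiling division: glance's min_disk is in whole GB.
    required = -(-virtual_size_bytes // 1024 ** 3)
    if min_disk_gb < required:
        raise ValueError(
            "Minimum disk size must be at least %d GB for this image"
            % required)
    return min_disk_gb
```

With the Fedora 20 image above, validate_min_disk(1, 2147483648) would raise, while validate_min_disk(2, 2147483648) would pass.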

Additional info:

Support was able to diagnose the problem in case 0160543.

Comment 2 Matthias Runge 2014-04-02 06:53:21 UTC
This is fixed in Icehouse. Unfortunately, we can only rely on the data that glance provides.

Comment 3 Matthias Runge 2014-04-02 07:44:31 UTC
The fix was https://review.openstack.org/#/c/56639/

Comment 4 Amit Ugol 2014-06-18 10:09:36 UTC
Created attachment 909921 [details]
pic

Comment 5 Amit Ugol 2014-06-18 10:10:40 UTC
Verified python-django-horizon-2014.1-7.el7ost.noarch.
The error now shows the short description plus the HTTP error (see attachment).

Comment 7 errata-xmlrpc 2014-07-08 15:41:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0855.html

