Bug 1316871 - User can't boot from volume snapshot
Status: CLOSED WONTFIX
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 8.0 (Liberty)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 8.0 (Liberty)
Assigned To: Eric Harney
QA Contact: Avi Avraham
Keywords: Automation, ZStream
Reported: 2016-03-11 05:55 EST by Martin Pavlásek
Modified: 2017-08-16 09:53 EDT (History)
CC: 13 users

Doc Type: Bug Fix
Last Closed: 2017-08-16 09:53:11 EDT
Type: Bug
Description Martin Pavlásek 2016-03-11 05:55:09 EST
User can't boot from volume snapshot from Horizon.

Possible related packages:
RHEL 7.2, puddle 2016-03-10.1
python-django-horizon-8.0.1-1.el7ost.noarch
openstack-dashboard-8.0.1-1.el7ost.noarch
openstack-packstack-puppet-7.0.0-0.12.dev1699.g8f54936.el7ost.noarch
openstack-packstack-7.0.0-0.12.dev1699.g8f54936.el7ost.noarch
python-openstackclient-1.7.2-1.el7ost.noarch
openstack-nova-api-12.0.2-1.el7ost.noarch
openstack-dashboard-theme-8.0.1-1.el7ost.noarch
openstack-keystone-8.0.1-1.el7ost.noarch
python-nova-12.0.2-1.el7ost.noarch
python-novaclient-3.1.0-2.el7ost.noarch

How reproducible:
100%

Steps to reproduce:
1. Create demo tenant and user

# on controller as root
$ source keystonerc_admin
$ keystone tenant-create --name demo
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |                                  |
|   enabled   |               True               |
|      id     | 86c9bcc45ac74dbc8d807fd07bc3624d |
|     name    |               demo               |
+-------------+----------------------------------+

$ keystone user-create --name demo --tenant 86c9bcc45ac74dbc8d807fd07bc3624d --pass demo
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 65104b6722c646cd81cd5a11988dd9bf |
|   name   |               demo               |
| tenantId | 86c9bcc45ac74dbc8d807fd07bc3624d |
| username |               demo               |
+----------+----------------------------------+

$ cinder quota-show 86c9bcc45ac74dbc8d807fd07bc3624d
+----------------------+-------+
|       Property       | Value |
+----------------------+-------+
|   backup_gigabytes   |  1000 |
|       backups        |   10  |
|      gigabytes       |  1000 |
|   gigabytes_iscsi    |   -1  |
| per_volume_gigabytes |   -1  |
|      snapshots       |   10  |
|   snapshots_iscsi    |   -1  |
|       volumes        |   10  |
|    volumes_iscsi     |   -1  |
+----------------------+-------+

$ nova quota-show --user 65104b6722c646cd81cd5a11988dd9bf
+-----------------------------+-------+
| Quota                       | Limit |
+-----------------------------+-------+
| instances                   | 10    |
| cores                       | 20    |
| ram                         | 51200 |
| floating_ips                | 10    |
| fixed_ips                   | -1    |
| metadata_items              | 128   |
| injected_files              | 5     |
| injected_file_content_bytes | 10240 |
| injected_file_path_bytes    | 255   |
| key_pairs                   | 100   |
| security_groups             | 10    |
| security_group_rules        | 20    |
| server_groups               | 10    |
| server_group_members        | 10    |
+-----------------------------+-------+


2. Create resources needed to boot a VM from a volume snapshot
Log in to Horizon as the 'demo' user.
Project - Network - Networks, create a new network (one is needed, otherwise VMs cannot be spawned):
name: test_net, subnet: test_subnet, net addr: 10.0.0.0/24, gateway IP: 10.0.0.1

3. Create a volume and its snapshot
Project - Compute - Volumes - Volumes, create a new volume 'vol'.
Edit volume 'vol' and enable the 'bootable' flag.
Open the dropdown menu of the 'vol' volume and click 'Create Snapshot', name it 'snap'.

4. ... and try to boot from it
Under Volume Snapshots, open the dropdown menu of 'snap' and choose 'Launch as Instance'.
Name the instance 'from snap' and click the 'Launch' button.

Current result: the VM is not created due to this error message:
The requested instance cannot be launched. Requested volume exceeds quota: Available: 0, Requested: 1.
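The check that produces this message can be sketched as follows: before launching, Horizon compares the number of requested volumes against what the tenant's cinder quota still allows, and with the broken quota query every limit comes back as 0. Below is a minimal, self-contained Python sketch of that availability check; the function names are hypothetical and this is not Horizon's actual code.

```python
# Minimal sketch of the pre-launch quota check (hypothetical names,
# not Horizon's actual code). In cinder quota output, -1 means unlimited.

def volumes_available(limit, in_use):
    """Volumes still available under the quota; None means unlimited."""
    if limit == -1:
        return None
    return max(limit - in_use, 0)

def check_launch(requested, limit, in_use):
    """Return 'OK' or the quota error message this bug reports."""
    available = volumes_available(limit, in_use)
    if available is not None and requested > available:
        return ("The requested instance cannot be launched. "
                "Requested volume exceeds quota: "
                f"Available: {available}, Requested: {requested}.")
    return "OK"

# Healthy tenant: limit 10, nothing in use -> launch allowed.
print(check_launch(1, limit=10, in_use=0))   # OK
# Affected tenant: cinder reports a limit of 0 -> the error above.
print(check_launch(1, limit=0, in_use=0))
```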



I also tried to create the same VM using the command-line client (as the demo user), and the VM spawned successfully:
$ nova boot --flavor m1.tiny --block-device source=snapshot,id=8e8aa774-eae1-4240-bb30-e859540a0e01,dest=volume,shutdown=PRESERVE,bootindex=0 from-snap
+--------------------------------------+-------------------------------------------------+
| Property                             | Value                                           |
+--------------------------------------+-------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                          |
| OS-EXT-AZ:availability_zone          |                                                 |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-STS:task_state                | scheduling                                      |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | -                                               |
| OS-SRV-USG:terminated_at             | -                                               |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| adminPass                            | MoYrhE9tr8g4                                    |
| config_drive                         |                                                 |
| created                              | 2016-03-11T10:10:59Z                            |
| flavor                               | m1.tiny (1)                                     |
| hostId                               |                                                 |
| id                                   | 5fbf5ea8-013d-4f95-9aba-fa3df2c5c034            |
| image                                | Attempt to boot from volume - no image supplied |
| key_name                             | -                                               |
| metadata                             | {}                                              |
| name                                 | from-snap                                       |
| os-extended-volumes:volumes_attached | []                                              |
| progress                             | 0                                               |
| security_groups                      | default                                         |
| status                               | BUILD                                           |
| tenant_id                            | 86c9bcc45ac74dbc8d807fd07bc3624d                |
| updated                              | 2016-03-11T10:10:59Z                            |
| user_id                              | 65104b6722c646cd81cd5a11988dd9bf                |
+--------------------------------------+-------------------------------------------------+

$ nova show 5fbf5ea8-013d-4f95-9aba-fa3df2c5c034
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2016-03-11T10:11:10.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2016-03-11T10:10:59Z                                     |
| flavor                               | m1.tiny (1)                                              |
| hostId                               | 897e6588fee1ee8caaf244209595d65fe186eee8e502e0bebc27ac2a |
| id                                   | 5fbf5ea8-013d-4f95-9aba-fa3df2c5c034                     |
| image                                | Attempt to boot from volume - no image supplied          |
| key_name                             | -                                                        |
| metadata                             | {}                                                       |
| name                                 | from-snap                                                |
| os-extended-volumes:volumes_attached | [{"id": "04df4d88-32df-4f5b-bc36-82ce567c235b"}]         |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | 86c9bcc45ac74dbc8d807fd07bc3624d                         |
| test_net network                     | 10.0.0.4                                                 |
| updated                              | 2016-03-11T10:11:10Z                                     |
| user_id                              | 65104b6722c646cd81cd5a11988dd9bf                         |
+--------------------------------------+----------------------------------------------------------+


Workaround:
Log in to Horizon as the admin user and go to Identity - Projects.
From the dropdown menu of the 'demo' project choose 'Modify Quotas'; do not change anything, just click 'Save'. Repeat step 4 and the VM will spawn successfully.

Additional info:
Such a long bug report... uff.
Comment 2 Itxaka 2016-04-07 04:46:30 EDT
Martin, do you still have the environment up so I can have a look at it?


thanks!
Comment 3 Martin Pavlásek 2016-04-07 04:55:42 EDT
Hi Itxaka,
Definitely not the one where I discovered the bug, but I have a live deployment approximately two days old, so I'm going to try to reproduce the bug on it and tell you more afterwards.
Comment 4 Martin Pavlásek 2016-04-07 07:17:20 EDT
I'm back, and I verified that the bug is still there.

openstack-packstack-puppet-7.0.0-0.14.dev1702.g490e674.el7ost.noarch
openstack-packstack-7.0.0-0.14.dev1702.g490e674.el7ost.noarch
Comment 7 Itxaka 2016-04-08 06:36:51 EDT
It seems the cinder call from Horizon is returning quotas of 0 for everything:

<QuotaSet backup_gigabytes=0, backups=0, gigabytes=0, gigabytes_iscsi=0, per_volume_gigabytes=0, snapshots=0, snapshots_iscsi=0, volumes=0, volumes_iscsi=0>
Comment 8 Itxaka 2016-04-08 06:41:30 EDT
Aha, it seems the query from the CLI also fails when run as the demo1 user:

[root@mpavlase-rhos8-selenium-controller ~(keystone_demo1)]# cinder quota-show bd3a9788e8754e9fb079f966e900abf5
+----------------------+-------+
|       Property       | Value |
+----------------------+-------+
|   backup_gigabytes   |   0   |
|       backups        |   0   |
|      gigabytes       |   0   |
|   gigabytes_iscsi    |   0   |
| per_volume_gigabytes |   0   |
|      snapshots       |   0   |
|   snapshots_iscsi    |   0   |
|       volumes        |   0   |
|    volumes_iscsi     |   0   |
+----------------------+-------+
Comment 9 Itxaka 2016-04-08 07:03:31 EDT
Tested, and it seems to be an issue with how the user/tenant is created:

- Created demo2 user+tenant via Horizon

Result: the user can see their own cinder quotas correctly:

[root@mpavlase-rhos8-selenium-controller ~(keystone_demo2)]# cinder quota-show 1312a01c26c140db856b479e2955515d
+----------------------+-------+
|       Property       | Value |
+----------------------+-------+
|   backup_gigabytes   |   0   |
|       backups        |   0   |
|      gigabytes       |  1000 |
|   gigabytes_iscsi    |   0   |
| per_volume_gigabytes |   0   |
|      snapshots       |   10  |
|   snapshots_iscsi    |   0   |
|       volumes        |   10  |
|    volumes_iscsi     |   0   |
+----------------------+-------+



- Created demo3 user+tenant from the CLI:

[root@mpavlase-rhos8-selenium-controller ~(keystone_admin)]# keystone tenant-create --name demo3
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |                                  |
|   enabled   |               True               |
|      id     | 17fb76df8c6746628e1c72c88c01d84b |
|     name    |              demo3               |
+-------------+----------------------------------+

[root@mpavlase-rhos8-selenium-controller ~(keystone_admin)]# keystone user-create --name demo3 --tenant 17fb76df8c6746628e1c72c88c01d84b --pass demo3
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 3dd370cb8ca449358815903737172ff0 |
|   name   |              demo3               |
| tenantId | 17fb76df8c6746628e1c72c88c01d84b |
| username |              demo3               |
+----------+----------------------------------+



Result: the user gets 0 for all their cinder quotas:
[root@mpavlase-rhos8-selenium-controller ~(keystone_admin)]# source keystonerc_demo
[root@mpavlase-rhos8-selenium-controller ~(keystone_demo3)]# cinder quota-show 17fb76df8c6746628e1c72c88c01d84b
+----------------------+-------+
|       Property       | Value |
+----------------------+-------+
|   backup_gigabytes   |   0   |
|       backups        |   0   |
|      gigabytes       |   0   |
|   gigabytes_iscsi    |   0   |
| per_volume_gigabytes |   0   |
|      snapshots       |   0   |
|   snapshots_iscsi    |   0   |
|       volumes        |   0   |
|    volumes_iscsi     |   0   |
+----------------------+-------+


So this seems to be an issue related to either keystoneclient or just keystone itself.
Comment 10 Itxaka 2016-04-08 08:07:22 EDT
Martin, as this seems to be an issue of either keystone or keystoneclient, I would need info on:

 - Are you still able to reproduce it when creating the user/tenant from Horizon (I could not)?
 - Can this ticket be moved to the proper team so they can investigate it?

Thanks!
Comment 11 Martin Pavlásek 2016-04-08 08:43:27 EDT
I've tried to create all resources from Horizon (tenant, user, network, volume, volume snapshot, boot from snapshot) and I didn't hit this bug again.

So yes, it is probably caused by keystone. Changing component to 'openstack-keystone'.
Comment 12 Christoph Dwertmann 2016-04-13 23:14:05 EDT
I ran into the same issue on Liberty when creating a new project and user from the CLI. Looking at the database, no quota values for the new project are added to cinder's quotas table. "cinder quota-show" returns the default quotas when run as an admin user, but zeroes when run as the newly created user.

As a workaround, I explicitly set cinder quotas as admin:

openstack quota set --volumes 10 --snapshots 10 --gigabytes 1000 <project>

After that, "cinder quota-show" returns the correct values when queried as the new user and the user can successfully boot new instances.

The problem seems to be in fetching the default quotas when querying cinder as an unprivileged user while no explicit quota values have been set.
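The behaviour described above can be modelled in a few lines: cinder stores only explicitly-set quota rows per project and is supposed to fall back to the defaults when a project has none, and the all-zero output is consistent with an unprivileged code path skipping that fallback. The following is a self-contained simulation under that assumption, not cinder's actual code:

```python
# Simulation of the suspected bug: per-project quota rows exist only after
# an explicit "quota set"; reads should fall back to defaults otherwise.
# The all-zero result models the unprivileged path skipping the fallback.
# Assumed logic for illustration, not cinder's actual implementation.

DEFAULTS = {"volumes": 10, "snapshots": 10, "gigabytes": 1000}
quota_rows = {}  # project_id -> {resource: limit}, written by "quota set"

def quota_show(project_id, is_admin):
    rows = quota_rows.get(project_id)
    if rows:
        return {**DEFAULTS, **rows}      # explicit values win
    if is_admin:
        return dict(DEFAULTS)            # admin path falls back to defaults
    return dict.fromkeys(DEFAULTS, 0)    # buggy unprivileged path: zeroes

# New project created from the CLI: no quota rows yet.
print(quota_show("demo3", is_admin=True))   # admin sees the defaults
print(quota_show("demo3", is_admin=False))  # new user sees all zeroes

# Workaround: "openstack quota set ..." as admin writes explicit rows.
quota_rows["demo3"] = {"volumes": 10, "snapshots": 10, "gigabytes": 1000}
print(quota_show("demo3", is_admin=False))  # now the correct values
```

Once explicit rows exist, the unprivileged read no longer depends on the default fallback, which would explain why both the quota-set workaround here and the Horizon 'Save' workaround above clear the symptom.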
