Bug 1081022 - Non-admin user cannot attach cinder volume to their instance (LIO)
Summary: Non-admin user cannot attach cinder volume to their instance (LIO)
Status: CLOSED CURRENTRELEASE
Alias: None
Product: RDO
Classification: Community
Component: openstack-cinder
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: RC
Target Release: Icehouse
Assignee: Eric Harney
QA Contact: Dafna Ron
 
Reported: 2014-03-26 13:45 UTC by James Slagle
Modified: 2016-04-27 01:41 UTC
CC List: 6 users

Fixed In Version: openstack-cinder-2014.1-0.9.rc3.fc21
Doc Type: Bug Fix
Last Closed: 2016-03-30 23:09:37 UTC


Attachments
the cinder logs (394.51 KB, application/zip)
2014-04-06 11:00 UTC, Yogev Rabl


Links
Launchpad 1300148
OpenStack gerrit 86400
OpenStack gerrit 87364

Description James Slagle 2014-03-26 13:45:13 UTC
Description of problem:
As a non-admin user, you cannot attach a cinder volume to one of your instances. Resulting cinder traceback:

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cinder/openstack/common/rpc/amqp.py", line 462, in _process_data
    **args)
  File "/usr/lib/python2.7/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 172, in dispatch
    result = getattr(proxyobj, method)(ctxt, **kwargs)
  File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 801, in initialize_connection
    self.driver.remove_export(context, volume)
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 540, in remove_export
    self.target_helper.remove_export(context, volume)
  File "/usr/lib/python2.7/site-packages/cinder/volume/iscsi.py", line 232, in remove_export
    volume['id'])
  File "/usr/lib/python2.7/site-packages/cinder/db/api.py", line 232, in volume_get_iscsi_target_num
    return IMPL.volume_get_iscsi_target_num(context, volume_id)
  File "/usr/lib/python2.7/site-packages/cinder/db/sqlalchemy/api.py", line 116, in wrapper
    raise exception.AdminRequired()  
AdminRequired: User does not have admin privileges
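
For context, the AdminRequired at the bottom of this traceback comes from an admin-only guard in the DB API layer. A minimal sketch of that pattern, assuming the usual require_admin_context-style decorator (names are illustrative, not the exact upstream code):

    from cinder import exception

    def require_admin_context(f):
        # Reject DB API calls made with a non-admin request context.
        def wrapper(context, *args, **kwargs):
            if not context.is_admin:
                # This corresponds to the raise at api.py line 116 above.
                raise exception.AdminRequired()
            return f(context, *args, **kwargs)
        return wrapper

    @require_admin_context
    def volume_get_iscsi_target_num(context, volume_id):
        ...

Because the tenant's own (non-admin) context is passed down through initialize_connection() into the driver's remove_export(), this admin-only lookup raises and the attach does not complete.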


Version-Release number of selected component (if applicable):
# rpm -qa | grep openstack
openstack-neutron-ml2-2014.1-0.10.b3.fc21.noarch
openstack-swift-object-1.12.0-1.fc21.noarch
openstack-nova-conductor-2014.1-0.13.b3.fc20.noarch
openstack-nova-cert-2014.1-0.13.b3.fc20.noarch
openstack-dashboard-2014.1-0.5.b3.fc21.noarch
openstack-keystone-2014.1-0.5.b3.fc21.noarch
openstack-heat-engine-2014.1-0.5.b3.fc21.noarch
openstack-cinder-2014.1-0.6.b3.fc21.noarch
openstack-swift-1.12.0-1.fc21.noarch
openstack-neutron-2014.1-0.10.b3.fc21.noarch
openstack-swift-container-1.12.0-1.fc21.noarch
openstack-nova-common-2014.1-0.13.b3.fc20.noarch
openstack-nova-api-2014.1-0.13.b3.fc20.noarch
openstack-heat-api-2014.1-0.5.b3.fc21.noarch
openstack-neutron-openvswitch-2014.1-0.10.b3.fc21.noarch
openstack-swift-proxy-1.12.0-1.fc21.noarch
openstack-nova-scheduler-2014.1-0.13.b3.fc20.noarch
python-django-openstack-auth-1.1.4-1.fc20.noarch
openstack-heat-api-cloudwatch-2014.1-0.5.b3.fc21.noarch
openstack-utils-2013.2-3.fc21.noarch
openstack-glance-2014.1-0.4.b3.fc21.noarch
openstack-swift-plugin-swift3-1.7-3.fc20.noarch
openstack-swift-account-1.12.0-1.fc21.noarch
openstack-nova-console-2014.1-0.13.b3.fc20.noarch
openstack-heat-common-2014.1-0.5.b3.fc21.noarch
openstack-heat-api-cfn-2014.1-0.5.b3.fc21.noarch


How reproducible:
Reproduces every time. I did not see this issue before the icehouse-3 packages were released, but I had only tested it once before then.

Steps to Reproduce:
I'm using the admin user here to show the "demo" user:
[jslagle@instack ~]$ keystone user-get demo
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |               demo               |
| enabled  |               True               |
|    id    | 96125637c9b543a68f9ab1fc70dacfc2 |
|   name   |               demo               |
| username |               demo               |
+----------+----------------------------------+
[jslagle@instack ~]$ keystone user-role-list --user demo


(No roles in above output).

Then I switch to the demo user and boot an instance (the instance is also named demo):
nova boot --key-name default --flavor m1.tiny --image user demo

Once the instance is running (still as the demo user):
[jslagle@instack ~]$ cinder create 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-03-26T13:23:58.775178      |
| display_description |                 None                 |
|     display_name    |                 None                 |
|      encrypted      |                False                 |
|          id         | 852810ea-7582-406e-b63a-d4f4b119ea6b |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[jslagle@instack ~]$ nova volume-attach demo 852810ea-7582-406e-b63a-d4f4b119ea6b
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 852810ea-7582-406e-b63a-d4f4b119ea6b |
| serverId | f0c34f0b-5be1-45ae-b695-31b7a5a7cf93 |
| volumeId | 852810ea-7582-406e-b63a-d4f4b119ea6b |
+----------+--------------------------------------+



Actual results:
No indication that anything went wrong, but the traceback above appears in the cinder-volume log on the control node, and the volume is not actually attached to the instance.

Expected results:
Volume gets attached

Comment 1 Charles Crouch 2014-03-31 15:09:36 UTC
If I understand correctly, this is a regression from the icehouse-2 (I-2) packages.

Comment 5 Yogev Rabl 2014-04-06 11:00:09 UTC
Created attachment 883232 [details]
the cinder logs

Comment 6 Charles Crouch 2014-04-07 15:06:07 UTC
So does Eric have the ball again?

Comment 7 Eric Harney 2014-04-08 17:54:53 UTC
One problem here is that the following code results in the initial error not being logged:

        try:
            conn_info = self.driver.initialize_connection(volume, connector)
        except Exception as err:
            self.driver.remove_export(context, volume)
            err_msg = (_('Unable to fetch connection information from '
                         'backend: %(err)s') % {'err': err})
            LOG.error(err_msg)
            raise exception.VolumeBackendAPIException(data=err_msg)


Cinder should really call LOG.error(err_msg) before attempting to remove_export, which is what's failing here.  Unfortunately this means I don't know what actually caused the failure on your systems.
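
A minimal sketch of that reordering, purely illustrative rather than the actual upstream change:

        try:
            conn_info = self.driver.initialize_connection(volume, connector)
        except Exception as err:
            err_msg = (_('Unable to fetch connection information from '
                         'backend: %(err)s') % {'err': err})
            # Log the original failure before attempting cleanup, so a
            # second failure inside remove_export() cannot mask it.
            LOG.error(err_msg)
            self.driver.remove_export(context, volume)
            raise exception.VolumeBackendAPIException(data=err_msg)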

If I create a scratch build that should produce a more useful log, can one of you try it to reproduce this?

Comment 13 Eric Harney 2014-04-09 17:25:58 UTC
https://bugs.launchpad.net/cinder/+bug/1305197 is tracking the remove_export() failure shown in the Description but the reason for the original initialize_connection() failure is unclear.
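
For illustration only, one way the remove_export() path could avoid tripping the admin-only DB check is to elevate the request context before the lookup; this is a hedged sketch under that assumption, not necessarily what the linked Gerrit reviews actually do:

    def remove_export(self, context, volume):
        # Assumption: use an elevated (admin) context for the admin-only
        # volume_get_iscsi_target_num() call seen in the traceback above.
        admin_context = context.elevated()
        iscsi_target = self.db.volume_get_iscsi_target_num(admin_context,
                                                           volume['id'])
        # ... continue tearing down the export for iscsi_target ...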

Comment 14 James Slagle 2014-04-09 19:23:53 UTC
Strangely, I have tried to reproduce this today and am not having any luck. Though the original system has been reinstalled, I've still got the same set of openstack packages (haven't even applied your test build yet). So, I'm not sure what's different. Will keep an eye out and see if it ever reproduces again.

Comment 15 Attila Fazekas 2014-04-10 09:24:59 UTC
Simple script for reproducing the issue:
---------------------------------------
source  /root/keystonerc_admin

RES=resource-$RANDOM
IMAGE_NAME=cirros-0.3.2-x86_64-uec
FLAVOR=1

keystone tenant-create --name $RES
keystone user-create --name $RES --tenant $RES --pass verybadpass

CRED_ARGS="--os-username $RES --os-tenant-name $RES --os-password verybadpass "

nova $CRED_ARGS boot $RES --poll --image $IMAGE_NAME --flavor $FLAVOR
VOL_ID=`cinder $CRED_ARGS create 1 --display-name $RES | awk '/ id / {print $4}'`
while ! cinder $CRED_ARGS list | grep available; do
    echo "Waiting for volume"
    sleep 1
done

nova $CRED_ARGS volume-attach $RES $VOL_ID /dev/vdc
cinder $CRED_ARGS list
sleep 1
cinder $CRED_ARGS list
sleep 1
cinder $CRED_ARGS list
sleep 1
cinder $CRED_ARGS list

------------
output (RHEL7/ 'default' packstack config):
------------
# nova $CRED_ARGS volume-attach $RES $VOL_ID /dev/vdc
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| id       | 6ecf8032-8af8-4f64-b88d-434b7c02cd76 |
| serverId | c5bed64c-defc-4727-94ad-d24ac9f8b366 |
| volumeId | 6ecf8032-8af8-4f64-b88d-434b7c02cd76 |
+----------+--------------------------------------+
# cinder $CRED_ARGS list
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |  Display Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
| 6ecf8032-8af8-4f64-b88d-434b7c02cd76 | available | resource-27171 |  1   |     None    |  false   |             |
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+

The volume is not in the 'in-use' status.

You can see the stack traces in the upstream bug report.
https://bugs.launchpad.net/cinder/+bug/1300148

