Bug 1081022

Summary: Non-admin user can not attach cinder volume to their instance (LIO)
Product: [Community] RDO Reporter: James Slagle <jslagle>
Component: openstack-cinder    Assignee: Eric Harney <eharney>
Status: CLOSED CURRENTRELEASE QA Contact: Dafna Ron <dron>
Severity: urgent Docs Contact:
Priority: unspecified    
Version: unspecified    CC: afazekas, apevec, eharney, jslagle, yeylon, yrabl
Target Milestone: RC    Keywords: TestBlocker
Target Release: Icehouse   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: openstack-cinder-2014.1-0.9.rc3.fc21 Doc Type: Bug Fix
Last Closed: 2016-03-30 23:09:37 UTC Type: Bug
Attachments: the cinder logs (flags: none)

Description James Slagle 2014-03-26 13:45:13 UTC
Description of problem:
As a non-admin user, you cannot attach a cinder volume to one of your instances. Resulting cinder traceback:

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cinder/openstack/common/rpc/amqp.py", line 462, in _process_data
    **args)
  File "/usr/lib/python2.7/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 172, in dispatch
    result = getattr(proxyobj, method)(ctxt, **kwargs)
  File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 801, in initialize_connection
    self.driver.remove_export(context, volume)
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 540, in remove_export
    self.target_helper.remove_export(context, volume)
  File "/usr/lib/python2.7/site-packages/cinder/volume/iscsi.py", line 232, in remove_export
    volume['id'])
  File "/usr/lib/python2.7/site-packages/cinder/db/api.py", line 232, in volume_get_iscsi_target_num
    return IMPL.volume_get_iscsi_target_num(context, volume_id)
  File "/usr/lib/python2.7/site-packages/cinder/db/sqlalchemy/api.py", line 116, in wrapper
    raise exception.AdminRequired()  
AdminRequired: User does not have admin privileges
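For reference, the AdminRequired error at the bottom of the traceback comes from the decorator guarding volume_get_iscsi_target_num() in cinder/db/sqlalchemy/api.py. Roughly, as a simplified sketch (not the exact upstream code; names such as exception are cinder's own):

def require_admin_context(f):
    # Simplified sketch: reject any DB call whose request context
    # does not carry admin rights.
    def wrapper(context, *args, **kwargs):
        if not context.is_admin:
            raise exception.AdminRequired()
        return f(context, *args, **kwargs)
    return wrapper

@require_admin_context
def volume_get_iscsi_target_num(context, volume_id):
    ...

So, per the traceback, the remove_export() path ends up making this admin-only DB call with the non-admin user's request context instead of an elevated one.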


Version-Release number of selected component (if applicable):
# rpm -qa | grep openstack
openstack-neutron-ml2-2014.1-0.10.b3.fc21.noarch
openstack-swift-object-1.12.0-1.fc21.noarch
openstack-nova-conductor-2014.1-0.13.b3.fc20.noarch
openstack-nova-cert-2014.1-0.13.b3.fc20.noarch
openstack-dashboard-2014.1-0.5.b3.fc21.noarch
openstack-keystone-2014.1-0.5.b3.fc21.noarch
openstack-heat-engine-2014.1-0.5.b3.fc21.noarch
openstack-cinder-2014.1-0.6.b3.fc21.noarch
openstack-swift-1.12.0-1.fc21.noarch
openstack-neutron-2014.1-0.10.b3.fc21.noarch
openstack-swift-container-1.12.0-1.fc21.noarch
openstack-nova-common-2014.1-0.13.b3.fc20.noarch
openstack-nova-api-2014.1-0.13.b3.fc20.noarch
openstack-heat-api-2014.1-0.5.b3.fc21.noarch
openstack-neutron-openvswitch-2014.1-0.10.b3.fc21.noarch
openstack-swift-proxy-1.12.0-1.fc21.noarch
openstack-nova-scheduler-2014.1-0.13.b3.fc20.noarch
python-django-openstack-auth-1.1.4-1.fc20.noarch
openstack-heat-api-cloudwatch-2014.1-0.5.b3.fc21.noarch
openstack-utils-2013.2-3.fc21.noarch
openstack-glance-2014.1-0.4.b3.fc21.noarch
openstack-swift-plugin-swift3-1.7-3.fc20.noarch
openstack-swift-account-1.12.0-1.fc21.noarch
openstack-nova-console-2014.1-0.13.b3.fc20.noarch
openstack-heat-common-2014.1-0.5.b3.fc21.noarch
openstack-heat-api-cfn-2014.1-0.5.b3.fc21.noarch


How reproducible:
Reproduces every time. I did not see this issue before the icehouse-3 packages were released, but I had only tested it once before that.

Steps to Reproduce:
First, as the admin user, show the "demo" user's details:
[jslagle@instack ~]$ keystone user-get demo
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |         demo         |
| enabled  |               True               |
|    id    | 96125637c9b543a68f9ab1fc70dacfc2 |
|   name   |               demo               |
| username |               demo               |
+----------+----------------------------------+
[jslagle@instack ~]$ keystone user-role-list --user demo


(No roles are listed in the above output.)

Then I switch to the demo user and boot an instance (the instance is also named demo):
nova boot --key-name default --flavor m1.tiny --image user demo

Once the instance is running (still as the demo user):
[jslagle@instack ~]$ cinder create 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2014-03-26T13:23:58.775178      |
| display_description |                 None                 |
|     display_name    |                 None                 |
|      encrypted      |                False                 |
|          id         | 852810ea-7582-406e-b63a-d4f4b119ea6b |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[jslagle@instack ~]$ nova volume-attach demo 852810ea-7582-406e-b63a-d4f4b119ea6b
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 852810ea-7582-406e-b63a-d4f4b119ea6b |
| serverId | f0c34f0b-5be1-45ae-b695-31b7a5a7cf93 |
| volumeId | 852810ea-7582-406e-b63a-d4f4b119ea6b |
+----------+--------------------------------------+



Actual results:
No indication that anything went wrong, but the traceback above appears in the cinder-volume log on the control node, and the volume is not actually attached to the instance.

Expected results:
The volume gets attached to the instance.

Comment 1 Charles Crouch 2014-03-31 15:09:36 UTC
IIUC this is a regression from the icehouse-2 (I-2) packages.

Comment 5 Yogev Rabl 2014-04-06 11:00:09 UTC
Created attachment 883232 [details]
the cinder logs

Comment 6 Charles Crouch 2014-04-07 15:06:07 UTC
So does Eric have the ball again?

Comment 7 Eric Harney 2014-04-08 17:54:53 UTC
One problem here is that the following code results in the initial error not being logged:

        try:
            conn_info = self.driver.initialize_connection(volume, connector)
        except Exception as err:
            self.driver.remove_export(context, volume)
            err_msg = (_('Unable to fetch connection information from '
                         'backend: %(err)s') % {'err': err})
            LOG.error(err_msg)
            raise exception.VolumeBackendAPIException(data=err_msg)


Cinder should really call LOG.error(err_msg) before attempting remove_export(), which is the call that is failing here.  Unfortunately, this means I don't know what actually caused the failure on your systems.
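Roughly what I mean, as a sketch only (not a tested patch), with an extra guard so a remove_export() failure cannot mask the original error:

        try:
            conn_info = self.driver.initialize_connection(volume, connector)
        except Exception as err:
            err_msg = (_('Unable to fetch connection information from '
                         'backend: %(err)s') % {'err': err})
            # Log the original initialize_connection() failure first, so it
            # is not lost if the cleanup below raises (as it does in this bug).
            LOG.error(err_msg)
            try:
                self.driver.remove_export(context, volume)
            except Exception:
                LOG.exception(_('Error removing export after failed '
                                'initialize_connection.'))
            raise exception.VolumeBackendAPIException(data=err_msg)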

If I create a scratch build, can one of you try it to reproduce this? It should produce a more useful log.

Comment 13 Eric Harney 2014-04-09 17:25:58 UTC
https://bugs.launchpad.net/cinder/+bug/1305197 is tracking the remove_export() failure shown in the Description but the reason for the original initialize_connection() failure is unclear.

Comment 14 James Slagle 2014-04-09 19:23:53 UTC
Strangely, I have tried to reproduce this today and am not having any luck. Though the original system has been reinstalled, I've still got the same set of openstack packages (and haven't even applied your test build yet), so I'm not sure what's different. I'll keep an eye out and see if it ever reproduces again.

Comment 15 Attila Fazekas 2014-04-10 09:24:59 UTC
Simple script for reproducing the issue:
---------------------------------------
source  /root/keystonerc_admin

RES=resource-$RANDOM
IMAGE_NAME=cirros-0.3.2-x86_64-uec
FLAVOR=1

keystone tenant-create --name $RES
keystone user-create --name $RES --tenant $RES --pass verybadpass

CRED_ARGS="--os-username $RES --os-tenant-name $RES --os-password verybadpass "

nova $CRED_ARGS boot $RES --poll --image $IMAGE_NAME --flavor $FLAVOR
VOL_ID=`cinder $CRED_ARGS create 1 --display-name $RES | awk '/ id / {print $4}'`
# Poll until the volume reports 'available'
while ! cinder $CRED_ARGS list | grep available; do
    echo "Waiting for volume"
    sleep 1
done

nova $CRED_ARGS volume-attach $RES $VOL_ID /dev/vdc
cinder $CRED_ARGS list
sleep 1
cinder $CRED_ARGS list
sleep 1
cinder $CRED_ARGS list
sleep 1
cinder $CRED_ARGS list

------------
Output (RHEL 7, 'default' packstack config):
------------
# nova $CRED_ARGS volume-attach $RES $VOL_ID /dev/vdc
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| id       | 6ecf8032-8af8-4f64-b88d-434b7c02cd76 |
| serverId | c5bed64c-defc-4727-94ad-d24ac9f8b366 |
| volumeId | 6ecf8032-8af8-4f64-b88d-434b7c02cd76 |
+----------+--------------------------------------+
# cinder $CRED_ARGS list
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |  Display Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+
| 6ecf8032-8af8-4f64-b88d-434b7c02cd76 | available | resource-27171 |  1   |     None    |  false   |             |
+--------------------------------------+-----------+----------------+------+-------------+----------+-------------+

The volume is not in the 'in-use' status.

You can see the stack traces in the upstream bug report.
https://bugs.launchpad.net/cinder/+bug/1300148