Bug 1021185

Summary: openstack-nova: nova fails to attach volumes to instances
Product: Red Hat OpenStack
Reporter: Yogev Rabl <yrabl>
Component: openstack-nova
Assignee: Xavier Queralt <xqueralt>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Ami Jeain <ajeain>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.0
CC: dallan, hateya, ndipanov, yeylon, yrabl
Target Milestone: ---
Target Release: 4.0
Hardware: Unspecified
OS: Unspecified
Whiteboard: storage
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-15 15:02:58 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Attachments:
logs of the nova and cinder while trying to attach a volume to an instance (flags: none)

Description Yogev Rabl 2013-10-20 08:30:47 UTC
Description of problem:
Nova tries to attach volumes; the attach seems to go through but actually fails, without any notification. On the contrary, the action appears to have been a success and the CLI displays:

+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/hda                             |
| serverId | b7141fea-99b0-4342-8c89-b45270f3beb0 |
| id       | 99f84efc-8ead-49f8-bbb8-ac7a9ab58a84 |
| volumeId | 99f84efc-8ead-49f8-bbb8-ac7a9ab58a84 |
+----------+--------------------------------------+

but the cinder list shows: 

| 99f84efc-8ead-49f8-bbb8-ac7a9ab58a84 | available |     20GB     |  20  |     None    |  false   |             |
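
For reference, the attachment state can be cross-checked from both sides with something like the following (a rough sketch; the IDs are taken from the output above, and the exact field names can vary slightly between releases):

# Volume side: on a successful attach the status should move to "in-use"
# and the attachments field should reference the server.
cinder show 99f84efc-8ead-49f8-bbb8-ac7a9ab58a84 | grep -E 'status|attachments'

# Server side: the attached volume should be listed on the instance.
nova show b7141fea-99b0-4342-8c89-b45270f3beb0 | grep volumes_attached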


The RHOS configuration is all-in-one (AIO) with a local TGT iSCSI target.


Version-Release number of selected component (if applicable):
openstack-cinder-2013.2-0.11.rc1.el6ost.noarch
openstack-nova-compute-2013.2-0.25.rc1.el6ost.noarch
openstack-nova-conductor-2013.2-0.25.rc1.el6ost.noarch
openstack-nova-scheduler-2013.2-0.25.rc1.el6ost.noarch
openstack-nova-api-2013.2-0.25.rc1.el6ost.noarch

Red Hat Enterprise Linux Server release 6.5 Beta (Santiago)

How reproducible:
Was able to reproduce multiple times

Steps to Reproduce:
1. Create a volume (small / large, it doesn't matter) 
2. Create an instance 
3. Attach the volume to the running instance 
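
As a rough illustration, the steps above map to CLI calls along these lines (a sketch only; the flavor, image, and names are placeholders rather than the ones used in this report):

# 1. Create a volume (size in GB; small or large).
cinder create --display-name test-vol 20
# 2. Boot an instance.
nova boot --flavor m1.small --image <image-id> test-vm
# 3. Attach the volume to the running instance, letting nova pick the device.
nova volume-attach <server-id> <volume-id> auto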

Actual results:
The volumes are not attached.

Expected results:
The volumes are attached to the instance.

Additional info:

Comment 1 Yogev Rabl 2013-10-20 08:31:59 UTC
Created attachment 814163 [details]
logs of the nova and cinder while trying to attach a volume to an instance

Comment 4 Yogev Rabl 2013-10-20 08:58:10 UTC
After rebooting the machine, the volume-attach worked.

Comment 5 Xavier Queralt 2013-10-29 10:18:58 UTC
I haven't been able to reproduce this issue with the latest puddle (which contains the final version of nova instead of the RC1). Could you try again with it?

Besides this, I'm missing some information in the description:

 * How was the volume created? I see that nova selects /dev/hda as the device name, which should only be valid for the ide bus (e.g. a cdrom) when using the kvm or qemu hypervisors. If you haven't created the volume from a cdrom-type image, an empty volume should use the virtio bus and the device name should look like /dev/vd? (a way to check this is sketched after this list).
 * I see an error in the compute logs claiming that /dev/hda is already in use. Did you specify this device name when calling attach-volume, or did you specify auto? Could you provide the full list of commands you ran?
 * Is cinder using swift as its storage backend? I see a partial traceback at the beginning of the attached logs, but it looks old.
 * I'd suggest attaching the logs for each service in a separate file instead of attaching the output of tail on all the logs; otherwise it can get a bit confusing.
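
For the device-name question above, a minimal sketch of how one might check which bus and device name the guest actually received (assuming libvirt/KVM and shell access to the compute node; the instance name below is a placeholder):

# Dump the libvirt domain XML and inspect the disk definitions; an empty
# volume attached over virtio should show a target like dev='vdb' bus='virtio'
# rather than an ide device such as hda.
virsh dumpxml <instance-name> | grep -A 4 '<disk'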

Comment 6 Dave Allan 2013-11-15 15:02:58 UTC
With the information provided, it's not possible to do anything further with this BZ, so I'm closing as INSUFFICIENT DATA.  If the information becomes available, please don't hesitate to reopen and provide it.

Comment 7 Yogev Rabl 2014-06-10 07:22:35 UTC
The problem was never reproduced again.