Bug 1021185 - openstack-nova: nova fails to attach volumes to instances
Status: CLOSED INSUFFICIENT_DATA
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.0
Assigned To: Xavier Queralt
QA Contact: Ami Jeain
Whiteboard: storage
Depends On:
Blocks:
Reported: 2013-10-20 04:30 EDT by Yogev Rabl
Modified: 2014-06-10 03:22 EDT (History)
CC List: 6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-15 10:02:58 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
logs of the nova and cinder while trying to attach a volume to an instance (1.03 MB, text/x-log)
2013-10-20 04:31 EDT, Yogev Rabl

Description Yogev Rabl 2013-10-20 04:30:47 EDT
Description of problem:
Nova tries to attach volumes and appears to do so, but the attachment fails without any notification. On the contrary, the CLI reports the action as a success and displays:

+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/hda                             |
| serverId | b7141fea-99b0-4342-8c89-b45270f3beb0 |
| id       | 99f84efc-8ead-49f8-bbb8-ac7a9ab58a84 |
| volumeId | 99f84efc-8ead-49f8-bbb8-ac7a9ab58a84 |
+----------+--------------------------------------+

but the cinder list shows: 

| 99f84efc-8ead-49f8-bbb8-ac7a9ab58a84 | available |     20GB     |  20  |     None    |  false   |             |
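
(For cross-checking, the attachment state can be read from both services. The commands below are illustrative and reuse the server and volume IDs from the output above.)

# What nova reports as attached to the instance
nova volume-attachments b7141fea-99b0-4342-8c89-b45270f3beb0

# What cinder reports for the volume; after a successful attach the
# status should be "in-use" rather than "available"
cinder show 99f84efc-8ead-49f8-bbb8-ac7a9ab58a84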


The RHOS configuration is all-in-one (AIO) with a local TGT iSCSI target.
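
(For context, an all-in-one setup with local TGT usually means cinder's LVM/iSCSI driver exporting volumes through tgtd. A minimal sketch of the relevant Havana-era options in /etc/cinder/cinder.conf, with illustrative values:)

[DEFAULT]
# LVM driver exporting volumes over iSCSI via the tgtd daemon
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
iscsi_helper = tgtadm
volume_group = cinder-volumes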


Version-Release number of selected component (if applicable):
openstack-cinder-2013.2-0.11.rc1.el6ost.noarch
openstack-nova-compute-2013.2-0.25.rc1.el6ost.noarch
openstack-nova-conductor-2013.2-0.25.rc1.el6ost.noarch
openstack-nova-scheduler-2013.2-0.25.rc1.el6ost.noarch
openstack-nova-api-2013.2-0.25.rc1.el6ost.noarch

Red Hat Enterprise Linux Server release 6.5 Beta (Santiago)

How reproducible:
Was able to reproduce multiple times

Steps to Reproduce:
1. Create a volume (small / large, it doesn't matter) 
2. Create an instance 
3. Attach the volume to the running instance 
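
(A reproduction with the Havana-era CLI clients might look like the following; the flavor and image names are placeholders.)

# 1. Create a 20 GB volume
cinder create --display-name test-vol 20

# 2. Boot an instance
nova boot --flavor m1.small --image <image-name> test-instance

# 3. Attach the volume to the running instance, letting nova pick the device name
nova volume-attach test-instance <volume-id> auto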

Actual results:
The volumes are not attached.

Expected results:
The volumes are attached to the instance.

Additional info:
Comment 1 Yogev Rabl 2013-10-20 04:31:59 EDT
Created attachment 814163 [details]
logs of the nova and cinder while trying to attach a volume to an instance
Comment 4 Yogev Rabl 2013-10-20 04:58:10 EDT
After rebooting the machine, the volume-attach worked.
Comment 5 Xavier Queralt 2013-10-29 06:18:58 EDT
I haven't been able to reproduce this issue with the latest puddle (which contains the final version of nova instead of the RC1). Could you try again with it?

Besides this, I'm missing some information in the description:

 * How was the volume created? I see that nova selects /dev/hda as the device name, which is only valid for the IDE bus (e.g. a cdrom) when using the kvm or qemu hypervisors. Unless you created the volume from a cdrom-type image, an empty volume should use the virtio bus and the device name should look like /dev/vd?.
 * I see an error in the compute logs claiming that /dev/hda is already in use. Did you specify this device name when calling attach-volume, or did you specify auto? Could you provide the full list of commands you ran? (See the example after this list.)
 * Is cinder using swift as its storage backend? I see a partial traceback at the beginning of the attached logs, but it looks old.
 * I'd suggest attaching the logs for each service in a separate file instead of attaching the output of tail on all the logs; otherwise it can get a bit confusing.
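
As an illustration of the attach-volume question above, the two invocations differ like this (server and volume IDs taken from the description; /dev/vdb is just an example name):

# Let nova pick the next free device name
nova volume-attach b7141fea-99b0-4342-8c89-b45270f3beb0 99f84efc-8ead-49f8-bbb8-ac7a9ab58a84 auto

# Force an explicit device name; /dev/hda implies the IDE bus and can collide
# with an existing disk, while virtio volumes normally show up as /dev/vdX
nova volume-attach b7141fea-99b0-4342-8c89-b45270f3beb0 99f84efc-8ead-49f8-bbb8-ac7a9ab58a84 /dev/vdb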
Comment 6 Dave Allan 2013-11-15 10:02:58 EST
With the information provided, it's not possible to do anything further with this BZ, so I'm closing as INSUFFICIENT DATA.  If the information becomes available, please don't hesitate to reopen and provide it.
Comment 7 Yogev Rabl 2014-06-10 03:22:35 EDT
The problem was never reproduced.
