Bug 1092079 - nova vmware driver cannot attach >1 cinder volumes
Summary: nova vmware driver cannot attach >1 cinder volumes
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 6.0 (Juno)
Assignee: Matthew Booth
QA Contact: Jaroslav Henner
URL:
Whiteboard:
Depends On:
Blocks: 1055536
 
Reported: 2014-04-28 17:36 UTC by Jaroslav Henner
Modified: 2019-09-09 13:51 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-04-27 14:51:39 UTC
Target Upstream Version:
Embargoed:


Attachments
compute.log (15.06 KB, text/plain)
2014-04-28 17:36 UTC, Jaroslav Henner

Description Jaroslav Henner 2014-04-28 17:36:37 UTC
Created attachment 890538 [details]
compute.log

Description of problem:
As in the summary: the nova VMware driver cannot attach more than one cinder volume to an instance.

Version-Release number of selected component (if applicable):
openstack-nova-compute-2013.2.3-5.el6ost.noarch

How reproducible:
always

Steps to Reproduce:
# nova boot --image cirros-0.3.1-x86_64-disk.vmdk --flavor m1.tiny bar
| id                                   | c5822237-b054-4e26-81a9-629817f9f2e4 |
...
# cinder create --display-name first 1
...
|          id         | 5f1c4596-e3f0-4829-b82a-aa3fba621b40 |
...
# cinder create --display-name second 1
...
|          id         | cfc0327a-3b57-45ab-adea-71aa96b3c226 |
...
# watch cinder list    # wait until both volumes show status "available"
# nova stop bar

Then try to attach:
# nova volume-attach bar 5f1c4596-e3f0-4829-b82a-aa3fba621b40 auto
# nova volume-attach bar cfc0327a-3b57-45ab-adea-71aa96b3c226 auto
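
To see which attach calls actually took effect (for reference; both subcommands should be available in the Havana-era clients):
# nova volume-attachments bar
# cinder list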

Actual results:
The first one attaches fine; the second one fails no matter which device I specify:
 * auto
 * /dev/sdb
 * /dev/sdc

Expected results:
both attached

Additional info:
The same error is produced when I try to attach the volume as /dev/sdc instead of /dev/sdb:
# nova volume-attach bar cfc0327a-3b57-45ab-adea-71aa96b3c226 /dev/sdc

Comment 2 Stephen Gordon 2014-05-22 19:57:11 UTC
Are you able to reproduce on Icehouse and is there an upstream bug for this?

Comment 3 Matthew Booth 2014-05-23 09:59:14 UTC
This appears to work in Icehouse. Can you provide any error messages it gives when it fails? Or are you just saying that when you boot the instance the volume is not present?

Comment 4 Matthew Booth 2014-05-23 12:32:02 UTC
I can't replicate this on RHOS 4.0 with default cinder volumes, but it looks like you were using iSCSI volumes. The error you're seeing is thrown by vSphere in response to an invalid API call. I will try to reproduce with iSCSI volumes; a setup sketch follows.
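
Switching the cinder backend to iSCSI for the reproduction is roughly this (a sketch, assuming the openstack-utils tools are installed and the Havana-era LVM/iSCSI driver class name):
# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver
# service openstack-cinder-volume restart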

Comment 5 Matthew Booth 2014-05-23 16:11:26 UTC
So, this is a bit broken upstream. Specifically, the driver doesn't use any authentication information provided by cinder, and consequently ESX can't actually connect to an iSCSI volume which requires authentication. Also, you need to ensure you manually add a software iSCSI HBA to each host (see the example below). I'll probably have a short series of patches for this upstream.
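
Enabling the software iSCSI HBA on each host looks roughly like this (a sketch; exact esxcli syntax varies by ESXi version):
# esxcli iscsi software set --enabled=true
# esxcli iscsi adapter list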

I can't be 100% sure that this is the error you're hitting, but it's a good start. For the moment I'd say cinder drivers other than vsphere are unsupported.

Comment 6 Stephen Gordon 2014-05-23 16:57:31 UTC
(In reply to Matthew Booth from comment #5)
> I can't be 100% sure that this is the error you're hitting, but it's a good
> start. For the moment I'd say cinder drivers other than vsphere are
> unsupported.


By that do you mean the VMDK driver? I thought that was currently explicitly stated as the only one we are supporting with vCenter.

Comment 7 Matthew Booth 2014-05-27 08:55:49 UTC
Haven't checked the support statement, but yes: cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
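
For reference, selecting that driver in cinder.conf looks roughly like this (a sketch; the vmware_* option names are the upstream driver options, and the values are placeholders):
# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT vmware_host_ip <vcenter-host>
# openstack-config --set /etc/cinder/cinder.conf DEFAULT vmware_host_username <vcenter-user>
# openstack-config --set /etc/cinder/cinder.conf DEFAULT vmware_host_password <vcenter-password>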

Comment 8 Stephen Gordon 2014-05-27 16:22:30 UTC
Yeah, I think we're pretty clear on the support stance for RHELOSP 5.

