Bug 1451284 - [cinder] Failed to run VM with attached cinder disk
Summary: [cinder] Failed to run VM with attached cinder disk
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.1.2.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ovirt-4.2.0
Target Release: ---
Assignee: Daniel Erez
QA Contact: Raz Tamir
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-05-16 10:00 UTC by Eyal Shenitzky
Modified: 2017-06-18 07:00 UTC
CC List: 2 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-06-18 07:00:19 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.2+



Description Eyal Shenitzky 2017-05-16 10:00:57 UTC
Description of problem:

Running a VM with an attached Cinder disk fails.
An exception is thrown in the engine (see 'Additional info').

The following error appears in volume.log on the Cinder side:

2017-05-16 11:28:02.841 32452 ERROR cinder.volume.targets.iscsi [req-5a9ef451-38e0-493c-ba56-ea38252a5225 2f99b04274524140b06a6b23bbbb1429 85c0085b49b44e5dbe4b78c2b3a8eaf8 - - -] The volume driver requires the iSCSI initiator name in the connector.
2017-05-16 11:28:02.841 32452 ERROR oslo_messaging.rpc.dispatcher [req-5a9ef451-38e0-493c-ba56-ea38252a5225 2f99b04274524140b06a6b23bbbb1429 85c0085b49b44e5dbe4b78c2b3a8eaf8 - - -] Exception during message handling: Invalid input received: Connector doesn't have required information: initiator
2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in wrapper
2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher     return f(*args, **kwargs)
2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1081, in initialize_connection
2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher     raise exception.InvalidInput(reason=err)
2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher InvalidInput: Invalid input received: Connector doesn't have required information: initiator
2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher 
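
For context, Cinder validates the connector dict passed to initialize_connection, and (per the trace above) the iSCSI target driver raises InvalidInput when the 'initiator' key is absent. A minimal sketch of a connector that would pass this check; the IP, hostname, and IQN below are hypothetical placeholders, not taken from this environment:

    # Sketch of the connector dict an iSCSI-backed volume expects.
    # All values are hypothetical placeholders.
    connector = {
        'ip': '10.35.0.1',        # hypothetical host IP
        'host': 'hypervisor01',   # hypothetical hostname
        'initiator': 'iqn.1994-05.com.redhat:hypervisor01',  # iSCSI initiator name (IQN)
    }

Note that the error comes from cinder.volume.targets.iscsi, i.e. the request was routed to an iSCSI target driver; the RBD (Ceph) driver does not require an initiator in the connector.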


Version-Release number of selected component (if applicable):
VDSM - 4.19.14-1.el7ev.x86_64
Engine - 4.1.2.2-0.1.el7

How reproducible:
100%

Steps to Reproduce:
1. Add a Cinder external provider with Ceph as the backend (see the SDK sketch below)
2. Create a VM with a Cinder-based disk
3. Run the VM
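
For reference, a sketch of step 1 using the Python oVirt SDK (ovirt-engine-sdk4); all URLs, names, and credentials are hypothetical placeholders, and the exact fields should be checked against the SDK's add_openstack_volume_provider example:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Connect to the engine API (hypothetical URL and credentials).
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    # Add a Cinder external provider (hypothetical endpoint and tenant).
    providers_service = connection.system_service().openstack_volume_providers_service()
    providers_service.add(
        types.OpenStackVolumeProvider(
            name='cinder',
            url='http://cinder.example.com:8776',
            requires_authentication=True,
            username='admin',
            password='secret',
            tenant_name='admin',
            authentication_url='http://keystone.example.com:5000/v2.0',
        )
    )
    connection.close()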

Actual results:
The operation fails and an exception is thrown.

Expected results:
The VM should run.


Additional info:

* The exception is different from the one in bug 1448793: https://bugzilla.redhat.com/show_bug.cgi?id=1448793

Exception in the engine:

2017-05-16 10:47:42,584+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-14) [6f3296bf-610c-4ba2-b76a-c4c61e925572] EVENT_ID: CINDER_DISK_CONNECTION_FAILURE(10,751), Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Failed to retrieve connection information for Cinder Disk 'vm_TestCase18894_REST_CIND_1418265953_Disk1'.
2017-05-16 10:47:42,584+03 INFO  [org.ovirt.engine.core.bll.RunVmCommandBase] (org.ovirt.thread.pool-7-thread-14) [6f3296bf-610c-4ba2-b76a-c4c61e925572] Update cinder disk connection failure: com.woorea.openstack.base.client.OpenStackResponseException: Internal Server Error
	at com.woorea.openstack.connector.RESTEasyConnector.request(RESTEasyConnector.java:125) [resteasy-connector.jar:]
	at com.woorea.openstack.base.client.OpenStackClient.request(OpenStackClient.java:51) [openstack-client.jar:]
	at com.woorea.openstack.base.client.OpenStackClient.execute(OpenStackClient.java:66) [openstack-client.jar:]
	at com.woorea.openstack.base.client.OpenStackRequest.execute(OpenStackRequest.java:98) [openstack-client.jar:]
	at org.ovirt.engine.core.bll.provider.storage.OpenStackVolumeProviderProxy.initializeConnectionForVolume(OpenStackVolumeProviderProxy.java:100) [bll.jar:]
	at org.ovirt.engine.core.bll.storage.disk.cinder.CinderBroker.lambda$initializeConnectionForDisk$8(CinderBroker.java:188) [bll.jar:]
	at org.ovirt.engine.core.bll.storage.disk.cinder.CinderBroker.execute(CinderBroker.java:49) [bll.jar:]
	at org.ovirt.engine.core.bll.storage.disk.cinder.CinderBroker.initializeConnectionForDisk(CinderBroker.java:186) [bll.jar:]
	at org.ovirt.engine.core.bll.storage.disk.cinder.CinderBroker.updateConnectionInfoForDisk(CinderBroker.java:194) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommandBase.updateCinderDisksConnections(RunVmCommandBase.java:297) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommand.runVm(RunVmCommand.java:232) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommand.perform(RunVmCommand.java:402) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommand.executeVmCommand(RunVmCommand.java:327) [bll.jar:]
	at org.ovirt.engine.core.bll.VmCommand.executeCommand(VmCommand.java:120) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1251) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1391) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2055) [bll.jar:]
	at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) [utils.jar:]
	at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103) [utils.jar:]
	at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1451) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:397) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommandBase.reexecuteCommand(RunVmCommandBase.java:164) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommandBase.rerun(RunVmCommandBase.java:139) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommand.rerun(RunVmCommand.java:334) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommand.runVm(RunVmCommand.java:275) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommand.perform(RunVmCommand.java:402) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommand.executeVmCommand(RunVmCommand.java:327) [bll.jar:]
	at org.ovirt.engine.core.bll.VmCommand.executeCommand(VmCommand.java:120) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1251) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1391) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2055) [bll.jar:]
	at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) [utils.jar:]
	at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103) [utils.jar:]
	at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1451) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:397) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommandBase.reexecuteCommand(RunVmCommandBase.java:164) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommandBase.rerun(RunVmCommandBase.java:139) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommand.rerun(RunVmCommand.java:334) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommand.runVm(RunVmCommand.java:275) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommand.perform(RunVmCommand.java:402) [bll.jar:]
	at org.ovirt.engine.core.bll.RunVmCommand.executeVmCommand(RunVmCommand.java:327) [bll.jar:]
	at org.ovirt.engine.core.bll.VmCommand.executeCommand(VmCommand.java:120) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1251) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1391) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2055) [bll.jar:]
	at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) [utils.jar:]
	at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103) [utils.jar:]
	at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1451) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:397) [bll.jar:]
	at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.executeValidatedCommand(PrevalidatingMultipleActionsRunner.java:204) [bll.jar:]
	at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.runCommands(PrevalidatingMultipleActionsRunner.java:176) [bll.jar:]
	at org.ovirt.engine.core.bll.SortedMultipleActionsRunnerBase.runCommands(SortedMultipleActionsRunnerBase.java:20) [bll.jar:]
	at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.lambda$invokeCommands$3(PrevalidatingMultipleActionsRunner.java:182) [bll.jar:]
	at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:84) [utils.jar:]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_131]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_131]
	at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_131]

Comment 1 Allon Mureinik 2017-05-16 10:51:43 UTC
(In reply to Eyal Shenitzky from comment #0)
> Steps to Reproduce:
> 1.Add cinder external provider with ceph as backends
Are you SURE this is a Ceph backend?

There's a bunch of errors about iSCSI there:

> 2017-05-16 11:28:02.841 32452 ERROR cinder.volume.targets.iscsi
> [req-5a9ef451-38e0-493c-ba56-ea38252a5225 2f99b04274524140b06a6b23bbbb1429
> 85c0085b49b44e5dbe4b78c2b3a8eaf8 
> - - -] The volume driver requires the iSCSI initiator name in the connector.
> 2017-05-16 11:28:02.841 32452 ERROR oslo_messaging.rpc.dispatcher
> [req-5a9ef451-38e0-493c-ba56-ea38252a5225 2f99b04274524140b06a6b23bbbb1429
> 85c0085b49b44e5dbe4b78c2b3a8eaf
> 8 - - -] Exception during message handling: Invalid input received:
> Connector doesn't have required information: initiator
> 2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher Traceback
> (most recent call last):
> 2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher   File
> "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line
> 142, in _dispatch_and_reply

Comment 2 Eyal Shenitzky 2017-05-24 08:52:59 UTC
(In reply to Allon Mureinik from comment #1)
> (In reply to Eyal Shenitzky from comment #0)
> > Steps to Reproduce:
> > 1.Add cinder external provider with ceph as backends
> Are you SURE this is a Ceph backend?
> 
> There's a bunch of errors about iSCSI there:
> 
> > 2017-05-16 11:28:02.841 32452 ERROR cinder.volume.targets.iscsi
> > [req-5a9ef451-38e0-493c-ba56-ea38252a5225 2f99b04274524140b06a6b23bbbb1429
> > 85c0085b49b44e5dbe4b78c2b3a8eaf8 
> > - - -] The volume driver requires the iSCSI initiator name in the connector.
> > 2017-05-16 11:28:02.841 32452 ERROR oslo_messaging.rpc.dispatcher
> > [req-5a9ef451-38e0-493c-ba56-ea38252a5225 2f99b04274524140b06a6b23bbbb1429
> > 85c0085b49b44e5dbe4b78c2b3a8eaf
> > 8 - - -] Exception during message handling: Invalid input received:
> > Connector doesn't have required information: initiator
> > 2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher Traceback
> > (most recent call last):
> > 2017-05-16 11:28:02.841 32452 TRACE oslo_messaging.rpc.dispatcher   File
> > "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line
> > 142, in _dispatch_and_reply

Yes, we use only ceph in enabled_backends.
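
For reference, a minimal cinder.conf sketch of a Ceph-only backend; the section, pool, and user names are assumptions, not taken from this environment:

    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes    # assumed pool name
    rbd_user = cinder     # assumed Ceph client user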

Comment 3 Eyal Shenitzky 2017-05-24 11:09:12 UTC
Changing severity; this prevents us from running automation on Cinder.

Comment 4 Daniel Erez 2017-05-24 14:04:14 UTC
It seems that extra_specs are missing for the ceph volume type:

cinder extra-specs-list 
+--------------------------------------+-------+----------------------------------+
|                  ID                  |  Name |           extra_specs            |
+--------------------------------------+-------+----------------------------------+
| 182f8d3b-0558-46d8-918a-346e4ca53fb6 |  ceph |                {}                |
| 6fb917b4-0f6b-4a8b-9dff-154f542111e7 | iscsi | {u'volume_backend_name': u'lvm'} |
+--------------------------------------+-------+----------------------------------+

Please set it to 'volume_backend_name':'ceph' and try again.
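
For reference, the extra spec can be set with the cinder client (the type name 'ceph' is taken from the list above):

    cinder type-key ceph set volume_backend_name=ceph
    cinder extra-specs-list    # verify the spec now appears for the ceph type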

Comment 5 Eyal Shenitzky 2017-06-18 05:24:59 UTC
I tried to create a cinder disk after the extra_specs were added, and it failed.

I am looking into the storage to see if the problem is there.

I will update when there are findings.

Comment 6 Daniel Erez 2017-06-18 07:00:19 UTC
(In reply to Eyal Shenitzky from comment #5)
> I tried to create a cinder disk after the extra_specs were added, and it
> failed.
> 
> I am looking into the storage to see if the problem is there.
> 
> I will update when there are findings.

So it does seem like a configuration issue (the normal flow works for me on the latest build). Closing for now; please re-open when relevant.

