Bug 1714889 - [OSP15][Regression] Unable to attach encrypted volume to an instance
Summary: [OSP15][Regression] Unable to attach encrypted volume to an instance
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-os-brick
Version: 15.0 (Stein)
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 15.0 (Stein)
Assignee: Lee Yarwood
QA Contact: Tzach Shefi
URL:
Whiteboard:
Depends On:
Blocks: 1624490 1624491
 
Reported: 2019-05-29 06:14 UTC by bkopilov
Modified: 2019-09-26 10:51 UTC (History)
16 users

Fixed In Version: python-os-brick-2.8.2-0.20190719153749.4c2b253.el8ost
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 1718253 1718255
Environment:
Last Closed: 2019-09-21 11:22:34 UTC
Target Upstream Version:


Attachments (Terms of Use)
compute configuration (113.93 KB, application/gzip)
2019-05-29 06:16 UTC, bkopilov
no flags Details
compute logs (180.93 KB, application/gzip)
2019-05-29 06:16 UTC, bkopilov
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1831994 0 None None None 2019-06-07 11:56:30 UTC
OpenStack gerrit 663914 0 'None' 'MERGED' 'luks: Default to LUKS v1 when formatting volumes' 2019-11-29 08:21:17 UTC
OpenStack gerrit 663999 0 'None' 'MERGED' 'luks: Default to LUKS v1 when formatting volumes' 2019-11-29 08:21:17 UTC
OpenStack gerrit 668448 0 'None' 'MERGED' 'luks: Explicitly use the luks1 type to ensure LUKS v1 is used' 2019-11-29 08:21:17 UTC
Red Hat Product Errata RHEA-2019:2811 0 None None None 2019-09-21 11:22:55 UTC

Description bkopilov 2019-05-29 06:14:34 UTC
Description of problem:
RHOS 15, Cinder backend: LVM.

Scenario:
- Create a signed image (Barbican).
- Create an empty encrypted volume - passes.
- Extend the encrypted volume - passes.
- Boot an instance from the image - works.
- Try to attach the encrypted volume to the instance - fails.
- Try to attach an unencrypted volume - works.

From the compute logs, here is the traceback:

handling: libvirt.libvirtError: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk1'
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 166, in _process_incoming
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 265, in dispatch
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 194, in _do_dispatch
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/exception_wrapper.py", line 79, in wrapped
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     function_name, call_dict, binary, tb)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     self.force_reraise()
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     raise value
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/exception_wrapper.py", line 69, in wrapped
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/compute/utils.py", line 1323, in decorated_function
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 214, in decorated_function
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     kwargs['instance'], e, sys.exc_info())
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     self.force_reraise()
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     raise value
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 202, in decorated_function
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 5613, in attach_volume
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     do_attach_volume(context, instance, driver_bdm)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py", line 328, in inner
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 5611, in do_attach_volume
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     bdm.destroy()
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     self.force_reraise()
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     raise value
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 5608, in do_attach_volume
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     return self._attach_volume(context, instance, driver_bdm)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 5655, in _attach_volume
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     volume_id=bdm.volume_id, tb=tb)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     self.force_reraise()
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     raise value
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 5628, in _attach_volume
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     do_driver_attach=True)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/virt/block_device.py", line 46, in wrapped
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     ret_val = method(obj, context, *args, **kwargs)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/virt/block_device.py", line 651, in attach
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     virt_driver, do_driver_attach)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/virt/block_device.py", line 629, in _do_attach
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     do_driver_attach)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/virt/block_device.py", line 576, in _volume_attach
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     attachment_id)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     self.force_reraise()
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     raise value
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/virt/block_device.py", line 567, in _volume_attach
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     device_type=self['device_type'], encryption=encryption)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 1535, in attach_volume
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     encryption=encryption)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     self.force_reraise()
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     raise value
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 1508, in attach_volume
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     guest.attach_device(conf, persistent=True, live=live)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py", line 306, in attach_device
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     self._domain.attachDeviceFlags(device_xml, flags=flags)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in doit
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     result = proxy_call(self._autowrap, f, *args, **kwargs)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in proxy_call
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     rv = execute(f, *args, **kwargs)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in execute
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     six.reraise(c, e, tb)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     raise value
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     rv = meth(*args, **kwargs)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 605, in attachDeviceFlags
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2019-05-29 05:42:59.687 8 ERROR oslo_messaging.rpc.server libvirt.libvirtError: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk1'



Version-Release number of selected component (if applicable):


How reproducible:
Try to attach an encrypted volume to an instance.




Attaching logs from the compute node: the logs directory and configuration files.

Thanks,
Benny

Comment 1 bkopilov 2019-05-29 06:16:10 UTC
Created attachment 1574603 [details]
compute configuration

Comment 2 bkopilov 2019-05-29 06:16:44 UTC
Created attachment 1574604 [details]
compute logs

Comment 3 Matthew Booth 2019-05-31 13:39:56 UTC
req-d0319eae-3f07-46bb-bded-497fa51b88b3
instance: 9900e5e5-bfb0-4339-9e40-66215278e98f
instance-00000050

Comment 4 Matthew Booth 2019-05-31 13:42:27 UTC
2019-05-29 05:42:57.652 8 DEBUG nova.virt.libvirt.host [req-d0319eae-3f07-46bb-bded-497fa51b88b3 57de679df5a2492ab1f47fc3afc03bdb 09e26080023a47e793886983aae2930c - default default] Secret XML: <secret ephemeral="no" private="no">
  <usage type="volume">
    <volume>858316ff-aa10-4948-a5f2-3a0ec32609f9</volume>
  </usage>
</secret>
 create_secret /usr/lib/python3.6/site-packages/nova/virt/libvirt/host.py:754
2019-05-29 05:42:57.665 8 DEBUG nova.virt.libvirt.guest [req-d0319eae-3f07-46bb-bded-497fa51b88b3 57de679df5a2492ab1f47fc3afc03bdb 09e26080023a47e793886983aae2930c - default default] attach device xml: <disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native"/>
  <source dev="/dev/disk/by-id/scsi-36001405938c441cb4cb441691dc6b64c"/>
  <target bus="virtio" dev="vdb"/>
  <serial>858316ff-aa10-4948-a5f2-3a0ec32609f9</serial>
  <encryption format="luks">
    <secret type="passphrase" uuid="c329cd59-33a2-4ffc-a96f-9bdeb74cbb9f"/>
  </encryption>
</disk>
 attach_device /usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py:305
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [req-d0319eae-3f07-46bb-bded-497fa51b88b3 57de679df5a2492ab1f47fc3afc03bdb 09e26080023a47e793886983aae2930c - default default] [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f] Failed to attach volume at mountpoint: /dev/vdb: libvirt.libvirtError: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk1'
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f] Traceback (most recent call last):
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 1508, in attach_volume
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]     guest.attach_device(conf, persistent=True, live=live)
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py", line 306, in attach_device
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]     self._domain.attachDeviceFlags(device_xml, flags=flags)
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in doit
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in proxy_call
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]     rv = execute(f, *args, **kwargs)
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in execute
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]     six.reraise(c, e, tb)
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]   File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]     raise value
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]     rv = meth(*args, **kwargs)
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 605, in attachDeviceFlags
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f] libvirt.libvirtError: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk1'
2019-05-29 05:42:57.707 8 ERROR nova.virt.libvirt.driver [instance: 9900e5e5-bfb0-4339-9e40-66215278e98f]

Comment 5 Matthew Booth 2019-05-31 14:19:28 UTC
Please can I log in to the environment to debug?

Comment 8 Matthew Booth 2019-06-07 11:28:58 UTC
Here's the issue:

2019-06-07 11:15:29.247+0000: 583393: info : qemuMonitorIOWrite:549 : QEMU_MONITOR_IO_WRITE: mon=0x7f085801bcf0 buf={"execute":"human-monitor-command","arguments":{"command-line":"drive_add dummy file=/dev/disk/by-id/scsi-36001405564a251ee00c42f7841f42d56,key-secret=virtio-disk1-luks-secret0,format=luks,if=none,id=drive-virtio-disk1,cache=none,aio=native"},"id":"libvirt-74"}
 len=263 ret=263 errno=0
2019-06-07 11:15:29.262+0000: 583393: debug : qemuMonitorJSONIOProcessLine:196 : Line [{"return": "LUKS version 2 is not supported\r\n", "id": "libvirt-74"}]
2019-06-07 11:15:29.262+0000: 583393: info : qemuMonitorJSONIOProcessLine:216 : QEMU_MONITOR_RECV_REPLY: mon=0x7f085801bcf0 reply={"return": "LUKS version 2 is not supported\r\n", "id": "libvirt-74"}
2019-06-07 11:15:29.262+0000: 583408: debug : qemuMonitorJSONCommandWithFd:309 : Receive command reply ret=0 rxObject=0x55942a5d4d70

It looks like nova (via os-brick) is formatting the volume with LUKS v2, but this QEMU only supports LUKS v1. We need to fix os-brick to create LUKS v1 headers.
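For anyone reproducing this, the header version os-brick actually wrote can be confirmed directly from the volume: both LUKS1 and LUKS2 headers start with the magic bytes `LUKS\xba\xbe` followed by a big-endian 16-bit version field. A minimal sketch, assuming read access to the backing device (the helper name is ours, not nova's or os-brick's):

```python
import struct

LUKS_MAGIC = b"LUKS\xba\xbe"

def luks_version(device_path):
    """Return the LUKS on-disk format version (1 or 2) of a block
    device or file, or None if no LUKS header is present."""
    with open(device_path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:6] != LUKS_MAGIC:
        return None
    # Bytes 6-7 hold the format version, big-endian unsigned short.
    (version,) = struct.unpack(">H", header[6:8])
    return version
```

Running this against the LV backing the volume (e.g. `luks_version('/dev/disk/by-id/scsi-...')`) and getting 2 would confirm the v2 header that QEMU is rejecting; `cryptsetup luksDump` on the device shows the same information.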

Comment 12 bkopilov 2019-07-01 07:03:27 UTC
Encryption provider:

#1
Tested on setup: [libvirt] virt_type=kvm

Tested with two encryption providers:

# nova.volume.encryptors.cryptsetup.CryptsetupEncryptor

Attached volume to an instance - passes


# nova.volume.encryptors.luks.LuksEncryptor

Attached volume to an instance - fails

10, in createWithFlags
/var/log/containers/nova/nova-compute.log:452:2019-07-01 06:34:03.458 6 ERROR nova.compute.manager [instance: 6141bc09-eed0-49b6-b5cc-55bbcc1c5e55]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
/var/log/containers/nova/nova-compute.log:453:2019-07-01 06:34:03.458 6 ERROR nova.compute.manager [instance: 6141bc09-eed0-49b6-b5cc-55bbcc1c5e55] libvirt.libvirtError: internal error: process exited while connecting to monitor: 2019-07-01T06:34:01.893543Z qemu-kvm: -drive file=/dev/disk/by-id/scsi-3600140561acf3eb5c8648dabf6a240f9,key-secret=virtio-disk0-luks-secret0,format=luks,if=none,id=drive-virtio-disk0,cache=none,aio=native: LUKS version 2 is not supported


#2 Tested on setup: [libvirt] virt_type=qemu

nova.volume.encryptors.luks.LuksEncryptor


/var/log/containers/nova/nova-compute.log:461:2019-07-01 06:56:51.031 6 ERROR nova.compute.manager [instance: 30f01680-617d-4bc9-923c-eab0b2811223] libvirt.libvirtError: internal error: process exited while connecting to monitor: 2019-07-01T06:56:49.493908Z qemu-kvm: -drive file=/dev/disk/by-id/scsi-360014059f925afae7214960ae451c7b3,key-secret=virtio-disk0-luks-secret0,format=luks,if=none,id=drive-virtio-disk0,cache=none,aio=native: LUKS version 2 is not supported

Please let me know if I am missing something here; it looks like it's still broken.

Comment 13 Lee Yarwood 2019-07-01 10:06:48 UTC
(In reply to bkopilov from comment #12)
> Encryption provider: 
> 
> #1 
> Tested on setup :[libvirt]  virt_type=kvm
> 
> Tested with two:
> # nova.volume.encryptors.cryptsetup.CryptsetupEncryptor
> 
> Attached volume to an instance - pass
> 
> 
> # nova.volume.encryptors.luks.LuksEncryptor
> 
>  Attached volume to an instance - Fails
> 
> 10, in createWithFlags
> /var/log/containers/nova/nova-compute.log:452:2019-07-01 06:34:03.458 6
> ERROR nova.compute.manager [instance: 6141bc09-eed0-49b6-b5cc-55bbcc1c5e55] 
> if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed',
> dom=self)
> /var/log/containers/nova/nova-compute.log:453:2019-07-01 06:34:03.458 6
> ERROR nova.compute.manager [instance: 6141bc09-eed0-49b6-b5cc-55bbcc1c5e55]
> libvirt.libvirtError: internal error: process exited while connecting to
> monitor: 2019-07-01T06:34:01.893543Z qemu-kvm: -drive
> file=/dev/disk/by-id/scsi-3600140561acf3eb5c8648dabf6a240f9,key-
> secret=virtio-disk0-luks-secret0,format=luks,if=none,id=drive-virtio-disk0,
> cache=none,aio=native: LUKS version 2 is not supported
> 
> 
> #2 tested on [libvirt]  virt_type=qemu
> 
> nova.volume.encryptors.luks.LuksEncryptor
> 
> 
> /var/log/containers/nova/nova-compute.log:461:2019-07-01 06:56:51.031 6
> ERROR nova.compute.manager [instance: 30f01680-617d-4bc9-923c-eab0b2811223]
> libvirt.libvirtError: internal error: process exited while connecting to
> monitor: 2019-07-01T06:56:49.493908Z qemu-kvm: -drive
> file=/dev/disk/by-id/scsi-360014059f925afae7214960ae451c7b3,key-
> secret=virtio-disk0-luks-secret0,format=luks,if=none,id=drive-virtio-disk0,
> cache=none,aio=native: LUKS version 2 is not supported
> 
> Please let me know if i am missing something here ,looks like its still
> broken

Can you please provide nova-compute logs from this test run?

Comment 15 Lee Yarwood 2019-07-01 11:57:05 UTC
Ah, I missed that the luks type itself can point to either v1 or v2, depending on how cryptsetup's --with-default-luks-format option is set at build time. It appears cryptsetup now defaults to v2, hence the creation of a v2 header.

luks: Explicitly use the luks1 type to ensure LUKS v1 is used
https://review.opendev.org/#/c/668448/
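The shape of that fix is to pin the on-disk format explicitly rather than rely on cryptsetup's compiled-in default. A simplified illustrative sketch (this helper is hypothetical, not the actual os-brick code, which drives cryptsetup through its privileged command executor):

```python
def build_luks_format_cmd(device, key_file, cipher=None, key_size=None):
    """Build a cryptsetup command line that always creates a LUKS v1
    header, regardless of the build-time --with-default-luks-format
    default that the bare 'luks' type resolves to."""
    cmd = ["cryptsetup", "--batch-mode", "luksFormat",
           "--type", "luks1",   # the essence of the fix: pin the version
           "--key-file", key_file]
    if cipher:
        cmd += ["--cipher", cipher]
    if key_size:
        cmd += ["--key-size", str(key_size)]
    cmd.append(device)
    return cmd
```

Passing `--type luks1` to `cryptsetup luksFormat` keeps newly formatted volumes attachable by QEMU builds that only understand LUKS v1, which is exactly the failure seen in comment 12.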

Comment 22 errata-xmlrpc 2019-09-21 11:22:34 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811

