Bug 1889228 - A cloned encrypted volume cannot be attached [NEEDINFO]
Summary: A cloned encrypted volume cannot be attached
Keywords:
Status: ON_DEV
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 16.1 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Eric Harney
QA Contact: Tzach Shefi
Docs Contact: Chuck Copello
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-10-19 05:48 UTC by Masayuki Igawa
Modified: 2020-11-24 15:51 UTC
CC List: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Flags: pgrist: needinfo? (tshefi)




Links
System ID: OpenStack gerrit 762884 | Priority: None | Status: NEW | Summary: Fix volume rekey during clone | Last Updated: 2020-11-23 15:55:53 UTC

Description Masayuki Igawa 2020-10-19 05:48:28 UTC
Description of problem:

A cloned encrypted volume cannot be attached to the same instance.


Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Create an encrypted volume by following this procedure (a consolidated CLI sketch of steps 1-5 is included after this list):
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html-single/manage_secrets_with_openstack_key_manager/index#encrypting_cinder_volumes

2. Attach the volume to an instance

3. Detach the volume from the instance

4. Clone the encrypted volume like below
$ openstack volume create --source Encrypted-Test-Volume Encrypted-Test-Volume-2 --type LuksEncryptor-Template-256

5. Attach the second encrypted volume to the same instance

6. The attach fails and the volume status eventually returns to "available".
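
For reference, a consolidated CLI sketch of steps 1-5. The volume type parameters follow the linked procedure and may differ per deployment; INSTANCE is a placeholder for the instance name used in steps 2-5.
~~~
# Step 1: create a LUKS-encrypted volume type and the encrypted source volume
# (parameters per the linked procedure)
$ openstack volume type create --encryption-provider luks \
    --encryption-cipher aes-xts-plain64 --encryption-key-size 256 \
    --encryption-control-location front-end LuksEncryptor-Template-256
$ openstack volume create --size 1 --type LuksEncryptor-Template-256 Encrypted-Test-Volume

# Steps 2-3: attach the volume to an instance, then detach it
$ openstack server add volume INSTANCE Encrypted-Test-Volume
$ openstack server remove volume INSTANCE Encrypted-Test-Volume

# Step 4: clone the encrypted volume
$ openstack volume create --source Encrypted-Test-Volume --type LuksEncryptor-Template-256 Encrypted-Test-Volume-2

# Step 5: attach the clone to the same instance (this is where the failure occurs)
$ openstack server add volume INSTANCE Encrypted-Test-Volume-2
~~~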

Actual results:

The cloned volume cannot be attached to the same instance.

Expected results:

The cloned volume can be attached to the same instance.

Additional info:

The following error appears in nova-compute.log when attaching the cloned volume:
~~~
...
2020-10-19 05:16:34.780 8 DEBUG nova.virt.libvirt.host [req-5b7f3b30-d13f-448b-af19-85b891d954f9 3447e99bd5f241449862d08a3a18922f 45f33e86ac2443b6a0be595ca06e5f02 - default default] Secret XML: <secret ephemeral="no" private="no">
  <usage type="volume">
    <volume>0a631f2f-6e61-49a3-8744-ca33f57290a6</volume>
  </usage>
</secret>
 create_secret /usr/lib/python3.6/site-packages/nova/virt/libvirt/host.py:1027
2020-10-19 05:16:34.789 8 DEBUG nova.virt.libvirt.guest [req-5b7f3b30-d13f-448b-af19-85b891d954f9 3447e99bd5f241449862d08a3a18922f 45f33e86ac2443b6a0be595ca06e5f02 - default default] attach device xml: <disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native"/>
  <source dev="/dev/disk/by-id/scsi-36001405e384c1ec1fdc4028bdb757e5c"/>
  <target bus="virtio" dev="vdb"/>
  <serial>0a631f2f-6e61-49a3-8744-ca33f57290a6</serial>
  <encryption format="luks">
    <secret type="passphrase" uuid="b6de2589-1df6-400f-8677-6d661ff44c76"/>
  </encryption>
</disk>
 attach_device /usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py:304
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [req-5b7f3b30-d13f-448b-af19-85b891d954f9 3447e99bd5f241449862d08a3a18922f 45f33e86ac2443b6a0be595ca06e5f02 - default default] [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4] Failed to attach volume at mountpoint: /dev/vdb: libvirt.libvirtError: internal error: unable to execute QEMU command 'blockdev-add': Invalid password, cannot unlock any keyslot
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4] Traceback (most recent call last):
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 1873, in attach_volume
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     guest.attach_device(conf, persistent=True, live=live)
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py", line 305, in attach_device
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     self._domain.attachDeviceFlags(device_xml, flags=flags)
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in doit
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in proxy_call
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     rv = execute(f, *args, **kwargs)
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in execute
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     six.reraise(c, e, tb)
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     raise value
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     rv = meth(*args, **kwargs)
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 630, in attachDeviceFlags
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4] libvirt.libvirtError: internal error: unable to execute QEMU command 'blockdev-add': Invalid password, cannot unlock any keyslot
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4] 
2020-10-19 05:16:37.452 8 DEBUG nova.virt.libvirt.volume.iscsi [req-5b7f3b30-d13f-448b-af19-85b891d954f9 3447e99bd5f241449862d08a3a18922f 45f33e86ac2443b6a0be595ca06e5f02 - default default] [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4] calling os-brick to detach iSCSI Volume disconnect_volume /usr/lib/python3.6/site-packages/nova/virt/libvirt/volume/iscsi.py:72
2020-10-19 05:16:37.453 8 DEBUG os_brick.initiator.connectors.iscsi [req-5b7f3b30-d13f-448b-af19-85b891d954f9 3447e99bd5f241449862d08a3a18922f 45f33e86ac2443b6a0be595ca06e5f02 - default default] ==> disconnect_volume: call "{'args': (<os_brick.initiator.connectors.iscsi.ISCSIConnector object at 0x7fa78e44bfd0>, {'target_discovered': False, 'target_portal': '172.17.3.104:3260', 'target_iqn': 'iqn.2010-10.org.openstack:volume-0a631f2f-6e61-49a3-8744-ca33f57290a6', 'target_lun': 0, 'volume_id': '0a631f2f-6e61-49a3-8744-ca33f57290a6', 'auth_method': 'CHAP', 'auth_username': 'pJixTja4Yo5WJSc7dRud', 'auth_password': '***', 'encrypted': True, 'qos_specs': None, 'access_mode': 'rw', 'device_path': '/dev/disk/by-id/scsi-36001405e384c1ec1fdc4028bdb757e5c'}, None), 'kwargs': {}}" trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:146
2020-10-19 05:16:37.453 8 DEBUG oslo_concurrency.lockutils [req-5b7f3b30-d13f-448b-af19-85b891d954f9 3447e99bd5f241449862d08a3a18922f 45f33e86ac2443b6a0be595ca06e5f02 - default default] Lock "connect_volume" acquired by "os_brick.initiator.connectors.iscsi.ISCSIConnector.disconnect_volume" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:327
...
~~~
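
The "Invalid password, cannot unlock any keyslot" error means the passphrase that nova derives from the clone's key-manager secret does not match any keyslot in the clone's LUKS header. A rough way to confirm the mismatch is sketched below; it assumes barbican is the key manager, that the LUKS passphrase is the hex encoding of the secret payload (my understanding of how the LUKS encryptor derives it in this release), and that the secret href and device path are taken from the cloned volume's encryption_key_id and the log above.
~~~
# Fetch the clone's encryption key payload from barbican
# (<encryption_key_id> comes from `cinder show` on the cloned volume)
$ openstack secret get --payload_content_type application/octet-stream \
    --file clone-key.bin https://<barbican-host>:9311/v1/secrets/<encryption_key_id>

# Derive the passphrase as the hex representation of the raw key bytes
$ xxd -p clone-key.bin | tr -d '\n' > clone-passphrase

# Test the passphrase against the clone's LUKS header (device path from the log above)
$ cryptsetup open --test-passphrase --key-file clone-passphrase \
    /dev/disk/by-id/scsi-36001405e384c1ec1fdc4028bdb757e5c
~~~
If the last command fails the same way, the clone's secret and its LUKS header are out of sync, which points at the clone path rather than at nova's attach path.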

Is this related to the following bug?
https://bugzilla.redhat.com/show_bug.cgi?id=1814975

Comment 1 Lee Yarwood 2020-10-19 09:21:12 UTC
This smells like a c-vol bug when cloning the volume: maybe it is using a new passphrase but still referencing the old secret UUID somewhere? n-cpu isn't involved at that point, so I'm moving this over to openstack-cinder.
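
For context, a clone-time rekey would conceptually have to (a) create a new secret for the clone in the key manager and (b) swap the LUKS keyslot from the source volume's passphrase to the passphrase derived from that new secret. A minimal sketch of the keyslot side of that operation (not cinder's actual code path, and all paths are placeholders):
~~~
# Conceptual rekey of a cloned LUKS volume: replace the keyslot that still holds
# the source volume's passphrase with the passphrase derived from the clone's new
# key-manager secret. If this step is skipped, or runs with the wrong passphrase,
# the clone's secret and its LUKS header go out of sync, which matches the
# "cannot unlock any keyslot" failure seen on attach.
$ cryptsetup luksChangeKey <clone-device> <new-passphrase-file> \
    --key-file <old-passphrase-file>
~~~
This is consistent with the linked gerrit change, "Fix volume rekey during clone".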

Comment 4 Aharon Canan 2020-11-04 14:01:01 UTC
Adding the Regression keyword following comment #2.

