Description of problem:
A cloned encrypted volume cannot be attached to the same instance.

Version-Release number of selected component (if applicable):

How reproducible:
100%

Steps to Reproduce:
1. Create an encrypted volume by following this procedure:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html-single/manage_secrets_with_openstack_key_manager/index#encrypting_cinder_volumes
2. Attach the volume to an instance.
3. Detach the volume from the instance.
4. Clone the encrypted volume:
$ openstack volume create --source Encrypted-Test-Volume Encrypted-Test-Volume-2 --type LuksEncryptor-Template-256
5. Attach the cloned volume to the same instance.
6. The attach fails and the volume status reverts to "available".

Actual results:
The cloned volume cannot be attached to the same instance.

Expected results:
The cloned volume can be attached to the same instance.

Additional info:
The following error is logged in nova-compute.log when attaching the cloned volume.

~~~
...
2020-10-19 05:16:34.780 8 DEBUG nova.virt.libvirt.host [req-5b7f3b30-d13f-448b-af19-85b891d954f9 3447e99bd5f241449862d08a3a18922f 45f33e86ac2443b6a0be595ca06e5f02 - default default] Secret XML: <secret ephemeral="no" private="no"> <usage type="volume"> <volume>0a631f2f-6e61-49a3-8744-ca33f57290a6</volume> </usage> </secret> create_secret /usr/lib/python3.6/site-packages/nova/virt/libvirt/host.py:1027
2020-10-19 05:16:34.789 8 DEBUG nova.virt.libvirt.guest [req-5b7f3b30-d13f-448b-af19-85b891d954f9 3447e99bd5f241449862d08a3a18922f 45f33e86ac2443b6a0be595ca06e5f02 - default default] attach device xml: <disk type="block" device="disk"> <driver name="qemu" type="raw" cache="none" io="native"/> <source dev="/dev/disk/by-id/scsi-36001405e384c1ec1fdc4028bdb757e5c"/> <target bus="virtio" dev="vdb"/> <serial>0a631f2f-6e61-49a3-8744-ca33f57290a6</serial> <encryption format="luks"> <secret type="passphrase" uuid="b6de2589-1df6-400f-8677-6d661ff44c76"/> </encryption> </disk> attach_device /usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py:304
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [req-5b7f3b30-d13f-448b-af19-85b891d954f9 3447e99bd5f241449862d08a3a18922f 45f33e86ac2443b6a0be595ca06e5f02 - default default] [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4] Failed to attach volume at mountpoint: /dev/vdb: libvirt.libvirtError: internal error: unable to execute QEMU command 'blockdev-add': Invalid password, cannot unlock any keyslot
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4] Traceback (most recent call last):
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 1873, in attach_volume
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     guest.attach_device(conf, persistent=True, live=live)
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py", line 305, in attach_device
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     self._domain.attachDeviceFlags(device_xml, flags=flags)
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in doit
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in proxy_call
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     rv = execute(f, *args, **kwargs)
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in execute
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     six.reraise(c, e, tb)
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     raise value
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     rv = meth(*args, **kwargs)
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 630, in attachDeviceFlags
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4] libvirt.libvirtError: internal error: unable to execute QEMU command 'blockdev-add': Invalid password, cannot unlock any keyslot
2020-10-19 05:16:37.445 8 ERROR nova.virt.libvirt.driver [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4]
2020-10-19 05:16:37.452 8 DEBUG nova.virt.libvirt.volume.iscsi [req-5b7f3b30-d13f-448b-af19-85b891d954f9 3447e99bd5f241449862d08a3a18922f 45f33e86ac2443b6a0be595ca06e5f02 - default default] [instance: 37e44769-a18b-4b3e-adf7-ed754a13c8a4] calling os-brick to detach iSCSI Volume disconnect_volume /usr/lib/python3.6/site-packages/nova/virt/libvirt/volume/iscsi.py:72
2020-10-19 05:16:37.453 8 DEBUG os_brick.initiator.connectors.iscsi [req-5b7f3b30-d13f-448b-af19-85b891d954f9 3447e99bd5f241449862d08a3a18922f 45f33e86ac2443b6a0be595ca06e5f02 - default default] ==> disconnect_volume: call "{'args': (<os_brick.initiator.connectors.iscsi.ISCSIConnector object at 0x7fa78e44bfd0>, {'target_discovered': False, 'target_portal': '172.17.3.104:3260', 'target_iqn': 'iqn.2010-10.org.openstack:volume-0a631f2f-6e61-49a3-8744-ca33f57290a6', 'target_lun': 0, 'volume_id': '0a631f2f-6e61-49a3-8744-ca33f57290a6', 'auth_method': 'CHAP', 'auth_username': 'pJixTja4Yo5WJSc7dRud', 'auth_password': '***', 'encrypted': True, 'qos_specs': None, 'access_mode': 'rw', 'device_path': '/dev/disk/by-id/scsi-36001405e384c1ec1fdc4028bdb757e5c'}, None), 'kwargs': {}}" trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:146
2020-10-19 05:16:37.453 8 DEBUG oslo_concurrency.lockutils [req-5b7f3b30-d13f-448b-af19-85b891d954f9 3447e99bd5f241449862d08a3a18922f 45f33e86ac2443b6a0be595ca06e5f02 - default default] Lock "connect_volume" acquired by "os_brick.initiator.connectors.iscsi.ISCSIConnector.disconnect_volume" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:327
...
~~~

Is this a related bug? https://bugzilla.redhat.com/show_bug.cgi?id=1814975
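When triaging logs like the above, it helps to pull the volume and secret UUIDs out of the libvirt XML that nova logs and check that they line up. A small sketch using the two XML snippets from this report (standard library only):

```python
import xml.etree.ElementTree as ET

# Secret XML and attach-device XML copied from the nova-compute.log excerpt above.
secret_xml = """<secret ephemeral="no" private="no">
  <usage type="volume">
    <volume>0a631f2f-6e61-49a3-8744-ca33f57290a6</volume>
  </usage>
</secret>"""

disk_xml = """<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native"/>
  <source dev="/dev/disk/by-id/scsi-36001405e384c1ec1fdc4028bdb757e5c"/>
  <target bus="virtio" dev="vdb"/>
  <serial>0a631f2f-6e61-49a3-8744-ca33f57290a6</serial>
  <encryption format="luks">
    <secret type="passphrase" uuid="b6de2589-1df6-400f-8677-6d661ff44c76"/>
  </encryption>
</disk>"""

# Volume UUID the libvirt secret was registered for.
volume_uuid = ET.fromstring(secret_xml).findtext("./usage/volume")

disk = ET.fromstring(disk_xml)
# Volume actually being attached, and the secret libvirt uses to unlock it.
attached_volume = disk.findtext("serial")
secret_uuid = disk.find("./encryption/secret").get("uuid")

print(volume_uuid, attached_volume, secret_uuid)
```

Here both XMLs reference volume 0a631f2f-6e61-49a3-8744-ca33f57290a6, so the wiring on the nova side is consistent; the "cannot unlock any keyslot" error therefore points at the passphrase stored behind secret b6de2589-..., i.e. at the key the clone's volume record references.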
This smells like a c-vol bug when cloning the volume: maybe it generates a new passphrase but still references the old secret uuid somewhere? n-cpu isn't involved at that point, so I'm moving this over to openstack-cinder.
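That suspicion can be illustrated with a toy model (purely hypothetical names and flow, not cinder code): a clone is a bit-for-bit copy, LUKS header included, so it only unlocks with the source volume's passphrase. If the clone's volume record ends up referencing a freshly generated key instead of a copy of the source's key, the attach fails exactly as in the log above.

```python
import secrets

# Toy stand-ins: luks_header_key models the passphrase baked into the (copied)
# LUKS header; key_manager models what the volume record references in Barbican.
key_manager = {}
luks_header_key = {}

def create_encrypted_volume(vol_id):
    passphrase = secrets.token_hex(16)
    key_manager[vol_id] = passphrase
    luks_header_key[vol_id] = passphrase

def clone_volume(src_id, clone_id, copy_source_key):
    # Cloning copies the data verbatim, so the header still wants the source key.
    luks_header_key[clone_id] = luks_header_key[src_id]
    if copy_source_key:
        key_manager[clone_id] = key_manager[src_id]      # expected behaviour
    else:
        key_manager[clone_id] = secrets.token_hex(16)    # suspected bug

def attach(vol_id):
    # Models libvirt/QEMU unlocking the LUKS device with the referenced key.
    if key_manager[vol_id] != luks_header_key[vol_id]:
        raise RuntimeError("Invalid password, cannot unlock any keyslot")
    return "in-use"

create_encrypted_volume("EncVol1")

clone_volume("EncVol1", "EncVolClone", copy_source_key=False)
try:
    attach("EncVolClone")
except RuntimeError as e:
    print(e)  # mirrors the libvirt error in the log

clone_volume("EncVol1", "EncVolClone2", copy_source_key=True)
print(attach("EncVolClone2"))  # → in-use
```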
Adding the regression keyword following comment #2.
Verified on:
openstack-cinder-15.3.1-6.el8ost.noarch

Following the reproduce steps:

0. Create an encrypted volume type:
(overcloud) [stack@undercloud-0 ~]$ cinder type-create LUKS
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| 0c007642-e949-44eb-b016-ad0489987a81 | LUKS | -           | True      |
+--------------------------------------+------+-------------+-----------+

(overcloud) [stack@undercloud-0 ~]$ cinder encryption-type-create --cipher aes-xts-plain64 --key_size 256 --control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
+--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
| Volume Type ID                       | Provider                                  | Cipher          | Key Size | Control Location |
+--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
| 0c007642-e949-44eb-b016-ad0489987a81 | nova.volume.encryptors.luks.LuksEncryptor | aes-xts-plain64 | 256      | front-end        |
+--------------------------------------+-------------------------------------------+-----------------+----------+------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder type-key LUKS set volume_backend_name=tripleo_iscsi

1.
Create an encrypted volume:
(overcloud) [stack@undercloud-0 ~]$ cinder create 4 --volume-type LUKS --name EncVol1
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2021-01-21T11:27:04.000000           |
| description                    | None                                 |
| encrypted                      | True                                 |
| id                             | 01978948-6f94-4927-96e2-6193e888cf8a |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | EncVol1                              |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 890bdf68e1fb4e2cad562c477cc57df4     |
| replication_status             | None                                 |
| size                           | 4                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 31ad1a04179a4d658c581d172ddd0999     |
| volume_type                    | LUKS                                 |
+--------------------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| ID                                   | Status    | Name    | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| 01978948-6f94-4927-96e2-6193e888cf8a | available | EncVol1 | 4    | LUKS        | false    |             |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+

2.
Attach the volume to an instance:
(overcloud) [stack@undercloud-0 ~]$ nova volume-attach inst1 01978948-6f94-4927-96e2-6193e888cf8a auto
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| delete_on_termination | False                                |
| device                | /dev/vdb                             |
| id                    | 01978948-6f94-4927-96e2-6193e888cf8a |
| serverId              | d6bf97c8-ed9f-42c7-9f8a-c0b15ac265b1 |
| tag                   | -                                    |
| volumeId              | 01978948-6f94-4927-96e2-6193e888cf8a |
+-----------------------+--------------------------------------+

Make a filesystem and write some data on the attached volume.

3. Detach the volume from the instance:
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name    | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+
| 01978948-6f94-4927-96e2-6193e888cf8a | in-use | EncVol1 | 4    | LUKS        | false    | d6bf97c8-ed9f-42c7-9f8a-c0b15ac265b1 |
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ nova volume-detach inst1 01978948-6f94-4927-96e2-6193e888cf8a

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| ID                                   | Status    | Name    | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| 01978948-6f94-4927-96e2-6193e888cf8a | available | EncVol1 | 4    | LUKS        | false    |             |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+

4.
Clone the encrypted volume:
(overcloud) [stack@undercloud-0 ~]$ openstack volume create --source EncVol1 EncVolClone --type LUKS
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2021-01-21T11:53:52.000000           |
| description         | None                                 |
| encrypted           | True                                 |
| id                  | d2ac4d17-a5e9-468f-a083-47f68d1763b8 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | EncVolClone                          |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 4                                    |
| snapshot_id         | None                                 |
| source_volid        | 01978948-6f94-4927-96e2-6193e888cf8a |
| status              | creating                             |
| type                | LUKS                                 |
| updated_at          | None                                 |
| user_id             | 31ad1a04179a4d658c581d172ddd0999     |
+---------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name        | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| 01978948-6f94-4927-96e2-6193e888cf8a | available | EncVol1     | 4    | LUKS        | false    |             |
| d2ac4d17-a5e9-468f-a083-47f68d1763b8 | available | EncVolClone | 4    | LUKS        | false    |             |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+

5.
Attach the second (cloned) encrypted volume to the same instance:
(overcloud) [stack@undercloud-0 ~]$ nova volume-attach inst1 d2ac4d17-a5e9-468f-a083-47f68d1763b8 auto
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| delete_on_termination | False                                |
| device                | /dev/vdb                             |
| id                    | d2ac4d17-a5e9-468f-a083-47f68d1763b8 |
| serverId              | d6bf97c8-ed9f-42c7-9f8a-c0b15ac265b1 |
| tag                   | -                                    |
| volumeId              | d2ac4d17-a5e9-468f-a083-47f68d1763b8 |
+-----------------------+--------------------------------------+

6. Check the volume status:
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name        | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+-------------+------+-------------+----------+--------------------------------------+
| 01978948-6f94-4927-96e2-6193e888cf8a | available | EncVol1     | 4    | LUKS        | false    |                                      |
| d2ac4d17-a5e9-468f-a083-47f68d1763b8 | in-use    | EncVolClone | 4    | LUKS        | false    | d6bf97c8-ed9f-42c7-9f8a-c0b15ac265b1 |
+--------------------------------------+-----------+-------------+------+-------------+----------+--------------------------------------+

This now works as expected; before this fix the attach failed and the volume status reverted to "available". The cloned encrypted volume is successfully attached to the original instance. As an extra validation step, confirm the data was cloned.
Inside the Cirros instance:
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0    1G  0 disk
|-vda1  253:1    0 1015M  0 part /
`-vda15 253:15   0    8M  0 part
vdb     253:16   0    4G  0 disk
#
# mount /dev/vdb mnt/
[  703.289963] EXT4-fs (vdb): couldn't mount as ext3 due to feature incompatibilities
[  703.321525] EXT4-fs (vdb): couldn't mount as ext2 due to feature incompatibilities
#
# cat mnt/tshefi.txt
Hello

-> This confirms the original data is present on the cloned volume. Good to verify; an automation test case for this scenario will be added soon as well.
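The mount-and-cat check above can also be expressed as a checksum comparison between source and clone. A minimal sketch, using in-memory buffers as stand-ins for the decrypted contents of EncVol1 and EncVolClone (in a real run you would read the unlocked block devices instead):

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest, consumed in chunks as one would read a block device."""
    h = hashlib.sha256()
    for i in range(0, len(data), 4096):
        h.update(data[i:i + 4096])
    return h.hexdigest()

# Stand-ins for the decrypted contents of the source volume and its clone.
source_data = b"Hello\n" * 1024
clone_data = bytes(source_data)  # a correct clone is bit-identical

print(digest(source_data) == digest(clone_data))  # → True
```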
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenStack Platform 16.1.4 director bug fix advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:0817