Upstream bug: https://bugs.launchpad.net/cinder/+bug/1852168

Creating a volume from a snapshot of an encrypted volume may result in an unusable volume. The problem is detectable only by observing behavior inside the instance after the volume is attached.
[RBD-only bug] Creating an encrypted volume from a snapshot (itself taken from an encrypted source volume) was forced to resize, losing the encryption header. When creating an encrypted volume from a snapshot of an encrypted volume, if the amount of data in the original volume at the time the snapshot was created is very close to the gibibyte boundary given by the volume's size, the data in the new volume can be silently truncated. Usually the source volume is the same size as or smaller than the destination volume, and they *must* share the same volume type.

In particular, the RBD workflow goes something like this: a source LUKS volume is 1026M; we write some data to it and create a snapshot from it. To create a new LUKS volume from that snapshot, the create_volume_from_snapshot() method performs an RBD clone first and then a resize if needed. The _clone() method creates a clone (copy-on-write child) of the parent snapshot. The object size is identical to that of the parent image unless specified (Cinder does not specify it), so the clone's size equals the parent snapshot's. If the desired size of the destination LUKS volume is 1G, create_volume_from_snapshot() now performs no resize at all, and the clone stays at 1026M like its parent. This solves bug https://bugs.launchpad.net/cinder/+bug/1922408: because we no longer force a resize, we no longer truncate the data.

The second scenario is when we want to increase the size of the destination volume. As far as I have tested, this does not hit the encryption-header problem, but we still need to calculate the size difference so that we provide the size the user is expecting.
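The sizing decision described above can be sketched as follows. This is a minimal simplification, not the actual Cinder driver code; the function name and byte math are illustrative:

```python
# Hedged sketch of the fixed sizing decision in the RBD
# create-volume-from-snapshot path: the clone inherits the parent
# snapshot's size, and we only grow it when the user asked for more.
GiB = 1024 ** 3
MiB = 1024 ** 2

def resize_target(clone_bytes: int, requested_gib: int):
    """Return the target size in bytes, or None when no resize is needed."""
    requested_bytes = requested_gib * GiB
    if requested_bytes > clone_bytes:
        return requested_bytes  # grow the clone to what the user expects
    return None                 # never shrink: avoids truncating LUKS data

# The source LUKS volume (and thus the clone) is 1026 MiB. Since
# 1026 MiB > 1 GiB, a request for a 1 GiB destination must NOT shrink it.
assert resize_target(1026 * MiB, 1) is None
# A request for 2 GiB grows the clone to the full requested size.
assert resize_target(1026 * MiB, 2) == 2 * GiB
```

The key design point is that the resize is computed against the clone's actual size rather than applied unconditionally, so the no-resize path preserves the LUKS header and all data near the gibibyte boundary.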
This needs to be reviewed now so that we can merge it.
Verified on: openstack-cinder-15.6.1-2.20210528143332.el8ost.3.noarch

On a Ceph-backed deployment:

1. Configure an encrypted volume type:

(overcloud) [stack@undercloud-0 ~]$ cinder type-create LUKS
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| 60a197a0-e8ea-4b41-82c1-94896edb7b0b | LUKS | -           | True      |
+--------------------------------------+------+-------------+-----------+

(overcloud) [stack@undercloud-0 ~]$ cinder encryption-type-create --cipher aes-xts-plain64 --key_size 256 --control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
+--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
| Volume Type ID                       | Provider                                  | Cipher          | Key Size | Control Location |
+--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
| 60a197a0-e8ea-4b41-82c1-94896edb7b0b | nova.volume.encryptors.luks.LuksEncryptor | aes-xts-plain64 | 256      | front-end        |
+--------------------------------------+-------------------------------------------+-----------------+----------+------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder type-key LUKS set volume_backend_name=tripleo_ceph
2. Create an empty encrypted volume:

(overcloud) [stack@undercloud-0 ~]$ cinder create 2 --volume-type LUKS --name enc_vol1
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2021-09-09T17:05:18.000000           |
| description                    | None                                 |
| encrypted                      | True                                 |
| id                             | f2f85485-4802-45d0-96f7-1a4daffbcb6f |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | enc_vol1                             |
| os-vol-host-attr:host          | hostgroup@tripleo_ceph#tripleo_ceph  |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 14ea9978798646f3b73d9f9f83b346a3     |
| replication_status             | None                                 |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | 2021-09-09T17:05:18.000000           |
| user_id                        | f760f95650fd42f1a17542ff72524859     |
| volume_type                    | LUKS                                 |
+--------------------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+----------+------+-------------+----------+-------------+
| ID                                   | Status    | Name     | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+----------+------+-------------+----------+-------------+
| f2f85485-4802-45d0-96f7-1a4daffbcb6f | available | enc_vol1 | 2    | LUKS        | false    |             |
+--------------------------------------+-----------+----------+------+-------------+----------+-------------+
3. Boot an instance and attach the encrypted volume to it:

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach inst1 f2f85485-4802-45d0-96f7-1a4daffbcb6f
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| delete_on_termination | False                                |
| device                | /dev/vdb                             |
| id                    | f2f85485-4802-45d0-96f7-1a4daffbcb6f |
| serverId              | 6e6a092e-b6dd-4f1a-a896-27a0a8b130df |
| tag                   | -                                    |
| volumeId              | f2f85485-4802-45d0-96f7-1a4daffbcb6f |
+-----------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+----------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name     | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+----------+------+-------------+----------+--------------------------------------+
| f2f85485-4802-45d0-96f7-1a4daffbcb6f | in-use | enc_vol1 | 2    | LUKS        | false    | 6e6a092e-b6dd-4f1a-a896-27a0a8b130df |
+--------------------------------------+--------+----------+------+-------------+----------+--------------------------------------+

4. SSH into the instance, mount the encrypted volume, and fill it with data:

(overcloud) [stack@undercloud-0 ~]$ ssh cirros@10.0.0.230
Warning: Permanently added '10.0.0.230' (ECDSA) to the list of known hosts.
$ sudo -i
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0    1G  0 disk
|-vda1  253:1    0 1015M  0 part /
`-vda15 253:15   0    8M  0 part
vdb     253:16   0    2G  0 disk
# mkfs.ext4 /dev/vdb
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: b1857fee-ed12-4e5e-b124-1cc0794fd8fa
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
# mkdir mnt
# mount /dev/vdb mnt/
# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev                    240.1M         0    240.1M   0% /dev
/dev/vda1               978.9M     24.0M    914.1M   3% /
tmpfs                   244.2M         0    244.2M   0% /dev/shm
tmpfs                   244.2M     92.0K    244.1M   0% /run
/dev/vdb                  1.9G      3.0M      1.8G   0% /root/mnt
# dd if=/dev/urandom of=/root/mnt/data_file.bin bs=10M count=184
184+0 records in
184+0 records out
# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev                    240.1M         0    240.1M   0% /dev
/dev/vda1               978.9M     24.0M    914.1M   3% /
tmpfs                   244.2M         0    244.2M   0% /dev/shm
tmpfs                   244.2M     92.0K    244.1M   0% /run
/dev/vdb                  1.9G      1.8G         0 100% /root/mnt

-> Filled to the max.
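As a quick sanity check on the dd invocation above (this is just the arithmetic, not part of the verification procedure):

```python
# 184 blocks of 10 MiB each should nearly fill the ~1.9 GiB ext4 filesystem.
MiB = 1024 ** 2
GiB = 1024 ** 3

written = 184 * 10 * MiB          # bytes written by dd
print(written // MiB)             # 1840 (MiB)
print(round(written / GiB, 2))    # 1.8, matching the 1.8G reported by df
```

So the file deliberately lands close to the volume's gibibyte boundary, which is exactly the condition under which the original truncation bug manifested.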
5. Create a snapshot of the volume:

(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-create f2f85485-4802-45d0-96f7-1a4daffbcb6f --force --name EncVol1Snap
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| created_at  | 2021-09-09T18:30:57.829487           |
| description | None                                 |
| id          | 4a8d0b6d-59e9-483a-8d25-f9f0a0d669d4 |
| metadata    | {}                                   |
| name        | EncVol1Snap                          |
| size        | 2                                    |
| status      | creating                             |
| updated_at  | None                                 |
| volume_id   | f2f85485-4802-45d0-96f7-1a4daffbcb6f |
+-------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+-------------+------+
| ID                                   | Volume ID                            | Status    | Name        | Size |
+--------------------------------------+--------------------------------------+-----------+-------------+------+
| 4a8d0b6d-59e9-483a-8d25-f9f0a0d669d4 | f2f85485-4802-45d0-96f7-1a4daffbcb6f | available | EncVol1Snap | 2    |
+--------------------------------------+--------------------------------------+-----------+-------------+------+
6. Create a new encrypted volume from the snapshot:

(overcloud) [stack@undercloud-0 ~]$ cinder create 2 --snapshot-id 4a8d0b6d-59e9-483a-8d25-f9f0a0d669d4 --name enc_vol2_from_snap
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2021-09-09T18:32:27.000000           |
| description                    | None                                 |
| encrypted                      | True                                 |
| id                             | af239828-8897-45e1-8131-2d2ce54e7b86 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | enc_vol2_from_snap                   |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 14ea9978798646f3b73d9f9f83b346a3     |
| replication_status             | None                                 |
| size                           | 2                                    |
| snapshot_id                    | 4a8d0b6d-59e9-483a-8d25-f9f0a0d669d4 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | f760f95650fd42f1a17542ff72524859     |
| volume_type                    | LUKS                                 |
+--------------------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name               | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
| af239828-8897-45e1-8131-2d2ce54e7b86 | available | enc_vol2_from_snap | 2    | LUKS        | false    |                                      |
| f2f85485-4802-45d0-96f7-1a4daffbcb6f | in-use    | enc_vol1           | 2    | LUKS        | false    | 6e6a092e-b6dd-4f1a-a896-27a0a8b130df |
+--------------------------------------+-----------+--------------------+------+-------------+----------+--------------------------------------+
7. Attach the volume to the instance:

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach inst1 af239828-8897-45e1-8131-2d2ce54e7b86
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| delete_on_termination | False                                |
| device                | /dev/vdc                             |
| id                    | af239828-8897-45e1-8131-2d2ce54e7b86 |
| serverId              | 6e6a092e-b6dd-4f1a-a896-27a0a8b130df |
| tag                   | -                                    |
| volumeId              | af239828-8897-45e1-8131-2d2ce54e7b86 |
+-----------------------+--------------------------------------+

8. Mount the cloned volume and compare contents:

# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0    1G  0 disk
|-vda1  253:1    0 1015M  0 part /
`-vda15 253:15   0    8M  0 part
vdb     253:16   0    2G  0 disk /root/mnt
vdc     253:32   0    2G  0 disk
# mkdir mnt2
# mount /dev/vd
vda    vda1   vda15  vdb    vdc
# mount /dev/vdc mnt2/
# ls mnt2/
data_file.bin  lost+found
# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev                    240.1M         0    240.1M   0% /dev
/dev/vda1               978.9M     24.0M    914.1M   3% /
tmpfs                   244.2M         0    244.2M   0% /dev/shm
tmpfs                   244.2M     92.0K    244.1M   0% /run
/dev/vdb                  1.9G      1.8G         0 100% /root/mnt
/dev/vdc                  1.9G      1.8G         0 100% /root/mnt/mnt2

Both volumes appear to contain the same data. Yet on closer examination I noticed a minor gap: the volume created from the snapshot consumes a tiny bit less space:

# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev                    245908         0    245908   0% /dev
/dev/vda1              1002422     24542    936063   3% /
tmpfs                   250076         0    250076   0% /dev/shm
tmpfs                   250076        92    249984   0% /run
/dev/vdb               1998672   1887240         0 100% /root/mnt
/dev/vdc               1998672   1887236         0 100% /root/mnt/mnt2

For comparison's sake, I retested the same thing with non-encrypted volumes attached to inst2. The same gap showed up in used space:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev                    245908         0    245908   0% /dev
/dev/vda1              1002422     24530    936075   3% /
tmpfs                   250076         0    250076   0% /dev/shm
tmpfs                   250076        92    249984   0% /run
/dev/vdb               1998672   1887240         0 100% /root/mnt
/dev/vdc               1998672   1887236         0 100% /root/mnt/mnt2

We see
the exact same gap in used space on both the encrypted and non-encrypted pairs. As Eric explained, both sets of volumes and clones for inst1 and inst2 show an identical size of 1998672 1K-blocks; the minor gap in used space may be related to file-system metadata. I also compared the data files on the source and cloned volumes by diff-ing them; on both inst1 and inst2 the source and cloned files were identical. As we were able to create an encrypted cloned volume from an encrypted source volume, we are good to verify.
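The file-level comparison above can also be done with checksums instead of a byte-by-byte diff. A minimal sketch (the helper name is my own, and the commented-out paths are the mount points used in the transcript above):

```python
# Compare file contents by hashing, so large files never need to be
# held in memory or diffed byte by byte.
import hashlib

def sha256_of(path: str, chunk: int = 1024 * 1024) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# On the instance, this would confirm the clone is intact:
# assert sha256_of("/root/mnt/data_file.bin") == sha256_of("/root/mnt2/data_file.bin")
```

Identical digests imply identical contents, which is a stronger check than comparing used-space figures, since those can legitimately differ by a few file-system metadata blocks.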
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenStack Platform (RHOSP) 16.2 enhancement advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2021:3483