Implement support for Ceph clone v2 in the RBD driver

This will permit deletion of images which still have COW clones created either by Cinder or Nova in different pools, deferring to Ceph's management of the "trash" [1]. It is complementary to the corresponding Cinder feature [2].

[1] https://github.com/ceph/ceph/pull/27521
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1997715
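The decision the driver makes can be summarized as: remove the image directly when it has no clones, move it to the RBD trash when clone v2 is available, and fall back to the old "in use" error otherwise. Below is a minimal, hypothetical sketch of that logic; it is not the actual glance_store code, and the names (`delete_image`, `FakeImage`, `has_children`, `move_to_trash`) are illustrative stand-ins for the real rbd Python bindings (e.g. `RBD.trash_move`):

```python
# Hypothetical sketch of the clone v2 deferred-deletion decision.
# The real driver talks to librbd; FakeImage is a stand-in for illustration.

def delete_image(image, trash_supported):
    """Delete an RBD image, deferring to the Ceph trash when COW clones exist."""
    if not image.has_children():
        image.remove()                 # no clones: delete immediately
        return "removed"
    if trash_supported:                # clone v2 (min-compat-client >= mimic)
        image.move_to_trash()          # Ceph purges it once all clones are gone
        return "trashed"
    raise RuntimeError("image is in use")  # legacy behavior: refuse deletion


class FakeImage:
    """Stand-in for an RBD image, for illustration only."""
    def __init__(self, children):
        self._children = children
        self.state = "present"

    def has_children(self):
        return bool(self._children)

    def remove(self):
        self.state = "removed"

    def move_to_trash(self):
        self.state = "trashed"
```

This is why the min-compat-client step below matters: without `mimic` or later, clone v2 (and thus the trash path) is unavailable and deletion of a cloned image has to be refused outright.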
Verified on: python3-glance-store-1.0.2-2.20220111043148.el8ost.noarch

On a Ceph-backed deployment, I first had to bump up the minimum client version:

ceph osd set-require-min-compat-client mimic

To confirm it took effect:

[root@controller-2 /]# ceph osd dump | grep client
require_min_compat_client mimic

FYI, you must set this before you upload the image; at first I set it after the image was uploaded, and verification failed.

I used a script to upload a cirros image to Glance and create a volume from said image:

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name         | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| fe409ae3-d227-4aab-b874-bb1b5c39aad3 | available | Pansible_vol | 1    | tripleo     | true     |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

(overcloud) [stack@undercloud-0 ~]$ glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 33d0d4a6-36d9-45a1-9e3b-ff5bcd7e5b15 | cirros |
+--------------------------------------+--------+

Now let's try to delete the image; it should fail:

(overcloud) [stack@undercloud-0 ~]$ glance image-delete 33d0d4a6-36d9-45a1-9e3b-ff5bcd7e5b15
Unable to delete image '33d0d4a6-36d9-45a1-9e3b-ff5bcd7e5b15' because it is in use.

Great, we expected this error, and we can see the image is still available.
(overcloud) [stack@undercloud-0 ~]$ glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 33d0d4a6-36d9-45a1-9e3b-ff5bcd7e5b15 | cirros |
+--------------------------------------+--------+

Lastly, we try to consume this image again; before this fix, this would fail:

(overcloud) [stack@undercloud-0 ~]$ cinder create 1 --image 33d0d4a6-36d9-45a1-9e3b-ff5bcd7e5b15 --name volBFromImageAfterDelete
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-02-24T20:51:40.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 790d43e1-378e-478e-b10a-b1a9cd05f620 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | volBFromImageAfterDelete             |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 4e0fc4a8b79d4dfba3ee51ae20d0a4df     |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 7165e9408a0a447681534d9c5f8e6b5a     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+

Now, however, the image isn't broken after the failed deletion attempt.
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name                     | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+
| 790d43e1-378e-478e-b10a-b1a9cd05f620 | available | volBFromImageAfterDelete | 1    | tripleo     | true     |             |
| fe409ae3-d227-4aab-b874-bb1b5c39aad3 | available | Pansible_vol             | 1    | tripleo     | true     |             |
+--------------------------------------+-----------+--------------------------+------+-------------+----------+-------------+

As can be seen above, both volumes are available; the image wasn't deleted and is still usable. I call this verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Release of components for Red Hat OpenStack Platform 16.2.2), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1001