In 17.x we'll only fix Glance so that it behaves correctly (i.e. it does not delete the image while clones of it still exist) even if clone v2 is enabled in Ceph.
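For reference, the clone relationship that the fix has to respect can be inspected directly against Ceph. The commands below are only an illustrative sketch (run from a node with a Ceph admin keyring, assuming the Glance pool is named "images" as in the direct_url shown below; <image-uuid> is a placeholder):

# Clone v2 is normally in effect when the cluster requires mimic-or-newer clients
ceph osd get-require-min-compat-client

# List RBD clones that still depend on the Glance image's protected snapshot;
# a non-empty list is exactly the case in which Glance must refuse the delete
rbd children images/<image-uuid>@snap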
Verified on:
python3-glance-store-2.5.1-0.20220629200342.5f1cee6.el9ost.noarch

On a deployment with Ceph, let's try to reproduce. Upload an image to Glance:

(overcloud) [stack@undercloud-0 ~]$ glance image-create --disk-format raw --container-format bare --file rhel-server-7.9-update-12-x86_64-kvm.raw --name rhel7.9.raw --progress
[=============================>] 100%
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | 115061285377b8bb9061fa165954f764                                                 |
| container_format | bare                                                                             |
| created_at       | 2022-07-18T11:51:49Z                                                             |
| direct_url       | rbd://0e34e789-ccf8-5261-b7ca-d09030a2adca/images/a78fb9b2-c6be-47d1-acb8-f4cd95 |
|                  | 91c0b9/snap                                                                      |
| disk_format      | raw                                                                              |
| id               | a78fb9b2-c6be-47d1-acb8-f4cd9591c0b9                                             |
| locations        | [{"url": "rbd://0e34e789-ccf8-5261-b7ca-d09030a2adca/images/a78fb9b2-c6be-47d1-a |
|                  | cb8-f4cd9591c0b9/snap", "metadata": {"store": "default_backend"}}]               |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | rhel7.9.raw                                                                      |
| os_hash_algo     | sha512                                                                           |
| os_hash_value    | 66bd8bef4a6b0cfae1ca3e87f1e2dd6e9d55768a101a451970a9e44cb66acf92c2d2e46431866423 |
|                  | 2f852a433ce5b4287d1027427e79ef1a17e54f7a8fdfc675                                 |
| os_hidden        | False                                                                            |
| owner            | 239b7d271ad44ed69eff6443a3d06da3                                                 |
| protected        | False                                                                            |
| size             | 10737418240                                                                      |
| status           | active                                                                           |
| stores           | default_backend                                                                  |
| tags             | []                                                                               |
| updated_at       | 2022-07-18T11:55:54Z                                                             |
| virtual_size     | 10737418240                                                                      |
| visibility       | shared                                                                           |
+------------------+----------------------------------------------------------------------------------+

Now let's create the first volume from this image; while it is still being created (before it becomes available), try to delete the source image:

(overcloud) [stack@undercloud-0 ~]$ cinder create 11 --image a78fb9b2-c6be-47d1-acb8-f4cd9591c0b9 --name FirstVolFromImage
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-07-18T11:57:43.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 3cca4f9f-fb18-4cd4-97c1-e9c50b06a238 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | FirstVolFromImage                    |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 239b7d271ad44ed69eff6443a3d06da3     |
| replication_status             | None                                 |
| size                           | 11                                   |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | c7c0243b6ec34577be0264ff3b056ce7     |
| volume_type                    | tripleo_default                      |
+--------------------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ glance image-delete a78fb9b2-c6be-47d1-acb8-f4cd9591c0b9
Unable to delete image 'a78fb9b2-c6be-47d1-acb8-f4cd9591c0b9' because it is in use.
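Optionally, the copy-on-write relationship can also be confirmed on the Ceph side. This is an illustrative check only, assuming the default pool names ("images" for Glance, "volumes" for Cinder), the Cinder RBD driver's default volume-<id> image naming, and that volume flattening is not configured:

# The new volume should be an RBD clone whose parent is the Glance image's snapshot
rbd info volumes/volume-3cca4f9f-fb18-4cd4-97c1-e9c50b06a238 | grep parent

# The image's snapshot should list the volume among its children
rbd children images/a78fb9b2-c6be-47d1-acb8-f4cd9591c0b9@snap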
The volume is now available, and as seen above the image could not be deleted while in use:

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------------------+------+-----------------+----------+-------------+
| ID                                   | Status    | Name              | Size | Volume Type     | Bootable | Attached to |
+--------------------------------------+-----------+-------------------+------+-----------------+----------+-------------+
| 3cca4f9f-fb18-4cd4-97c1-e9c50b06a238 | available | FirstVolFromImage | 11   | tripleo_default | true     |             |
+--------------------------------------+-----------+-------------------+------+-----------------+----------+-------------+

Now let's try to create a second volume from the same image. Before this fix that would fail; with the fix in place, the second volume should also reach available status.

(overcloud) [stack@undercloud-0 ~]$ cinder create 11 --image a78fb9b2-c6be-47d1-acb8-f4cd9591c0b9 --name SecondVolFromImage
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-07-18T12:01:27.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | ed7a3baa-601e-44c9-946f-f305499cc48e |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | SecondVolFromImage                   |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 239b7d271ad44ed69eff6443a3d06da3     |
| replication_status             | None                                 |
| size                           | 11                                   |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | c7c0243b6ec34577be0264ff3b056ce7     |
| volume_type                    | tripleo_default                      |
+--------------------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------------+------+-----------------+----------+-------------+
| ID                                   | Status    | Name               | Size | Volume Type     | Bootable | Attached to |
+--------------------------------------+-----------+--------------------+------+-----------------+----------+-------------+
| 3cca4f9f-fb18-4cd4-97c1-e9c50b06a238 | available | FirstVolFromImage  | 11   | tripleo_default | true     |             |
| ed7a3baa-601e-44c9-946f-f305499cc48e | available | SecondVolFromImage | 11   | tripleo_default | true     |             |
+--------------------------------------+-----------+--------------------+------+-----------------+----------+-------------+

Both volumes were created successfully. As seen above, attempting to delete an in-use image no longer breaks it; the image is still usable/available and works fine. Good to verify.
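As a final Ceph-side sanity check (again assuming the "images" pool from the direct_url above), one can verify that the refused delete left the image's RBD data fully intact, for example that nothing was moved to the RBD trash, and that both volume clones are still listed as children of its snapshot:

# The image's protected snapshot should still be present
rbd snap ls images/a78fb9b2-c6be-47d1-acb8-f4cd9591c0b9

# The refused delete should not have moved anything to the trash
rbd trash ls images

# Both volumes should appear as children of the snapshot
rbd children images/a78fb9b2-c6be-47d1-acb8-f4cd9591c0b9@snap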
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Release of components for Red Hat OpenStack Platform 17.0 (Wallaby)), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2022:6543