In 16.x we'll only fix Glance so that it behaves correctly (i.e., it does not delete the image while clones of it exist) even if clone v2 is enabled in Ceph.
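For context: with RBD clone v2, a snapshot no longer has to be protected for clones to be made from it, so the old delete-time safeguard (snapshot unprotect failing with EBUSY while clones exist) no longer triggers, and Glance has to detect dependent clones some other way before removing the image. You can inspect the same dependency by hand in Ceph. A minimal sketch, assuming the default TripleO pool names ('images' for Glance, 'volumes' for Cinder) and the image ID from the verification below; the output line is illustrative:

[root@controller-2 /]# rbd children images/41427008-d91e-456f-89ef-5c435fc0d3ad@snap
volumes/volume-97450c4a-bc6a-4977-8532-b8345bd9d736

An empty result means no volume still depends on the image snapshot. Cinder's RBD driver names its clones volume-<volume UUID>, so each listed entry maps directly to a Cinder volume.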
Verified on: python3-glance-store-1.0.2-1.20220110223456.el8ost.noarch

On a Ceph-backed deployment, I first had to bump up the minimum client version:

[root@controller-2 /]# ceph osd set-require-min-compat-client mimic
set require_min_compat_client to mimic
[root@controller-2 /]# ceph osd dump | grep client
require_min_compat_client mimic
min_compat_client jewel

FYI, you must set this before you upload the image; at first I had set it after the image was uploaded, and verification failed.

First I uploaded a cirros image to glance:

(overcloud) [stack@undercloud-0 ~]$ glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 41427008-d91e-456f-89ef-5c435fc0d3ad | cirros |
+--------------------------------------+--------+

Now let's create a volume from this image:

(overcloud) [stack@undercloud-0 ~]$ cinder create 2 --image 41427008-d91e-456f-89ef-5c435fc0d3ad --name VOl1FromImage
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-03-01T10:23:15.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 97450c4a-bc6a-4977-8532-b8345bd9d736 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | VOl1FromImage                        |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 1335a4b7d60b49808d45f2fb1f64dd0d     |
| replication_status             | None                                 |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | f43303691e2c4cc5ad82801fed72328b     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name          | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
| 97450c4a-bc6a-4977-8532-b8345bd9d736 | available | VOl1FromImage | 2    | tripleo     | true     |             |
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+

And now let's try to delete the source image, which should fail:

(overcloud) [stack@undercloud-0 ~]$ glance image-delete 41427008-d91e-456f-89ef-5c435fc0d3ad
Unable to delete image '41427008-d91e-456f-89ef-5c435fc0d3ad' because it is in use.
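As a side note on the min-compat-client prerequisite above: with the default clone format of 'auto', Ceph only uses clone v2 once every client is guaranteed to be at least mimic, which is why the setting has to be in place before the image (and its snapshot) is uploaded. Newer Ceph releases also ship a direct getter for the setting, which avoids grepping the full osd dump (assuming your Ceph version provides it):

[root@controller-2 /]# ceph osd get-require-min-compat-client
mimic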
The image remains active:

(overcloud) [stack@undercloud-0 ~]$ glance image-show 41427008-d91e-456f-89ef-5c435fc0d3ad
+----------------------------------+----------------------------------------------------------------------------------+
| Property                         | Value                                                                            |
+----------------------------------+----------------------------------------------------------------------------------+
| checksum                         | ba3cd24377dde5dfdd58728894004abb                                                 |
| container_format                 | bare                                                                             |
| created_at                       | 2022-03-01T10:19:40Z                                                             |
| direct_url                       | rbd://10e11939-371f-4dd7-8fd3-b6ca63180633/images/41427008-d91e-456f-89ef-5c435f |
|                                  | c0d3ad/snap                                                                      |
| disk_format                      | raw                                                                              |
| id                               | 41427008-d91e-456f-89ef-5c435fc0d3ad                                             |
| locations                        | [{"url": "rbd://10e11939-371f-4dd7-8fd3-b6ca63180633/images/41427008-d91e-456f-8 |
|                                  | 9ef-5c435fc0d3ad/snap", "metadata": {"store": "default_backend"}}]               |
| min_disk                         | 0                                                                                |
| min_ram                          | 0                                                                                |
| name                             | cirros                                                                           |
| os_hash_algo                     | sha512                                                                           |
| os_hash_value                    | b795f047a1b10ba0b7c95b43b2a481a59289dc4cf2e49845e60b194a911819d3ada03767bbba4143 |
|                                  | b44c93fd7f66c96c5a621e28dff51d1196dae64974ce240e                                 |
| os_hidden                        | False                                                                            |
| owner                            | 1335a4b7d60b49808d45f2fb1f64dd0d                                                 |
| owner_specified.openstack.md5    |                                                                                  |
| owner_specified.openstack.object | images/cirros                                                                    |
| owner_specified.openstack.sha256 |                                                                                  |
| protected                        | False                                                                            |
| size                             | 46137344                                                                         |
| status                           | active                                                                           |
| stores                           | default_backend                                                                  |
| tags                             | []                                                                               |
| updated_at                       | 2022-03-01T10:19:42Z                                                             |
| virtual_size                     | Not available                                                                    |
| visibility                       | public                                                                           |
+----------------------------------+----------------------------------------------------------------------------------+

And now let's create a second volume from this image:

(overcloud) [stack@undercloud-0 ~]$ cinder create 2 --image 41427008-d91e-456f-89ef-5c435fc0d3ad --name VOl2After
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-03-01T10:26:47.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 1da25189-51ab-498a-a2cf-31ec6cce729e |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | VOl2After                            |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 1335a4b7d60b49808d45f2fb1f64dd0d     |
| replication_status             | None                                 |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | f43303691e2c4cc5ad82801fed72328b     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+

And we successfully created a new volume from the image we attempted to delete:

(overcloud) [stack@undercloud-0 ~]$ cinder show 1da25189-51ab-498a-a2cf-31ec6cce729e
+--------------------------------+--------------------------------------------------+
| Property                       | Value                                            |
+--------------------------------+--------------------------------------------------+
| attached_servers               | []                                               |
| attachment_ids                 | []                                               |
| availability_zone              | nova                                             |
| bootable                       | true                                             |
| consistencygroup_id            | None                                             |
| created_at                     | 2022-03-01T10:26:47.000000                       |
| description                    | None                                             |
| encrypted                      | False                                            |
| id                             | 1da25189-51ab-498a-a2cf-31ec6cce729e             |
| metadata                       |                                                  |
| migration_status               | None                                             |
| multiattach                    | False                                            |
| name                           | VOl2After                                        |
| os-vol-host-attr:host          | hostgroup@tripleo_ceph#tripleo_ceph              |
| os-vol-mig-status-attr:migstat | None                                             |
| os-vol-mig-status-attr:name_id | None                                             |
| os-vol-tenant-attr:tenant_id   | 1335a4b7d60b49808d45f2fb1f64dd0d                 |
| replication_status             | None                                             |
| size                           | 2                                                |
| snapshot_id                    | None                                             |
| source_volid                   | None                                             |
| status                         | available                                        |
| updated_at                     | 2022-03-01T10:26:48.000000                       |
| user_id                        | f43303691e2c4cc5ad82801fed72328b                 |
| volume_image_metadata          | checksum : ba3cd24377dde5dfdd58728894004abb      |
|                                | container_format : bare                          |
|                                | disk_format : raw                                |
|                                | image_id : 41427008-d91e-456f-89ef-5c435fc0d3ad  |
|                                | image_name : cirros                              |
|                                | min_disk : 0                                     |
|                                | min_ram : 0                                      |
|                                | owner_specified.openstack.md5 :                  |
|                                | owner_specified.openstack.object : images/cirros |
|                                | owner_specified.openstack.sha256 :               |
|                                | size : 46137344                                  |
| volume_type                    | tripleo                                          |
+--------------------------------+--------------------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name          | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
| 1da25189-51ab-498a-a2cf-31ec6cce729e | available | VOl2After     | 2    | tripleo     | true     |             |
| 97450c4a-bc6a-4977-8532-b8345bd9d736 | available | VOl1FromImage | 2    | tripleo     | true     |             |
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+

Both volumes created from the image are available, so this is good to verify. Before this fix, the second volume would have ended up in error state, because the image would already have been broken by the delete.
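To double-check that the second volume really is a copy-on-write clone of the still-intact image snapshot, you can look at its parent pointer in Ceph. A sketch, again assuming the default 'volumes' pool and Cinder's volume-<UUID> naming; the output line is illustrative:

[root@controller-2 /]# rbd info volumes/volume-1da25189-51ab-498a-a2cf-31ec6cce729e | grep parent
        parent: images/41427008-d91e-456f-89ef-5c435fc0d3ad@snap

Before the fix, the image data behind that parent would already have been removed at this point, so this clone could never have been created.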
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenStack Platform 16.1.8 bug fix and enhancement advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:0986