Bug 1886013

Summary: Creating multiple volumes often leads to error as 'Volume status must be available to reserve, but the status is attaching'
Product: Red Hat OpenStack
Reporter: Andre <afariasa>
Component: python-glance-store
Assignee: Rajat Dhasmana <rdhasman>
Status: CLOSED ERRATA
QA Contact: Tzach Shefi <tshefi>
Severity: high
Docs Contact: RHOS Documentation Team <rhos-docs>
Priority: high
Version: 16.1 (Train)
CC: abishop, apevec, athomas, cyril, eglynn, jschluet, lhh, msava, nnavarat, rdhasman, senrique, sputhenp, tshefi
Target Milestone: beta
Keywords: Triaged
Target Release: 17.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: python-glance-store-2.5.1-0.20220629200342.5f1cee6.el9ost
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-09-21 12:12:14 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1760183

Description Andre 2020-10-07 13:09:50 UTC
Description of problem:
Customer is trying to create bootable volumes from a Glance image, and sometimes the operation ends in error:
> openstack volume create --size 14 --image rhel-7.7 test1

Glance is using Cinder as its backend.
Cinder uses HPE 3PAR iSCSI as its backend storage.

From the logs:
~~~
2020-10-06 18:52:13.649 46 ERROR glance_store._drivers.cinder [] Failed to reserve volume VOLUME_UUID: Invalid volume: Volume status must be available to reserve, but the status is attaching. (HTTP 400) (Request-ID: req-UUID): cinderclient.exceptions.BadRequest: Invalid volume: Volume status must be available to reserve, but the status is attaching. (HTTP 400) (Request-ID: req-UUID)
2020-10-06 18:52:13.651 46 INFO eventlet.wsgi.server [] Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/glance_store/_drivers/cinder.py", line 542, in _open_cinder_volume
    volume.reserve(volume)
  File "/usr/lib/python3.6/site-packages/cinderclient/v2/volumes.py", line 73, in reserve
    return self.manager.reserve(self)
  File "/usr/lib/python3.6/site-packages/cinderclient/v2/volumes.py", line 373, in reserve
    return self._action('os-reserve', volume)
  File "/usr/lib/python3.6/site-packages/cinderclient/v2/volumes.py", line 336, in _action
    resp, body = self.api.client.post(url, body=body)
  File "/usr/lib/python3.6/site-packages/cinderclient/client.py", line 477, in post
    return self._cs_request(url, 'POST', **kwargs)
  File "/usr/lib/python3.6/site-packages/cinderclient/client.py", line 430, in _cs_request
    resp, body = self.request(url, method, **kwargs)
  File "/usr/lib/python3.6/site-packages/cinderclient/client.py", line 412, in request
    raise exceptions.from_response(resp, body)
cinderclient.exceptions.BadRequest: Invalid volume: Volume status must be available to reserve, but the status is attaching. (HTTP 400) (Request-ID: req-UUID)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/eventlet/wsgi.py", line 582, in handle_one_response
    for data in result:
  File "/usr/lib/python3.6/site-packages/glance/notifier.py", line 414, in _get_chunk_data_iterator
    for chunk in data:
  File "/usr/lib/python3.6/site-packages/glance_store/_drivers/cinder.py", line 599, in _cinder_volume_data_iterator
    with self._open_cinder_volume(client, volume, 'rb') as fp:
  File "/usr/lib64/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/usr/lib/python3.6/site-packages/glance_store/_drivers/cinder.py", line 547, in _open_cinder_volume
    raise exceptions.BackendException(msg)
glance_store.exceptions.BackendException: Failed to reserve volume UUID: Invalid volume: Volume status must be available to reserve, but the status is attaching. (HTTP 400) (Request-ID: req-UUID)
~~~

~~~
2020-10-06 18:52:13.642 27 INFO cinder.volume.api [] Volume info retrieved successfully.
2020-10-06 18:52:13.646 27 ERROR cinder.volume.api [] Volume status must be available to reserve, but the status is attaching.
2020-10-06 18:52:13.647 27 INFO cinder.api.openstack.wsgi [] https://URL:13776/v2/UUID/volumes/UUID/action returned with HTTP 400
~~~


The logs will be posted in the next comment as private, as they contain customer-sensitive information.
All the logs can be found at supportshell under /cases/02769971/

Version-Release number of selected component (if applicable):
rhosp-rhel8/openstack-cinder-volume:16.1-48-hpe3par
rhosp-rhel8/openstack-cinder-scheduler:16.1-49
rhosp-rhel8/openstack-cinder-api:16.1-49
rhosp-rhel8/openstack-glance-api:16.1-47


How reproducible:
Intermittent

Steps to Reproduce:
1. Create several volumes from the same Glance image at roughly the same time, e.g. run openstack volume create --image <image> in parallel

Actual results:
Some volume creations intermittently fail with "Volume status must be available to reserve, but the status is attaching".

Expected results:
All volumes are created successfully.

Additional info:

Comment 2 Alan Bishop 2020-10-07 16:16:40 UTC
This is not a cinder bug, per se, but a side effect of glance using cinder as its backend. To serve an image, glance must attach the associated cinder volume. When there are multiple simultaneous requests for the same glance image, each request competes for access to the same cinder volume; once the volume is attached to serve one request, it is not available to the others until the first request has finished.

I don't know how this could be handled within cinder; it may be necessary for glance to handle it.
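The contention described above can be sketched with a toy model (illustrative only; this is not cinder's real state machine, and the class names are made up for this example): only the first of several simultaneous reserve calls succeeds, and the rest fail just like the traceback in the description.

```python
# Toy model of the race: cinder only allows reserving a volume whose
# status is 'available', so concurrent glance requests for the same
# image-backing volume cannot all succeed.

class InvalidVolume(Exception):
    pass

class Volume:
    def __init__(self):
        self.status = 'available'

    def reserve(self):
        if self.status != 'available':
            raise InvalidVolume(
                "Volume status must be available to reserve, "
                "but the status is %s." % self.status)
        self.status = 'attaching'

image_volume = Volume()

# The first image download reserves (attaches) the volume...
image_volume.reserve()

# ...so a second, simultaneous download fails with the same
# "must be available to reserve" error seen in the logs.
try:
    image_volume.reserve()
except InvalidVolume as e:
    print(e)
```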

Comment 4 Cyril Roelandt 2020-10-09 14:42:44 UTC
So Glance should probably be able to wait for a Cinder volume to be available before reserving it, right?

Cinder 'reserve_volume' method raises an InvalidVolume exception if the volume is already reserved, but I think this exception may also be raised for other reasons. I think it might be a bit tricky for Glance to know whether a volume is truly "invalid", or whether it is just unavailable at the moment. Is there a way to be perfectly sure why the reservation of a volume fails?

Comment 5 Alan Bishop 2020-10-09 17:11:50 UTC
(In reply to Cyril Roelandt from comment #4)
> So Glance should probably be able to wait for a Cinder volume to be
> available before reserving it, right?

Conceptually, yes, but how you implement that is what matters. If the volume is in use and there are two or more requests waiting for it to become available, the waiting requests will still need to compete. 

Ideally what you need is a resource lock, but glance-api runs on multiple nodes and that would require a distributed lock manager (DLM). I can tell you from painful experience that this is a difficult requirement to meet with OSP. 

I don't know if some sort of retry mechanism would be effective, but that still won't help if there are too many outstanding requests. Plus, unless you can queue the requests, they won't be served in a time-ordered fashion. Let's say you have N requests waiting, and a new one arrives and happens to get lucky reserving the volume. The N waiting requests will be unhappy to discover that a newcomer got serviced before them.
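A minimal retry-with-backoff sketch (purely illustrative, using a fake volume class; this is not the fix that shipped) shows both why retries can work and why they cannot guarantee arrival-order fairness: whichever waiter happens to poll first after the volume is released wins.

```python
import threading
import time

class InvalidVolume(Exception):
    pass

class FakeVolume:
    """Minimal stand-in for a cinder volume (illustrative only)."""
    def __init__(self):
        self.status = 'attaching'   # currently serving another request
    def reserve(self):
        if self.status != 'available':
            raise InvalidVolume('status is %s' % self.status)
        self.status = 'attaching'
    def release(self):
        self.status = 'available'

def reserve_with_retry(volume, attempts=50, delay=0.01):
    # Keep retrying until the volume can be reserved or we give up.
    # Note: nothing orders the waiters; a newcomer can win the race.
    for _ in range(attempts):
        try:
            volume.reserve()
            return True
        except InvalidVolume:
            time.sleep(delay)
    return False

vol = FakeVolume()
# Simulate the first request finishing while we are waiting:
threading.Timer(0.05, vol.release).start()
ok = reserve_with_retry(vol)
print(ok)  # True: the retry eventually wins once the volume is released
```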

> Cinder 'reserve_volume' method raises an InvalidVolume exception if the
> volume is already reserved, but I think this exception may also be raised
> for other reasons. I think it might be a bit tricky for Glance to know
> whether a volume is truly "invalid", or whether it is just unavailable at
> the moment. Is there a way to be perfectly sure why the reservation of a
> volume fails?

Although InvalidVolume sounds like a generic failure, it's rather specific in the context of reserving a volume. From glance's perspective, the only failure is when the cinder volume's status isn't "available."

Additional thoughts:

Glance caching (if enabled) and nova caching should help, because they reduce the need to access the cinder volume. But caching won't eliminate the problem.

Another idea would be for cinder to use multiattach volumes wherever possible (it's backend dependent). I think this idea bears investigation, but it would require an RFE.

Comment 7 Andre 2020-10-21 12:38:16 UTC
Hi,


It's still not clear to me how (or whether) this issue will be addressed at all; do we have any efforts on that?

I got new feedback from the customer about using cinder image caching. It improves performance, but in one situation the issue still persists:

~~~
This cinder image caching is helping, because 3PAR supports efficient volume cloning (RAW format only). This is why this multiple-volume-creation failure does not happen with RAW-format images.


So now, the behaviour becomes like this:

Upload a new RHEL-7 qcow2 image. Let's say that this image size is 1GB, the resulting glance image ID is "abcdefg", and the cinder volume backing that image exists in the "service" project with the name "image-abcdefg" and size 1GB. 

Create a bootable volume using the image

After the volume is successfully created, the "service" project contains a volume with the same name "image-abcdefg" but with a larger size (10GB, the full size of the virtual disk)

Create another volume; it is created instantly (remember, 3PAR supports efficient volume cloning).

Create multiple volumes using the same image; all are successfully created in an instant.


This proved it helps to reduce the problem, *BUT* if I try to create multiple volumes at the same time using a newly uploaded qcow2 image that hasn't been cached yet, some will succeed and most will fail (the main problem is still there).
~~~

Comment 8 Cyril Roelandt 2020-10-21 19:02:57 UTC
@Andre: According to the driver support matrix[1], 3PAR supports multiattach. If I understand Alan's comment (see #5), this could help:

> Another idea would be for cinder to use multiattach volumes wherever possible (it's backend dependent). 


Is this something that could be tried on the customer side?


[1] https://docs.openstack.org/cinder/latest/reference/support-matrix.html#driver-support-matrix

Comment 9 Alan Bishop 2020-10-21 19:25:01 UTC
@Cyril,

No, this isn't something for the customer to try. I was speculating about improvements in glance and/or cinder with regards to how glance stores images in cinder volumes. These details would be totally transparent to users.

Comment 10 Cyril Roelandt 2020-10-21 22:02:10 UTC
Oh I see. So:

1) Could this be a topic for next week's PTG?

2) I think it is unlikely that we will get a good workaround before such an RFE is implemented, am I right?

Comment 11 Alan Bishop 2020-10-22 19:19:10 UTC
Sure, that sounds like a good topic. There are others more familiar with glance's use of cinder.

Comment 13 Cyril Roelandt 2020-11-17 20:12:51 UTC
This is one of many Glance/Cinder issues, and I don't think we have another workaround. To be honest, I really think this is something we're gonna have to live with for now :-/ I'm targeting this for 17.0, but the fix might happen later than that.

Comment 19 Cyril Roelandt 2021-12-08 21:49:03 UTC
Hello,

The various Glance/Cinder fixes that will be shipped in OSP17.0 should also be available in 16.2.2 and 16.1.8. I just need to clone the relevant bugs, push the cherry-picks and make sure the CI passes before they can be properly tested downstream :)

Comment 28 Tzach Shefi 2022-07-14 15:41:35 UTC
Ideally I'd verify this bz using our 3PAR, but we have an issue with it at the moment, so I used our NetApp iSCSI instead.

Verified on:
python3-glance-store-2.5.1-0.20220629200342.5f1cee6.el9ost.noarch


Deployed a system with Glance over Cinder over NetApp iSCSI. 
First, let's try Glance over Cinder with the default volume type, which isn't multiattach enabled. 
Glance cache is disabled by default. 

Let's upload an image:
(overcloud) [stack@undercloud-0 ~]$ glance image-create --disk-format qcow2 --container-format bare --file rhel-server-7.9-update-12-x86_64-kvm.qcow2 --name rhel7.9  --progress
[=============================>] 100%
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | f77fc7e3cf31a210a8244e486466ce34                                                 |
| container_format | bare                                                                             |
| created_at       | 2022-07-14T14:12:27Z                                                             |
| direct_url       | cinder://default_backend/3e47a4c1-8117-43ec-ad86-470c8b4825d9                    |
| disk_format      | qcow2                                                                            |
| id               | ba1c2eb1-b054-4b3f-b0be-2e49bf1a5709                                             |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | rhel7.9                                                                          |
| os_hash_algo     | sha512                                                                           |
| os_hash_value    | 16f2afc236708b215b3dfb372e7414f1d410713b8352580313e582d6ebb96035edad4a9a4c9645df |
|                  | e6e1ab131286d183fd21592a9f9afa96cde03b4cab6dde55                                 |
| os_hidden        | False                                                                            |
| owner            | e0a3325fe7194923917d341e0dd0e8f2                                                 |
| protected        | False                                                                            |
| size             | 838036992                                                                        |
| status           | active                                                                           |
| stores           | default_backend                                                                  |
| tags             | []                                                                               |
| updated_at       | 2022-07-14T14:12:45Z                                                             |
| virtual_size     | 10737418240                                                                      |
| visibility       | shared                                                                           |
+------------------+----------------------------------------------------------------------------------+


Now let's try to create a volume from said image; this should work fine:

(overcloud) [stack@undercloud-0 ~]$ cinder create 10 --image ba1c2eb1-b054-4b3f-b0be-2e49bf1a5709 --name testVolFromImage
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-07-14T14:14:00.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 90e5767d-7aa4-43b1-a2aa-8bd921e4e780 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | testVolFromImage                     |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | e0a3325fe7194923917d341e0dd0e8f2     |
| replication_status             | None                                 |
| size                           | 10                                   |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 5ec5d8fcf6084fb58a8012584f788c54     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name             | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------------------+------+-------------+----------+-------------+
| 90e5767d-7aa4-43b1-a2aa-8bd921e4e780 | available | testVolFromImage | 10   | tripleo     | true     |             |
+--------------------------------------+-----------+------------------+------+-------------+----------+-------------+

Now let's try to create a few volumes simultaneously:
(overcloud) [stack@undercloud-0 ~]$ for i in {1..10} ; do  cinder create 10 --image ba1c2eb1-b054-4b3f-b0be-2e49bf1a5709 --name vol$i ; done
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-07-14T14:17:45.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | b7f86f85-4ab3-42b8-866e-112c955935f6 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | vol1                                 |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | e0a3325fe7194923917d341e0dd0e8f2     |
| replication_status             | None                                 |
| size                           | 10                                   |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 5ec5d8fcf6084fb58a8012584f788c54     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+
....
..

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-------------+------------------+------+-------------+----------+-------------+
| ID                                   | Status      | Name             | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-------------+------------------+------+-------------+----------+-------------+
| 09a09a34-249c-4429-b8b5-0730ad690406 | creating    | vol5             | 10   | tripleo     | false    |             |
| 14c63fcf-ca2c-428d-881a-f0ca65968f05 | creating    | vol4             | 10   | tripleo     | false    |             |
| 171b0056-f53f-4ce8-a9f3-10bbe7d374d3 | creating    | vol6             | 10   | tripleo     | false    |             |
| 1caf57a4-9ab3-4d99-9b46-46eb92b228c1 | creating    | vol10            | 10   | tripleo     | false    |             |
| 3b857e68-cb88-42b6-bbf8-e046e12ba3c5 | creating    | vol8             | 10   | tripleo     | false    |             |
| 3c2d515d-9f24-4158-9dc7-450f901b4504 | downloading | vol3             | 10   | tripleo     | false    |             |
| 620d61c2-7f15-4c6d-9b8c-7b47b55da618 | downloading | vol9             | 10   | tripleo     | false    |             |
| 90e5767d-7aa4-43b1-a2aa-8bd921e4e780 | available   | testVolFromImage | 10   | tripleo     | true     |             |
| 94ce8772-d88f-4d0f-8bf3-4e6d1e4e35dd | creating    | vol2             | 10   | tripleo     | false    |             |
| b7f86f85-4ab3-42b8-866e-112c955935f6 | downloading | vol1             | 10   | tripleo     | false    |             |
| c9f6e299-7d25-46ac-b0e2-ba8d5d233a6f | creating    | vol7             | 10   | tripleo     | false    |             |
+--------------------------------------+-------------+------------------+------+-------------+----------+-------------+

Within a few minutes we hit the issue: some volumes are created fine while others reach error state:
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name             | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------------------+------+-------------+----------+-------------+
| 09a09a34-249c-4429-b8b5-0730ad690406 | error     | vol5             | 10   | tripleo     | false    |             |
| 14c63fcf-ca2c-428d-881a-f0ca65968f05 | available | vol4             | 10   | tripleo     | true     |             |
| 171b0056-f53f-4ce8-a9f3-10bbe7d374d3 | available | vol6             | 10   | tripleo     | true     |             |
| 1caf57a4-9ab3-4d99-9b46-46eb92b228c1 | available | vol10            | 10   | tripleo     | true     |             |
| 3b857e68-cb88-42b6-bbf8-e046e12ba3c5 | available | vol8             | 10   | tripleo     | true     |             |
| 3c2d515d-9f24-4158-9dc7-450f901b4504 | available | vol3             | 10   | tripleo     | true     |             |
| 620d61c2-7f15-4c6d-9b8c-7b47b55da618 | available | vol9             | 10   | tripleo     | true     |             |
| 90e5767d-7aa4-43b1-a2aa-8bd921e4e780 | available | testVolFromImage | 10   | tripleo     | true     |             |
| 94ce8772-d88f-4d0f-8bf3-4e6d1e4e35dd | error     | vol2             | 10   | tripleo     | false    |             |
| b7f86f85-4ab3-42b8-866e-112c955935f6 | available | vol1             | 10   | tripleo     | true     |             |
| c9f6e299-7d25-46ac-b0e2-ba8d5d233a6f | error     | vol7             | 10   | tripleo     | false    |             |
+--------------------------------------+-----------+------------------+------+-------------+----------+-------------+

Delete all the volumes and image


Create a multiattach volume type:
(overcloud) [stack@undercloud-0 ~]$ cinder extra-specs-list
+--------------------------------------+-------------+-----------------------------------------------------------------------+
| ID                                   | Name        | extra_specs                                                           |
+--------------------------------------+-------------+-----------------------------------------------------------------------+
| 815bed31-82b6-4420-a75a-32240dd3c63e | tripleo     | {}                                                                    |
| df9e3050-64e0-4e4d-ad38-7e2c96e61657 | multiattach | {'multiattach': '<is> True', 'volume_backend_name': 'tripleo_netapp'} |
+--------------------------------------+-------------+-----------------------------------------------------------------------+

In Glance's glance-api.conf I've set:
cinder_volume_type=multiattach
Then restarted the Glance container, repeating this on all 3 controllers. 
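For reference, that change amounts to a one-line option in the [glance_store] section of glance-api.conf (the container-host path below is an assumption based on the default OSP layout, not taken from this case):

```ini
# e.g. /var/lib/config-data/puppet-generated/glance_api/etc/glance/glance-api.conf
[glance_store]
# Back newly uploaded images with volumes of the multiattach-capable type
cinder_volume_type = multiattach
```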

Upload a new rhel image:
(overcloud) [stack@undercloud-0 ~]$ glance image-create --disk-format qcow2 --container-format bare --file rhel-server-7.9-update-12-x86_64-kvm.qcow2 --name MA_rhel7.9  --progress
[=============================>] 100%
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | f77fc7e3cf31a210a8244e486466ce34                                                 |
| container_format | bare                                                                             |
| created_at       | 2022-07-14T14:54:45Z                                                             |
| direct_url       | cinder://default_backend/cef61adf-bfba-42f8-a8e9-a85a7cc1ce65                    |
| disk_format      | qcow2                                                                            |
| id               | da157550-f531-415c-b7be-24a7386bedb2                                             |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | MA_rhel7.9                                                                       |
| os_hash_algo     | sha512                                                                           |
| os_hash_value    | 16f2afc236708b215b3dfb372e7414f1d410713b8352580313e582d6ebb96035edad4a9a4c9645df |
|                  | e6e1ab131286d183fd21592a9f9afa96cde03b4cab6dde55                                 |
| os_hidden        | False                                                                            |
| owner            | e0a3325fe7194923917d341e0dd0e8f2                                                 |
| protected        | False                                                                            |
| size             | 838036992                                                                        |
| status           | active                                                                           |
| stores           | default_backend                                                                  |
| tags             | []                                                                               |
| updated_at       | 2022-07-14T14:55:04Z                                                             |
| virtual_size     | 10737418240                                                                      |
| visibility       | shared                                                                           |
+------------------+----------------------------------------------------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ cinder show cef61adf-bfba-42f8-a8e9-a85a7cc1ce65
+--------------------------------+--------------------------------------------------------+
| Property                       | Value                                                  |
+--------------------------------+--------------------------------------------------------+
| attached_servers               | []                                                     |
| attachment_ids                 | []                                                     |
| availability_zone              | nova                                                   |
| bootable                       | false                                                  |
| consistencygroup_id            | None                                                   |
| created_at                     | 2022-07-14T14:54:46.000000                             |
| description                    | None                                                   |
| encrypted                      | False                                                  |
| id                             | cef61adf-bfba-42f8-a8e9-a85a7cc1ce65                   |
| metadata                       | glance_image_id : da157550-f531-415c-b7be-24a7386bedb2 |
|                                | image_owner : e0a3325fe7194923917d341e0dd0e8f2         |
|                                | image_size : 838036992                                 |
|                                | readonly : True                                        |
| migration_status               | None                                                   |
| multiattach                    | True                                                   |
| name                           | image-da157550-f531-415c-b7be-24a7386bedb2             |
| os-vol-host-attr:host          | hostgroup@tripleo_netapp#cinder_volumes                |
| os-vol-mig-status-attr:migstat | None                                                   |
| os-vol-mig-status-attr:name_id | None                                                   |
| os-vol-tenant-attr:tenant_id   | ae7d7af3dc98488885bafae37a609db1                       |
| readonly                       | True                                                   |
| replication_status             | None                                                   |
| size                           | 1                                                      |
| snapshot_id                    | None                                                   |
| source_volid                   | None                                                   |
| status                         | available                                              |
| updated_at                     | 2022-07-14T14:55:04.000000                             |
| user_id                        | 348d3b0d5c24496f8bbb9606bb3ff881                       |
| volume_type                    | multiattach                                            |---> image is backed by a multiattach volume
+--------------------------------+--------------------------------------------------------+


Now let's try creating two volumes from this image first; this should work:
(overcloud) [stack@undercloud-0 ~]$ for i in {1..2} ; do  cinder create 10 --image da157550-f531-415c-b7be-24a7386bedb2 --name vol$i ; done
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-07-14T15:04:13.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 123b04b2-e874-4baa-bc9c-ec3bd7ef2f4e |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | vol1                                 |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | e0a3325fe7194923917d341e0dd0e8f2     |
| replication_status             | None                                 |
| size                           | 10                                   |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 5ec5d8fcf6084fb58a8012584f788c54     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-07-14T15:04:15.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | d7d2400b-e407-4789-b699-4cbd9bb302b4 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | vol2                                 |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | e0a3325fe7194923917d341e0dd0e8f2     |
| replication_status             | None                                 |
| size                           | 10                                   |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 5ec5d8fcf6084fb58a8012584f788c54     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+

Both are created fine:
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 123b04b2-e874-4baa-bc9c-ec3bd7ef2f4e | available | vol1 | 10   | tripleo     | true     |             |
| d7d2400b-e407-4789-b699-4cbd9bb302b4 | available | vol2 | 10   | tripleo     | true     |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

Let's delete them both and try to create 10 at once:
(overcloud) [stack@undercloud-0 ~]$ for i in $(cinder list | grep avail | awk '{print $2}'); do cinder delete $i; done
Request to delete volume 123b04b2-e874-4baa-bc9c-ec3bd7ef2f4e has been accepted.
Request to delete volume d7d2400b-e407-4789-b699-4cbd9bb302b4 has been accepted.
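The delete loop above extracts the volume IDs by filtering `cinder list` output with grep/awk; the ID is the second whitespace-separated field of each table row. A minimal self-contained illustration of that extraction, using sample rows copied from the listing above:

```shell
#!/bin/sh
# Sample rows mimicking `cinder list` table output (IDs from the listing above).
rows='| 123b04b2-e874-4baa-bc9c-ec3bd7ef2f4e | available | vol1 | 10 | tripleo | true |
| d7d2400b-e407-4789-b699-4cbd9bb302b4 | available | vol2 | 10 | tripleo | true |'

# $1 is the leading "|" border, so $2 is the volume ID column.
ids=$(printf '%s\n' "$rows" | grep avail | awk '{print $2}')
printf '%s\n' "$ids"
```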

(overcloud) [stack@undercloud-0 ~]$ for i in {1..10} ; do  cinder create 10 --image da157550-f531-415c-b7be-24a7386bedb2 --name vol$i ; done
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-07-14T15:09:36.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 98e5a30e-1d51-4437-9971-38c24a0a20e6 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | vol1                                 |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | e0a3325fe7194923917d341e0dd0e8f2     |
| replication_status             | None                                 |
| size                           | 10                                   |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 5ec5d8fcf6084fb58a8012584f788c54     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+
..
..

cinder list output while the volumes are still being created:
+--------------------------------------+-------------+-------+------+-------------+----------+-------------+
| ID                                   | Status      | Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-------------+-------+------+-------------+----------+-------------+
| 09ab74dc-4492-401b-98bb-abc2029ee0bf | creating    | vol5  | 10   | tripleo     | false    |             |
| 1d92f3fe-a0e1-49bf-bccc-d4efa694cb57 | creating    | vol6  | 10   | tripleo     | false    |             |
| 47f7ef96-2d35-4df2-9d93-0c9111a796f2 | downloading | vol2  | 10   | tripleo     | false    |             |
| 787cbed9-805a-4f6e-a69f-d27638aa4c6c | creating    | vol9  | 10   | tripleo     | false    |             |
| 9534a23d-c2ec-4de5-a7fe-3128b82b8ac2 | downloading | vol4  | 10   | tripleo     | false    |             |
| 98e5a30e-1d51-4437-9971-38c24a0a20e6 | downloading | vol1  | 10   | tripleo     | false    |             |
| ac59cb23-f41d-4775-9d8f-22738531432c | creating    | vol8  | 10   | tripleo     | false    |             |
| ae3cfdb0-4cdb-4bd4-b4e9-b19771601a57 | downloading | vol3  | 10   | tripleo     | false    |             |
| e9b8df27-4ab4-4abf-aa25-8b5d3981a140 | creating    | vol10 | 10   | tripleo     | false    |             |
| f0c2f458-82a3-4600-9935-efc286ac96fa | creating    | vol7  | 10   | tripleo     | false    |             |
+--------------------------------------+-------------+-------+------+-------------+----------+-------------+
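The volumes pass through transient states (creating, downloading) before settling on available, so a second `cinder list` a bit later shows the final result. A hypothetical helper for polling, sketched here on sample text so it is self-contained; in a real session one could call it as `while still_busy "$(cinder list)"; do sleep 5; done`:

```shell
#!/bin/sh
# Hypothetical helper: succeed (exit 0) if any volume in a `cinder list`
# dump is still in a transient state (creating or downloading).
still_busy() {
    printf '%s\n' "$1" | grep -Eq 'creating|downloading'
}
```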


(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------+------+-------------+----------+-------------+
| ID                                   | Status    | Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------+------+-------------+----------+-------------+
| 09ab74dc-4492-401b-98bb-abc2029ee0bf | available | vol5  | 10   | tripleo     | true     |             |
| 1d92f3fe-a0e1-49bf-bccc-d4efa694cb57 | available | vol6  | 10   | tripleo     | true     |             |
| 47f7ef96-2d35-4df2-9d93-0c9111a796f2 | available | vol2  | 10   | tripleo     | true     |             |
| 787cbed9-805a-4f6e-a69f-d27638aa4c6c | available | vol9  | 10   | tripleo     | true     |             |
| 9534a23d-c2ec-4de5-a7fe-3128b82b8ac2 | available | vol4  | 10   | tripleo     | true     |             |
| 98e5a30e-1d51-4437-9971-38c24a0a20e6 | available | vol1  | 10   | tripleo     | true     |             |
| ac59cb23-f41d-4775-9d8f-22738531432c | available | vol8  | 10   | tripleo     | true     |             |
| ae3cfdb0-4cdb-4bd4-b4e9-b19771601a57 | available | vol3  | 10   | tripleo     | true     |             |
| e9b8df27-4ab4-4abf-aa25-8b5d3981a140 | available | vol10 | 10   | tripleo     | true     |             |
| f0c2f458-82a3-4600-9935-efc286ac96fa | available | vol7  | 10   | tripleo     | true     |             |
+--------------------------------------+-----------+-------+------+-------------+----------+-------------+

Yay, this time around we managed to create 10 volumes simultaneously from the new MA image.

Let's delete them all again and recreate 10 new ones, this time also checking attachment-list:
(overcloud) [stack@undercloud-0 ~]$ for i in $(cinder list | grep avail | awk '{print $2}'); do cinder delete $i; done
Request to delete volume 09ab74dc-4492-401b-98bb-abc2029ee0bf has been accepted.
Request to delete volume 1d92f3fe-a0e1-49bf-bccc-d4efa694cb57 has been accepted.
..


(overcloud) [stack@undercloud-0 ~]$  for i in {1..10} ; do  cinder create 10 --image da157550-f531-415c-b7be-24a7386bedb2 --name vol$i ; done
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-07-14T15:33:55.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 53cb8759-ebcf-4090-ac7a-eea1c0c30074 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | vol1                                 |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | e0a3325fe7194923917d341e0dd0e8f2     |
| replication_status             | None                                 |
| size                           | 10                                   |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 5ec5d8fcf6084fb58a8012584f788c54     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+
...
..


Meanwhile, in the background we notice multiple concurrent attachments to the same image-backing volume:
 [stack@undercloud-0 ~]$ cinder --os-volume-api-version 3.77  attachment-list --all-tenants
WARNING:cinderclient.shell:API version 3.77 requested, 
WARNING:cinderclient.shell:downgrading to 3.64 based on server support.
+--------------------------------------+--------------------------------------+----------+-----------+
| ID                                   | Volume ID                            | Status   | Server ID |
+--------------------------------------+--------------------------------------+----------+-----------+
| 1d127e2b-40e1-49a4-9cab-5a52506184c6 | cef61adf-bfba-42f8-a8e9-a85a7cc1ce65 | attached | -         |
| 3aba3b2a-f78d-4819-b0a0-216dda47557f | cef61adf-bfba-42f8-a8e9-a85a7cc1ce65 | attached | -         |
| 7b771ff0-e2d3-445b-bd07-3333035f5865 | cef61adf-bfba-42f8-a8e9-a85a7cc1ce65 | attached | -         |
| eff0718e-f125-443e-b749-9497a0aaa83a | cef61adf-bfba-42f8-a8e9-a85a7cc1ce65 | attached | -         |
+--------------------------------------+--------------------------------------+----------+-----------+
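All four attachments in the table above reference the same Volume ID, i.e. the multiattach volume backing the image is attached several times concurrently instead of failing the reserve. A self-contained sketch of how one might confirm that from a saved `attachment-list` dump (sample rows copied from the table above):

```shell
#!/bin/sh
# Sample rows mimicking `cinder attachment-list` output (from the table above).
attachments='| 1d127e2b-40e1-49a4-9cab-5a52506184c6 | cef61adf-bfba-42f8-a8e9-a85a7cc1ce65 | attached | - |
| 3aba3b2a-f78d-4819-b0a0-216dda47557f | cef61adf-bfba-42f8-a8e9-a85a7cc1ce65 | attached | - |
| 7b771ff0-e2d3-445b-bd07-3333035f5865 | cef61adf-bfba-42f8-a8e9-a85a7cc1ce65 | attached | - |
| eff0718e-f125-443e-b749-9497a0aaa83a | cef61adf-bfba-42f8-a8e9-a85a7cc1ce65 | attached | - |'

# Split on "|": field 3 is the Volume ID column; count attachments per volume.
counts=$(printf '%s\n' "$attachments" |
    awk -F'|' '/attached/ { gsub(/ /, "", $3); count[$3]++ }
               END { for (v in count) print v, count[v] }')
printf '%s\n' "$counts"
```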

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------+------+-------------+----------+-------------+
| ID                                   | Status    | Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------+------+-------------+----------+-------------+
| 09a924aa-e15d-4e03-9c60-bd7c12279237 | available | vol9  | 10   | tripleo     | true     |             |
| 1d97eef1-b4d1-4a69-af15-835fdbae5f81 | available | vol8  | 10   | tripleo     | true     |             |
| 28c19ff9-6704-485e-9ab0-73f413554dbc | available | vol10 | 10   | tripleo     | true     |             |
| 29ef82c3-6766-4c5e-8847-ca456cbb5816 | available | vol6  | 10   | tripleo     | true     |             |
| 53cb8759-ebcf-4090-ac7a-eea1c0c30074 | available | vol1  | 10   | tripleo     | true     |             |
| b3bc1440-6581-4f25-8e31-a2483d679cb4 | available | vol4  | 10   | tripleo     | true     |             |
| c6d486f7-c369-4a9f-87b1-0532289b6bab | available | vol3  | 10   | tripleo     | true     |             |
| ca29ec62-164b-43a9-9233-f9c9a15382b3 | available | vol2  | 10   | tripleo     | true     |             |
| f09fb399-2ef3-439a-9781-6fe2041f2198 | available | vol7  | 10   | tripleo     | true     |             |
| f1a151a4-6fa3-4bab-98a0-db2c76af5a59 | available | vol5  | 10   | tripleo     | true     |             |
+--------------------------------------+-----------+-------+------+-------------+----------+-------------+


Again, all 10 volumes were created fine while the image was stored on a MA (multiattach) volume. Looks good to verify.

Comment 34 errata-xmlrpc 2022-09-21 12:12:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Release of components for Red Hat OpenStack Platform 17.0 (Wallaby)), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2022:6543