Bug 1705563 - [openstack-cinder][OSP 13] Volume (non-multi-attach) attached multiple times to an instance
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: z8
Target Release: 13.0 (Queens)
Assignee: Eric Harney
QA Contact: Tzach Shefi
Docs Contact: Tana
Duplicates: 1804562
Depends On: 1734849
Blocks: 1734034
Reported: 2019-05-02 13:37 UTC by Luigi Tamagnone
Modified: 2024-06-13 22:06 UTC
CC List: 25 users

Fixed In Version: openstack-cinder-12.0.7-5.el7ost
Doc Type: If docs needed, set a value
Clones: 1734034
Last Closed: 2019-09-03 16:54:21 UTC
Flags: tshefi: automate_bug+


Links:
- Launchpad bug 1833736 (last updated 2019-06-21 15:34:09 UTC)
- OpenStack gerrit change 671370: "Prevent double-attachment race in attachment_reserve" (MERGED, last updated 2020-12-14 14:28:21 UTC)
- Red Hat Issue Tracker OSP-16306 (last updated 2022-07-08 11:30:32 UTC)
- Red Hat Knowledge Base Solution 4095341 (last updated 2019-05-02 13:43:23 UTC)
- Red Hat Product Errata RHBA-2019:2627 (last updated 2019-09-03 16:54:29 UTC)

Description Luigi Tamagnone 2019-05-02 13:37:55 UTC
Description of problem:
Concurrent requests to attach the same non-multi-attach volume to the same instance can all succeed, leaving the volume attached to the instance multiple times.

Version-Release number of selected component (if applicable):
Tested on an RHOSP 13 environment (created from the CL210 course), nova version 17.0.3: bug not reproduced.
Customer tested on their RHOSP 13 environment, nova version 17.0.7: bug reproduced.
Tested on an RHOSP 13 environment created on penstack01.gsslab.brq.redhat.com, nova version 17.0.9: bug reproduced.
I tried both the customer's script and my own (see Additional info below). With the customer's script I saw more duplicate attachments than with mine.

How reproducible:
Reproducible in an RHOSP 13 environment by following the steps below.

Steps to Reproduce:
1. Start instance_A, with volume_A not attached
2. Call the Nova attach-volume API multiple times concurrently to attach volume_A to instance_A
3. Result: volume_A is attached to instance_A multiple times

Actual results:
The same volume is attached multiple times.

Expected results:
The volume is attached only once; the remaining requests fail.

Additional info:
My test script:
for i in {1..10}; do openstack server add volume InstanceID volumeID & done
or
for i in {1..10}; do nova volume-attach InstanceID volumeID auto & done

Customer script:
#!/usr/bin/bash
# Requires jq and a valid Keystone token exported as OS_TOKEN
# (e.g. OS_TOKEN=$(openstack token issue -f value -c id))
NOVA_URL=https://$OPENSTACKENDPOINT:13774/v2.1
INSTANCE_ID=$INSTANCEUUID
VOLUME_ID=$VOLUMEUUID
function call {
        curl -s -H "X-Auth-Token: $OS_TOKEN" -H "X-Subject-Token: $OS_TOKEN" "$@" | jq .
}
echo "Using instance_id: $INSTANCE_ID"
echo "Using volume_id: $VOLUME_ID"
echo
echo "Attachments before test:"
call $NOVA_URL/servers/$INSTANCE_ID/os-volume_attachments
echo
echo "Attempting 10 concurrent attachments..."
for i in {1..10}
do
  call -H 'Content-Type: application/json' $NOVA_URL/servers/$INSTANCE_ID/os-volume_attachments -d "{\"volumeAttachment\": {\"volumeId\": \"$VOLUME_ID\"}}" > /dev/null &
done
sleep 15
echo
echo "Attachements after test:"
call $NOVA_URL/servers/$INSTANCE_ID/os-volume_attachments

Comment 1 Matthew Booth 2019-05-24 13:00:06 UTC
Multiattach policy isn't Nova's domain: Cinder should have refused to create an attachment for us. I suspect a race or oversight in cinder's attachment_reserve.
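
For illustration, here is a minimal, self-contained sketch of the suspected check-then-act race and of the compare-and-swap pattern that the eventual upstream fix follows (gerrit change 671370, "Prevent double-attachment race in attachment_reserve"; the sqlite3 code below only illustrates the pattern and is not the actual Cinder code, which applies a conditional update to the volume row):

# Not the actual Cinder code: a sketch of the race and the fix pattern.
import sqlite3

def setup():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE volumes (id TEXT PRIMARY KEY, status TEXT)")
    db.execute("INSERT INTO volumes VALUES ('vol-1', 'available')")
    return db

def racy_reserve(db, vol_id):
    # Check-then-act: two concurrent callers can both read 'available'
    # before either one writes 'reserved', so both believe they succeeded.
    row = db.execute("SELECT status FROM volumes WHERE id = ?",
                     (vol_id,)).fetchone()
    if row[0] != "available":
        return False
    db.execute("UPDATE volumes SET status = 'reserved' WHERE id = ?",
               (vol_id,))
    return True

def atomic_reserve(db, vol_id):
    # Compare-and-swap: the check and the write happen in one statement,
    # so at most one of N concurrent callers sees rowcount == 1.
    cur = db.execute("UPDATE volumes SET status = 'reserved' "
                     "WHERE id = ? AND status = 'available'", (vol_id,))
    return cur.rowcount == 1

db = setup()
print([atomic_reserve(db, "vol-1") for _ in range(10)])
# -> [True, False, False, ...]: exactly one caller wins the reservation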

Comment 2 Eric Harney 2019-06-20 13:05:32 UTC
Do you have Cinder logs from this event?

How many nodes are involved?

Comment 3 David Hill 2019-06-20 13:09:58 UTC
We have another similar case so I'll request the controller logs too ...

Comment 19 Luigi Tamagnone 2019-07-23 07:42:28 UTC
If it could be useful: the customer noticed that when he runs the loop, he gets more request responses back than the number of commands he sends.

(overcloud) [stack@ieatrheldir13a scripts]$ openstack server list
+--------------------------------------+-------------+--------+------------------+--------+--------------+
| ID                                   | Name        | Status | Networks         | Image  | Flavor       |
+--------------------------------------+-------------+--------+------------------+--------+--------------+
| 7b925320-eaf9-41f4-b28d-4b3291979af5 | test-nova-5 | ACTIVE | test=10.10.10.15 | cirros | flavor_1vC1M |
+--------------------------------------+-------------+--------+------------------+--------+--------------+
(overcloud) [stack@ieatrheldir13a scripts]$ openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID                                   | Name         | Status    | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| efb14565-310c-4d15-801c-5e1e4cf35b9a | test-break-5 | available |    7 |             |
+--------------------------------------+--------------+-----------+------+-------------+

(overcloud) [stack@ieatrheldir13a scripts]$ for i in {1..10}; do openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a & done
Invalid volume: volume efb14565-310c-4d15-801c-5e1e4cf35b9a already attached (HTTP 400) (Request-ID: req-482f9d16-92e4-4f79-ac22-71a646a0d00e)
Invalid volume: volume efb14565-310c-4d15-801c-5e1e4cf35b9a already attached (HTTP 400) (Request-ID: req-ac6fe9eb-830e-41f9-8aaf-99933922c286)
Invalid volume: volume efb14565-310c-4d15-801c-5e1e4cf35b9a already attached (HTTP 400) (Request-ID: req-0482743f-57a4-45a1-b464-8e2419d7f689)
Invalid volume: volume efb14565-310c-4d15-801c-5e1e4cf35b9a already attached (HTTP 400) (Request-ID: req-a354cbaa-690b-497c-9498-887cff20f6ef)

[11] 21584
[12] 21585
[13] 21586
[14] 21587
[15] 21588
[16] 21589
[17] 21590
[18] 21591
[19] 21592
[20] 21593
[1]   Done                    openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[2]   Done                    openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[3]   Done                    openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[4]   Done                    openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[5]   Done                    openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[6]   Exit 1                  openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[7]   Exit 1                  openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[8]   Exit 1                  openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[9]   Done                    openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[10]   Exit 1                  openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
(overcloud) [stack@ieatrheldir13a scripts]$ Invalid volume: volume efb14565-310c-4d15-801c-5e1e4cf35b9a already attached (HTTP 400) (Request-ID: req-857bd88f-8780-42b1-88e9-c6c76fb5bb0f)
Invalid volume: volume efb14565-310c-4d15-801c-5e1e4cf35b9a already attached (HTTP 400) (Request-ID: req-5ac19981-7ca5-451a-9e5d-e7387010ca7d)
Invalid volume: volume efb14565-310c-4d15-801c-5e1e4cf35b9a already attached (HTTP 400) (Request-ID: req-a31170d6-bdae-46df-8c33-1eebaa5ab03a)
Invalid volume: volume efb14565-310c-4d15-801c-5e1e4cf35b9a already attached (HTTP 400) (Request-ID: req-a74275dd-965d-4528-936e-f3977e2925f4)
Invalid volume: volume efb14565-310c-4d15-801c-5e1e4cf35b9a already attached (HTTP 400) (Request-ID: req-fb1bb8b2-4e17-4c87-8929-9914d69a4603)
Invalid volume: volume efb14565-310c-4d15-801c-5e1e4cf35b9a already attached (HTTP 400) (Request-ID: req-00843833-fbc3-4d40-b29d-9f3b7c383c16)
Invalid volume: volume efb14565-310c-4d15-801c-5e1e4cf35b9a already attached (HTTP 400) (Request-ID: req-06a8acb2-05cc-4fac-88dd-701e77dfc6c4)
Invalid volume: volume efb14565-310c-4d15-801c-5e1e4cf35b9a already attached (HTTP 400) (Request-ID: req-75a2f6ac-f671-4655-b5b0-7646425de213)
Invalid volume: volume efb14565-310c-4d15-801c-5e1e4cf35b9a already attached (HTTP 400) (Request-ID: req-1ccbde43-77eb-4498-ad36-78cdfe9222cc)
Invalid volume: volume efb14565-310c-4d15-801c-5e1e4cf35b9a already attached (HTTP 400) (Request-ID: req-0269b9ba-8e72-48c6-9be0-c307312cc93d)

[11]   Exit 1                  openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[12]   Exit 1                  openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[13]   Exit 1                  openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[14]   Exit 1                  openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[15]   Exit 1                  openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[16]   Exit 1                  openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[17]   Exit 1                  openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[18]   Exit 1                  openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[19]-  Exit 1                  openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a
[20]+  Exit 1                  openstack server add volume 7b925320-eaf9-41f4-b28d-4b3291979af5 efb14565-310c-4d15-801c-5e1e4cf35b9a

Comment 46 Tzach Shefi 2019-08-20 09:12:34 UTC
Not landed yet; P1 -> 13 2019-08-13.1 only produced a pre-fixed-in hotfix build:
openstack-cinder-12.0.7-4.0.bz1705563.el7ost.noarch

Waiting for a newer build.

Comment 51 Tzach Shefi 2019-08-25 09:36:28 UTC
Verified on:
openstack-cinder-12.0.7-5.el7ost.noarch

Booted an instance
Created a volume
Attached volume to instance:

(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                          |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| 290fd8cd-9e1d-4db2-8266-e5081eac40db | inst1 | ACTIVE | -          | Running     | internal=192.168.0.13, 10.0.0.210 |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name         | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ecb47eba-362a-4fe7-a556-74949e0b8846 | in-use | Pansible_vol | 1    | tripleo     | true     | 290fd8cd-9e1d-4db2-8266-e5081eac40db |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

Now try:
(overcloud) [stack@undercloud-0 ~]$ for i in {1..10}; do openstack server add volume inst1 ecb47eba-362a-4fe7-a556-74949e0b8846 & done
[1] 17041
[2] 17042
[3] 17043
[4] 17044
[5] 17045
[6] 17046
[7] 17047
[8] 17048
[9] 17049
[10] 17050
(overcloud) [stack@undercloud-0 ~]$ 

(overcloud) [stack@undercloud-0 ~]$ Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-5c62dc38-0b05-478b-9318-fd9c7222c9df)
Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-c5b169fd-2626-4c65-86c5-8a4da318f225)
Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-59858f7d-3153-488e-8077-e0e704d6ce49)
Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-2a40ea1c-c431-45af-923a-ece28d06bee2)
Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-17b6a97f-6da3-4860-b429-6caef0b76264)
Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-695a7d7d-b09e-4df7-ae2d-c814f55f1cc1)
Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-b7b9e3e1-65d8-400b-b89d-91b2c623902a)
Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-f128f67c-7c06-4c4d-acd4-e2fe7013ae57)
Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-d57966fb-463b-4e51-a8fb-261728259b00)
Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-3b467d56-0941-4a88-aa4f-9c4264e47be4)
[1]   Exit 1                  openstack server add volume inst1 ecb47eba-362a-4fe7-a556-74949e0b8846
[2]   Exit 1                  openstack server add volume inst1 ecb47eba-362a-4fe7-a556-74949e0b8846
[3]   Exit 1                  openstack server add volume inst1 ecb47eba-362a-4fe7-a556-74949e0b8846
[4]   Exit 1                  openstack server add volume inst1 ecb47eba-362a-4fe7-a556-74949e0b8846
[5]   Exit 1                  openstack server add volume inst1 ecb47eba-362a-4fe7-a556-74949e0b8846
[6]   Exit 1                  openstack server add volume inst1 ecb47eba-362a-4fe7-a556-74949e0b8846
[7]   Exit 1                  openstack server add volume inst1 ecb47eba-362a-4fe7-a556-74949e0b8846
[8]   Exit 1                  openstack server add volume inst1 ecb47eba-362a-4fe7-a556-74949e0b8846
[9]-  Exit 1                  openstack server add volume inst1 ecb47eba-362a-4fe7-a556-74949e0b8846
[10]+  Exit 1                  openstack server add volume inst1 ecb47eba-362a-4fe7-a556-74949e0b8846

Result: the volume is attached to the instance only once; looks good:
(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                          |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| 290fd8cd-9e1d-4db2-8266-e5081eac40db | inst1 | ACTIVE | -          | Running     | internal=192.168.0.13, 10.0.0.210 |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name         | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ecb47eba-362a-4fe7-a556-74949e0b8846 | in-use | Pansible_vol | 1    | tripleo     | true     | 290fd8cd-9e1d-4db2-8266-e5081eac40db |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+


Now this:

(overcloud) [stack@undercloud-0 ~]$ for i in {1..10}; do nova volume-attach inst1 ecb47eba-362a-4fe7-a556-74949e0b8846 auto & done
[1] 17519
[2] 17520
[3] 17521
[4] 17522
[5] 17523
[6] 17524
[7] 17525
[8] 17526
[9] 17527
[10] 17528
(overcloud) [stack@undercloud-0 ~]$ ERROR (BadRequest): Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-aa29ea54-a8b1-40b8-a416-d742172947e2)
ERROR (BadRequest): Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-07a62f27-f7b7-43b9-8563-5fb69c9c6031)
ERROR (BadRequest): Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-ef6f0e48-1987-44a9-b6ec-5c9626bee094)
ERROR (BadRequest): Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-bdb10be2-e9ef-49bc-8938-1b6a1868f8a0)
ERROR (BadRequest): Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-dc361e73-0866-4eb6-84f8-3f510cd4bc3a)
ERROR (BadRequest): Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-db779dcc-7bef-4da7-b1f3-7e2f7e339e8d)
ERROR (BadRequest): Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-c2e68aed-d4b8-4fb7-ab79-78e927cb8320)
ERROR (BadRequest): Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-ed4f5f64-85db-4b2c-816d-149113ca0d5a)
ERROR (BadRequest): Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-1ecc6156-753f-43e9-b15b-17d6f9fc80c7)
ERROR (BadRequest): Invalid volume: volume ecb47eba-362a-4fe7-a556-74949e0b8846 already attached (HTTP 400) (Request-ID: req-e94f50b3-7b1b-4e47-aec4-3638ee0377db)

[1]   Exit 1                  nova volume-attach inst1 ecb47eba-362a-4fe7-a556-74949e0b8846 auto
[2]   Exit 1                  nova volume-attach inst1 ecb47eba-362a-4fe7-a556-74949e0b8846 auto
[3]   Exit 1                  nova volume-attach inst1 ecb47eba-362a-4fe7-a556-74949e0b8846 auto
[4]   Exit 1                  nova volume-attach inst1 ecb47eba-362a-4fe7-a556-74949e0b8846 auto
[5]   Exit 1                  nova volume-attach inst1 ecb47eba-362a-4fe7-a556-74949e0b8846 auto
[6]   Exit 1                  nova volume-attach inst1 ecb47eba-362a-4fe7-a556-74949e0b8846 auto
[7]   Exit 1                  nova volume-attach inst1 ecb47eba-362a-4fe7-a556-74949e0b8846 auto
[8]   Exit 1                  nova volume-attach inst1 ecb47eba-362a-4fe7-a556-74949e0b8846 auto
[9]-  Exit 1                  nova volume-attach inst1 ecb47eba-362a-4fe7-a556-74949e0b8846 auto
[10]+  Exit 1                  nova volume-attach inst1 ecb47eba-362a-4fe7-a556-74949e0b8846 auto


Again looks good; the volume remains attached to the instance only once:
(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                          |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| 290fd8cd-9e1d-4db2-8266-e5081eac40db | inst1 | ACTIVE | -          | Running     | internal=192.168.0.13, 10.0.0.210 |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name         | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ecb47eba-362a-4fe7-a556-74949e0b8846 | in-use | Pansible_vol | 1    | tripleo     | true     | 290fd8cd-9e1d-4db2-8266-e5081eac40db |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+


Last but not least, the customer's script adapted to my own environment:

(overcloud) [stack@undercloud-0 ~]$ cat test.sh 
#!/usr/bin/bash
source /home/stack/overcloudrc
NOVA_URL=http://172.17.1.20:8774/v2.1
INSTANCE_ID=290fd8cd-9e1d-4db2-8266-e5081eac40db
VOLUME_ID=ecb47eba-362a-4fe7-a556-74949e0b8846
function call {
        curl -s -H "X-Auth-Token: $OS_TOKEN" -H "X-Subject-Token: $OS_TOKEN" "$@" | jq .
}
echo "Using instance_id: $INSTANCE_ID"
echo "Using volume_id: $VOLUME_ID"
echo
echo "Attachments before test:"
call $NOVA_URL/servers/$INSTANCE_ID/os-volume_attachments
echo
echo "Attempting 10 concurrent attachments..."
for i in {1..10}
do
  call -H 'Content-Type: application/json' $NOVA_URL/servers/$INSTANCE_ID/os-volume_attachments -d "{\"volumeAttachment\": {\"volumeId\": \"$VOLUME_ID\"}}" > /dev/null &
done
sleep 15
echo
echo "Attachements after test:"
call $NOVA_URL/servers/$INSTANCE_ID/os-volume_attachments

And the output:
(overcloud) [stack@undercloud-0 ~]$ ./test.sh 
Using instance_id: 290fd8cd-9e1d-4db2-8266-e5081eac40db
Using volume_id: ecb47eba-362a-4fe7-a556-74949e0b8846

Attachments before test:

Attempting 10 concurrent attachments...

Attachments after test:
(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                          |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| 290fd8cd-9e1d-4db2-8266-e5081eac40db | inst1 | ACTIVE | -          | Running     | internal=192.168.0.13, 10.0.0.210 |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name         | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ecb47eba-362a-4fe7-a556-74949e0b8846 | in-use | Pansible_vol | 1    | tripleo     | true     | 290fd8cd-9e1d-4db2-8266-e5081eac40db |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+


As can be seen above, across three separate test cycles the volume no longer gets attached multiple times, as it did before the fix. Extra requests now fail with an error stating the volume is already attached.
Good to verify.
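
A small helper along these lines could automate that final check in a regression test (a hypothetical sketch, not part of the verification above; it assumes the same Nova endpoint, instance/volume IDs, and OS_TOKEN setup as test.sh, plus the requests library):

#!/usr/bin/env python3
# Hypothetical helper: assert the volume is attached exactly once.
# Assumes the same environment as test.sh above and a valid Keystone
# token in $OS_TOKEN; the IDs below are from the verification run.
import os
import sys
import requests

NOVA_URL = "http://172.17.1.20:8774/v2.1"
INSTANCE_ID = "290fd8cd-9e1d-4db2-8266-e5081eac40db"
VOLUME_ID = "ecb47eba-362a-4fe7-a556-74949e0b8846"

resp = requests.get(
    "%s/servers/%s/os-volume_attachments" % (NOVA_URL, INSTANCE_ID),
    headers={"X-Auth-Token": os.environ["OS_TOKEN"]})
resp.raise_for_status()
count = sum(1 for a in resp.json()["volumeAttachments"]
            if a["volumeId"] == VOLUME_ID)
print("%d attachment(s) of %s" % (count, VOLUME_ID))
sys.exit(0 if count == 1 else 1)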

Comment 56 errata-xmlrpc 2019-09-03 16:54:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2627

Comment 58 Luigi Toscano 2020-02-19 09:59:51 UTC
*** Bug 1804562 has been marked as a duplicate of this bug. ***

