This bug was initially created as a copy of Bug #1705563

Description of problem:
Due to a race condition in the Cinder API, concurrent requests to attach the same non-multiattach volume to the same instance can all succeed.
Note to self: for verification steps, see https://bugzilla.redhat.com/show_bug.cgi?id=1705563#c0
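The race in the description above is the classic check-then-act pattern: the API checked the volume's state and then recorded the attachment in separate steps, so several concurrent callers could each pass the check. The toy shell model below (not Cinder code; marker files stand in for the volume-state update, and the sleep widens the race window) shows why concurrent callers could all "win" pre-fix, and why an atomic test-and-set admits exactly one:

```shell
#!/usr/bin/env bash
# Hedged toy model of the bug (not Cinder code): an "attach" that checks
# state and then records the attachment in two separate steps lets several
# concurrent callers through; an atomic test-and-set cannot.
workdir=$(mktemp -d)

racy_attach() {
  # Check-then-act: every caller that reads "free" before anyone writes
  # the marker file believes it won.
  if [ ! -e "$workdir/racy.attached" ]; then
    sleep 0.2                    # widen the race window
    : > "$workdir/racy.attached"
    return 0
  fi
  return 1
}

atomic_attach() {
  # noclobber redirection uses O_EXCL, so exactly one caller can create
  # the marker; this mirrors an atomic conditional update in the API.
  ( set -o noclobber; : > "$workdir/atomic.attached" ) 2>/dev/null
}

count_wins() {                   # run 10 concurrent callers, count successes
  local fn=$1 pids=() wins=0 pid
  for i in {1..10}; do "$fn" & pids+=($!); done
  for pid in "${pids[@]}"; do wait "$pid" && wins=$((wins+1)); done
  echo "$wins"
}

racy=$(count_wins racy_attach)
atomic=$(count_wins atomic_attach)
echo "racy attach successes:   $racy"     # typically all 10 slip through
echo "atomic attach successes: $atomic"   # always exactly 1
rm -rf "$workdir"
```

The fixed API behaves like `atomic_attach`: the first request attaches the volume and every concurrent duplicate gets rejected, which is exactly what the tests below check.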
Verified on:
openstack-cinder-14.0.4-0.20200107100455.a59c01e.el8ost.noarch

First test: try to attach the same volume more than once to the same instance.

The first attach succeeds:

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 | in-use | -    | 1    | tripleo     | false    | 908ae406-b466-469c-8586-83ecf81101c0 |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ for i in {1..10}; do openstack server add volume inst1 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 & done
[1] 361372
[2] 361373
[3] 361374
[4] 361375
[5] 361376
[6] 361377
[7] 361378
[8] 361379
[9] 361380
[10] 361381
(overcloud) [stack@undercloud-0 ~]$
Invalid volume: volume 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 already attached (HTTP 400) (Request-ID: req-4ccf7db2-cbb5-4b8b-ab65-ca01c2635db5)
Invalid volume: volume 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 already attached (HTTP 400) (Request-ID: req-e26e211b-5b8b-4403-94a4-8c7d44535a24)
Invalid volume: volume 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 already attached (HTTP 400) (Request-ID: req-e6ae511b-3fe8-4ead-9584-673c70914a71)
Invalid volume: volume 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 already attached (HTTP 400) (Request-ID: req-08583805-baf1-47a6-bb6b-b581872ef902)
Invalid volume: volume 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 already attached (HTTP 400) (Request-ID: req-c35b9962-6a99-4a8f-ae2b-4709ba891415)
Invalid volume: volume 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 already attached (HTTP 400) (Request-ID: req-d9f7f571-fb86-4e8e-9d7b-7c224a013bda)
Invalid volume: volume 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 already attached (HTTP 400) (Request-ID: req-d33f69e1-af9d-41ee-92e1-27b1df8b58b1)
Invalid volume: volume 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 already attached (HTTP 400) (Request-ID: req-d7508f62-9731-49f5-9c98-744f9ae947f7)
Invalid volume: volume 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 already attached (HTTP 400) (Request-ID: req-3695bcc8-34f3-4417-b1a1-68dc2b81e612)
Invalid volume: volume 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 already attached (HTTP 400) (Request-ID: req-8e482044-d596-4a92-ac92-0f1fe4c14087)

An already attached volume can't be reattached - great. Now let's clone the original customer script that hit this.

Customer's script, adapted to my own env:

(overcloud) [stack@undercloud-0 ~]$ cat test.sh
#!/usr/bin/bash
source /home/stack/overcloudrc

NOVA_URL=http://172.17.1.34:8774/v2.1
INSTANCE_ID=908ae406-b466-469c-8586-83ecf81101c0
VOLUME_ID=88820157-a5f2-44ac-acb2-dde0d8ee4fc1

function call {
    curl -s -H "X-Auth-Token: $OS_TOKEN" -H "X-Subject-Token: $OS_TOKEN" "$@" | jq .
}

echo "Using instance_id: $INSTANCE_ID"
echo "Using volume_id: $VOLUME_ID"
echo
echo "Attachments before test:"
call $NOVA_URL/servers/$INSTANCE_ID/os-volume_attachments
echo
echo "Attempting 10 concurrent attachments..."
for i in {1..10}
do
    call -H 'Content-Type: application/json' $NOVA_URL/servers/$INSTANCE_ID/os-volume_attachments -d "{\"volumeAttachment\": {\"volumeId\": \"$VOLUME_ID\"}}" > /dev/null &
done
sleep 15
echo
echo "Attachments after test:"
call $NOVA_URL/servers/$INSTANCE_ID/os-volume_attachments

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach 908ae406-b466-469c-8586-83ecf81101c0 88820157-a5f2-44ac-acb2-dde0d8ee4fc1
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 |
| serverId | 908ae406-b466-469c-8586-83ecf81101c0 |
| tag      | -                                    |
| volumeId | 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 |
+----------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ ./test.sh
Using instance_id: 908ae406-b466-469c-8586-83ecf81101c0
Using volume_id: 88820157-a5f2-44ac-acb2-dde0d8ee4fc1

Attachments before test:

Attempting 10 concurrent attachments...

Attachments after test:

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| 88820157-a5f2-44ac-acb2-dde0d8ee4fc1 | in-use | -    | 1    | tripleo     | false    | 908ae406-b466-469c-8586-83ecf81101c0 |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+

Attached the volume to the instance and ran the script; afterwards there is still only a single attachment of the volume to the instance. Looks fine to verify.
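For the final check above, eyeballing `cinder list` works, but an explicit count of attachment records is easier to script into a repeatable verification. A hedged sketch: the `sample` JSON below mimics the shape of `openstack volume show -f json` output (the IDs are taken from the transcript above); on a live cloud you would pipe the real command's output in instead.

```shell
#!/usr/bin/env bash
# Hedged sketch: count attachment records in a volume's JSON representation.
# Each attachment entry carries exactly one "server_id" key, so counting
# those keys gives the number of attachments.
count_attachments() {
  grep -o '"server_id"' | wc -l | tr -d ' '
}

# Sample payload shaped like `openstack volume show -f json <volume>`;
# IDs are from the transcript above.
sample='{"attachments": [{"server_id": "908ae406-b466-469c-8586-83ecf81101c0", "device": "/dev/vdb"}]}'

n=$(printf '%s' "$sample" | count_attachments)
echo "attachments: $n"
if [ "$n" -eq 1 ]; then
  echo "PASS: exactly one attachment"
else
  echo "FAIL: expected 1, got $n"
fi
```

A count greater than 1 on a non-multiattach volume would mean the race regressed; exactly 1 matches the verified behavior shown above.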
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0712