+++ This bug was initially created as a clone of Bug #1725012 +++

Description of problem:
While verifying the backport of Cinder multiattach support to OSP13, detaching the first instance from a multiattach volume succeeded, but detaching the second instance from the same volume fails and the volume remains in "detaching" status.

Version-Release number of selected component (if applicable):

How reproducible:
Unsure

Steps to Reproduce:
1. Create a multiattach volume

cinder type-create multiattach
cinder type-key multiattach set volume_backend_name=3pariscsi_1
cinder type-key multiattach set multiattach="<is> True"
openstack volume create --size 1 --type multiattach vol1

vol id -> c5d00d17-5eaa-49d1-aac6-5ecc2c429f2d

2. Boot two instances

nova list
+--------------------------------------+-------+--------+------------+-------------+--------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks           |
+--------------------------------------+-------+--------+------------+-------------+--------------------+
| 438c5fcd-bd81-44c8-ad88-1f8ef0e1b50d | inst1 | ACTIVE | -          | Running     | sneha=10.50.9.122  |
| fd2acc67-25ab-4a56-bc6b-d8bdf5732f6b | inst2 | ACTIVE | -          | Running     | default=172.20.1.6 |
+--------------------------------------+-------+--------+------------+-------------+--------------------+

3.
Attach the volume to both instances

cinder list
+--------------------------------------+-----------+------+------+-------------+----------+----------------------------------------------------------------------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to                                                                |
+--------------------------------------+-----------+------+------+-------------+----------+----------------------------------------------------------------------------+
| c5d00d17-5eaa-49d1-aac6-5ecc2c429f2d | in-use    | vol1 | 1    | multiattach | false    | 438c5fcd-bd81-44c8-ad88-1f8ef0e1b50d, fd2acc67-25ab-4a56-bc6b-d8bdf5732f6b |
+--------------------------------------+-----------+------+------+-------------+----------+----------------------------------------------------------------------------+

4. Detach the volume from inst2

openstack server remove volume inst2 vol1

cinder list
+--------------------------------------+-----------+------+------+-------------+----------+----------------------------------------------------------------------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to                                                                |
+--------------------------------------+-----------+------+------+-------------+----------+----------------------------------------------------------------------------+
| c5d00d17-5eaa-49d1-aac6-5ecc2c429f2d | in-use    | vol1 | 1    | multiattach | false    | 438c5fcd-bd81-44c8-ad88-1f8ef0e1b50d                                       |
+--------------------------------------+-----------+------+------+-------------+----------+----------------------------------------------------------------------------+

The volume detached successfully from inst2; as shown above, it remains attached only to inst1 (status "in-use").

5.
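The listings above show the expected multiattach bookkeeping: the volume stays "in-use" as long as at least one attachment remains, and should only leave that state once the last attachment is removed. A minimal illustrative sketch of that rule (invented helper name, not Cinder's actual status logic):

```python
# Illustrative sketch only: `volume_status` is an invented helper,
# not Cinder's actual status machinery.
def volume_status(attachments):
    """A multiattach volume reports 'in-use' while any attachment
    remains, and 'available' only once the last one is removed."""
    return "in-use" if attachments else "available"

# Mirror steps 3-5 of the reproducer.
attached = {"inst1", "inst2"}                 # step 3: attached to both
assert volume_status(attached) == "in-use"
attached.discard("inst2")                     # step 4: detach inst2
assert volume_status(attached) == "in-use"    # still attached to inst1
attached.discard("inst1")                     # step 5: detach inst1
assert volume_status(attached) == "available"
```

In this bug, the Cinder database follows this rule correctly; it is the backend host entry on the array that is cleaned up too early, as described below.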
Detach the volume from inst1

openstack server remove volume inst1 vol1

cinder list
+--------------------------------------+-----------+------+------+-------------+----------+----------------------------------------------------------------------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to                                                                |
+--------------------------------------+-----------+------+------+-------------+----------+----------------------------------------------------------------------------+
| c5d00d17-5eaa-49d1-aac6-5ecc2c429f2d | detaching | vol1 | 1    | multiattach | false    | 438c5fcd-bd81-44c8-ad88-1f8ef0e1b50d                                       |
+--------------------------------------+-----------+------+------+-------------+----------+----------------------------------------------------------------------------+

Even after several minutes, the volume status remains "detaching".

What I noticed in cinder-volume.log: when the user sends the first detach call, the driver removes the host entry from the 3PAR array. Since there is then no LUN connectivity with the 3PAR, detaching the remaining instance fails. For detailed logs, please check the attached log file.

Actual results:
Nova volume detach fails to complete due to a "no host found" error in cinder-volume.log. The volume remains in "detaching" state.

Expected results:
The volume should detach from the second instance as successfully as it did from the first.

Additional info:
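The log observation above points at the core of the bug: the backend host entry must only be removed when the last attachment using that host goes away, otherwise the second detach hits "no host found". A hedged Python sketch of that guard (all names invented for illustration; this models the pattern, not the real 3PAR driver code):

```python
# Toy model of the detach path; all names are invented for illustration
# and do not correspond to the real 3PAR driver.
class FakeBackend:
    """Stands in for the 3PAR array: tracks which hosts have a host entry."""
    def __init__(self):
        self.hosts = set()

    def create_host(self, host):
        self.hosts.add(host)

    def remove_host(self, host):
        if host not in self.hosts:
            # This is the failure seen in cinder-volume.log: the entry
            # was already removed by the first detach.
            raise RuntimeError("no host found")
        self.hosts.remove(host)


def terminate_connection(backend, attachments, instance, host="compute-0"):
    """Detach one instance; remove the backend host entry only when no
    other attachment still uses that host (the corrected behavior).
    The buggy path removed the host entry unconditionally on the
    first detach, so the second detach raised 'no host found'."""
    attachments[:] = [a for a in attachments if a["instance"] != instance]
    if not any(a["host"] == host for a in attachments):
        backend.remove_host(host)


backend = FakeBackend()
backend.create_host("compute-0")
atts = [{"instance": "inst1", "host": "compute-0"},
        {"instance": "inst2", "host": "compute-0"}]
terminate_connection(backend, atts, "inst2")  # step 4: host entry kept
terminate_connection(backend, atts, "inst1")  # step 5: last user, entry removed
```

With the guard in place, both detaches complete; without it, the second `remove_host` call reproduces the "no host found" error from the log.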
Awaiting a newer build. The latest P2 compose I managed to deploy, RHOS_TRUNK-15.0-RHEL-8-20200428.n.0, still produces a pre-fix version: 14.0.4-0.202002 > 14.0.4-0.202001
OSP15 is EOL, but the fix is present in newer releases (OSP16) and has been backported to OSP13.