Description

When reclaim_instance_interval > 0 is set and we delete an instance that was booted from a volume with delete_on_termination set to true, then after reclaim_instance_interval has passed, the boot volume is still in state attached and in-use, even though the instance it was attached to has been deleted.

Steps to reproduce
1. Set reclaim_instance_interval = 60
2. Create a bootable volume
3. Boot an instance from the created bootable volume (with delete_on_termination=true)
4. Delete the instance and wait 60 seconds

Expected result
The bootable volume from the test above is deleted after reclaim_instance_interval seconds.

Actual result
The bootable volume from the test above is in state attached and in-use, attached to the deleted instance.

Extra info:
The fix is available in the upstream Pike release [1]. Can we backport it to OSP10?

[1] https://bugs.launchpad.net/nova/+bug/1733736
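For reference, a minimal CLI sketch of the reproduction (the image, flavor, and resource names are hypothetical, and the config edit assumes the [DEFAULT] section of nova.conf):

# 1. Enable deferred delete in nova.conf, then restart the nova services:
#    [DEFAULT]
#    reclaim_instance_interval = 60

# 2. Create a bootable volume from an image
$ openstack volume create --image cirros --size 1 --bootable test-bfv-vol

# 3. Boot an instance from the volume; shutdown=remove requests delete_on_termination
$ nova boot --flavor m1.tiny \
    --block-device source=volume,id=<volume-id>,dest=volume,bootindex=0,shutdown=remove \
    test-bfv-vm

# 4. Delete the instance and wait past reclaim_instance_interval
$ openstack server delete test-bfv-vm
$ sleep 60
$ openstack volume list   # expected: volume deleted; actual: still in-use/attached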
Volume attachment state is controlled by nova.
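Because the attachment record is owned by nova, the stale state can be confirmed by cross-checking the two services; a sketch with placeholder IDs:

$ openstack volume show <volume-id> -c status -c attachments
# -> status is 'in-use' and attachments still reference <server-id>
$ openstack server show <server-id>
# -> No server with a name or ID of '<server-id>' exists.

In other words, cinder still holds an attachment that only nova can remove, which is why the volume cannot be cleaned up from the cinder side by a normal delete.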
Hit the same issue on packages: openstack-nova-compute-19.0.3-0.20190814170534.a8e19af.el8ost.noarch

Test steps:
1. Boot VMs from volumes, attach volumes to the VMs, then delete the VMs successfully

2. Check that the volumes are still in-use and attached, and have no snapshots

(overcloud) [stack@~]$ openstack server list

(overcloud) [stack@~]$ openstack volume list
+--------------------------------------+---------------+--------+------+--------------------------------------------------------------+
| ID                                   | Name          | Status | Size | Attached to                                                  |
+--------------------------------------+---------------+--------+------+--------------------------------------------------------------+
| d9dcca92-2395-4e07-ab4d-b0a9f04187e1 | r8-qcow2-vol2 | in-use |   10 | Attached to 263ee111-27cc-42ae-abdd-9bb6a15adb70 on /dev/vda |
| 5fe9b664-742a-4357-80ad-40428b0d63d0 | r8-raw-vol    | in-use |   10 | Attached to b01ff9d7-91e8-48d8-a20b-d8b546a1a47b on /dev/vda |
| 6846df48-55ab-4204-9188-5a74033e6271 | r8-qcow2-vol  | in-use |   10 | Attached to c901cff9-8933-421b-9404-3a4aaac7125a on /dev/vda |
+--------------------------------------+---------------+--------+------+--------------------------------------------------------------+

(overcloud) [stack@~]$ openstack volume snapshot list

3. Failed to delete the volumes, with or without --force

(overcloud) [stack@~]$ openstack volume delete d9dcca92-2395-4e07-ab4d-b0a9f04187e1
Failed to delete volume with name or ID 'd9dcca92-2395-4e07-ab4d-b0a9f04187e1': Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer. (HTTP 400) (Request-ID: req-8ef7e546-0352-4668-9c3b-a1ac7610cdcf)
1 of 1 volumes failed to delete.

(overcloud) [stack@~]$ openstack volume delete d9dcca92-2395-4e07-ab4d-b0a9f04187e1 --force
Failed to delete volume with name or ID 'd9dcca92-2395-4e07-ab4d-b0a9f04187e1': Invalid volume: Volume must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer. (HTTP 400) (Request-ID: req-85eb8795-b476-4500-9527-9fab4f0aa857)
1 of 1 volumes failed to delete.

Actual results:
After deleting the server, the volume is still attached and can't be deleted even with --force.

Expected results:
After deleting the server, the volume is not attached and can be deleted.
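Since both the plain and the --force delete are rejected while cinder still believes the volume is attached, one admin-only workaround is to reset the attachment state first and then delete. This is a sketch assuming the python-cinderclient CLI is available; reset-state bypasses the normal detach flow, so it should only be used on attachments that are genuinely stale:

$ cinder reset-state --state available --attach-status detached d9dcca92-2395-4e07-ab4d-b0a9f04187e1
$ openstack volume delete d9dcca92-2395-4e07-ab4d-b0a9f04187e1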