Bug 1691324 - Volumes remain in the attached state even when the instance is already deleted
Summary: Volumes remain in the attached state even when the instance is already deleted
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: OSP DFG:Compute
QA Contact: OSP DFG:Compute
URL:
Whiteboard:
Depends On:
Blocks: 1691840 1691842 1691844
 
Reported: 2019-03-21 12:07 UTC by Shailesh Chhabdiya
Modified: 2024-01-06 04:26 UTC
CC List: 15 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
When "reclaim_instance_interval" is > 0 and then delete instance which is booted from volume with "delete_on_termination" set as true. After "reclaim_instance_interval" time passes, the volume from which instance was booted shows status "attached" and "in-use". This happens because as admin context from `nova.compute.manager._reclaim_queued_deletes` did not have any token info, then call cinder api would be failed. The resolution is to add cinder credentials, user/project CONF with admin role at cinder group and when determine context is_admin and without token, do authentication with user/project info to call cinder api.
Clone Of:
: 1691839
Environment:
Last Closed: 2020-12-01 09:24:48 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID: Red Hat Issue Tracker OSP-23424 | Private: No | Priority: None | Status: None | Summary: None | Last Updated: 2023-03-21 19:15:46 UTC

Description Shailesh Chhabdiya 2019-03-21 12:07:12 UTC
Description

When reclaim_instance_interval is set to a value greater than 0 and we delete an instance that was booted from a volume with delete_on_termination set to true, then after reclaim_instance_interval passes the boot volume is left in the "attached" and "in-use" state, even though the instance it was attached to has been deleted.

Steps to reproduce

1. Set reclaim_instance_interval = 60 in nova.conf
2. Create a bootable volume
3. Boot an instance from the bootable volume with delete_on_termination set to true
4. Delete the instance and wait 60 seconds (a CLI sketch of these steps follows below)
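
As an illustration of the steps above, here is a rough CLI sketch; the image, flavor, volume, and instance names are placeholders, and the exact client syntax may vary by release:

# step 1 is assumed already done: reclaim_instance_interval = 60 in nova.conf [DEFAULT]

# step 2: create a bootable volume from an image
openstack volume create --image cirros --size 10 --bootable boot-vol

# step 3: boot an instance from that volume; with the nova CLI,
# shutdown=remove corresponds to delete_on_termination=true
nova boot --flavor m1.small \
  --block-device source=volume,id=<boot-vol-id>,dest=volume,bootindex=0,shutdown=remove \
  test-vm

# step 4: delete the instance and wait for the reclaim interval
# (the periodic reclaim task may take slightly longer than 60 seconds to run)
openstack server delete test-vm
sleep 60

# expected: the boot volume is gone; on affected versions it is still "in-use"
openstack volume list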

Expected result

The bootable volume from the previous test is deleted after reclaim_instance_interval seconds, because delete_on_termination is true.

Actual result

The bootable volume from the previous test is left in the "attached" and "in-use" state, still attached to the deleted instance.


Extra info:

The fix is available in the upstream Pike release [1].

Can we backport it to OSP10?

[1] https://bugs.launchpad.net/nova/+bug/1733736
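
For context, the upstream fix relies on the periodic reclaim task authenticating to Cinder with service credentials when the admin context has no token. A minimal nova.conf sketch of the [cinder] section, assuming the standard keystoneauth options and using placeholder values:

[cinder]
auth_type = password
auth_url = http://<keystone-host>:5000/v3
username = nova
password = <service-password>
project_name = service
user_domain_name = Default
project_domain_name = Default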

Comment 1 Alan Bishop 2019-03-21 12:11:13 UTC
Volume attachment state is controlled by nova.

Comment 10 chhu 2019-08-26 07:31:09 UTC
Hit the same issue on packages:

Test steps:
1. Boot VMs from volumes, attach volumes to the VMs, then delete the VMs successfully

2. Check the volumes are still in-use and attached, and without snapshot
(overcloud) [stack@~]$ openstack server list
(overcloud) [stack@~]$ openstack volume list
+--------------------------------------+---------------+--------+------+---------------------------------------------------------------+
| ID                                   | Name          | Status | Size | Attached to                                                   |
+--------------------------------------+---------------+--------+------+---------------------------------------------------------------+
| d9dcca92-2395-4e07-ab4d-b0a9f04187e1 | r8-qcow2-vol2 | in-use |   10 | Attached to 263ee111-27cc-42ae-abdd-9bb6a15adb70 on /dev/vda  |
| 5fe9b664-742a-4357-80ad-40428b0d63d0 | r8-raw-vol    | in-use |   10 | Attached to b01ff9d7-91e8-48d8-a20b-d8b546a1a47b on /dev/vda  |
| 6846df48-55ab-4204-9188-5a74033e6271 | r8-qcow2-vol  | in-use |   10 | Attached to c901cff9-8933-421b-9404-3a4aaac7125a on /dev/vda  |
+--------------------------------------+---------------+--------+------+---------------------------------------------------------------+
(overcloud) [stack@~]$ openstack volume snapshot list

3. Failed to delete the volumes, both with and without --force
(overcloud) [stack@~]$ openstack volume delete d9dcca92-2395-4e07-ab4d-b0a9f04187e1
Failed to delete volume with name or ID 'd9dcca92-2395-4e07-ab4d-b0a9f04187e1': Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer. (HTTP 400) (Request-ID: req-8ef7e546-0352-4668-9c3b-a1ac7610cdcf)
1 of 1 volumes failed to delete.

(overcloud) [stack@ ~]$ openstack volume delete d9dcca92-2395-4e07-ab4d-b0a9f04187e1 --force
Failed to delete volume with name or ID 'd9dcca92-2395-4e07-ab4d-b0a9f04187e1': Invalid volume: Volume  must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer. (HTTP 400) (Request-ID: req-85eb8795-b476-4500-9527-9fab4f0aa857)
1 of 1 volumes failed to delete.

Actual results:
After deleting the server, the volume is still attached and can't be deleted, even with --force

Expected results:
After deleting the server, the volume is not attached and can be deleted

Comment 11 chhu 2019-08-26 07:35:04 UTC
(In reply to chhu from comment #10)
> Hit the same issue on packages:
openstack-nova-compute-19.0.3-0.20190814170534.a8e19af.el8ost.noarch

Comment 18 Red Hat Bugzilla 2024-01-06 04:26:13 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

