Bug 1487910 - Volumes show up as attached to deleted instances
Summary: Volumes show up as attached to deleted instances
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 11.0 (Ocata)
Hardware: Unspecified
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 11.0 (Ocata)
Assignee: Eric Harney
QA Contact: Avi Avraham
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-09-03 07:59 UTC by Tzach Shefi
Modified: 2018-06-22 12:33 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-06-22 12:33:27 UTC
Target Upstream Version:


Attachments (Terms of Use)
Nova and Cinder logs (891.66 KB, application/x-gzip)
2017-09-03 08:05 UTC, Tzach Shefi

Description Tzach Shefi 2017-09-03 07:59:07 UTC
Description of problem: Cinder volumes remain listed as attached to instances that have already been deleted. This might be a Nova issue.

Version-Release number of selected component (if applicable):
rhel7.4 
puppet-cinder-10.3.1-1.el7ost.noarch
python-cinder-10.0.4-3.el7ost.noarch
openstack-cinder-10.0.4-3.el7ost.noarch
python-cinderclient-1.11.0-1.el7ost.noarch

python-nova-15.0.6-6.el7ost.noarch
openstack-nova-compute-15.0.6-6.el7ost.noarch
openstack-nova-console-15.0.6-6.el7ost.noarch
openstack-nova-placement-api-15.0.6-6.el7ost.noarch
openstack-nova-common-15.0.6-6.el7ost.noarch
openstack-nova-scheduler-15.0.6-6.el7ost.noarch
openstack-nova-api-15.0.6-6.el7ost.noarch
puppet-nova-10.4.1-2.el7ost.noarch
openstack-nova-cert-15.0.6-6.el7ost.noarch
openstack-nova-conductor-15.0.6-6.el7ost.noarch
python-novaclient-7.1.2-1.el7ost.noarch



How reproducible:
Every time. 


Steps to Reproduce:
1. Boot instances from a volume created from an image:

nova boot --flavor small --block-device source=image,id=40ac3ad9-8f07-4332-9676-70123854b4bf,dest=volume,size=1,shutdown=preserve,bootindex=0 ints1 --min-count 2
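
For context, shutdown=preserve in the block-device mapping corresponds to delete_on_termination=False, which is why the volumes are expected to survive instance deletion (see step 3). A quick sketch to see what Nova recorded for an attachment; <server-id> is a placeholder for one of the instance IDs below:

# List the volume attachments Nova knows about for a given server
nova volume-attachments <server-id>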

2. Notice the instances were booted and the volumes created:
[stack@cougar08 ~]$ nova list
+--------------------------------------+---------+--------+------------+-------------+--------------------+
| ID                                   | Name    | Status | Task State | Power State | Networks           |
+--------------------------------------+---------+--------+------------+-------------+--------------------+
| 3422dc9a-c783-4ba2-8a5c-fa03047ca755 | inst1   | ACTIVE | -          | Running     | public=10.10.10.17 |
| ab49ced2-e453-4baf-b1f8-a2aed4593a1c | ints1-1 | ACTIVE | -          | Running     | public=10.10.10.14 |
| 3f77553f-472c-4698-8c1a-f2d0402e1091 | ints1-2 | ACTIVE | -          | Running     | public=10.10.10.11 |
+--------------------------------------+---------+--------+------------+-------------+--------------------+
[stack@cougar08 ~]$ nova delete 3422dc9a-c783-4ba2-8a5c-fa03047ca755
Request to delete server 3422dc9a-c783-4ba2-8a5c-fa03047ca755 has been accepted.
[stack@cougar08 ~]$ cinder list
+--------------------------------------+----------------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status         | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+----------------+------+------+-------------+----------+--------------------------------------+
| 20b4f62d-1e9e-4e07-8c92-4aecaaf3d656 | in-use         |      | 1    | xtremio     | true     | 3f77553f-472c-4698-8c1a-f2d0402e1091 |
| 2a715dd8-c65b-46f7-ad77-82d33775b756 | in-use         |      | 1    | xtremio     | true     | ab49ced2-e453-4baf-b1f8-a2aed4593a1c |
| df904a3d-9471-44ee-a1d3-3e534a50597e | error_deleting | 2    | 1    | xtremio     | false    |                                      |
+--------------------------------------+----------------+------+------+-------------+----------+--------------------------------------+
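
As a cross-check, Cinder's own record of an attachment can be inspected per volume (a sketch, using one of the volume IDs from the listing above; the grep is just a convenience to pull out the attachment-related fields):

# Show Cinder's attachment record for one of the boot volumes
cinder show 2a715dd8-c65b-46f7-ad77-82d33775b756 | grep -i attach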


3. Delete the remaining instances. Due to shutdown=preserve, the volumes were expected to remain, but in a detached state.

[stack@cougar08 ~]$ nova delete ints1-1 ints1-2
Request to delete server ints1-1 has been accepted.
Request to delete server ints1-2 has been accepted.

[stack@cougar08 ~]$ nova list
+--------------------------------------+---------+--------+------------+-------------+----------+
| ID                                   | Name    | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+----------+
| ab49ced2-e453-4baf-b1f8-a2aed4593a1c | ints1-1 | ACTIVE | deleting   | Running     |          |
| 3f77553f-472c-4698-8c1a-f2d0402e1091 | ints1-2 | ACTIVE | deleting   | Running     |          |
+--------------------------------------+---------+--------+------------+-------------+----------+

Waited a few minutes until nova list came back empty:
[stack@cougar08 ~]$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+



Yet Cinder still reports the volumes as attached:
+--------------------------------------+----------------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status         | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+----------------+------+------+-------------+----------+--------------------------------------+
| 20b4f62d-1e9e-4e07-8c92-4aecaaf3d656 | in-use         |      | 1    | xtremio     | true     | 3f77553f-472c-4698-8c1a-f2d0402e1091 |
| 2a715dd8-c65b-46f7-ad77-82d33775b756 | in-use         |      | 1    | xtremio     | true     | ab49ced2-e453-4baf-b1f8-a2aed4593a1c |
| df904a3d-9471-44ee-a1d3-3e534a50597e | error_deleting | 2    | 1    | xtremio     | false    |                                      |
+--------------------------------------+----------------+------+------+-------------+----------+--------------------------------------+
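
To rule out soft-deleted instances still holding the attachments, the deleted servers can be listed as well (a sketch; this flag requires admin credentials):

# Show servers in the deleted state, which a plain "nova list" omits
nova list --deleted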

4. Attempting to delete the volumes fails because they are still (incorrectly) reported as attached:

for i in $(cinder list --all-tenant | awk '{print$2}'); do cinder delete $i  ; done
Delete for volume ID failed: No volume with a name or ID of 'ID' exists.
ERROR: Unable to delete any of the specified volumes.
Delete for volume 20b4f62d-1e9e-4e07-8c92-4aecaaf3d656 failed: Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group or have snapshots. (HTTP 400) (Request-ID: req-1c92c3ce-407b-4d93-a0da-8cdabca6ad26)
ERROR: Unable to delete any of the specified volumes.
Delete for volume 2a715dd8-c65b-46f7-ad77-82d33775b756 failed: Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group or have snapshots. (HTTP 400) (Request-ID: req-af61ea02-17b2-411f-8788-6a881c06841c)
ERROR: Unable to delete any of the specified volumes.
Delete for volume 56bc28a6-f565-4fda-9262-3cca6dfa676e failed: Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group or have snapshots. (HTTP 400) (Request-ID: req-c464ef7e-5107-4712-b4cb-2d88afcfc1c4)
ERROR: Unable to delete any of the specified volumes.
... 
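
Note: the first "No volume with a name or ID of 'ID'" error is an artifact of the loop itself, since awk also picks up the table's header row. A variant that only matches UUID rows (a sketch, assuming the default table output):

# Skip the header and separator lines; data rows start with "| <hex uuid>"
for i in $(cinder list --all-tenants | awk '/^\| [0-9a-f]/ {print $2}'); do
    cinder delete "$i"
done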


5. After resetting the attach status to detached, the volumes were successfully deleted:
[stack@cougar08 ~]$ cinder reset-state 20b4f62d-1e9e-4e07-8c92-4aecaaf3d656 --attach-status detached
[stack@cougar08 ~]$ cinder list
+--------------------------------------+----------------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status         | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+----------------+------+------+-------------+----------+--------------------------------------+
| 20b4f62d-1e9e-4e07-8c92-4aecaaf3d656 | available      |      | 1    | xtremio     | true     |                                      |
| 2a715dd8-c65b-46f7-ad77-82d33775b756 | in-use         |      | 1    | xtremio     | true     | ab49ced2-e453-4baf-b1f8-a2aed4593a1c |
| df904a3d-9471-44ee-a1d3-3e534a50597e | error_deleting | 2    | 1    | xtremio     | false    |                                      |
+--------------------------------------+----------------+------+------+-------------+----------+--------------------------------------+
[stack@cougar08 ~]$ cinder reset-state 2a715dd8-c65b-46f7-ad77-82d33775b756 --attach-status detached
[stack@cougar08 ~]$ cinder list
+--------------------------------------+----------------+------+------+-------------+----------+-------------+
| ID                                   | Status         | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------------+------+------+-------------+----------+-------------+
| 20b4f62d-1e9e-4e07-8c92-4aecaaf3d656 | available      |      | 1    | xtremio     | true     |             |
| 2a715dd8-c65b-46f7-ad77-82d33775b756 | available      |      | 1    | xtremio     | true     |             |
| df904a3d-9471-44ee-a1d3-3e534a50597e | error_deleting | 2    | 1    | xtremio     | false    |             |
+--------------------------------------+----------------+------+------+-------------+----------+-------------+
[stack@cougar08 ~]$ cinder delete 20b4f62d-1e9e-4e07-8c92-4aecaaf3d656 2a715dd8-c65b-46f7-ad77-82d33775b756
Request to delete volume 20b4f62d-1e9e-4e07-8c92-4aecaaf3d656 has been accepted.
Request to delete volume 2a715dd8-c65b-46f7-ad77-82d33775b756 has been accepted.
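
The per-volume workaround above can be generalized to every stuck volume (a sketch; it blindly resets everything in-use, so only run it once the attached instances are confirmed gone):

# Reset the attach status of every in-use volume so it can be deleted
for i in $(cinder list | awk '$4 == "in-use" {print $2}'); do
    cinder reset-state "$i" --attach-status detached
done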



Actual results:
Volumes remain listed as attached/in-use and still reference the IDs of the deleted instances.

Expected results:
Once the instances are deleted, the volumes should return to available.

Additional info:
This happened while testing XtremIO iSCSI as the backend, but it does not look backend-related; I also hit similar issues with attached-volume migrations on NFS.

Comment 1 Tzach Shefi 2017-09-03 08:05:46 UTC
Created attachment 1321463 [details]
Nova and Cinder logs

Comment 4 Tzach Shefi 2018-02-20 12:38:14 UTC
Failed to reproduce the issue.
Ran the same test twice on recent builds of OSP 8 and OSP 11:
8   -p 2018-01-04.1
11   -p 2018-01-25.2


Booted instances from volumes created from an image, with shutdown=preserve.

Two instances/volumes were created and booted up just fine.

Deleted the instances; their volumes switched to available as expected.

Maybe the original bug was a one-off fluke.

Comment 5 Scott Lewis 2018-06-22 12:33:27 UTC
OSP11 is now retired, see details at https://access.redhat.com/errata/product/191/ver=11/rhel---7/x86_64/RHBA-2018:1828

