Bug 1551733

Summary: [BACKPORT Request] Nova returns a traceback when it's unable to detach a volume still in use
Product: Red Hat OpenStack
Component: openstack-nova
Version: 12.0 (Pike)
Target Release: 13.0 (Queens)
Target Milestone: beta
Status: CLOSED ERRATA
Severity: high
Priority: high
Keywords: Triaged
Hardware: Unspecified
OS: Unspecified
Reporter: David Vallee Delisle <dvd>
Assignee: Lee Yarwood <lyarwood>
QA Contact: Joe H. Rahme <jhakimra>
CC: acanan, berrange, coldford, dasmith, dcadzow, eglynn, jhakimra, kchamart, lyarwood, marjones, mschuppe, sbauza, sferdjao, sgordon, srevivo, vromanso
Fixed In Version: openstack-nova-17.0.3-0.20180409231013.9affdb0.el7ost
Last Closed: 2018-06-27 13:46:51 UTC
Type: Bug
Bug Blocks: 1557938, 1669225    

Description David Vallee Delisle 2018-03-05 20:45:54 UTC
Description
===========
If libvirt is unable to detach a volume because it is still in use by the guest (for example, still mounted or with open files on it), nova logs a traceback instead of a clean error message.
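
For illustration, whether the guest is still holding the volume busy can be checked from inside the guest; the device name below is hypothetical:

```
# Inside the guest: check whether the volume is still mounted
# or has open file handles (device name /dev/vdf is hypothetical).
mount | grep vdf      # filesystem on the volume still mounted?
fuser -vm /dev/vdf    # processes using a filesystem mounted from it
lsof /dev/vdf         # processes holding the block device itself open
```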

Steps to reproduce
==================

* Create an instance with a volume attached, using Heat
* Make sure there is activity on the volume (e.g. a mounted filesystem with ongoing I/O)
* Delete the stack (a reproduction sketch follows this list)
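
A minimal reproduction sketch along those lines; the stack name, template file, guest credentials and device names are hypothetical:

```
# Boot an instance with a Cinder volume attached via a HOT template
# (volume_attachment.yaml is a hypothetical template doing exactly that).
openstack stack create -t volume_attachment.yaml detach-repro

# Keep the volume busy from inside the guest, e.g. mount it and write to it.
ssh cirros@<instance-ip> 'sudo mount /dev/vdb /mnt && sudo dd if=/dev/zero of=/mnt/busy bs=1M count=1024'

# Delete the stack while the volume is still in use; before the fix,
# nova-compute then logged the traceback shown under "Actual result".
openstack stack delete --yes detach-repro
```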

Expected result
===============
We would expect nova not to log a traceback, but a clean message about its inability to detach the volume. Ideally, the exception would also be raised back to either Cinder or Heat.

Actual result
=============
```
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall._func' failed: DeviceDetachFailed: Device detach failed for vdf: Unable to detach from guest transient domain.
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall Traceback (most recent call last):
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 137, in _run_loop
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 415, in _func
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     return self._sleep_time
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     self.force_reraise()
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 394, in _func
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 462, in _do_wait_and_retry_detach
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     device=alternative_device_name, reason=reason)
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall DeviceDetachFailed: Device detach failed for vdf: Unable to detach from guest transient domain.
```
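
The traceback shows nova's detach retry helper in nova/virt/libvirt/guest.py (`_do_wait_and_retry_detach`, driven by an oslo.service looping call) giving up and raising `DeviceDetachFailed` because the guest never released the device. From the compute node, the stuck attachment can be confirmed with virsh; the domain name and output below are illustrative:

```
# On the compute node, list the block devices still attached to the
# instance's transient domain (domain name is hypothetical).
virsh domblklist instance-00000a1b
# Illustrative output -- vdf is the volume that failed to detach:
#  Target   Source
#  ------------------------------------------------
#  vda      /var/lib/nova/instances/<uuid>/disk
#  vdf      /dev/dm-3
```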

Environment
===========
* Red Hat OpenStack 12
```
libvirt-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:28:48 2018
libvirt-client-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:07 2018
libvirt-daemon-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:02 2018
libvirt-daemon-config-network-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:06 2018
libvirt-daemon-config-nwfilter-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:05 2018
libvirt-daemon-driver-interface-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:05 2018
libvirt-daemon-driver-lxc-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:06 2018
libvirt-daemon-driver-network-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:02 2018
libvirt-daemon-driver-nodedev-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:05 2018
libvirt-daemon-driver-nwfilter-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:04 2018
libvirt-daemon-driver-qemu-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:25 2018
libvirt-daemon-driver-secret-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:04 2018
libvirt-daemon-driver-storage-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:29 2018
libvirt-daemon-driver-storage-core-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:25 2018
libvirt-daemon-driver-storage-disk-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:28 2018
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:29 2018
libvirt-daemon-driver-storage-iscsi-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:28 2018
libvirt-daemon-driver-storage-logical-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:27 2018
libvirt-daemon-driver-storage-mpath-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:27 2018
libvirt-daemon-driver-storage-rbd-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:27 2018
libvirt-daemon-driver-storage-scsi-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:27 2018
libvirt-daemon-kvm-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:27:29 2018
libvirt-libs-3.2.0-14.el7_4.7.x86_64 Fri Jan 26 15:26:00 2018
libvirt-python-3.2.0-3.el7_4.1.x86_64 Fri Jan 26 15:26:04 2018
openstack-nova-api-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:29 2018
openstack-nova-common-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:20 2018
openstack-nova-compute-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:21 2018
openstack-nova-conductor-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:29 2018
openstack-nova-console-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:29 2018
openstack-nova-migration-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:28 2018
openstack-nova-novncproxy-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:28 2018
openstack-nova-placement-api-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:29 2018
openstack-nova-scheduler-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:30 2018
puppet-nova-11.4.0-2.el7ost.noarch Fri Jan 26 15:34:26 2018
python-nova-16.0.2-9.el7ost.noarch Fri Jan 26 15:28:19 2018
python-novaclient-9.1.1-1.el7ost.noarch Fri Jan 26 15:27:39 2018
qemu-guest-agent-2.8.0-2.el7.x86_64 Fri Jan 26 14:56:57 2018
qemu-img-rhev-2.9.0-16.el7_4.13.x86_64 Fri Jan 26 15:26:03 2018
qemu-kvm-common-rhev-2.9.0-16.el7_4.13.x86_64 Fri Jan 26 15:26:07 2018
qemu-kvm-rhev-2.9.0-16.el7_4.13.x86_64 Fri Jan 26 15:27:16 2018
```
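
For reference, an inventory in this form (package NVR followed by install time) can be gathered on the compute node with something like:

```
# Installed packages with their install timestamps, filtered to the
# components relevant here (libvirt, nova, qemu).
rpm -qa --last | grep -E 'libvirt|nova|qemu'
```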

Comment 3 Lee Yarwood 2018-03-21 10:17:48 UTC
*** Bug 1546826 has been marked as a duplicate of this bug. ***

Comment 13 Joe H. Rahme 2018-05-04 16:38:46 UTC
Verification steps:

```
(overcloud) [stack@undercloud-0 ~]$ openstack volume create --size 1 vol1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2018-05-04T15:44:55.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 9fd6ca7b-65dd-4480-82a9-0a685fd6798c |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | vol1                                 |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | b6ef93e75a5b4b809cdafa822d0a0668     |
+---------------------+--------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ openstack server add volume test-3122 vol1
(overcloud) [stack@undercloud-0 ~]$ openstack volume delete vol1
Failed to delete volume with name or ID 'vol1': Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer. (HTTP 400) (Request-ID: req-be340846-d2ae-47ce-ae5a-90e48114abac)
1 of 1 volumes failed to delete.
```
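
The HTTP 400 above is the clean failure expected from the API. As a hypothetical continuation of the verification, detaching the volume first lets the deletion complete normally:

```
# Detach the volume from the server, after which deletion succeeds.
openstack server remove volume test-3122 vol1
openstack volume delete vol1
```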

Comment 15 errata-xmlrpc 2018-06-27 13:46:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2086