Description
===========
If libvirt is unable to detach a volume because it is still in use by the guest (either mounted and/or a file open on it), nova returns a traceback.

Steps to reproduce
==================
* Create an instance with a volume attached, using heat
* Make sure there's activity on the volume
* Delete the stack

Expected result
===============
We would expect nova not to return a traceback, but a clean log message about its inability to detach the volume. Ideally, that exception would also be raised back to either cinder or heat.

Actual result
=============
```
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall [-] Dynamic interval looping call 'oslo_service.loopingcall._func' failed: DeviceDetachFailed: Device detach failed for vdf: Unable to detach from guest transient domain.
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall Traceback (most recent call last):
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 137, in _run_loop
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 415, in _func
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     return self._sleep_time
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     self.force_reraise()
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     six.reraise(self.type_, self.value, self.tb)
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/oslo_service/loopingcall.py", line 394, in _func
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     result = f(*args, **kwargs)
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 462, in _do_wait_and_retry_detach
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall     device=alternative_device_name, reason=reason)
2018-02-14 20:31:09.735 1 ERROR oslo.service.loopingcall DeviceDetachFailed: Device detach failed for vdf: Unable to detach from guest transient domain.
```
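For context, the code path behind this traceback: in nova/virt/libvirt/guest.py, _do_wait_and_retry_detach is wrapped in oslo.service's RetryDecorator, which drives it through a DynamicLoopingCall (hence the "Dynamic interval looping call ... failed" message). While libvirt still reports the device as attached, the function raises DeviceDetachFailed; once the retries are exhausted, the exception escapes the looping call unhandled, producing the ERROR above instead of a clean log message. A minimal sketch of that structure follows; the stub exception class and the retry parameters are illustrative assumptions, not nova's exact code:

```python
# Simplified sketch of nova's detach retry loop. Only the structure
# mirrors nova/virt/libvirt/guest.py; the stub exception and the
# retry parameters below are assumptions.
from oslo_service import loopingcall


class DeviceDetachFailed(Exception):
    """Stand-in for nova.exception.DeviceDetachFailed."""


@loopingcall.RetryDecorator(
    max_retry_count=7,              # assumed value; nova derives its own
    inc_sleep_time=2,
    max_sleep_time=30,
    exceptions=DeviceDetachFailed)  # retry while this keeps being raised
def _do_wait_and_retry_detach(guest, device_name):
    # Ask libvirt for the device's current config; a non-None result
    # means the device is still attached to the guest.
    config = guest.get_disk(device_name)
    if config is not None:
        # Still attached to the transient (live) domain: ask libvirt to
        # detach again, then raise so RetryDecorator sleeps and retries.
        guest.detach_device(config, persistent=False, live=True)
        reason = "Unable to detach from guest transient domain."
        raise DeviceDetachFailed(
            "Device detach failed for %s: %s" % (device_name, reason))
```

Once max_retry_count is exceeded, RetryDecorator re-raises the last DeviceDetachFailed instead of sleeping again, and that re-raise is what oslo.service logs as the unhandled looping-call failure seen above.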
Environment
===========
* Red Hat OpenStack 12
```
libvirt-3.2.0-14.el7_4.7.x86_64                                Fri Jan 26 15:28:48 2018
libvirt-client-3.2.0-14.el7_4.7.x86_64                         Fri Jan 26 15:26:07 2018
libvirt-daemon-3.2.0-14.el7_4.7.x86_64                         Fri Jan 26 15:26:02 2018
libvirt-daemon-config-network-3.2.0-14.el7_4.7.x86_64          Fri Jan 26 15:26:06 2018
libvirt-daemon-config-nwfilter-3.2.0-14.el7_4.7.x86_64         Fri Jan 26 15:26:05 2018
libvirt-daemon-driver-interface-3.2.0-14.el7_4.7.x86_64        Fri Jan 26 15:26:05 2018
libvirt-daemon-driver-lxc-3.2.0-14.el7_4.7.x86_64              Fri Jan 26 15:26:06 2018
libvirt-daemon-driver-network-3.2.0-14.el7_4.7.x86_64          Fri Jan 26 15:26:02 2018
libvirt-daemon-driver-nodedev-3.2.0-14.el7_4.7.x86_64          Fri Jan 26 15:26:05 2018
libvirt-daemon-driver-nwfilter-3.2.0-14.el7_4.7.x86_64         Fri Jan 26 15:26:04 2018
libvirt-daemon-driver-qemu-3.2.0-14.el7_4.7.x86_64             Fri Jan 26 15:27:25 2018
libvirt-daemon-driver-secret-3.2.0-14.el7_4.7.x86_64           Fri Jan 26 15:26:04 2018
libvirt-daemon-driver-storage-3.2.0-14.el7_4.7.x86_64          Fri Jan 26 15:27:29 2018
libvirt-daemon-driver-storage-core-3.2.0-14.el7_4.7.x86_64     Fri Jan 26 15:27:25 2018
libvirt-daemon-driver-storage-disk-3.2.0-14.el7_4.7.x86_64     Fri Jan 26 15:27:28 2018
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.7.x86_64  Fri Jan 26 15:27:29 2018
libvirt-daemon-driver-storage-iscsi-3.2.0-14.el7_4.7.x86_64    Fri Jan 26 15:27:28 2018
libvirt-daemon-driver-storage-logical-3.2.0-14.el7_4.7.x86_64  Fri Jan 26 15:27:27 2018
libvirt-daemon-driver-storage-mpath-3.2.0-14.el7_4.7.x86_64    Fri Jan 26 15:27:27 2018
libvirt-daemon-driver-storage-rbd-3.2.0-14.el7_4.7.x86_64      Fri Jan 26 15:27:27 2018
libvirt-daemon-driver-storage-scsi-3.2.0-14.el7_4.7.x86_64     Fri Jan 26 15:27:27 2018
libvirt-daemon-kvm-3.2.0-14.el7_4.7.x86_64                     Fri Jan 26 15:27:29 2018
libvirt-libs-3.2.0-14.el7_4.7.x86_64                           Fri Jan 26 15:26:00 2018
libvirt-python-3.2.0-3.el7_4.1.x86_64                          Fri Jan 26 15:26:04 2018
openstack-nova-api-16.0.2-9.el7ost.noarch                      Fri Jan 26 15:28:29 2018
openstack-nova-common-16.0.2-9.el7ost.noarch                   Fri Jan 26 15:28:20 2018
openstack-nova-compute-16.0.2-9.el7ost.noarch                  Fri Jan 26 15:28:21 2018
openstack-nova-conductor-16.0.2-9.el7ost.noarch                Fri Jan 26 15:28:29 2018
openstack-nova-console-16.0.2-9.el7ost.noarch                  Fri Jan 26 15:28:29 2018
openstack-nova-migration-16.0.2-9.el7ost.noarch                Fri Jan 26 15:28:28 2018
openstack-nova-novncproxy-16.0.2-9.el7ost.noarch               Fri Jan 26 15:28:28 2018
openstack-nova-placement-api-16.0.2-9.el7ost.noarch            Fri Jan 26 15:28:29 2018
openstack-nova-scheduler-16.0.2-9.el7ost.noarch                Fri Jan 26 15:28:30 2018
puppet-nova-11.4.0-2.el7ost.noarch                             Fri Jan 26 15:34:26 2018
python-nova-16.0.2-9.el7ost.noarch                             Fri Jan 26 15:28:19 2018
python-novaclient-9.1.1-1.el7ost.noarch                        Fri Jan 26 15:27:39 2018
qemu-guest-agent-2.8.0-2.el7.x86_64                            Fri Jan 26 14:56:57 2018
qemu-img-rhev-2.9.0-16.el7_4.13.x86_64                         Fri Jan 26 15:26:03 2018
qemu-kvm-common-rhev-2.9.0-16.el7_4.13.x86_64                  Fri Jan 26 15:26:07 2018
qemu-kvm-rhev-2.9.0-16.el7_4.13.x86_64                         Fri Jan 26 15:27:16 2018
```
*** Bug 1546826 has been marked as a duplicate of this bug. ***
Verification steps:

```
(overcloud) [stack@undercloud-0 ~]$ openstack volume create --size 1 vol1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2018-05-04T15:44:55.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 9fd6ca7b-65dd-4480-82a9-0a685fd6798c |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | vol1                                 |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | b6ef93e75a5b4b809cdafa822d0a0668     |
+---------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ openstack server add volume test-3122 vol1

(overcloud) [stack@undercloud-0 ~]$ openstack volume delete vol1
Failed to delete volume with name or ID 'vol1': Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots or be disassociated from snapshots after volume transfer. (HTTP 400) (Request-ID: req-be340846-d2ae-47ce-ae5a-90e48114abac)
1 of 1 volumes failed to delete.
```
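The same check can also be scripted; below is a minimal sketch using openstacksdk rather than the CLI. The cloud name "overcloud" and server name "test-3122" mirror the session above; the specific SDK calls and waits are illustrative assumptions, not part of the original verification.

```python
# Hedged sketch: reproduce the verification with openstacksdk.
# Assumes a clouds.yaml entry named "overcloud" and an existing
# instance "test-3122" (both taken from the CLI session above).
import openstack

conn = openstack.connect(cloud='overcloud')

# Create a 1 GB volume and wait until cinder reports it available.
volume = conn.block_storage.create_volume(size=1, name='vol1')
volume = conn.block_storage.wait_for_status(volume, status='available')

# Attach the volume to the test instance.
server = conn.compute.find_server('test-3122')
conn.compute.create_volume_attachment(server, volume_id=volume.id)

# Deleting an attached volume should be refused cleanly (HTTP 400),
# matching the CLI output above, rather than surfacing the detach
# traceback from the Description.
try:
    conn.block_storage.delete_volume(volume, ignore_missing=False)
except openstack.exceptions.HttpException as exc:
    print('Delete refused as expected: %s' % exc)
```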
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2086