I haven't verified this with a CI failure, but it certainly seems to be missing the same handling.
Verified as follows: error code 400 was returned as expected.

************ VERSION ************

# yum list installed | grep openstack-nova
openstack-nova-api.noarch         2015.1.4-22.el7ost   @RH7-RHOS-7.0
openstack-nova-cert.noarch        2015.1.4-22.el7ost   @RH7-RHOS-7.0
openstack-nova-common.noarch      2015.1.4-22.el7ost   @RH7-RHOS-7.0
openstack-nova-compute.noarch     2015.1.4-22.el7ost   @RH7-RHOS-7.0
openstack-nova-conductor.noarch   2015.1.4-22.el7ost   @RH7-RHOS-7.0
openstack-nova-console.noarch     2015.1.4-22.el7ost   @RH7-RHOS-7.0
openstack-nova-novncproxy.noarch  2015.1.4-22.el7ost   @RH7-RHOS-7.0
openstack-nova-scheduler.noarch   2015.1.4-22.el7ost   @RH7-RHOS-7.0

********* LOGS *********

# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 83671658-730a-4a54-969a-925c8d6e0887 | available | vol1         | 1    | -           | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

# nova list
+--------------------------------------+------+--------+------------+-------------+---------------------+
| ID                                   | Name | Status | Task State | Power State | Networks            |
+--------------------------------------+------+--------+------------+-------------+---------------------+
| dd68313a-d630-49d6-a0d1-c4cefd018a2d | vm1  | ACTIVE | -          | Running     | public=172.24.4.227 |
+--------------------------------------+------+--------+------------+-------------+---------------------+

# nova volume-attach vm1 83671658-730a-4a54-969a-925c8d6e0887
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 83671658-730a-4a54-969a-925c8d6e0887 |
| serverId | dd68313a-d630-49d6-a0d1-c4cefd018a2d |
| volumeId | 83671658-730a-4a54-969a-925c8d6e0887 |
+----------+--------------------------------------+

# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Display Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 83671658-730a-4a54-969a-925c8d6e0887 | in-use | vol1         | 1    | -           | false    | dd68313a-d630-49d6-a0d1-c4cefd018a2d |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

# nova list
+--------------------------------------+------+--------+------------+-------------+---------------------+
| ID                                   | Name | Status | Task State | Power State | Networks            |
+--------------------------------------+------+--------+------------+-------------+---------------------+
| dd68313a-d630-49d6-a0d1-c4cefd018a2d | vm1  | ACTIVE | -          | Running     | public=172.24.4.227 |
+--------------------------------------+------+--------+------------+-------------+---------------------+

# cinder delete vol1
Delete for volume vol1 failed: Volume 83671658-730a-4a54-969a-925c8d6e0887 is still attached, detach volume first. (HTTP 400) (Request-ID: req-62b16cd9-1e22-4374-b2c6-c719401b256a)
ERROR: Unable to delete any of the specified volumes.
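The same delete-while-attached check can also be driven programmatically. Below is a minimal Python sketch, not the exact procedure used above: it assumes python-cinderclient is installed and that the usual OS_USERNAME / OS_PASSWORD / OS_TENANT_NAME / OS_AUTH_URL environment variables are exported; the volume ID is the one from the transcript.

import os

from cinderclient import client as cinder_client
from cinderclient import exceptions as cinder_exc

# Build a v2 Cinder client from the standard credential variables
# (assumed to be set in the environment).
cinder = cinder_client.Client(
    '2',
    os.environ['OS_USERNAME'],
    os.environ['OS_PASSWORD'],
    os.environ['OS_TENANT_NAME'],
    os.environ['OS_AUTH_URL'],
)

# Volume that is still attached to vm1 (see the transcript above).
volume_id = '83671658-730a-4a54-969a-925c8d6e0887'

try:
    cinder.volumes.delete(volume_id)
except cinder_exc.BadRequest as exc:
    # Deleting an in-use volume should be rejected with HTTP 400.
    print('Got the expected HTTP 400: %s' % exc)
else:
    raise AssertionError('deleting an attached volume unexpectedly succeeded')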
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0282.html