Created attachment 1504733 [details]
Cinder logs

Description of problem:
This is a usability issue: the operation should, and does, fail, but no error is returned to the end user. IMHO this is bad practice, as it may confuse a user who issued the command, got no error, and yet did not get the requested action performed.

Version-Release number of selected component (if applicable):
puppet-cinder-13.3.1-0.20181013114719.25b1ba3.el7ost.noarch
python2-os-brick-2.5.3-0.20180816081254.641337b.el7ost.noarch
openstack-cinder-13.0.1-0.20181013185427.31ff628.el7ost.noarch
python2-cinderclient-4.0.1-0.20180809133302.460229c.el7ost.noarch
python-cinder-13.0.1-0.20181013185427.31ff628.el7ost.noarch

How reproducible:
Every time

Steps to Reproduce:
1. Create a volume in one AZ:
cinder create 1 --volume-type tripleo --availability-zone nova --name volWithSnap
| id | 20646f60-4356-40ae-ac86-40a849e60668 |

2. Create a snapshot of said volume:
cinder snapshot-create 20646f60-4356-40ae-ac86-40a849e60668

3. Try to migrate the volume to another backend/AZ:
# cinder retype 20646f60-4356-40ae-ac86-40a849e60668 nfs --migration-policy on-demand

4. No error is returned. If this failed because the volume has snapshots, why wasn't I notified? The lack of user notification is the bug.

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name        | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| 20646f60-4356-40ae-ac86-40a849e60668 | available | volWithSnap | 1    | tripleo     | false    |             |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+

The volume remains in the same backend/AZ, which is fine.
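Since the CLI reports success either way, the only workaround today is for the user to check up front. A minimal sketch (not part of cinder itself) of such a pre-flight check; get_snapshots() is a hypothetical stand-in for `cinder snapshot-list --volume-id <id>` and here just returns canned data:

```python
# Sketch of a client-side pre-flight check, assuming the failure mode
# described above: retype with --migration-policy on-demand is rejected
# when the volume has snapshots, but no error reaches the user.
# get_snapshots() is a hypothetical stand-in for the real API call.

def get_snapshots(volume_id):
    # Stand-in for `cinder snapshot-list --volume-id <id>`;
    # a real script would call the Cinder API here.
    return [{"id": "snap-1", "volume_id": volume_id}]

def retype_migration_will_be_rejected(volume_id):
    """Return (rejected, reason). Checking before issuing the retype
    avoids the silent failure the bug describes."""
    if get_snapshots(volume_id):
        return True, "volume has snapshots; migration will be rejected"
    return False, "ok"

rejected, reason = retype_migration_will_be_rejected(
    "20646f60-4356-40ae-ac86-40a849e60668")
print(rejected, reason)
```

This only mitigates the usability problem; the real fix is for cinder to surface the error to the user.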
cinder show 20646f60-4356-40ae-ac86-40a849e60668
| id                    | 20646f60-4356-40ae-ac86-40a849e60668  |
| os-vol-host-attr:host | hostgroup@tripleo_iscsi#tripleo_iscsi |

Actual results:
The migration failed, as it should, because of the snapshot. But the user gets no notice that it failed, and may wrongly assume it succeeded.

Expected results:
Something like "Can't migrate volume because it has snapshots."

Additional info:
Another usability issue, with nova.conf cross_az_attach=false: when I try to migrate a volume across AZs, it fails, as expected. Yet in this case, again, no error is returned to the user :(

cinder migrate f5064daa-112e-47c2-91fd-acff0e00f519 controller-0@nfs
Request to migrate volume f5064daa-112e-47c2-91fd-acff0e00f519 has been accepted.
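Because `cinder migrate` is asynchronous and only reports "accepted", the user currently has to poll the volume to learn that the migration failed. A sketch of that polling, where fetch_migration_status() is a hypothetical stand-in for reading the migration status out of `cinder show <id>` and here replays canned states:

```python
# Sketch: polling a volume's migration status after `cinder migrate`,
# since the CLI reports "accepted" even when the migration later fails
# (e.g. with cross_az_attach=false). fetch_migration_status() is a
# hypothetical stand-in for parsing `cinder show` output; here it
# replays a canned sequence of states a user might observe.

from itertools import chain, repeat

_states = chain(["starting", "migrating", "error"], repeat("error"))

def fetch_migration_status(volume_id):
    # Real code would call the Cinder API; here we replay canned states.
    return next(_states)

def wait_for_migration(volume_id, max_polls=10):
    for _ in range(max_polls):
        status = fetch_migration_status(volume_id)
        if status == "success":
            return "migrated"
        if status == "error":
            # This is the outcome the CLI never surfaces to the user.
            return "failed"
    return "timed out"

print(wait_for_migration("f5064daa-112e-47c2-91fd-acff0e00f519"))
```

Having to poll for this is exactly the usability gap: the failure is knowable, it just never reaches the user who issued the command.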