Bug 1648941 - Migrating between AZs of a volume with snapshot - usability issue no error message returned to user.
Keywords:
Status: NEW
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 14.0 (Rocky)
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Assignee: Rajat Dhasmana
QA Contact: Evelina Shames
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-11-12 13:57 UTC by Tzach Shefi
Modified: 2022-12-08 22:32 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Embargoed:


Attachments
Cinder logs (618.80 KB, application/x-gzip)
2018-11-12 13:57 UTC, Tzach Shefi


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1802924 0 None None None 2018-11-12 14:05:20 UTC
OpenStack gerrit 793515 0 None NEW Add user messages for volume operations 2021-10-08 10:43:12 UTC
Red Hat Issue Tracker OSP-2902 0 None None None 2021-11-16 11:21:45 UTC

Description Tzach Shefi 2018-11-12 13:57:57 UTC
Created attachment 1504733 [details]
Cinder logs

Description of problem: This is a usability issue: the operation should fail, and does, but no error is returned to the end user. IMHO this is bad practice, as it may confuse a user who issued the command, got no error, and yet did not get the requested action performed.


Version-Release number of selected component (if applicable):
puppet-cinder-13.3.1-0.20181013114719.25b1ba3.el7ost.noarch
python2-os-brick-2.5.3-0.20180816081254.641337b.el7ost.noarch
openstack-cinder-13.0.1-0.20181013185427.31ff628.el7ost.noarch
python2-cinderclient-4.0.1-0.20180809133302.460229c.el7ost.noarch
python-cinder-13.0.1-0.20181013185427.31ff628.el7ost.noarch

How reproducible:
Every time

Steps to Reproduce:
1. Create a volume in one AZ:
cinder create 1 --volume-type tripleo   --availability-zone nova --name volWithSnap
| id                             | 20646f60-4356-40ae-ac86-40a849e60668  |

2. Create snapshot of said volume
cinder snapshot-create 20646f60-4356-40ae-ac86-40a849e60668

3. Try to migrate to other backend/AZ 
#cinder retype 20646f60-4356-40ae-ac86-40a849e60668  nfs --migration-policy on-demand

4. No error is returned. If this failed because the volume has snapshots, why wasn't I notified? This lack of user notification is the bug.


(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name        | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+-------------+------+-------------+----------+--------------------------------------+
| 20646f60-4356-40ae-ac86-40a849e60668 | available | volWithSnap | 1    | tripleo     | false    |                                      |
+--------------------------------------+-----------+-------------+------+-------------+----------+--------------------------------------+

The volume remains in the same backend/AZ, which is fine:
cinder show 20646f60-4356-40ae-ac86-40a849e60668
| id                             | 20646f60-4356-40ae-ac86-40a849e60668  |
| os-vol-host-attr:host          | hostgroup@tripleo_iscsi#tripleo_iscsi |



Actual results:
The migration fails, as it should, because of the snapshot.
But the user gets no notice that it failed; with no notice, the operation may wrongly be assumed to have succeeded.

Expected results:
An error message along the lines of: "Can't migrate volume due to snapshots.."

Additional info:
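For illustration, the direction taken by the linked upstream change ("Add user messages for volume operations") is to record a user-visible message when an asynchronous operation fails after the API has already accepted it. A minimal sketch of that pattern, assuming hypothetical names (MessageStore and retype_volume here are illustrative, not Cinder's actual classes):

```python
# Sketch of the "user messages" pattern: an async operation that can no
# longer raise to the caller (the API already returned "accepted") records
# a user-visible message instead of failing silently.

class MessageStore:
    """Illustrative per-project store of user-visible fault messages."""
    def __init__(self):
        self.messages = []

    def create(self, action, resource_id, detail):
        self.messages.append(
            {"action": action, "resource_id": resource_id, "detail": detail}
        )

def retype_volume(volume, new_type, messages):
    # Failure here is asynchronous, so record a message rather than raise.
    if volume.get("snapshots"):
        messages.create(
            action="retype",
            resource_id=volume["id"],
            detail="Volume must not have snapshots.",
        )
        return False
    volume["type"] = new_type
    return True

store = MessageStore()
vol = {"id": "20646f60-4356-40ae-ac86-40a849e60668",
       "type": "tripleo", "snapshots": ["snap-1"]}
ok = retype_volume(vol, "nfs", store)
```

With this pattern the user has somewhere to look for the failure reason after the fact, e.g. via `cinder message-list` in the real CLI (the user messages API), rather than inferring failure from the volume staying on its original backend.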

Comment 1 Tzach Shefi 2018-11-12 14:32:21 UTC
Another usability issue, this time with nova.conf cross_az_attach=false.

When I try to migrate a volume across AZs, it fails, as expected.
Yet in this case, again, no error is returned to the user :(

cinder migrate f5064daa-112e-47c2-91fd-acff0e00f519 controller-0@nfs 
Request to migrate volume f5064daa-112e-47c2-91fd-acff0e00f519 has been accepted.

