Bug 1558897

Summary: cinder re-typing between 2 FC back ends (xtremio.XtremIOFibreChannelDriver) --> (emc_vmax_fc.EMCVMAXFCDriver) fails
Product: Red Hat OpenStack
Reporter: Md Nadeem <mnadeem>
Component: openstack-cinder
Assignee: Cinder Bugs List <cinder-bugs>
Status: CLOSED DUPLICATE
QA Contact: Avi Avraham <aavraham>
Severity: high
Docs Contact: Kim Nylander <knylande>
Priority: high
Version: 10.0 (Newton)
CC: dhill, geguileo, srevivo, tshefi
Target Milestone: ---
Flags: tshefi: automate_bug-
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-03-23 09:47:04 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Md Nadeem 2018-03-21 09:10:59 UTC
Description of problem:

Cinder re-typing of the in-use volume c6e8c1c7-1ec8-400e-99a0-ac9e69f3d24e (attached to an instance) between different back ends, iscsi (xtremio) --> iscsi (vmax), fails with the errors below.

During the process, I can see that the new volume is created successfully on the target back end; however, copying the data from the source volume to the destination volume fails.
Note: the target back end has plenty of free space.
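For reference, a retype of this form triggers the migration (the target volume type name is illustrative and not taken from this environment):

cinder retype --migration-policy on-demand c6e8c1c7-1ec8-400e-99a0-ac9e69f3d24e <vmax_volume_type>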

50223:2018-03-20 14:26:20.582 556219 INFO cinder.volume.manager [req-f258fa08-48e6-4618-afeb-acdf6971ab8e 2016d7a55a824182b420a6c56ac416a4 c175ea0037194d96936c7d07c38070a6 - default default] Created volume successfully.


50230:2018-03-20 14:26:40.162 556221 ERROR cinder.volume.manager [req-f258fa08-48e6-4618-afeb-acdf6971ab8e 2016d7a55a824182b420a6c56ac416a4 c175ea0037194d96936c7d07c38070a6 - default default] Failed to copy volume c6e8c1c7-1ec8-400e-99a0-ac9e69f3d24e to f946326e-bc89-43b8-9326-9b2f3325597b


2018-03-20 14:26:40.308 556221 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2018-03-20 14:26:40.308 556221 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1788, in _migrate_volume_generic
2018-03-20 14:26:40.308 556221 ERROR oslo_messaging.rpc.server     new_volume.id)
2018-03-20 14:26:40.308 556221 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/compute/nova.py", line 178, in update_server_volume
2018-03-20 14:26:40.308 556221 ERROR oslo_messaging.rpc.server     nova = novaclient(context, admin_endpoint=True, privileged_user=True)
2018-03-20 14:26:40.308 556221 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/compute/nova.py", line 136, in novaclient
2018-03-20 14:26:40.308 556221 ERROR oslo_messaging.rpc.server     **region_filter)
2018-03-20 14:26:40.308 556221 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/novaclient/service_catalog.py", line 84, in url_for
2018-03-20 14:26:40.308 556221 ERROR oslo_messaging.rpc.server     raise novaclient.exceptions.EndpointNotFound()
2018-03-20 14:26:40.308 556221 ERROR oslo_messaging.rpc.server EndpointNotFound

INFO cinder.volume.drivers.emc.emc_vmax_common [req-248e6f7b-0128-4aef-baab-04255a1b445c - - - - -] NON-FAST: capacity stats for pool TP_600_OPS on array 000292602067 total_capacity_gb=2093, free_capacity_gb=1482.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:

Re-typing fails between two iscsi back ends.

Expected results:

Re-typing should complete successfully.

Additional info:

The controller sosreport can be found on collab-shell by referring to the case number.

Comment 1 Md Nadeem 2018-03-21 09:38:08 UTC
Just a clarification: both the xtremio and vmax back ends use FC-based drivers, not iSCSI.

volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
volume_driver = cinder.volume.drivers.emc.emc_vmax_fc.EMCVMAXFCDriver

Comment 2 Gorka Eguileor 2018-03-21 13:35:46 UTC
This doesn't look like a Cinder bug; it's most likely a problem when Cinder tries to connect to Nova to do the live migration. It is probably a configuration issue in Cinder with "nova_catalog_admin_info" and/or "os_region_name", or with the keystone catalog entry they refer to.
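For reference, a minimal cinder.conf sketch of the options mentioned above (the values are illustrative; they must match the actual service type, service name, endpoint interface and region registered in the keystone catalog):

[DEFAULT]
# service_type:service_name:endpoint_type as registered in the keystone catalog
nova_catalog_info = compute:nova:publicURL
nova_catalog_admin_info = compute:nova:adminURL
# region of the compute endpoint Cinder should use
os_region_name = RegionOne

The catalog entry itself can be checked with "openstack catalog show compute" or "openstack endpoint list --service compute" to confirm that an endpoint matching those values actually exists.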

Comment 3 David Hill 2018-03-21 21:57:31 UTC
This looks like the same issue as https://bugzilla.redhat.com/show_bug.cgi?id=1306547

Comment 4 Gorka Eguileor 2018-03-23 09:47:04 UTC

*** This bug has been marked as a duplicate of bug 1306547 ***

Comment 5 Tzach Shefi 2019-07-18 08:56:22 UTC
Nothing to automate/test; see the duplicate BZ.