Bug 1651352 - [v2v][osp] Volume cleanup fails if migrating to project different than where the conversion appliance is located
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: V2V
Version: 5.10.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: GA
Target Release: 5.10.1
Assignee: Tomáš Golembiovský
QA Contact: Yadnyawalk Tale
Docs Contact: Red Hat CloudForms Documentation
URL:
Whiteboard: v2v
Depends On: 1668049 1668791
Blocks:
 
Reported: 2018-11-19 19:18 UTC by Kedar Kulkarni
Modified: 2019-03-08 14:51 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-08 14:51:22 UTC
Category: ---
Cloudforms Team: V2V
Target Upstream Version:
Embargoed:



Description Kedar Kulkarni 2018-11-19 19:18:24 UTC
Description of problem:
If a VMware-to-OSP migration fails, the volume created during the migration is not cleaned up when it has already been detached from the conversion host and moved to the target project where the migrated VM was supposed to land.


Version-Release number of selected component (if applicable):
CFME 5.10.0.24

How reproducible:
Believed to be 100%

Steps to Reproduce:
1. Configure an OpenStack conversion appliance instance.
2. Add the OpenStack and VMware providers to CFME.
3. Create an infrastructure mapping and a migration plan.
4. Execute the migration plan and wait for completion; ideally the migration should fail so that the cleanup phase is exercised.

Actual results:
A failed migration leaves stale volumes that are not attached to anything.

In the traceback below, I was migrating a VM to the "qe_destination_1" project while the conversion host was in the "migration" project. When the migration was almost finished, the volume for the migrated VM was detached from the conversion host and moved to the "qe_destination_1" project. The migration then failed to create a network port, so it tried to clean up the volume in the "migration" project, where it no longer existed because it had already been transferred.

Expected results:
virt-v2v-wrapper should be able to clean up the volume even if it has been moved to a different project from the one hosting the conversion appliance.
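
For illustration only, not the wrapper's actual code: a minimal sketch of a project-aware cleanup, assuming the admin credentials are valid in both the appliance project and the target project. The helper name is hypothetical; the project names and volume ID are taken from this report.

import subprocess

def delete_volume_any_project(volume_id, common_args, projects):
    # Try 'openstack volume delete' in each candidate project until one
    # of them succeeds; returns True once the volume has been removed.
    for project in projects:
        cmd = (['openstack'] + common_args +
               ['--os-project-name=%s' % project,
                'volume', 'delete', volume_id])
        try:
            subprocess.check_output(cmd, stderr=subprocess.STDOUT)
            return True
        except subprocess.CalledProcessError:
            continue  # not visible in this project, try the next one
    return False

# e.g. try the conversion appliance's project first, then the target project:
# delete_volume_any_project('2c45ef5a-a85f-4472-8c96-2830d4513cb3',
#                           ['--os-username=admin', '--os-auth-url=...'],
#                           ['migration', 'qe_destination_1'])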

Additional info:

Traceback:

2018-11-19 14:04:01,547:DEBUG: Cleanup phase (virt-v2v-wrapper:1342)
2018-11-19 14:04:01,547:INFO: Executing command: ['openstack', u'--os-username=admin', u'--os-identity-api-version=3', u'--os-user-domain-name=default', u'--os-auth-url=http://controller.v2v.bos.redhat.com:5000/v3', u'--os-project-name=migration', u'--os-password=*****', 'volume', 'transfer', 'request', 'list', '--format', 'json'], environment: {} (virt-v2v-wrapper:1123)
2018-11-19 14:04:08,750:INFO: Removing volumes: [u'2c45ef5a-a85f-4472-8c96-2830d4513cb3'] (virt-v2v-wrapper:179)
2018-11-19 14:04:08,750:INFO: Executing command: ['openstack', u'--os-username=admin', u'--os-identity-api-version=3', u'--os-user-domain-name=default', u'--os-auth-url=http://controller.v2v.bos.redhat.com:5000/v3', u'--os-project-name=migration', u'--os-password=*****', 'volume', 'delete', u'2c45ef5a-a85f-4472-8c96-2830d4513cb3'], environment: {} (virt-v2v-wrapper:1123)
2018-11-19 14:04:15,756:ERROR: Failed to remove volumes(s) (virt-v2v-wrapper:185)
Traceback (most recent call last):
  File "/usr/bin/virt-v2v-wrapper.py", line 183, in handle_cleanup
    self._run_openstack(vol_cmd, data)
  File "/usr/bin/virt-v2v-wrapper.py", line 368, in _run_openstack
    return subprocess.check_output(command, stderr=subprocess.STDOUT)
  File "/usr/lib64/python2.7/subprocess.py", line 575, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
CalledProcessError: Command '['openstack', u'--os-username=admin', u'--os-identity-api-version=3', u'--os-user-domain-name=default', u'--os-auth-url=http://controller.v2v.bos.redhat.com:5000/v3', u'--os-project-name=migration', u'--os-password=*****', 'volume', 'delete', u'2c45ef5a-a85f-4472-8c96-2830d4513cb3']' returned non-zero exit status 1
2018-11-19 14:04:15,757:ERROR: Command output:
Failed to delete volume with name or ID '2c45ef5a-a85f-4472-8c96-2830d4513cb3': No volume with a name or ID of '2c45ef5a-a85f-4472-8c96-2830d4513cb3' exists.
1 of 1 volumes failed to delete.
 (virt-v2v-wrapper:186)
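
The failure above matches how Cinder volume transfers scope ownership: once the destination project accepts the transfer, the volume is visible only there, so a delete scoped to the source project reports "No volume ... exists". A rough sketch of that flow (illustration only, not the actual v2v code; authentication is assumed to come from the usual OS_* environment variables, and the IDs/project names are from this report):

import json
import subprocess

def run_openstack(args, project):
    # Auth (username, password, auth URL, ...) assumed to come from the
    # OS_* environment variables; only the project scope varies here.
    cmd = ['openstack', '--os-project-name=%s' % project] + args
    return subprocess.check_output(cmd, stderr=subprocess.STDOUT)

# 1. The source project creates a transfer request for the volume.
req = json.loads(run_openstack(
    ['volume', 'transfer', 'request', 'create', '--format', 'json',
     '2c45ef5a-a85f-4472-8c96-2830d4513cb3'], 'migration'))

# 2. The destination project accepts it; the volume now belongs to it.
run_openstack(['volume', 'transfer', 'request', 'accept',
               '--auth-key', req['auth_key'], req['id']], 'qe_destination_1')

# 3. A later delete scoped to the source project then fails with
#    "No volume with a name or ID of ... exists."
run_openstack(['volume', 'delete',
               '2c45ef5a-a85f-4472-8c96-2830d4513cb3'], 'migration')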

Comment 8 Kedar Kulkarni 2019-01-21 20:20:39 UTC
I checked this with CFME version 5.10.0.32, 
virt-v2v-1.38.2-12.28.lp.el7ev.x86_64
# Wrapper version
VERSION = "11"

If the migration fails, the volumes are not cleaned up. It does not matter whether the volume is in the admin project or in another destination project; it is not cleaned up either way. Here is a snippet from the wrapper logs:
2019-01-21 15:03:41,968:DEBUG: Updated progress: 100.00 (virt-v2v-wrapper:891)
2019-01-21 15:06:22,151:INFO: virt-v2v terminated with return code 0 (virt-v2v-wrapper:1044)
2019-01-21 15:06:22,152:INFO: Executing command: ['openstack', u'--os-username=admin', u'--os-identity-api-version=3', u'--os-user-domain-name=default', u'--os-auth-url=https://10.8.58.137:13000/v3', u'--os-project-name=admin', u'--os-password=*****', 'token', 'issue'], environment: {} (virt-v2v-wrapper:1136)
2019-01-21 15:06:26,119:ERROR: No volumes found! (virt-v2v-wrapper:204)
2019-01-21 15:06:26,119:DEBUG: Cleanup phase (virt-v2v-wrapper:1355)
2019-01-21 15:06:26,120:INFO: Executing command: ['openstack', u'--os-username=admin', u'--os-identity-api-version=3', u'--os-user-domain-name=default', u'--os-auth-url=https://10.8.58.137:13000/v3', u'--os-project-name=admin', u'--os-password=*****', 'volume', 'transfer', 'request', 'list', '--format', 'json'], environment: {} (virt-v2v-wrapper:1136)
2019-01-21 15:06:38,736:INFO: Removing password files (virt-v2v-wrapper:1363)
2019-01-21 15:06:38,737:INFO: Finished (virt-v2v-wrapper:1385)



Marking FailedQA.

Comment 9 Fabien Dupont 2019-01-22 10:21:02 UTC
@Kedar, the failed test in https://bugzilla.redhat.com/show_bug.cgi?id=1651352#c8 is related to https://bugzilla.redhat.com/show_bug.cgi?id=1668049. Since virt-v2v-wrapper doesn't know the volume IDs, it can't clean them up.
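
For illustration only (a hypothetical helper, not the wrapper's API): if the wrapper recorded every volume ID it creates during conversion, the cleanup phase could still target those volumes after the transfer instead of ending with "No volumes found!".

class VolumeTracker(object):
    """Remember the volumes created for one migration so that cleanup can
    remove them even after they were transferred to another project."""

    def __init__(self):
        self.created = []                 # volume IDs created so far

    def register(self, volume_id):
        self.created.append(volume_id)

    def cleanup(self, delete_volume, projects):
        # delete_volume(volume_id, project) -> bool, e.g. a thin wrapper
        # around 'openstack volume delete'; try every candidate project.
        leftovers = []
        for vol in self.created:
            if not any(delete_volume(vol, p) for p in projects):
                leftovers.append(vol)
        return leftovers                  # volumes that could not be removed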

Comment 12 Yadnyawalk Tale 2019-02-15 16:18:06 UTC
Fixed! I created an admin project and provisioned the conversion appliance in it, and set another osp-auto project as the target project. The migration failed with a multiple-IPv6 issue, and I can see the volumes being cleaned up.

Verified on: 5.10.1.1.20190212171432_83eb777

