Bug 1456185 - Failed to do Tunnelled migration via virt-manager
Summary: Failed to do Tunnelled migration via virt-manager
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: virt-manager
Version: 7.4
Hardware: x86_64
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Pavel Hrdina
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 1473046
 
Reported: 2017-05-27 11:30 UTC by zhoujunqin
Modified: 2018-04-10 11:42 UTC
CC List: 6 users

Fixed In Version: virt-manager-1.4.3-2.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 11:40:46 UTC
Target Upstream Version:
Embargoed:


Attachments
virt-manager debug log while doing migration (46.41 KB, text/plain), 2017-05-27 11:30 UTC, zhoujunqin
description for direct migration (169.27 KB, image/png), 2017-11-24 04:38 UTC, zhoujunqin


Links
Red Hat Product Errata RHEA-2018:0726, last updated 2018-04-10 11:42:12 UTC

Description zhoujunqin 2017-05-27 11:30:46 UTC
Created attachment 1282899 [details]
virt-manager debug log while doing migration

Description of problem:
Failed to do Tunnelled migration via virt-manager

Version-Release number of selected component (if applicable):
virt-manager-1.4.1-5.el7.noarch
libvirt-3.2.0-6.el7.x86_64
libvirt-python-3.2.0-2.el7.x86_64
qemu-kvm-rhev-2.9.0-6.el7.x86_64

How reproducible:
100%

Steps to Reproduce:

Tunnelled migration environment setup:

a. Prepare two hosts and an NFS server (e.g. 10.73.194.27:/vol/S3/libvirtmanual) that is mounted on both hosts. Note that the storage on the destination host must have the same path and name as on the source host.

b. Set the virt_use_nfs boolean and flush the iptables rules on both hosts:
# setsebool -P virt_use_nfs 1
# iptables -F

c. Add each other's IP address and hostname to /etc/hosts so that the source and destination hosts can ping each other by hostname.

d. Create a new guest on the local host by importing an existing disk image; choose the shared image from the newly added NFS storage.

e. Distribute the SSH public key of the source host to the target host.
Create a local public key pair:
# ssh-keygen -t rsa
Copy the public key to the remote host:
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@{target ip}

Steps:
1. Launch virt-manager
# virt-manager

2. Click File->Add Connection, choose QEMU/KVM as the Hypervisor, choose SSH as the
Method, enter the hostname of the target host, then click Connect.

3. Right-click the guest and select Migrate, select the new host with the destination IP, set the mode to "Tunnelled", then click "Migrate".

Actual results:
Tunnelled migration failed with error:
Unable to migrate guest: use virDomainMigrateToURI3 for peer-to-peer migration

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 88, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/migrate.py", line 438, in _async_migrate
    meter=meter)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1585, in migrate
    self._backend.migrate3(libvirt_destconn, params, flags)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1519, in migrate3
    if ret is None:raise libvirtError('virDomainMigrate3() failed', dom=self)
libvirtError: use virDomainMigrateToURI3 for peer-to-peer migration


Expected results:
Tunnelled migration completes successfully.

Additional info:
1. Migration using the virsh command with the --p2p and --tunnelled options succeeds:
# virsh migrate --live rhel6.9nfs qemu+ssh://10.73.130.49/system --verbose --p2p --tunnelled
Migration: [100 %]
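
For context, here is a minimal libvirt-python sketch of the two entry points involved; the connection URIs are placeholders and this is not virt-manager's actual code. The virsh command above succeeds because --live/--p2p/--tunnelled correspond to the VIR_MIGRATE_LIVE/VIR_MIGRATE_PEER2PEER/VIR_MIGRATE_TUNNELLED flags and the peer-to-peer path goes through migrateToURI3(), while virt-manager 1.4.1 still calls migrate3(), which libvirt refuses for peer-to-peer migration (the traceback above):

import libvirt

# Illustrative sketch only; the connection URIs are placeholders for the
# source and destination hosts, and the domain name is the one from the
# virsh example above.
src = libvirt.open("qemu+ssh://source-host/system")
dom = src.lookupByName("rhel6.9nfs")

flags = (libvirt.VIR_MIGRATE_LIVE |
         libvirt.VIR_MIGRATE_PEER2PEER |
         libvirt.VIR_MIGRATE_TUNNELLED)

# What virt-manager 1.4.1 does: managed migration through a connection
# object for the destination host. With the peer-to-peer flag set, libvirt
# rejects this call with "use virDomainMigrateToURI3 for peer-to-peer
# migration".
#   dst = libvirt.open("qemu+ssh://dest-host/system")
#   dom.migrate3(dst, {}, flags)

# Peer-to-peer (and therefore tunnelled) migration is driven by the source
# libvirtd and takes only the destination connection URI:
dom.migrateToURI3("qemu+ssh://dest-host/system", {}, flags)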

Comment 2 Pavel Hrdina 2017-09-06 07:24:14 UTC
Upstream patch posted:

https://www.redhat.com/archives/virt-tools-list/2017-September/msg00034.html

Comment 3 Pavel Hrdina 2017-09-11 07:35:45 UTC
Upstream commit:

commit abcff9e230bd00be91bdf0cef1da94f6e2ce709d
Author: Pavel Hrdina <phrdina>
Date:   Wed Sep 6 09:19:36 2017 +0200

    domain: use migrateToURI3() for tunneled migration
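
A rough sketch of the direction the commit message describes, not the actual patch; the helper name and its arguments below are made up for illustration:

import libvirt

def do_migrate(backend, destconn, dest_uri, params, flags):
    # Hypothetical helper, for illustration only.
    if flags & libvirt.VIR_MIGRATE_PEER2PEER:
        # Tunneled (peer-to-peer) migration: hand libvirt the destination
        # URI and let the source libvirtd drive the migration.
        backend.migrateToURI3(dest_uri, params, flags)
    else:
        # Plain managed migration keeps using the destination connection.
        backend.migrate3(destconn, params, flags)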

Comment 5 zhoujunqin 2017-09-25 08:55:48 UTC
Trying to verify this bug with the new build:
virt-manager-1.4.3-1.el7.noarch

Steps as in the Description.

After step 3 (right-click the guest and select Migrate, select the new host with the destination IP, set the mode to "Tunnelled", then click "Migrate"), migration failed with the error:
Unable to migrate guest: argument unsupported: migration URI is not supported by tunnelled migration
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 89, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/migrate.py", line 434, in _async_migrate
    meter=meter)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1603, in migrate
    self._backend.migrateToURI3(dest_uri, params, flags)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1739, in migrateToURI3
    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
libvirtError: argument unsupported: migration URI is not supported by tunnelled migration


So I am moving this bug from ON_QA back to ASSIGNED, thanks.

Comment 6 Pavel Hrdina 2017-10-03 10:29:57 UTC
Upstream commit:

commit 3b769643657f906dc2b53c568d7fe748155d9b2b
Author: Pavel Hrdina <phrdina>
Date:   Tue Oct 3 12:24:39 2017 +0200

    domain: don't add URI into params for tunneled migration
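
A hedged sketch of what this second commit message describes, not the actual patch: when VIR_MIGRATE_TUNNELLED is set, the migration URI must not be passed as a typed parameter, otherwise libvirt fails exactly as in comment 5. The URI value below is a placeholder:

import libvirt

flags = (libvirt.VIR_MIGRATE_LIVE |
         libvirt.VIR_MIGRATE_PEER2PEER |
         libvirt.VIR_MIGRATE_TUNNELLED)

params = {}
if not (flags & libvirt.VIR_MIGRATE_TUNNELLED):
    # Only a non-tunnelled migration may carry an explicit migration URI;
    # with VIR_MIGRATE_TUNNELLED libvirt rejects it with "argument
    # unsupported: migration URI is not supported by tunnelled migration".
    params[libvirt.VIR_MIGRATE_PARAM_URI] = "tcp://dest-host"

# dom.migrateToURI3("qemu+ssh://dest-host/system", params, flags)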

Comment 9 zhoujunqin 2017-11-24 04:29:44 UTC
Trying to verify this bug with the new build:
virt-manager-1.4.3-2.el7.noarch

Steps:
Tunnelled migration environment setup:

a. Prepare three hosts and an NFS server (e.g. 10.73.194.27:/vol/S3/libvirtmanual).
HostA: migration server
HostB: migration source
HostC: migration target

Then mount the NFS server on HostB and HostC; note that the mount path must be the same on both.

b. Set the virt_use_nfs boolean on HostB and HostC; there is no need to stop the firewalld service:
# setsebool -P virt_use_nfs 1
# service firewalld status
...
   Active: active (running)
...

c. Add each other's IP address and hostname to /etc/hosts so that the hosts can ping each other by hostname.

d. Create a new guest on HostB by importing an existing disk image; choose the shared image from the newly added NFS storage.

e. Distribute the SSH public key of HostB to HostC.
Create the HostB public key pair:
# ssh-keygen -t rsa
Copy the public key to HostC:
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@{target ip}

Steps:
1. Launch virt-manager on HostA.
# virt-manager

2. Add remote connections to HostB and HostC.
Click File->Add Connection, choose QEMU/KVM as the Hypervisor, choose SSH as the
Method, enter the IP of HostB (and likewise HostC), then click Connect.

3. Right-click the guest on the HostB connection, select Migrate, select the new host with the HostC IP, set the mode to "Tunnelled", then click "Migrate".

Result:
A short while after step 3, the tunnelled migration from HostB to HostC finished successfully. After the migration window closed, the guest had disappeared from HostB and was running on HostC, as expected.

However, @Pavel, I found another issue: the description shown for Direct migration is now the same as the one for Tunnelled migration. Is this a new bug? Please have a look.
I will attach a screenshot of this, thanks.

Comment 10 zhoujunqin 2017-11-24 04:38:17 UTC
Created attachment 1358486 [details]
description for direct migration

Comment 11 Pavel Hrdina 2017-11-24 12:11:03 UTC
The description is not based on the selected mode; it's a tooltip for the mode in general, but I agree that it could be improved. Can you create a separate bug for that? Thanks.

Comment 12 zhoujunqin 2017-11-26 13:16:41 UTC
(In reply to Pavel Hrdina from comment #11)
> The description is not based on the selected mode, it's a tooltip for the
> mode in general, but I agree that it could be improved.  Can you create a
> separate bug for that? Thanks.

Thanks for your reply. I filed a separate bug, Bug 1517534, and since this bug itself has been fixed, I am moving it from ON_QA to VERIFIED.

Comment 15 errata-xmlrpc 2018-04-10 11:40:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0726

