Description of problem:
A customer's network design requires that KVM live
migration use a dedicated network interface. To get this
working they need the migration_host parameter in qemu.conf.
Without this backport they cannot get live migration up and
running in a reliable manner.
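As an illustration of what that parameter does, a minimal destination-side sketch (the address below is a placeholder, not taken from this environment): in /etc/libvirt/qemu.conf on the migration target set
migration_host = "192.0.2.20"
and restart libvirtd so the setting takes effect:
# service libvirtd restart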
Version-Release number of selected component (if applicable):
in RHEL 6
How reproducible:
all the time
Steps to Reproduce:
config file in RHEL 7:
[root@undercloud ~]# cat /etc/libvirt/qemu.conf | grep -i migration_host -B6
# The default hostname or IP address which will be used by a migration
# source for transferring migration data to this host. The migration
# source has to be able to resolve this hostname and connect to it so
# setting "localhost" will not work. By default, the host's configured
# hostname is used.
#migration_host = "host.example.com"
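For comparison, a quick way to confirm on the RHEL 6 host whether the option is actually uncommented (assuming the same file path as on RHEL 7):
# grep '^migration_host' /etc/libvirt/qemu.conf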
Actual results:
Expected results:
Additional info:
This is in an OSP environment. Several workarounds have been tried, such as modifying /etc/hosts or the host names in OSP. So far, none of them has worked.
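For context, the /etc/hosts flavour of workaround mentioned above typically means pinning the target's hostname to the dedicated interface on the source, e.g. an entry like the following (hostname and address are placeholders, not from this deployment); per the report, this did not work reliably:
192.0.2.20   compute-1.example.com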
Verified with libvirt-0.10.2-62.el6.x86_64.
0. Prepare two hosts; the target host has two network cards: nic1 and nic2.
Scenario 1: Set migration_host to nic2's IPv4 address:
1. Set migration_host in the target host's qemu.conf and restart the libvirtd service:
migration_host="nic2's IP"
2. Use tcpdump on the source host to monitor the migration traffic (an alternative capture filter is sketched after this scenario):
#tcpdump -i eth0 | grep "nic2's IP"
3. In another terminal, start the migration:
#virsh migrate rhel6.8 qemu+ssh://IP-nic1/system --live --verbose
4. Check the tcpdump output during step 3: the migration data uses nic2 on the target host.
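As a side note, the same check can be done with a tcpdump capture filter instead of piping to grep; this is only a sketch with a placeholder address in place of nic2's IP:
# tcpdump -nn -i eth0 host 192.0.2.20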
Scenario 2: Set migration_host to nic2's IPv6 address:
1. Set migration_host in the target host's qemu.conf and restart the libvirtd service:
migration_host="[2620:52:0:4982:9657:a5ff:fe5b:2864]"
2. Use tcpdump on the source host to monitor the migration traffic (an IPv6 capture filter is sketched after this scenario):
#tcpdump -i eth0 | grep "2620:52:0:4982:9657:a5ff:fe5b:2864"
3. In another terminal, start the migration:
#virsh migrate rhel6.8 qemu+ssh://IP-nic1/system --live --verbose
4. Check the tcpdump output during step 3: the migration data uses nic2's IPv6 address on the target host.
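The equivalent capture-filter form for the IPv6 case would be roughly (sketch only, using the address from step 1):
# tcpdump -nn -i eth0 ip6 host 2620:52:0:4982:9657:a5ff:fe5b:2864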
Scenario 3: Set migration_host to an invalid IP address:
1. Set migration_host in the target host's qemu.conf to an invalid value and restart the libvirtd service:
migration_host=" "
2. Do the migration:
# virsh migrate r6 --live --verbose qemu+ssh://IP-nic1/system
root.4.148's password:
error: unable to connect to server at '(null):49152': Connection refused
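Judging from the error above, the source connects to migration_host:port; 49152 is the start of libvirt's default QEMU migration port range, so with a valid setting the endpoint would look something like 192.0.2.20:49152 (placeholder address) instead of '(null):49152'.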
Also tested setting migration_host with p2p tunnelled migration: migration_host is ignored, since p2p tunnelled migration does not support this setting.
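A p2p tunnelled invocation of that kind would look roughly like the following (same placeholder target as the scenarios above); with these flags the migration data is tunnelled over the libvirtd connection, so migration_host does not apply:
# virsh migrate rhel6.8 qemu+ssh://IP-nic1/system --live --p2p --tunnelled --verbose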
The customer reported a regression in the fix:
The fix for the dedicated network interface for nova / virsh live
migration works at first sight but not on a second look.
With the standard nova.conf line for live_migration_flag it works
smoothly, but with live_migration_flag set as follows on our systems
for unsuspended live migration:
"live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,
VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED"
it does not choose the interface configured via "migration_host" in
qemu.conf after applying the patch.
With this line in nova.conf:
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER
it does choose the dedicated interface for the migration traffic, but
it suspends the virtual machine during the migration, which is
unacceptable for this environment / our workload.
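Based on the two observations above (tunnelled p2p migration ignores migration_host, and dropping VIR_MIGRATE_LIVE suspends the guest), one nova.conf sketch that could be tested is live, non-tunnelled p2p migration; whether giving up VIR_MIGRATE_TUNNELLED is acceptable is an environment-specific decision:
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE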
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHBA-2017-0682.html