Bug 1564445 - live migration broken when live_migration_inbound_addr is set and transport = ssh
Summary: live migration broken when live_migration_inbound_addr is set and transport =...
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: puppet-nova
Version: 12.0 (Pike)
Hardware: Unspecified
OS: Unspecified
Target Milestone: Upstream M2
Target Release: 14.0 (Rocky)
Assignee: Ollie Walsh
QA Contact: nova-maint
Depends On:
Blocks: 1576750 1576751
Reported: 2018-04-06 10:09 UTC by Sven Michels
Modified: 2019-11-18 11:39 UTC

Fixed In Version: puppet-nova-13.1.1-0.20180709142740.fa5ce48.el7ost
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 1576750 1576751
Last Closed: 2019-01-11 11:49:20 UTC
Target Upstream Version:

Attachments

System ID Private Priority Status Summary Last Updated
Launchpad 1765462 0 None None None 2018-04-19 20:03:16 UTC
OpenStack gerrit 562764 0 'None' MERGED Allow live_migration_inbound_addr to be used with non-default port/user/extra_params 2020-05-12 02:48:11 UTC
OpenStack gerrit 562818 0 'None' MERGED Set live_migration_inbound_addr for ssh transport 2020-05-12 02:48:11 UTC
Red Hat Product Errata RHEA-2019:0045 0 None None None 2019-01-11 11:49:45 UTC

Description Sven Michels 2018-04-06 10:09:12 UTC
Description of problem:

We wanted to get live migration using our storage interface.

To get this working, we specified the node's storage IP in nova.conf ([libvirt]/live_migration_inbound_addr) using a template setting:
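(The actual template is not reproduced in this report. As a hypothetical illustration only, a TripleO-style hiera override for this would look something like the following; the parameter name is real, but the network name and interpolation are assumptions:)

```yaml
# Hypothetical environment-file sketch - the 'storage' hiera key and
# its interpolation syntax are assumptions, not taken from this report.
parameter_defaults:
  ExtraConfig:
    # Bind incoming live migrations to the node's storage-network IP.
    nova::migration::libvirt::live_migration_inbound_addr: "%{hiera('storage')}"
```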


As long as live_migration_inbound_addr is *NOT* set, live_migration_uri defaults to the URI constructed by puppet in modules/nova/manifests/migration/libvirt.pp. But the same module unsets the URI when live_migration_inbound_addr is set:
if is_service_default($live_migration_inbound_addr) {
  $live_migration_uri = "qemu+${transport_real}://${prefix}%s${postfix}/system${extra_params}"
  $live_migration_scheme = $::os_service_default
} else {
  $live_migration_uri = $::os_service_default
  $live_migration_scheme = $transport_real
}

Since live_migration_uri is deprecated, this might be acceptable on its own. But it breaks live migration, because the ssh configuration needed to reach the target host is then missing.

The ssh config for nova is set up (correctly) in /var/lib/nova/.ssh/config - but live migration is performed as root inside the nova-libvirt container, and root has no such config.
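For reference, the kind of per-host options such a config carries might look like the following sketch; the user, port, and key path shown here are illustrative assumptions, not values taken from this deployment:

```
# Hypothetical /var/lib/nova/.ssh/config entry - user, port and
# identity file path are assumptions for illustration only.
Host *
    User nova_migration
    Port 2022
    IdentityFile /etc/nova/migration/identity
    StrictHostKeyChecking no
```

When nova falls back to a plain "qemu+ssh://host/system" URI without this config, none of these options apply, which matches the observed port-22 / permission-denied failures.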

Version-Release number of selected component (if applicable):

How reproducible:
Always. Deploy without live_migration_inbound_addr set and live migration works (using the default interface).

Set live_migration_inbound_addr to another IP of the node during deployment, using nova::migration::libvirt::live_migration_inbound_addr, and live migration will break. You'll see "ssh connection failed" / "permission denied" errors in nova-compute.log, and the errors show that it tries to use port 22 (or show no port at all, which defaults to 22).

Steps to Reproduce:
1. deploy without live_migration_inbound_addr
2. migrate, works
3. deploy with live_migration_inbound_addr
4. migration fails

Actual results:
live migration fails after setting live_migration_inbound_addr

Expected results:
live migration works using the correct IP from live_migration_inbound_addr.

Additional info:
The only change I needed to get it working after the addr was set:
copy /var/lib/nova/.ssh/config to /root/.ssh/config inside the nova-libvirt container.
The config contains the same information as the live_migration_uri would provide, so I think the config is the correct way to set these options. We just need to give the root user the same information.
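A sketch of that workaround on a compute node, assuming a Docker-based deployment and the container name nova_libvirt (both assumptions - adjust for podman or a differently named container):

```shell
# Hypothetical workaround sketch - container runtime and container
# name (nova_libvirt) are assumptions; adjust to the deployment.
docker exec -u root nova_libvirt \
    cp /var/lib/nova/.ssh/config /root/.ssh/config
```

Note this is a manual workaround and does not survive a container rebuild; the proper fix landed in puppet-nova (see comment 8).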

Comment 8 Ollie Walsh 2018-05-10 10:10:33 UTC
https://review.openstack.org/562818 & https://review.openstack.org/562764 merged to master

Comment 15 errata-xmlrpc 2019-01-11 11:49:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

