Bug 1597313 - [UPGRADES][12]Failed to host-evacuate-live VM from non-containerized to containerized compute
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 12.0 (Pike)
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: z3
Target Release: 12.0 (Pike)
Assignee: Martin Schuppert
QA Contact: Archit Modi
URL:
Whiteboard:
Keywords: Triaged, ZStream
Depends On:
Blocks: 1573791 1597997
 
Reported: 2018-07-02 14:49 UTC by Yurii Prokulevych
Modified: 2018-08-20 13:04 UTC
CC: 18 users

Doc Text:
A change to the libvirtd live-migration port range prevents live-migration failures.

Previously, libvirtd live migration used ports 49152 to 49215, as specified in the qemu.conf file.

On Linux, this range is a subset of the ephemeral port range, 32768 to 61000, so any port in it can also be consumed by another service.

As a result, live migration failed with the error:
Live Migration failure: internal error: Unable to find an unused port in range 'migration' (49152-49215).

The new libvirtd live-migration range, 61152 to 61215, lies outside the ephemeral range, and the related failures no longer occur.
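The overlap described above can be sketched with a quick shell check. The port ranges are the ones from this doc text; the `overlaps_ephemeral` helper is purely illustrative and not part of any product tooling:

```shell
# Does a port range [min,max] intersect the Linux default ephemeral
# range 32768-61000? (Helper name is made up for illustration.)
overlaps_ephemeral() {
  local min=$1 max=$2
  [ "$min" -le 61000 ] && [ "$max" -ge 32768 ]
}

overlaps_ephemeral 49152 49215 && echo "49152-49215: overlaps"   # old range
overlaps_ephemeral 61152 61215 || echo "61152-61215: clear"      # new range
```

On a tuned system the actual ephemeral range can differ; it is readable from /proc/sys/net/ipv4/ip_local_port_range.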

This completes the port change work started in BZ1573791.
Clone Of:
Clones: 1597541 1597997
Last Closed: 2018-08-20 13:02:42 UTC


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:2331 None None None 2018-08-20 13:04 UTC
OpenStack gerrit 580061 None None None 2018-07-04 07:16 UTC
OpenStack gerrit 580068 None None None 2018-07-04 07:25 UTC
Launchpad 1779820 None None None 2018-07-03 07:53 UTC

Description Yurii Prokulevych 2018-07-02 14:49:23 UTC
Description of problem:
-----------------------
Failed to live-evacuate VMs from a non-upgraded compute node to an upgraded one.

Excerpts from nova logs:
    docker logs nova_libvirt
    INFO:__main__:Deleting /etc/ceph/ceph.client.admin.keyring
    INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.admin.keyring to /etc/ceph/ceph.client.admin.keyring
    INFO:__main__:Deleting /etc/ceph/ceph.client.openstack.keyring
    INFO:__main__:Copying /var/lib/kolla/config_files/src-ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
    INFO:__main__:Writing out command to execute
    INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
    Running command: '/usr/sbin/libvirtd --config /etc/libvirt/libvirtd.conf'
    2018-07-02 13:49:56.070+0000: 2860: info : libvirt version: 3.9.0, package: 14.el7_5.5 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2018-05-10-16:14:17, x86-040.build.eng.bos.redhat.com)
    2018-07-02 13:49:56.070+0000: 2860: info : hostname: compute-0.localdomain
    2018-07-02 13:49:56.070+0000: 2860: error : logStrToLong_ui:2564 : Failed to convert 'virtio0' to unsigned int
    2018-07-02 13:49:56.070+0000: 2860: error : virPCIGetDeviceAddressFromSysfsLink:2643 : internal error: Failed to parse PCI config address 'virtio0'
    2018-07-02 13:49:56.072+0000: 2860: error : logStrToLong_ui:2564 : Failed to convert 'virtio1' to unsigned int
    2018-07-02 13:49:56.072+0000: 2860: error : virPCIGetDeviceAddressFromSysfsLink:2643 : internal error: Failed to parse PCI config address 'virtio1'
    2018-07-02 13:49:56.073+0000: 2860: error : logStrToLong_ui:2564 : Failed to convert 'virtio2' to unsigned int
    2018-07-02 13:49:56.073+0000: 2860: error : virPCIGetDeviceAddressFromSysfsLink:2643 : internal error: Failed to parse PCI config address 'virtio2'
    2018-07-02 13:53:51.578+0000: 2849: error : qemuMigrationFinish:5511 : migration successfully aborted
     
     
    nova-compute.log (non-containerized)
    2018-07-02 13:52:12.290 19156 DEBUG oslo_service.periodic_task [req-5ce4bf96-7b9b-4b59-b124-df30ef9cf28d - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215
    2018-07-02 13:52:12.291 19156 DEBUG nova.compute.manager [req-5ce4bf96-7b9b-4b59-b124-df30ef9cf28d - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python2.7/site-packages/nova/compute/manager.py:6548
    2018-07-02 13:52:14.085 19156 DEBUG nova.virt.libvirt.guest [req-a82b06d8-5ef5-4123-be53-5a739ee418c4 c8bf1bb7e7214f1abf2cbb6a8f947c12 2f93c0afcafc40a9960b04b2eaa56a06 - - -] Failed to get job stats: Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePerform3Params) get_job_info /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:696
    2018-07-02 13:52:14.086 19156 WARNING nova.virt.libvirt.driver [req-a82b06d8-5ef5-4123-be53-5a739ee418c4 c8bf1bb7e7214f1abf2cbb6a8f947c12 2f93c0afcafc40a9960b04b2eaa56a06 - - -] [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b] Error monitoring migration: Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePerform3Params)
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b] Traceback (most recent call last):
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6625, in _live_migration
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     finish_event, disk_paths)
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6417, in _live_migration_monitor
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     info = guest.get_job_info()
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 680, in get_job_info
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     stats = self._domain.jobStats()
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     result = proxy_call(self._autowrap, f, *args, **kwargs)
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     rv = execute(f, *args, **kwargs)
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     six.reraise(c, e, tb)
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     rv = meth(*args, **kwargs)
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1404, in jobStats
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     if ret is None: raise libvirtError ('virDomainGetJobStats() failed', dom=self)
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b] libvirtError: Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePerform3Params)
    2018-07-02 13:52:14.086 19156 ERROR nova.virt.libvirt.driver [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]
    2018-07-02 13:52:14.090 19156 DEBUG nova.virt.libvirt.driver [req-a82b06d8-5ef5-4123-be53-5a739ee418c4 c8bf1bb7e7214f1abf2cbb6a8f947c12 2f93c0afcafc40a9960b04b2eaa56a06 - - -] [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b] Live migration monitoring is all done _live_migration /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py:6632
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [req-a82b06d8-5ef5-4123-be53-5a739ee418c4 c8bf1bb7e7214f1abf2cbb6a8f947c12 2f93c0afcafc40a9960b04b2eaa56a06 - - -] [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b] Live migration failed.
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b] Traceback (most recent call last):
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5419, in _do_live_migration
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     block_migration, migrate_data)
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6140, in live_migration
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     migrate_data)
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6625, in _live_migration
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     finish_event, disk_paths)
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6417, in _live_migration_monitor
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     info = guest.get_job_info()
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 680, in get_job_info
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     stats = self._domain.jobStats()
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     result = proxy_call(self._autowrap, f, *args, **kwargs)
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     rv = execute(f, *args, **kwargs)
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     six.reraise(c, e, tb)
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     rv = meth(*args, **kwargs)
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1404, in jobStats
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]     if ret is None: raise libvirtError ('virDomainGetJobStats() failed', dom=self)
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b] libvirtError: Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePerform3Params)
    2018-07-02 13:52:14.090 19156 ERROR nova.compute.manager [instance: 1a6b33a4-6c87-431c-b25a-5b5a68f0fe5b]
    2018-07-02 13:52:14.092 19156 DEBUG oslo_messaging._drivers.amqpdriver [req-a82b06d8-5ef5-4123-be53-5a739ee418c4 c8bf1bb7e7214f1abf2cbb6a8f947c12 2f93c0afcafc40a9960b04b2eaa56a06 - - -] CALL msg_id: 9597f97d30944095955d27f37acf778c exchange 'nova' topic 'conductor' _send /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:562



Version-Release number of selected component (if applicable):
-------------------------------------------------------------
upgraded node:
==============
python-nova-16.1.4-2.el7ost.noarch
openstack-nova-novncproxy-16.1.4-2.el7ost.noarch
python-novaclient-9.1.2-1.el7ost.noarch
openstack-nova-placement-api-16.1.4-2.el7ost.noarch
puppet-nova-11.5.0-3.el7ost.noarch
openstack-nova-common-16.1.4-2.el7ost.noarch
openstack-nova-scheduler-16.1.4-2.el7ost.noarch
openstack-nova-api-16.1.4-2.el7ost.noarch
openstack-nova-conductor-16.1.4-2.el7ost.noarch
openstack-nova-compute-16.1.4-2.el7ost.noarch
openstack-nova-console-16.1.4-2.el7ost.noarch
openstack-nova-migration-16.1.4-2.el7ost.noarch

non-upgraded node:
=================
openstack-nova-placement-api-15.0.8-5.el7ost.noarch
openstack-nova-api-15.0.8-5.el7ost.noarch
python-nova-15.0.8-5.el7ost.noarch
openstack-nova-compute-15.0.8-5.el7ost.noarch
openstack-nova-conductor-15.0.8-5.el7ost.noarch
puppet-nova-10.4.1-5.el7ost.noarch
openstack-nova-migration-15.0.8-5.el7ost.noarch
openstack-nova-scheduler-15.0.8-5.el7ost.noarch
openstack-nova-common-15.0.8-5.el7ost.noarch
openstack-nova-console-15.0.8-5.el7ost.noarch
openstack-nova-cert-15.0.8-5.el7ost.noarch
openstack-nova-novncproxy-15.0.8-5.el7ost.noarch
python-novaclient-7.1.2-1.el7ost.noarch

openstack-tripleo-heat-templates-7.0.12-1.el7ost.noarch

Steps to Reproduce:
===================
1. Install RHOS-11:
openstack overcloud deploy \
  --timeout 100 \
  --templates /usr/share/openstack-tripleo-heat-templates \
  --stack overcloud \
  --libvirt-type kvm \
  --ntp-server clock.redhat.com \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
  -e /home/stack/virt/internal.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/virt/network/network-environment.yaml \
  -e /home/stack/virt/enable-tls.yaml \
  -e /home/stack/virt/inject-trust-anchor.yaml \
  -e /home/stack/virt/public_vip.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-ip.yaml \
  -e /home/stack/virt/hostnames.yml \
  -e /home/stack/virt/debug.yaml \
  -e /home/stack/virt/nodes_data.yaml

2. Launch VM
3. Upgrade UC
4. Prepare the environment according to the docs and run the major-upgrade-composable-docker step
5. Upgrade node not hosting VM
6. Try to migrate VMs from the non-upgraded node, e.g.:
    source overcloudrc
    nova host-evacuate-live compute-1.localdomain

Actual results:
---------------
Live migration of the VM fails and the VM moves to the ERROR state


Expected results:
-----------------
VM is successfully evacuated from the non-upgraded compute node to the upgraded one


Additional info:
----------------
Virtual env: 3 controllers + 2 computes + 3 ceph nodes

Comment 2 Ollie Walsh 2018-07-02 15:04:53 UTC
Missing nova::migration::qemu::configure_qemu: true for https://bugzilla.redhat.com/show_bug.cgi?id=1573791

Comment 3 Ollie Walsh 2018-07-02 15:20:05 UTC
Also need to include ::nova::migration::qemu manifest in puppet-tripleo/manifests/profile/base/nova/libvirt.pp
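Taken together, comments 2 and 3 say the deployment was missing the hieradata that turns on the qemu.conf port configuration. A sketch of what that might look like in a TripleO environment file — the placement under parameter_defaults/ExtraConfig and the explicit port parameters are my assumptions, not taken from the bug; the port values match the iptables rule shown later in comment 4:

```yaml
# Illustrative environment-file fragment -- placement under ExtraConfig is an
# assumption, not confirmed by this bug report.
parameter_defaults:
  ExtraConfig:
    nova::migration::qemu::configure_qemu: true
    nova::migration::qemu::migration_port_min: 61152
    nova::migration::qemu::migration_port_max: 61215
```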

Comment 4 Martin Schuppert 2018-07-03 06:54:17 UTC
This is not an upgrade issue. It can also be reproduced on a freshly deployed OSP12 env with 2018-06-26.2:

From nova::migration::qemu:

~~~
# [*configure_qemu*]
#   (optional) Whether or not configure qemu bits.
#   Defaults to false.
...
  if $configure_qemu {

    augeas { 'qemu-conf-migration-ports':
      context => '/files/etc/libvirt/qemu.conf',
      changes => [
        "set migration_port_min ${migration_port_min}",
        "set migration_port_max ${migration_port_max}",
      ],
      tag     => 'qemu-conf-augeas',
    }
  } else {
    augeas { 'qemu-conf-migration-ports':
      context => '/files/etc/libvirt/qemu.conf',
      changes => [
        'rm migration_port_min',
        'rm migration_port_max',
      ],
      tag     => 'qemu-conf-augeas',
    }
  }
~~~

From sosreport:

sosreport-failed-migration-containerized-compute-0-20180702141453]$ egrep "migration_port_min|migration_port_max" var/lib/config-data/puppet-generated/nova_libvirt/etc/libvirt/qemu.conf
#migration_port_min = 49152
#migration_port_max = 49215

sosreport-failed-migration-containerized-compute-0-20180702141453]$ grep libvirt etc/sysconfig/iptables
-A INPUT -p tcp -m multiport --dports 16514,61152:61215,5900:6923 -m state --state NEW -m comment --comment "200 nova_libvirt ipv4" -j ACCEPT


On that fresh OSP12 env:

- migration results in:
2018-07-03 06:03:00.786 1 ERROR nova.virt.libvirt.driver [req-1756a25c-95bb-45ad-b78d-1f235ee3155c 9646b8286b0b40f798e27d27aa529ece 8f01287c00c94895ae6639a198db4ed2 - default default] [instance: 80e82e13-6479-4b63-83bb-077cdc9c9c69] Live Migration failure: unable to connect to server at 'compute-0.localdomain:49152': Connection timed out: libvirtError: unable to connect to server at 'compute-0.localdomain:49152': Connection timed out

[root@compute-0 libvirt]# egrep "migration_port_min|migration_port_max" /var/lib/config-data/puppet-generated/nova_libvirt/etc/libvirt/qemu.conf
#migration_port_min = 49152
#migration_port_max = 49215

- Set the min/max port on DST compute-0:
[root@compute-0 libvirt]# egrep "migration_port_min|migration_port_max" /var/lib/config-data/puppet-generated/nova_libvirt/etc/libvirt/qemu.conf 
migration_port_min = 61152
migration_port_max = 61215

[root@compute-0 libvirt]# docker restart nova_libvirt
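The manual workaround above — set the min/max ports in the puppet-generated qemu.conf, then restart the container — can be sketched as a small script. For illustration it edits a temporary copy so it can run anywhere; the assumption that a sed edit matches what puppet's augeas resource would write is mine:

```shell
# Demo of the manual workaround on a local copy of qemu.conf.
# On a real compute node QEMU_CONF would be
# /var/lib/config-data/puppet-generated/nova_libvirt/etc/libvirt/qemu.conf
QEMU_CONF=$(mktemp)
printf '#migration_port_min = 49152\n#migration_port_max = 49215\n' > "$QEMU_CONF"

# Uncomment the settings and move them to the new, non-ephemeral range.
sed -i \
  -e 's/^#\{0,1\}migration_port_min.*/migration_port_min = 61152/' \
  -e 's/^#\{0,1\}migration_port_max.*/migration_port_max = 61215/' \
  "$QEMU_CONF"

grep '^migration_port' "$QEMU_CONF"
# On the real node, follow with: docker restart nova_libvirt
```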

(overcloud) [stack@undercloud-0 ~]$ openstack server create --flavor m1.small --image cirros --nic net-id=4403ba34-7241-4e1c-942b-fb968fa6d28a test

(overcloud) [stack@undercloud-0 ~]$ openstack server list --long
+--------------------------------------+------+--------+------------+-------------+----------------------+------------+--------------------------------------+-------------+--------------------------------------+-------------------+-----------------------+------------+
| ID                                   | Name | Status | Task State | Power State | Networks             | Image Name | Image ID                             | Flavor Name | Flavor ID                            | Availability Zone | Host                  | Properties |
+--------------------------------------+------+--------+------------+-------------+----------------------+------------+--------------------------------------+-------------+--------------------------------------+-------------------+-----------------------+------------+
| 8ff8fd5e-9e83-4d6a-8a12-ecdbe96d7d53 | test | ACTIVE | None       | Running     | private=192.168.0.11 | cirros     | 17feb58a-93cb-4ff1-a7f7-ef12696b1329 | m1.small    | 881af7a0-141b-4b0a-8c65-8c5eac1ef182 | nova              | compute-1.localdomain |            |
+--------------------------------------+------+--------+------------+-------------+----------------------+------------+--------------------------------------+-------------+--------------------------------------+-------------------+-----------------------+------------+

...

(overcloud) [stack@undercloud-0 ~]$ openstack server list --long
+--------------------------------------+------+--------+------------+-------------+----------------------+------------+--------------------------------------+-------------+--------------------------------------+-------------------+-----------------------+------------+
| ID                                   | Name | Status | Task State | Power State | Networks             | Image Name | Image ID                             | Flavor Name | Flavor ID                            | Availability Zone | Host                  | Properties |
+--------------------------------------+------+--------+------------+-------------+----------------------+------------+--------------------------------------+-------------+--------------------------------------+-------------------+-----------------------+------------+
| 8ff8fd5e-9e83-4d6a-8a12-ecdbe96d7d53 | test | ACTIVE | None       | Running     | private=192.168.0.11 | cirros     | 17feb58a-93cb-4ff1-a7f7-ef12696b1329 | m1.small    | 881af7a0-141b-4b0a-8c65-8c5eac1ef182 | nova              | compute-0.localdomain |            |
+--------------------------------------+------+--------+------------+-------------+----------------------+------------+--------------------------------------+-------------+--------------------------------------+-------------------+-----------------------+------------+

Working on a fix for OSPd.

Comment 5 Martin Schuppert 2018-07-03 09:42:25 UTC
Changes proposed to master:
puppet-tripleo -> https://review.openstack.org/#/c/579807/
tripleo-heat-templates -> https://review.openstack.org/#/c/579813/

Once these land, backports will follow.

Comment 17 errata-xmlrpc 2018-08-20 13:02:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2331

