Bug 1965210 - [16.1] nova migration target firewall rules on ctlplane instead of internal_api
Summary: [16.1] nova migration target firewall rules on ctlplane instead of internal_api
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 16.2 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z7
Target Release: 16.1 (Train on RHEL 8.2)
Assignee: David Vallee Delisle
QA Contact: Archit Modi
URL:
Whiteboard:
Depends On: 1961791
Blocks:
Reported: 2021-05-27 07:37 UTC by Martin Schuppert
Modified: 2021-12-09 20:20 UTC
CC: 6 users

Fixed In Version: openstack-tripleo-heat-templates-11.3.2-1.20210528060039.29a02c1.el8ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1961791
Environment:
Last Closed: 2021-12-09 20:19:39 UTC
Target Upstream Version: Train
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1929470 0 None None None 2021-05-27 07:37:52 UTC
OpenStack gerrit 793278 0 None NEW Fix network_cidrs when ManageNetworks: false 2021-05-28 09:40:19 UTC
Red Hat Issue Tracker OSP-4203 0 None None None 2021-11-18 11:33:24 UTC
Red Hat Product Errata RHBA-2021:3762 0 None None None 2021-12-09 20:20:01 UTC

Description Martin Schuppert 2021-05-27 07:37:53 UTC
+++ This bug was initially created as a clone of Bug #1961791 +++

Description of problem:
Since the introduction of commit [a], live migration is broken on cell computes; see the traceback in [1].

The generated cell1-config is wrong: the nova migration target firewall rules use the ctlplane subnet, whereas they should use the internal_api subnet, as in overcloud-config [2].


[a] https://review.opendev.org/c/openstack/tripleo-heat-templates/+/786576/
[1]
~~~
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server [req-6351f9be-c200-4995-abb3-ece487bc54cc 9b51178989ab4c57817c7a79b37354b9 dcc474ee5bd042e3be158b87290a6a0b - default default] Exception during message handling:
 nova.exception.ResizeError: Resize error: not able to execute ssh command: Unexpected error while running command.
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 10119, in migrate_disk_and_power_off
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server     self._remotefs.create_dir(dest, inst_base)
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/volume/remotefs.py", line 95, in create_dir
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server     on_completion=on_completion)
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/volume/remotefs.py", line 185, in create_dir
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server     on_execute=on_execute, on_completion=on_completion)
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/utils.py", line 117, in ssh_execute
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server     return processutils.execute(*ssh_cmd, **kwargs)
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py", line 431, in execute
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server     cmd=sanitized_cmd)
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server Command: ssh -o BatchMode=yes 172.17.1.18 mkdir -p /var/lib/nova/instances/3058ffe2-e4a2-461e-86a8-d2d5a0f42b48
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server Exit code: 255
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server Stdout: ''
./nova-compute.log:2021-05-18 15:26:03.068 7 ERROR oslo_messaging.rpc.server Stderr: 'ssh: connect to host 172.17.1.18 port 2022: Connection timed out\r\n'
~~~
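
The connection timeout in [1] points at the migration target firewall on the destination compute. One way to confirm it is sketched below, assuming a standard OSP 16 compute where these rules land in iptables; the destination address 172.17.1.18 and port 2022 are taken from the traceback above:

~~~
# From the source compute: probe the migration ssh port on the
# destination's internal_api address (this is what times out).
nc -z -w 5 172.17.1.18 2022 || echo "port 2022 unreachable"

# On the destination compute: list the iptables rules matching port 2022.
# With this bug they only accept the ctlplane subnet (192.168.24.0/24),
# not the internal_api subnet (172.17.1.0/24).
sudo iptables -S | grep 2022
~~~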

[2]
~~~
(undercloud) [stack@undercloud-0 plans]$ grep -A10 -R nova_migration_target::firewall_rules
cell1-config/Compute/config_settings.yaml:tripleo::nova_migration_target::firewall_rules:
cell1-config/Compute/config_settings.yaml-  113 nova_migration_target accept api subnet 192.168.24.0/24:
cell1-config/Compute/config_settings.yaml-    dport: 2022
cell1-config/Compute/config_settings.yaml-    proto: tcp
cell1-config/Compute/config_settings.yaml-    source: 192.168.24.0/24
cell1-config/Compute/config_settings.yaml-  113 nova_migration_target accept libvirt subnet 192.168.24.0/24:
cell1-config/Compute/config_settings.yaml-    dport: 2022
cell1-config/Compute/config_settings.yaml-    proto: tcp
cell1-config/Compute/config_settings.yaml-    source: 192.168.24.0/24
cell1-config/Compute/config_settings.yaml-tripleo::ovn_controller::firewall_rules:
cell1-config/Compute/config_settings.yaml-  118 neutron vxlan networks:
--
cell1-config/group_vars/Compute:  tripleo::nova_migration_target::firewall_rules:
cell1-config/group_vars/Compute-    113 nova_migration_target accept api subnet 192.168.24.0/24:
cell1-config/group_vars/Compute-      dport: 2022
cell1-config/group_vars/Compute-      proto: tcp
cell1-config/group_vars/Compute-      source: 192.168.24.0/24
cell1-config/group_vars/Compute-    113 nova_migration_target accept libvirt subnet 192.168.24.0/24:
cell1-config/group_vars/Compute-      dport: 2022
cell1-config/group_vars/Compute-      proto: tcp
cell1-config/group_vars/Compute-      source: 192.168.24.0/24
cell1-config/group_vars/Compute-  tripleo::ovn_controller::firewall_rules:
cell1-config/group_vars/Compute-    118 neutron vxlan networks:

overcloud-config/Compute/config_settings.yaml:tripleo::nova_migration_target::firewall_rules:
overcloud-config/Compute/config_settings.yaml-  113 nova_migration_target accept api subnet 172.17.1.0/24:
overcloud-config/Compute/config_settings.yaml-    dport: 2022
overcloud-config/Compute/config_settings.yaml-    proto: tcp
overcloud-config/Compute/config_settings.yaml-    source: 172.17.1.0/24
overcloud-config/Compute/config_settings.yaml-  113 nova_migration_target accept libvirt subnet 172.17.1.0/24:
overcloud-config/Compute/config_settings.yaml-    dport: 2022
overcloud-config/Compute/config_settings.yaml-    proto: tcp
overcloud-config/Compute/config_settings.yaml-    source: 172.17.1.0/24
overcloud-config/Compute/config_settings.yaml-tripleo::ovn_controller::firewall_rules:
overcloud-config/Compute/config_settings.yaml-  118 neutron vxlan networks:
--
overcloud-config/group_vars/Compute:  tripleo::nova_migration_target::firewall_rules:
overcloud-config/group_vars/Compute-    113 nova_migration_target accept api subnet 172.17.1.0/24:
overcloud-config/group_vars/Compute-      dport: 2022
overcloud-config/group_vars/Compute-      proto: tcp
overcloud-config/group_vars/Compute-      source: 172.17.1.0/24
overcloud-config/group_vars/Compute-    113 nova_migration_target accept libvirt subnet 172.17.1.0/24:
overcloud-config/group_vars/Compute-      dport: 2022
overcloud-config/group_vars/Compute-      proto: tcp
overcloud-config/group_vars/Compute-      source: 172.17.1.0/24
overcloud-config/group_vars/Compute-  tripleo::ovn_controller::firewall_rules:
overcloud-config/group_vars/Compute-    118 neutron vxlan networks:
~~~
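
Until the fix is applied, the cell stack's rules can be pinned to the internal_api CIDR explicitly. A minimal sketch of a workaround environment file for the cell deployment follows, assuming the internal_api CIDR is 172.17.1.0/24 as in the overcloud output above; the file name and rule names are illustrative, not the exact ones tripleo generates:

~~~
# cell1-migration-firewall.yaml (hypothetical file name)
parameter_defaults:
  ComputeExtraConfig:
    tripleo::nova_migration_target::firewall_rules:
      '113 nova_migration_target accept api subnet 172.17.1.0/24':
        dport: 2022
        proto: tcp
        source: 172.17.1.0/24
      '113 nova_migration_target accept libvirt subnet 172.17.1.0/24':
        dport: 2022
        proto: tcp
        source: 172.17.1.0/24
~~~

Pass the file with -e when redeploying the cell stack. The actual fix (gerrit 793278 above) corrects the network_cidrs lookup when ManageNetworks is false, so the generated rules pick up internal_api without an override.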

Comment 22 errata-xmlrpc 2021-12-09 20:19:39 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 16.1.7 (Train) bug fix and enhancement advisory), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3762

