Bug 1734172 - On pre-deployed server environments, prevent losing SSH access if DeployedServerPortMap is missing network tags
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 15.0 (Stein)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: rc
Target Release: 15.0 (Stein)
Assignee: Cédric Jeanneret
QA Contact: Sasha Smolyak
URL:
Whiteboard:
Duplicates: 1730661
Depends On:
Blocks: 1704973
 
Reported: 2019-07-29 20:52 UTC by Marius Cornea
Modified: 2020-01-31 08:10 UTC
CC: 11 users

Fixed In Version: openstack-tripleo-heat-templates-10.6.1-0.20190816020444.9d0a312.el8ost
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-09-21 11:24:01 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Launchpad 1839324 0 None None None 2019-08-07 12:56:22 UTC
OpenStack gerrit 676136 0 None MERGED Ensure we get at least one ctlplane subnet 2020-07-08 21:00:06 UTC
Red Hat Product Errata RHEA-2019:2811 0 None None None 2019-09-21 11:24:19 UTC

Description Marius Cornea 2019-07-29 20:52:35 UTC
Description of problem:
Deployment on pre-deployed servers fails due to missing SSH firewall rules. 


After https://review.opendev.org/#/c/667295/ landed, deployment on pre-deployed servers no longer works because the SSH firewall rules are no longer set.
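For context, the SSH firewall rules are derived from the ctlplane subnet CIDRs, which on pre-deployed servers come in through the DeployedServerPortMap parameter. A hypothetical environment snippet, following the format in the upstream deployed-server documentation (hostnames and addresses are illustrative, not taken from this deployment), with the network tags present would look like:

```yaml
parameter_defaults:
  DeployedServerPortMap:
    controller-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.9
      subnets:
        - cidr: 192.168.24.0/24
      # The network tags are what the templates read to build the list of
      # ctlplane subnet CIDRs; when they are missing, no SSH rule is rendered.
      network:
        tags:
          - 192.168.24.0/24
```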


Version-Release number of selected component (if applicable):
openstack-tripleo-heat-templates-10.6.1-0.20190725000448.e49b8db.el8ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy OSP15 overcloud on pre-deployed servers

Actual results:
The overcloud deployment fails because the firewall rules set by Director do not include SSH access, so Ansible can no longer reach the nodes.

Expected results:
Firewall rules to allow SSH access are set.

Additional info:

On pre-deployed servers, there are no ports in the ctlplane network:

(undercloud) [stack@undercloud-0 ~]$ openstack port list 
+--------------------------------------+--------------------------+-------------------+-----------------------------------------------------------------------------+--------+
| ID                                   | Name                     | MAC Address       | Fixed IP Addresses                                                          | Status |
+--------------------------------------+--------------------------+-------------------+-----------------------------------------------------------------------------+--------+
| 04182bfb-af6c-4a8e-a698-2ea637aaa5f5 | compute-0_Tenant         | fa:16:3e:f2:0d:25 | ip_address='172.17.2.76', subnet_id='ad34046b-4261-46e5-b9ff-a2c98fb545dc'  | DOWN   |
| 09aca09b-17d1-4bc9-b943-4fc9484b6b11 | controller-0_External    | fa:16:3e:f2:9d:b9 | ip_address='10.0.0.123', subnet_id='aeac68a5-00eb-4417-a6fa-a47a004781e5'   | DOWN   |
| 09c3cadc-8a57-43f1-917e-845de2dbbccc | controller-1_StorageMgmt | fa:16:3e:1d:5b:ef | ip_address='172.17.4.23', subnet_id='f747f32d-bb1e-4c42-9d77-3e24367eecd4'  | DOWN   |
| 0d3a6d9b-5b20-4494-9b12-c45c9805e77f | compute-1_Storage        | fa:16:3e:1b:6f:c9 | ip_address='172.17.3.67', subnet_id='e250b35a-29c8-4e3a-948a-892f11d683dd'  | DOWN   |
| 10508068-0a41-4dc8-98b9-f8c5bd2a5f2d | ceph-0_Storage           | fa:16:3e:d5:cd:91 | ip_address='172.17.3.147', subnet_id='e250b35a-29c8-4e3a-948a-892f11d683dd' | DOWN   |
| 119e911b-fcd1-46b2-8699-a6a3e6b2ec45 | ceph-2_Storage           | fa:16:3e:96:08:59 | ip_address='172.17.3.100', subnet_id='e250b35a-29c8-4e3a-948a-892f11d683dd' | DOWN   |
| 1200b8e3-9d48-4f72-9495-b2cabce10390 | compute-1_Tenant         | fa:16:3e:9d:85:e9 | ip_address='172.17.2.79', subnet_id='ad34046b-4261-46e5-b9ff-a2c98fb545dc'  | DOWN   |
| 2ad6e29a-3e2b-4948-ae0a-86de8cad6369 | internal_api_virtual_ip  | fa:16:3e:1d:62:06 | ip_address='172.17.1.20', subnet_id='39561db2-7db3-4446-a76e-e6e202b57205'  | DOWN   |
| 384fe3fc-e879-48ba-b817-093f3f27a2b4 | controller-1_InternalApi | fa:16:3e:0e:5c:46 | ip_address='172.17.1.88', subnet_id='39561db2-7db3-4446-a76e-e6e202b57205'  | DOWN   |
| 3f4ffca8-a138-47da-8e77-59dce54ca413 | ceph-1_Storage           | fa:16:3e:c3:74:f7 | ip_address='172.17.3.124', subnet_id='e250b35a-29c8-4e3a-948a-892f11d683dd' | DOWN   |
| 3fa8e7e8-4e23-4003-aeb8-530829a48c00 | controller-1_Tenant      | fa:16:3e:18:ac:28 | ip_address='172.17.2.69', subnet_id='ad34046b-4261-46e5-b9ff-a2c98fb545dc'  | DOWN   |
| 52470e4d-de19-4b56-ab92-631ca5b61581 | controller-1_Storage     | fa:16:3e:f8:2d:40 | ip_address='172.17.3.76', subnet_id='e250b35a-29c8-4e3a-948a-892f11d683dd'  | DOWN   |
| 52e9ab61-83a7-4f5e-8ad7-67e032c914a3 | ceph-1_StorageMgmt       | fa:16:3e:4b:bf:c9 | ip_address='172.17.4.24', subnet_id='f747f32d-bb1e-4c42-9d77-3e24367eecd4'  | DOWN   |
| 6025fa1b-bd3e-4d8c-a6d3-c9a7961f93b8 | controller-2_Tenant      | fa:16:3e:b5:69:9f | ip_address='172.17.2.123', subnet_id='ad34046b-4261-46e5-b9ff-a2c98fb545dc' | DOWN   |
| 6f0fcba0-c636-4b00-9b94-dd4e706786ca | compute-0_Storage        | fa:16:3e:a4:ab:1c | ip_address='172.17.3.127', subnet_id='e250b35a-29c8-4e3a-948a-892f11d683dd' | DOWN   |
| 73423e39-3bde-4b3a-b8ae-4ef2c79d5844 | ceph-2_StorageMgmt       | fa:16:3e:26:9a:3b | ip_address='172.17.4.64', subnet_id='f747f32d-bb1e-4c42-9d77-3e24367eecd4'  | DOWN   |
| 73c14f08-d6a1-4e91-a1a7-a4e94440194e | controller-2_External    | fa:16:3e:c0:34:3f | ip_address='10.0.0.149', subnet_id='aeac68a5-00eb-4417-a6fa-a47a004781e5'   | DOWN   |
| 7cd71263-1fd2-4041-aec6-12edc052f304 |                          | fa:16:3e:f4:11:cf | ip_address='192.168.24.5', subnet_id='5b8a2903-f7c1-41c1-b7d4-b134e702f1fe' | ACTIVE |
| 81c10ad6-8900-4201-b97e-dcdcb53023e8 | ceph-0_StorageMgmt       | fa:16:3e:93:85:e9 | ip_address='172.17.4.139', subnet_id='f747f32d-bb1e-4c42-9d77-3e24367eecd4' | DOWN   |
| 8934ace6-26a3-4b42-a383-2c25fa95a843 | controller-2_Storage     | fa:16:3e:ba:62:5a | ip_address='172.17.3.36', subnet_id='e250b35a-29c8-4e3a-948a-892f11d683dd'  | DOWN   |
| 8d97d631-6d22-48cd-84c2-85f7374b3e76 | compute-0_InternalApi    | fa:16:3e:77:b7:55 | ip_address='172.17.1.43', subnet_id='39561db2-7db3-4446-a76e-e6e202b57205'  | DOWN   |
| 8ff13c92-902f-4631-8a4b-e2a298acdb86 | controller-0_Storage     | fa:16:3e:cb:59:57 | ip_address='172.17.3.148', subnet_id='e250b35a-29c8-4e3a-948a-892f11d683dd' | DOWN   |
| 958ec693-3636-435b-96de-e2d070e68300 | storage_mgmt_virtual_ip  | fa:16:3e:89:0d:3f | ip_address='172.17.4.122', subnet_id='f747f32d-bb1e-4c42-9d77-3e24367eecd4' | DOWN   |
| b2869fd1-7c3c-4a1e-acec-ed831341149c | compute-1_InternalApi    | fa:16:3e:41:e0:24 | ip_address='172.17.1.51', subnet_id='39561db2-7db3-4446-a76e-e6e202b57205'  | DOWN   |
| c1f1b0bb-ede7-4f97-b3f4-7aaf37332f56 | controller-0_StorageMgmt | fa:16:3e:a4:3e:7a | ip_address='172.17.4.81', subnet_id='f747f32d-bb1e-4c42-9d77-3e24367eecd4'  | DOWN   |
| c8567633-3ce1-4357-a6c2-60ee012b5952 | controller-0_InternalApi | fa:16:3e:d0:2e:95 | ip_address='172.17.1.78', subnet_id='39561db2-7db3-4446-a76e-e6e202b57205'  | DOWN   |
| cf09c6a1-5598-4fbb-9042-32126eaa0d18 | controller-2_StorageMgmt | fa:16:3e:8e:98:2d | ip_address='172.17.4.47', subnet_id='f747f32d-bb1e-4c42-9d77-3e24367eecd4'  | DOWN   |
| e3481028-adc4-4aca-9c78-111831e0c4da | controller-0_Tenant      | fa:16:3e:f8:a4:64 | ip_address='172.17.2.19', subnet_id='ad34046b-4261-46e5-b9ff-a2c98fb545dc'  | DOWN   |
| ebcbd535-3ae0-4076-965f-aa9a13ef9ffb | controller-1_External    | fa:16:3e:ee:e6:ed | ip_address='10.0.0.104', subnet_id='aeac68a5-00eb-4417-a6fa-a47a004781e5'   | DOWN   |
| eecf20a5-43b7-414b-97e7-53088aa8380a | controller-2_InternalApi | fa:16:3e:d3:c1:d7 | ip_address='172.17.1.79', subnet_id='39561db2-7db3-4446-a76e-e6e202b57205'  | DOWN   |
| f1c6bad1-70d9-4487-8eb9-140ec6c4e18b | storage_virtual_ip       | fa:16:3e:03:75:82 | ip_address='172.17.3.21', subnet_id='e250b35a-29c8-4e3a-948a-892f11d683dd'  | DOWN   |
| fff611b0-fe77-464f-94ab-ad4429b20a0b | public_virtual_ip        | fa:16:3e:65:bb:f3 | ip_address='10.0.0.137', subnet_id='aeac68a5-00eb-4417-a6fa-a47a004781e5'   | DOWN   |
+--------------------------------------+--------------------------+-------------------+-----------------------------------------------------------------------------+--------+

The resulting firewall rules on the server do not include SSH access:

[root@controller-0 ~]# iptables -nL
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED /* 000 accept related established rules ipv4 */
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            state NEW /* 001 accept all icmp ipv4 */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state NEW /* 002 accept all to lo interface ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 873,3123,3306,4444,4567,4568,9200 state NEW /* 104 mysql galera-bundle ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 1993 state NEW /* 107 haproxy stats ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 3124,6379,26379 state NEW /* 108 redis-bundle ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 3122,4369,5672,25672 state NEW /* 109 rabbitmq-bundle ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 6789,3300 state NEW /* 110 ceph_mon ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 5000,13000,35357 state NEW /* 111 keystone ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 9292,13292 state NEW /* 112 glance_api ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 6800:7300 state NEW /* 113 ceph_mgr ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 8774,13774 state NEW /* 113 nova_api ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 9696,13696 state NEW /* 114 neutron api ipv4 */
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 4789 state NEW /* 118 neutron vxlan networks ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 8776,13776 state NEW /* 119 cinder ipv4 */
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 6081 state NEW /* 119 neutron geneve networks ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 3260 state NEW /* 120 iscsi initiator ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 3125,6641,6642 state NEW /* 121 OVN DB server ports ipv4 */
ACCEPT     tcp  --  172.17.1.0/24        0.0.0.0/0            multiport dports 11211 state NEW /* 121 memcached 172.17.1.0/24 ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 8080,13808 state NEW /* 122 swift proxy ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 873,6000,6001,6002 state NEW /* 123 swift storage ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 8004,13004 state NEW /* 125 heat_api ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 8000,13800 state NEW /* 125 heat_cfn ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 80,443 state NEW /* 126 horizon ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 8042,13042 state NEW /* 128 aodh-api ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 8041,13041 state NEW /* 129 gnocchi-api ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 2224,3121,21064 state NEW /* 130 pacemaker tcp ipv4 */
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 5405 state NEW /* 131 pacemaker udp ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 6080,13080 state NEW /* 137 nova_vnc_proxy ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 8778,13778 state NEW /* 138 nova_placement ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 8775,13775 state NEW /* 139 nova_metadata ipv4 */
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 8125 state NEW /* 140 gnocchi-statsd ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 8977,13977 state NEW /* 140 panko-api ipv4 */
LOG        all  --  0.0.0.0/0            0.0.0.0/0            state NEW limit: avg 20/min burst 15 /* 998 log all ipv4 */ LOG flags 0 level 4
DROP       all  --  0.0.0.0/0            0.0.0.0/0            state NEW /* 999 drop all ipv4 */

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
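The merged fix (gerrit 676136, "Ensure we get at least one ctlplane subnet") guards against ending up with an empty subnet list. A minimal Python sketch of that fallback idea (the function name and default CIDR are hypothetical; the real change lives in the Heat templates):

```python
def ctlplane_subnets(network_tags, default_cidr="192.168.24.0/24"):
    """Return the subnet CIDRs used to render SSH firewall rules.

    network_tags is the list of tags on the ctlplane network; on
    pre-deployed servers it may be empty if DeployedServerPortMap
    did not carry them.  Falling back to a default keeps at least
    one subnet, so the SSH ACCEPT rule is always rendered.
    """
    # Keep only tags that look like CIDRs (contain a prefix length).
    subnets = [tag for tag in network_tags if "/" in tag]
    return subnets or [default_cidr]

# With tags present, the tagged CIDRs are used:
print(ctlplane_subnets(["192.168.24.0/24"]))   # ['192.168.24.0/24']
# With no tags (the bug scenario), we still get one subnet:
print(ctlplane_subnets([]))                    # ['192.168.24.0/24']
```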

Comment 2 Sasha Smolyak 2019-08-01 06:22:30 UTC
*** Bug 1730661 has been marked as a duplicate of this bug. ***

Comment 3 Bob Fournier 2019-08-01 14:55:44 UTC
Moving to DF as pre-deployed servers is in that area.

Comment 11 Marius Cornea 2019-08-08 00:26:48 UTC
I tested by including https://review.opendev.org/#/c/675124/ and providing the full subnet and network mask in the cidr parameter, and the issue no longer shows up.
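The workaround above hinges on the cidr value carrying a prefix length. A short standalone Python illustration (not TripleO code; addresses are illustrative) of why a bare address is not enough to derive a firewall source range:

```python
import ipaddress

# A full CIDR gives the templates a whole network to allow SSH from:
net = ipaddress.ip_network("192.168.24.0/24")
print(net.num_addresses)  # 256

# A bare host address parses, but carries an implicit /32 -- a
# one-address "network" that would match only the host itself:
host_only = ipaddress.ip_network("192.168.24.9")
print(host_only.prefixlen)  # 32
```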

Comment 12 Cédric Jeanneret 2019-08-08 05:59:31 UTC
The patch needs some more work; it introduces another regression.

I'll take some more time to compare the envs, since upstream doesn't have that issue.

Comment 13 Cédric Jeanneret 2019-08-12 08:34:48 UTC
Upstream backport to Stein started.

Comment 24 Sasha Smolyak 2019-08-28 07:47:20 UTC
The deployment passed; verified.

Comment 28 errata-xmlrpc 2019-09-21 11:24:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811

