Bug 1561245

Summary: FFU: floating IP connectivity gets interrupted during deploy_steps_playbook.yaml playbook on compute nodes because iptables service gets restarted
Product: Red Hat OpenStack
Reporter: Marius Cornea <mcornea>
Component: puppet-tripleo
Assignee: Emilien Macchi <emacchi>
Status: CLOSED ERRATA
QA Contact: Marius Cornea <mcornea>
Severity: urgent
Priority: urgent
Version: 13.0 (Queens)
CC: aschultz, dbecker, jjoyce, jschluet, mandreou, mbracho, mburns, morazi, rhel-osp-director-maint, sathlang, sclewis, slinaber, tvignaud
Target Milestone: beta
Keywords: Triaged
Target Release: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Fixed In Version: puppet-tripleo-8.3.2-0.20180326191354.b25e2fb.el7ost
Last Closed: 2018-06-27 13:49:05 UTC
Type: Bug

Description Marius Cornea 2018-03-28 00:14:13 UTC
Description of problem:

FFU: floating IP connectivity gets interrupted during deploy_steps_playbook.yaml playbook on compute nodes because iptables service gets restarted. 


The output of the following task:

TASK [Run puppet host configuration for step 1] ********************************

shows that Puppet triggers restart_iptables:

        "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", 
        "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", 
        "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.93 seconds", 
        "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute1]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/File[/etc/systemd/system/docker.service.d/99-unset-mountflags.conf]/ensure: defined content as '{md5}b984426de0b5978853686a649b64e4b8'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Exec[systemd daemon-reload]: Triggered 'refresh' from 1 events", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-options]/returns: executed successfully", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-registry]/returns: executed successfully", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-daemon.json-debug]/returns: executed successfully", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-storage]/returns: executed successfully", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Augeas[docker-sysconfig-network]/returns: executed successfully", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Docker/Service[docker]/ensure: ensure changed 'stopped' to 'running'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", 
        "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}8ea449d7aa73245c001c0134b5b23e2d' to '{md5}537f072fe8f462b20e5e88f9121550b2'", 
        "Notice: /Stage[main]/Ntp::Service/Service[ntp]: Triggered 'refresh' from 1 events", 
        "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", 
        "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", 
        "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}b62de5c8d20a9f655df40460407032b5' to '{md5}3534841fdb8db5b58d66600a60bf3759'", 
        "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", 
        "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv4]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv6]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv4]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv6]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", 
        "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[snmp]/Tripleo::Firewall::Rule[124 snmp]/Firewall[124 snmp ipv4]/ensure: created", 
        "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", 
        "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", 
        "Notice: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v4_rules_cleanup]/returns: executed successfully", 
        "Notice: /Stage[main]/Tripleo::Firewall/Exec[restart_iptables]: Triggered 'refresh' from 1 events", 
        "Notice: /Stage[main]/Tripleo::Firewall/Exec[nonpersistent_v6_rules_cleanup]/returns: executed successfully", 
        "Notice: /Stage[main]/Tripleo::Firewall/Exec[restart_ip6tables]: Triggered 'refresh' from 1 events", 
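
The Exec[restart_iptables] refresh is the root of the interruption: restarting the iptables service restores only the rules persisted in /etc/sysconfig/iptables, so the neutron-openvswi-* chains that neutron-openvswitch-agent programmed at runtime are dropped. A minimal sketch of the effect (simulated on text rule sets so it runs without root; the file path and chain names match the output above):

```shell
# Rules persisted to /etc/sysconfig/iptables (what a service restart restores):
saved='-A INPUT -p tcp -m multiport --dports 22 -m state --state NEW -j ACCEPT'

# Runtime ruleset, including chains added later by neutron-openvswitch-agent
# (these are never written back to the saved file):
runtime="$saved
-N neutron-openvswi-INPUT
-A INPUT -j neutron-openvswi-INPUT"

echo "$runtime" | grep -c 'neutron-openvswi'            # before restart: 2
echo "$saved" | grep -q 'neutron-openvswi' || echo 'neutron chains lost'
```

After the restart the kernel ruleset reverts to the saved set, which is why the floating IP traffic stops until the agent reprograms its chains.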


iptables rules after the iptables service has been restarted:

[root@compute-0 heat-admin]# iptables -nL
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* 000 accept related established rules */ state RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED /* 000 accept related established rules ipv4 */
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            /* 001 accept all icmp */ state NEW
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            state NEW /* 001 accept all icmp ipv4 */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* 002 accept all to lo interface */ state NEW
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state NEW /* 002 accept all to lo interface ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 22 /* 003 accept ssh */ state NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 22 state NEW /* 003 accept ssh ipv4 */
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 123 /* 105 ntp */ state NEW
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 123 state NEW /* 105 ntp ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 2022 state NEW /* 113 nova_migration_target ipv4 */
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 4789 /* 118 neutron vxlan networks */ state NEW
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 4789 state NEW /* 118 neutron vxlan networks ipv4 */
ACCEPT     udp  --  172.17.1.0/24        0.0.0.0/0            multiport dports 161 state NEW /* 124 snmp ipv4 */
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 161 /* 127 snmp */ state NEW
ACCEPT     47   --  0.0.0.0/0            0.0.0.0/0            /* 136 neutron gre networks */
ACCEPT     47   --  0.0.0.0/0            0.0.0.0/0            /* 136 neutron gre networks ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 16514,49152:49215,5900:5999 /* 200 nova_libvirt */ state NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 16514,49152:49215,5900:6923 state NEW /* 200 nova_libvirt ipv4 */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited
LOG        all  --  0.0.0.0/0            0.0.0.0/0            /* 998 log all */ state NEW LOG flags 0 level 4
LOG        all  --  0.0.0.0/0            0.0.0.0/0            state NEW /* 998 log all ipv4 */ LOG flags 0 level 4
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* 999 drop all */ state NEW
DROP       all  --  0.0.0.0/0            0.0.0.0/0            state NEW /* 999 drop all ipv4 */

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
[root@compute-0 heat-admin]# 


The instance remains unreachable even after the upgrade has finished.

After rebooting the instance via:

nova stop workload_instance_0
nova start workload_instance_0

the instance is reachable via its floating IP.


We can see that the iptables rules on the compute node after the instance reboot contain the neutron-related chains:


[root@compute-0 heat-admin]# iptables -nL
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
neutron-openvswi-INPUT  all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* 000 accept related established rules */ state RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED /* 000 accept related established rules ipv4 */
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            /* 001 accept all icmp */ state NEW
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            state NEW /* 001 accept all icmp ipv4 */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            /* 002 accept all to lo interface */ state NEW
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state NEW /* 002 accept all to lo interface ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 22 /* 003 accept ssh */ state NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 22 state NEW /* 003 accept ssh ipv4 */
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 123 /* 105 ntp */ state NEW
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 123 state NEW /* 105 ntp ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 2022 state NEW /* 113 nova_migration_target ipv4 */
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 4789 /* 118 neutron vxlan networks */ state NEW
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 4789 state NEW /* 118 neutron vxlan networks ipv4 */
ACCEPT     udp  --  172.17.1.0/24        0.0.0.0/0            multiport dports 161 state NEW /* 124 snmp ipv4 */
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 161 /* 127 snmp */ state NEW
ACCEPT     47   --  0.0.0.0/0            0.0.0.0/0            /* 136 neutron gre networks */
ACCEPT     47   --  0.0.0.0/0            0.0.0.0/0            /* 136 neutron gre networks ipv4 */
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 16514,49152:49215,5900:5999 /* 200 nova_libvirt */ state NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 16514,49152:49215,5900:6923 state NEW /* 200 nova_libvirt ipv4 */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited
LOG        all  --  0.0.0.0/0            0.0.0.0/0            /* 998 log all */ state NEW LOG flags 0 level 4
LOG        all  --  0.0.0.0/0            0.0.0.0/0            state NEW /* 998 log all ipv4 */ LOG flags 0 level 4
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* 999 drop all */ state NEW
DROP       all  --  0.0.0.0/0            0.0.0.0/0            state NEW /* 999 drop all ipv4 */

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
neutron-filter-top  all  --  0.0.0.0/0            0.0.0.0/0           
neutron-openvswi-FORWARD  all  --  0.0.0.0/0            0.0.0.0/0           
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
neutron-filter-top  all  --  0.0.0.0/0            0.0.0.0/0           
neutron-openvswi-OUTPUT  all  --  0.0.0.0/0            0.0.0.0/0           

Chain neutron-filter-top (2 references)
target     prot opt source               destination         
neutron-openvswi-local  all  --  0.0.0.0/0            0.0.0.0/0           

Chain neutron-openvswi-FORWARD (1 references)
target     prot opt source               destination         
neutron-openvswi-sg-chain  all  --  0.0.0.0/0            0.0.0.0/0            PHYSDEV match --physdev-out tapf2302411-f4 --physdev-is-bridged /* Direct traffic from the VM interface to the security group chain. */
neutron-openvswi-sg-chain  all  --  0.0.0.0/0            0.0.0.0/0            PHYSDEV match --physdev-in tapf2302411-f4 --physdev-is-bridged /* Direct traffic from the VM interface to the security group chain. */

Chain neutron-openvswi-INPUT (1 references)
target     prot opt source               destination         
neutron-openvswi-of2302411-f  all  --  0.0.0.0/0            0.0.0.0/0            PHYSDEV match --physdev-in tapf2302411-f4 --physdev-is-bridged /* Direct incoming traffic from VM to the security group chain. */

Chain neutron-openvswi-OUTPUT (1 references)
target     prot opt source               destination         

Chain neutron-openvswi-if2302411-f (1 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED /* Direct packets associated with a known session to the RETURN chain. */
RETURN     udp  --  0.0.0.0/0            192.168.0.9          udp spt:67 dpt:68
RETURN     udp  --  0.0.0.0/0            255.255.255.255      udp spt:67 dpt:68
RETURN     icmp --  0.0.0.0/0            0.0.0.0/0           
RETURN     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:22
DROP       all  --  0.0.0.0/0            0.0.0.0/0            state INVALID /* Drop packets that appear related to an existing connection (e.g. TCP ACK/FIN) but do not have an entry in conntrack. */
neutron-openvswi-sg-fallback  all  --  0.0.0.0/0            0.0.0.0/0            /* Send unmatched traffic to the fallback chain. */

Chain neutron-openvswi-local (1 references)
target     prot opt source               destination         

Chain neutron-openvswi-of2302411-f (2 references)
target     prot opt source               destination         
RETURN     udp  --  0.0.0.0              255.255.255.255      udp spt:68 dpt:67 /* Allow DHCP client traffic. */
neutron-openvswi-sf2302411-f  all  --  0.0.0.0/0            0.0.0.0/0           
RETURN     udp  --  0.0.0.0/0            0.0.0.0/0            udp spt:68 dpt:67 /* Allow DHCP client traffic. */
DROP       udp  --  0.0.0.0/0            0.0.0.0/0            udp spt:67 dpt:68 /* Prevent DHCP Spoofing by VM. */
RETURN     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED /* Direct packets associated with a known session to the RETURN chain. */
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           
DROP       all  --  0.0.0.0/0            0.0.0.0/0            state INVALID /* Drop packets that appear related to an existing connection (e.g. TCP ACK/FIN) but do not have an entry in conntrack. */
neutron-openvswi-sg-fallback  all  --  0.0.0.0/0            0.0.0.0/0            /* Send unmatched traffic to the fallback chain. */

Chain neutron-openvswi-sf2302411-f (1 references)
target     prot opt source               destination         
RETURN     all  --  192.168.0.9          0.0.0.0/0            MAC FA:16:3E:14:56:02 /* Allow traffic from defined IP/MAC pairs. */
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* Drop traffic without an IP/MAC allow rule. */

Chain neutron-openvswi-sg-chain (2 references)
target     prot opt source               destination         
neutron-openvswi-if2302411-f  all  --  0.0.0.0/0            0.0.0.0/0            PHYSDEV match --physdev-out tapf2302411-f4 --physdev-is-bridged /* Jump to the VM specific chain. */
neutron-openvswi-of2302411-f  all  --  0.0.0.0/0            0.0.0.0/0            PHYSDEV match --physdev-in tapf2302411-f4 --physdev-is-bridged /* Jump to the VM specific chain. */
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           

Chain neutron-openvswi-sg-fallback (2 references)
target     prot opt source               destination         
DROP       all  --  0.0.0.0/0            0.0.0.0/0            /* Default drop rule for unmatched traffic. */
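
A quick way to confirm whether the security-group plumbing is in place on a compute node is to count the neutron-managed chains; an empty result would mean the restart wiped them again. This check is illustrative (the grep pattern assumes the chain-name prefix shown in the output above):

```shell
# Count neutron-managed chains in the ruleset; 0 means the security-group
# chains were wiped. On the compute node you would pipe the live ruleset:
#   iptables -S | grep -c '^-N neutron-openvswi'
# A captured snippet is used here so the check runs without root.
rules='-N neutron-openvswi-INPUT
-N neutron-openvswi-sg-chain
-A INPUT -j neutron-openvswi-INPUT'
count=$(printf '%s\n' "$rules" | grep -c '^-N neutron-openvswi')
echo "$count neutron chains present"
```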

Comment 2 Alex Schultz 2018-03-28 18:50:10 UTC
You're missing https://review.openstack.org/#/c/551748/. We removed the iptables-save and did something else.

Comment 3 Marios Andreou 2018-04-02 13:08:11 UTC
Adding flags as this shows up in our untriaged list (via DFG:Upgrades weekly BZ triage call).

Comment 8 errata-xmlrpc 2018-06-27 13:49:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2086