Bug 1720135 - [OSP15] Traffic between 2 VMs connected to different networks and same router is DNAT'ed
Summary: [OSP15] Traffic between 2 VMs connected to different networks and same router...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 15.0 (Stein)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: beta
Target Release: 15.0 (Stein)
Assignee: Slawek Kaplonski
QA Contact: Eran Kuris
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-13 08:31 UTC by Bernard Cafarelli
Modified: 2019-09-26 10:52 UTC (History)
12 users

Fixed In Version: openstack-neutron-14.0.3-0.20190704180411.9f4e596.el8ost
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 1725169 (view as bug list)
Environment:
Last Closed: 2019-09-21 11:23:06 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Launchpad 1834825 0 None None None 2019-07-01 07:34:11 UTC
OpenStack gerrit 667547 0 'None' MERGED Change order of creating vms and plug routers in scenario test 2020-10-13 17:10:53 UTC
OpenStack gerrit 668378 0 'None' MERGED Don't match input interface in POSTROUTING table 2020-10-13 17:10:53 UTC
Red Hat Product Errata RHEA-2019:2811 0 None None None 2019-09-21 11:23:31 UTC

Description Bernard Cafarelli 2019-06-13 08:31:07 UTC
neutron_tempest_plugin.scenario.test_connectivity.NetworkConnectivityTest.test_connectivity_through_2_routers fails consistently in several test configurations

The test VM looks good, but the connectivity check fails with this traceback:
  File "/usr/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_connectivity.py", line 111, in test_connectivity_through_2_routers
    ap1_sshclient, ap2_internal_port['fixed_ips'][0]['ip_address'])
  File "/usr/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/base.py", line 303, in check_remote_connectivity
    timeout=timeout))
  File "/usr/lib/python3.6/site-packages/unittest2/case.py", line 705, in assertTrue
    raise self.failureException(msg)
AssertionError: False is not true

Comment 2 Slawek Kaplonski 2019-06-26 21:43:07 UTC
I was debugging this failure on OSP-15 and RHEL 8, and it looks to me like it may be yet another ebtables issue.

In the qrouter- namespace, the iptables nat table contains a rule like:

-A neutron-l3-agent-POSTROUTING ! -i qg-24c1ea39-06 ! -o qg-24c1ea39-06 -m conntrack ! --ctstate DNAT -j ACCEPT

and the packets going from VM1 to VM2 in this test hit this rule and are accepted. Those packets therefore keep the fixed IP of VM1 as their source address (which is fine) and everything works as expected.

In the case of OSP-15 and RHEL 8, where iptables is in fact implemented on top of nftables, exactly the same rule shows up in the iptables-save output, but in nftables it looks like:

        chain neutron-l3-agent-POSTROUTING {
                iifname != "qg-30fe198f-f4" oifname != "qg-30fe198f-f4" ct state !=  counter packets 0 bytes 0 accept
        }

So the ct state is empty in this case, and packets sent from VM1 to VM2 don't match this rule; later such packets match some SNAT rules and are finally sent to VM2 with the floating IP assigned to VM1 as their source. That causes errors like:

kernel: IPv4: martian source 10.10.220.11 from 10.0.0.232, on dev qr-f7d5410f-81

on the node with the second router, and such packets are dropped there, so they never reach VM2.

I will probably need some help from nftables experts here.
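
(For reference, the comparison above can be reproduced with something like the following - the router UUID is a placeholder, not from this environment:

ip netns exec qrouter-<router-uuid> iptables-save -t nat | grep neutron-l3-agent-POSTROUTING
ip netns exec qrouter-<router-uuid> nft list chain ip nat neutron-l3-agent-POSTROUTING

The first command shows the rule with "! --ctstate DNAT" intact, while the second shows the empty "ct state != " rendering quoted above.)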

Comment 3 Slawek Kaplonski 2019-06-27 07:07:05 UTC
I checked it more deeply and it looks like the problem is even bigger.

Let's say I have a router with 2 different networks connected to it:

(overcloud) [stack@undercloud-0 ~]$ neutron router-port-list 99327078-3c48-4e74-80b9-80ea115b358f -c id -c fixed_ips | grep -v 169.254
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+----------------------------------------------------------------------------------------+
| id                                   | fixed_ips                                                                              |
+--------------------------------------+----------------------------------------------------------------------------------------+
| 30fe198f-f4f8-4b5c-8435-022edd2b6354 | {"subnet_id": "3cd4964f-ab75-41e8-bbde-af4a382f9eee", "ip_address": "10.0.0.227"}      |
| 4cee624c-841f-4111-b183-bcdeb1a7aff3 | {"subnet_id": "44393b4b-2118-42ef-9093-97dd65ea47fb", "ip_address": "192.168.1.1"}     |
| 92cd6d8c-8d70-4077-bdf5-119b3840df7d | {"subnet_id": "ffd2a346-aa95-4928-93a1-e3b9d2a89dcf", "ip_address": "10.10.210.254"}   |
+--------------------------------------+----------------------------------------------------------------------------------------+

Here 10.0.0.227 is the external gateway and the other 2 are just tenant networks connected to the router.
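
(Side note: since the neutron CLI prints a deprecation warning above, roughly the same listing can be obtained with the openstack CLI, for example:

openstack port list --router 99327078-3c48-4e74-80b9-80ea115b358f -c ID -c "Fixed IP Addresses" | grep -v 169.254

The column names here are the usual openstack client ones and may differ between releases.)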

Now there are VMs:

(overcloud) [stack@undercloud-0 ~]$ nova list --all-tenants
+--------------------------------------+--------------------------------+----------------------------------+--------+------------+-------------+------------------------------------------------------------+
| ID                                   | Name                           | Tenant ID                        | Status | Task State | Power State | Networks                                                   |
+--------------------------------------+--------------------------------+----------------------------------+--------+------------+-------------+------------------------------------------------------------+
| 654b28e7-51b7-4d03-925b-81feb6d69dc4 | sk-vm1                         | 9ea1d902f87a4131b844c785e4124a54 | ACTIVE | -          | Running     | test-tenant-net=192.168.1.78                               |
| 3252cc86-ca45-4d45-8dbd-760bef40c140 | tempest-server-test-1259508729 | d84ee96801574c11b6f333eb573f5e35 | ACTIVE | -          | Running     | tempest-test-network--1271101399=10.10.210.165, 10.0.0.232 |
+--------------------------------------+--------------------------------+----------------------------------+--------+------------+-------------+------------------------------------------------------------+

When I ssh to the VM with FIP 10.0.0.232 and ping 192.168.1.78 from it, here is what I see on sk-vm1's tap device:

[root@compute-1 heat-admin]# tcpdump -i tap094a999f-04 -nnel
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap094a999f-04, link-type EN10MB (Ethernet), capture size 262144 bytes
06:58:27.801142 fa:16:3e:11:dd:e3 > fa:16:3e:7d:68:e5, ethertype IPv4 (0x0800), length 98: 10.0.0.232 > 192.168.1.78: ICMP echo request, id 61, seq 5, length 64
06:58:27.801499 fa:16:3e:7d:68:e5 > fa:16:3e:11:dd:e3, ethertype IPv4 (0x0800), length 98: 192.168.1.78 > 10.0.0.232: ICMP echo reply, id 61, seq 5, length 64
06:58:27.814937 fa:16:3e:7d:68:e5 > fa:16:3e:11:dd:e3, ethertype ARP (0x0806), length 42: Request who-has 192.168.1.1 tell 192.168.1.78, length 28
06:58:27.815747 fa:16:3e:11:dd:e3 > fa:16:3e:7d:68:e5, ethertype ARP (0x0806), length 42: Reply 192.168.1.1 is-at fa:16:3e:11:dd:e3, length 28
06:58:28.801636 fa:16:3e:11:dd:e3 > fa:16:3e:7d:68:e5, ethertype IPv4 (0x0800), length 98: 10.0.0.232 > 192.168.1.78: ICMP echo request, id 61, seq 6, length 64
06:58:28.802069 fa:16:3e:7d:68:e5 > fa:16:3e:11:dd:e3, ethertype IPv4 (0x0800), length 98: 192.168.1.78 > 10.0.0.232: ICMP echo reply, id 61, seq 6, length 64
06:58:29.802037 fa:16:3e:11:dd:e3 > fa:16:3e:7d:68:e5, ethertype IPv4 (0x0800), length 98: 10.0.0.232 > 192.168.1.78: ICMP echo request, id 61, seq 7, length 64
06:58:29.802386 fa:16:3e:7d:68:e5 > fa:16:3e:11:dd:e3, ethertype IPv4 (0x0800), length 98: 192.168.1.78 > 10.0.0.232: ICMP echo reply, id 61, seq 7, length 64
^C
8 packets captured
8 packets received by filter
0 packets dropped by kernel

So packets which should be routed between the 2 internal networks are in fact first SNAT'ed and sent to the second network with the floating IP as the source IP address.
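
(One way to confirm the unexpected SNAT from the router side would be to inspect the conntrack table in the qrouter namespace while the ping is running - a rough sketch, run on the node hosting the router namespace and requiring conntrack-tools; the router UUID is a placeholder:

ip netns exec qrouter-<router-uuid> conntrack -L -d 192.168.1.78

An entry whose reply tuple points back at 10.0.0.232 rather than 10.10.210.165 would confirm that traffic between the two internal networks is being SNAT'ed to the floating IP.)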

Comment 6 Phil Sutter 2019-06-27 14:00:48 UTC
Hi!

(In reply to Slawek Kaplonski from comment #2)
> I was debugging this failure in OSP-15 and RHEL 8 and it looks for me that
> it may be yet another ebtables issue.
> 
> In qrouter- namespace in iptables nat table there is rule like:
> 
> -A neutron-l3-agent-POSTROUTING ! -i qg-24c1ea39-06 ! -o qg-24c1ea39-06 -m
> conntrack ! --ctstate DNAT -j ACCEPT
> 
> and packets which are going in this test from VM1 to VM2 are hitting this
> rule and are accepted forward. Thus those packets as src IP address got
> fixed IP of VM1 (which is fine) and all works as expected.
> 
> In case of OSP-15 and RHEL8 where iptables is implemented in fact using
> ebtables there is also exactly same rule when doing iptables-save command
> but in Nftables it looks like:
> 
>         chain neutron-l3-agent-POSTROUTING {
>                 iifname != "qg-30fe198f-f4" oifname != "qg-30fe198f-f4" ct
> state !=  counter packets 0 bytes 0 accept
>         }

Yes, this is a bug - it seems the nft tool has no translation for the DNAT
state value. But this is not the cause of your problems: iptables-nft uses the
xtables conntrack match internally, so the match should work even though nft
can't print it properly. This is also clear from the fact that
'iptables-nft -vnL' correctly prints the rule.
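
For illustration, that check would be run inside the router namespace along these lines (the UUID is a placeholder):

ip netns exec qrouter-<router-uuid> iptables-nft -t nat -vnL neutron-l3-agent-POSTROUTING

and the DNAT ctstate shows up correctly in that output, unlike in the nft listing.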



> So ct state is empty in this case. And packets send from VM1 to VM2 don't
> match this rule and later such packets are matched to some SNAT rules and
> finally are send to VM2 with floating IP assigned to VM1. That cause errors
> like:

One thing I could imagine causing the above rule to still not work is the use
of 'iifname' and 'oifname'. If the interfaces were renamed after creating the
rule, it might stop working.

Is there a running system where I could investigate the issue as it happens?

Cheers, Phil

Comment 7 Slawek Kaplonski 2019-06-28 08:16:01 UTC
Hi Phil,

(In reply to Phil Sutter from comment #6)
> Hi!
> 
> (In reply to Slawek Kaplonski from comment #2)
> > I was debugging this failure in OSP-15 and RHEL 8 and it looks for me that
> > it may be yet another ebtables issue.
> > 
> > In qrouter- namespace in iptables nat table there is rule like:
> > 
> > -A neutron-l3-agent-POSTROUTING ! -i qg-24c1ea39-06 ! -o qg-24c1ea39-06 -m
> > conntrack ! --ctstate DNAT -j ACCEPT
> > 
> > and packets which are going in this test from VM1 to VM2 are hitting this
> > rule and are accepted forward. Thus those packets as src IP address got
> > fixed IP of VM1 (which is fine) and all works as expected.
> > 
> > In case of OSP-15 and RHEL8 where iptables is implemented in fact using
> > ebtables there is also exactly same rule when doing iptables-save command
> > but in Nftables it looks like:
> > 
> >         chain neutron-l3-agent-POSTROUTING {
> >                 iifname != "qg-30fe198f-f4" oifname != "qg-30fe198f-f4" ct
> > state !=  counter packets 0 bytes 0 accept
> >         }
> 
> Yes, this is a bug - seems like nft tool has no translation for DNAT state
> value. But this is not the cause of your problems: iptables-nft uses xtables
> conntrack match internally, so the match should work even though nft can't
> print it properly. This is also clear from the fact that 'iptables-nft -vnL'
> correctly prints the rule.

I'm not an ip/nftables expert for sure, but that is what I found. On other systems those packets match this specific rule and are sent to the second VM; on RHEL 8 this rule is not matched. Maybe the reason is different from this DNAT ctstate, but it would be good if you could take a look at it yourself :)

> 
> 
> 
> > So ct state is empty in this case. And packets send from VM1 to VM2 don't
> > match this rule and later such packets are matched to some SNAT rules and
> > finally are send to VM2 with floating IP assigned to VM1. That cause errors
> > like:
> 
> One thing I could imagine why the above rule still does not work is the use
> of
> 'iifname' and 'oifname'. If the interfaces were renamed after creating the
> rule, it might stop working.

The interface names weren't changed for sure. Neutron does not change them after creation.

> 
> Is there a running system where I could investigate the issue as it happens?

Yes, we have an env ready where you can take a look at it.

> Cheers, Phil

Comment 8 Slawek Kaplonski 2019-06-28 09:45:23 UTC
After a debugging session with Phil we found out that the packets probably don't match on the input interface.

Below is the nft ruleset from the router's namespace:

[root@controller-0 heat-admin]# ip netns exec qrouter-bf03a2c0-e109-493a-bc86-5d58ea0b535d nft list ruleset
table ip filter {
        chain INPUT {
                type filter hook input priority 0; policy accept;
                counter packets 33083 bytes 1327062 jump neutron-l3-agent-INPUT
        }

        chain FORWARD {
                type filter hook forward priority 0; policy accept;
                counter packets 1514 bytes 119807 jump neutron-filter-top
                counter packets 1514 bytes 119807 jump neutron-l3-agent-FORWARD
        }

        chain OUTPUT {
                type filter hook output priority 0; policy accept;
                counter packets 33063 bytes 1326072 jump neutron-filter-top
                counter packets 33063 bytes 1326072 jump neutron-l3-agent-OUTPUT
        }

        chain neutron-filter-top {
                counter packets 34577 bytes 1445879 jump neutron-l3-agent-local
        }

        chain neutron-l3-agent-FORWARD {
                counter packets 1514 bytes 119807 jump neutron-l3-agent-scope
        }

        chain neutron-l3-agent-INPUT {
                mark and 0xffff == 0x1 counter packets 96 bytes 6960 accept
                meta l4proto tcp tcp dport 9697 counter packets 0 bytes 0 drop
        }

        chain neutron-l3-agent-OUTPUT {
        }

        chain neutron-l3-agent-local {
        }

        chain neutron-l3-agent-scope {
                oifname "qr-58fa6ffd-7a" mark and 0xffff0000 != 0x4000000 counter packets 0 bytes 0 drop
                oifname "qr-81336cda-f0" mark and 0xffff0000 != 0x4000000 counter packets 0 bytes 0 drop
        }
}
table ip mangle {
        chain PREROUTING {
                type filter hook prerouting priority -150; policy accept;
                counter packets 34641 bytes 1448629 jump neutron-l3-agent-PREROUTING
        }

        chain INPUT {
                type filter hook input priority -150; policy accept;
                counter packets 33083 bytes 1327062 jump neutron-l3-agent-INPUT
        }

        chain FORWARD {
                type filter hook forward priority -150; policy accept;
                counter packets 1514 bytes 119807 jump neutron-l3-agent-FORWARD
        }

        chain OUTPUT {
                type route hook output priority -150; policy accept;
                counter packets 33063 bytes 1326072 jump neutron-l3-agent-OUTPUT
        }

        chain POSTROUTING {
                type filter hook postrouting priority -150; policy accept;
                counter packets 34577 bytes 1445879 jump neutron-l3-agent-POSTROUTING
        }

        chain neutron-l3-agent-FORWARD {
        }

        chain neutron-l3-agent-INPUT {
        }

        chain neutron-l3-agent-OUTPUT {
        }

        chain neutron-l3-agent-POSTROUTING {
                oifname "qg-d2442f84-1e" ct mark and 0xffff0000 == 0x0 counter packets 1 bytes 60 ct mark set mark and 0xffff0000
        }

        chain neutron-l3-agent-PREROUTING {
                counter packets 34641 bytes 1448629 jump neutron-l3-agent-mark
                counter packets 34641 bytes 1448629 jump neutron-l3-agent-scope
                ct mark and 0xffff0000 != 0x0 counter packets 884 bytes 66935 meta mark set ct mark and 0xffff0000
                counter packets 34641 bytes 1448629 jump neutron-l3-agent-floatingip
                iifname "qr-*" meta l4proto tcp ip daddr 169.254.169.254 tcp dport 80 counter packets 96 bytes 6960 meta mark set mark and 0xffff0000 xor 0x1
        }

        chain neutron-l3-agent-float-snat {
                ct mark and 0xffff0000 == 0x0 counter packets 0 bytes 0 ct mark set mark and 0xffff0000
        }

        chain neutron-l3-agent-floatingip {
        }

        chain neutron-l3-agent-mark {
                iifname "qg-d2442f84-1e" counter packets 471 bytes 31433 meta mark set mark and 0xffff0000 xor 0x2
        }

        chain neutron-l3-agent-scope {
                iifname "qr-58fa6ffd-7a" counter packets 57 bytes 4788 meta mark set mark and 0xffff xor 0x4000000
                iifname "qr-81336cda-f0" counter packets 1085 bytes 91288 meta mark set mark and 0xffff xor 0x4000000
                iifname "qg-d2442f84-1e" counter packets 471 bytes 31433 meta mark set mark and 0xffff xor 0x4000000
        }
}
table ip nat {
        chain PREROUTING {
                type nat hook prerouting priority -100; policy accept;
                counter packets 73 bytes 3988 jump neutron-l3-agent-PREROUTING
        }

        chain INPUT {
                type nat hook input priority 100; policy accept;
        }

        chain POSTROUTING {
                type nat hook postrouting priority 100; policy accept;
                counter packets 12 bytes 896 jump neutron-l3-agent-POSTROUTING
                counter packets 7 bytes 476 jump neutron-postrouting-bottom
        }

        chain OUTPUT {
                type nat hook output priority -100; policy accept;
                counter packets 2 bytes 80 jump neutron-l3-agent-OUTPUT
        }

        chain neutron-l3-agent-OUTPUT {
                ip daddr 10.0.0.226 counter packets 0 bytes 0 dnat to 10.10.210.192
        }

        chain neutron-l3-agent-POSTROUTING {
                iifname != "qg-d2442f84-1e" oifname != "qg-d2442f84-1e" ct state !=  counter packets 0 bytes 0 accept
                iifname != "qg-d2442f84-1e" oifname != "qg-d2442f84-1e" ct state  counter packets 0 bytes 0 accept
                ct state  counter packets 0 bytes 0 accept
                oifname != "qg-d2442f84-1e" counter packets 1 bytes 84 accept
        }

        chain neutron-l3-agent-PREROUTING {
                iifname "qr-*" meta l4proto tcp ip daddr 169.254.169.254 tcp dport 80 counter packets 16 bytes 960 redirect to :9697
                ip daddr 10.0.0.226 counter packets 1 bytes 60 dnat to 10.10.210.192
        }

        chain neutron-l3-agent-float-snat {
                ip saddr 10.10.210.192 counter packets 4 bytes 336 snat to 10.0.0.226 fully-random
        }

        chain neutron-l3-agent-snat {
                counter packets 7 bytes 476 jump neutron-l3-agent-float-snat
                oifname "qg-d2442f84-1e" counter packets 0 bytes 0 snat to 10.0.0.192 fully-random
                mark and 0xffff != 0x2 ct state  counter packets 0 bytes 0 snat to 10.0.0.192 fully-random
        }

        chain neutron-postrouting-bottom {
                 counter packets 7 bytes 476 jump neutron-l3-agent-snat
        }
}
table ip raw {
        chain PREROUTING {
                type filter hook prerouting priority -300; policy accept;
                counter packets 34641 bytes 1448629 jump neutron-l3-agent-PREROUTING
        }

        chain OUTPUT {
                type filter hook output priority -300; policy accept;
                counter packets 33063 bytes 1326072 jump neutron-l3-agent-OUTPUT
        }

        chain neutron-l3-agent-OUTPUT {
        }

        chain neutron-l3-agent-PREROUTING {
        }
}
table ip6 filter {
        chain INPUT {
                type filter hook input priority 0; policy accept;
                counter packets 276 bytes 24680 jump neutron-l3-agent-INPUT
        }

        chain FORWARD {
                type filter hook forward priority 0; policy accept;
                counter packets 0 bytes 0 jump neutron-filter-top
                counter packets 0 bytes 0 jump neutron-l3-agent-FORWARD
        }

        chain OUTPUT {
                type filter hook output priority 0; policy accept;
                counter packets 26 bytes 2488 jump neutron-filter-top
                counter packets 26 bytes 2488 jump neutron-l3-agent-OUTPUT
        }

        chain neutron-filter-top {
                counter packets 26 bytes 2488 jump neutron-l3-agent-local
        }

        chain neutron-l3-agent-FORWARD {
                counter packets 0 bytes 0 jump neutron-l3-agent-scope
        }

        chain neutron-l3-agent-INPUT {
        }

        chain neutron-l3-agent-OUTPUT {
        }

        chain neutron-l3-agent-local {
        }

        chain neutron-l3-agent-scope {
        }
}
table ip6 mangle {
        chain PREROUTING {
                type filter hook prerouting priority -150; policy accept;
                counter packets 682 bytes 66540 jump neutron-l3-agent-PREROUTING
        }

        chain INPUT {
                type filter hook input priority -150; policy accept;
                counter packets 276 bytes 24680 jump neutron-l3-agent-INPUT
        }

        chain FORWARD {
                type filter hook forward priority -150; policy accept;
                counter packets 0 bytes 0 jump neutron-l3-agent-FORWARD
        }

        chain OUTPUT {
                type route hook output priority -150; policy accept;
                counter packets 26 bytes 2488 jump neutron-l3-agent-OUTPUT
        }

        chain POSTROUTING {
                type filter hook postrouting priority -150; policy accept;
                counter packets 26 bytes 2488 jump neutron-l3-agent-POSTROUTING
        }

        chain neutron-l3-agent-FORWARD {
        }

        chain neutron-l3-agent-INPUT {
        }

        chain neutron-l3-agent-OUTPUT {
        }

        chain neutron-l3-agent-POSTROUTING {
        }

        chain neutron-l3-agent-PREROUTING {
                counter packets 682 bytes 66540 jump neutron-l3-agent-scope
                ct mark and 0xffff0000 != 0x0 counter packets 0 bytes 0 meta mark set ct mark and 0xffff0000
        }

        chain neutron-l3-agent-scope {
        }
}
table ip6 raw {
        chain PREROUTING {
                type filter hook prerouting priority -300; policy accept;
                counter packets 682 bytes 66540 jump neutron-l3-agent-PREROUTING
        }

        chain OUTPUT {
                type filter hook output priority -300; policy accept;
                counter packets 26 bytes 2488 jump neutron-l3-agent-OUTPUT
        }

        chain neutron-l3-agent-OUTPUT {
        }

        chain neutron-l3-agent-PREROUTING {
        }
}
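
(Not something from this session, but for anyone reproducing this: one way to see exactly which nat rule the ICMP packets hit is nftables' trace infrastructure - roughly, with the router UUID and the source address as placeholders:

ip netns exec qrouter-<router-uuid> nft insert rule ip raw PREROUTING ip saddr 10.10.210.165 meta nftrace set 1
ip netns exec qrouter-<router-uuid> nft monitor trace

The first command marks matching packets for tracing in the earliest base chain of this namespace (raw PREROUTING, priority -300 above); the second prints a per-rule trace for them.)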

Comment 9 Phil Sutter 2019-06-28 14:59:50 UTC
Hi,

Thanks for your input and the enlightening debugging session!

I've discussed the issue with a colleague and he pointed out that netfilter
postrouting hooks don't provide the input interface. This is not new and is
common to both iptables and nftables. The difference is how the match behaves
in this situation: with iptables, the comparison simply happens against an
empty string. With nftables, though, rule processing aborts because there is
no data to compare against - the rule doesn't match. The inverted match
exposes the difference: for iptables the result is always true, while for
nftables it is always false.

We will discuss the problem at the upcoming netfilter workshop (second week of
July). The difference in behaviour between iptables-legacy and iptables-nft is
definitely a bug which has to be solved.

What might be interesting news for you is that this input interface check in
the postrouting chain is not effective and never was - even with legacy
iptables (e.g. in RHEL 7), the rule would match a packet originating from that
interface as long as everything else is OK. So long story short, you may
simply drop the input interface parameter from that rule - not just as a
workaround but permanently.
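
In other words, taking the rule from comment 2, dropping the input interface match would turn:

-A neutron-l3-agent-POSTROUTING ! -i qg-24c1ea39-06 ! -o qg-24c1ea39-06 -m conntrack ! --ctstate DNAT -j ACCEPT

into something like:

-A neutron-l3-agent-POSTROUTING ! -o qg-24c1ea39-06 -m conntrack ! --ctstate DNAT -j ACCEPT

which is essentially what the linked upstream change "Don't match input interface in POSTROUTING table" ends up doing.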

I'll clone this ticket for nftables to track progress.

Cheers, Phil

Comment 10 Slawek Kaplonski 2019-07-01 07:35:10 UTC
Thanks a lot Phil for all your investigation on this.
I reported an upstream bug against Neutron and will propose a patch to drop the input interface match from this rule.

Comment 11 Slawek Kaplonski 2019-07-01 08:11:42 UTC
I applied the patch proposed upstream manually on the test env and the result was:

[stack@undercloud-0 tempest-dir]$ tempest run -vv --regex neutron_tempest_plugin.scenario.test_connectivity.NetworkConnectivityTest.test_connectivity_through_2_routers
tempest initialize_app
prepare_to_run_command TempestRun
{0} neutron_tempest_plugin.scenario.test_connectivity.NetworkConnectivityTest.test_connectivity_through_2_routers [110.297019s] ... ok

======
Totals
======
Ran: 1 tests in 110.2970 sec.
 - Passed: 1
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 110.2970 sec.

So it looks like we will be good with this test :)
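
(A quick way to double-check the rendered rule after the patch - the router UUID is a placeholder - is to list the chain again in the router namespace:

ip netns exec qrouter-<router-uuid> nft list chain ip nat neutron-l3-agent-POSTROUTING

and confirm the accept rule now carries only the oifname != "qg-..." match, with no iifname part.)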

Comment 21 Slawek Kaplonski 2019-07-11 14:28:58 UTC
@Lon, yes, I think that this one is required for beta. And it's fixed with https://review.opendev.org/#/c/668378/ which is already in stable/stein and in OSP-15.
The title of this BZ was misleading, and that's why I think Tomer set it as depending on BZ 1729007 (which isn't a beta blocker for sure).
This one can now be verified by QA and should be good to go IMO.

Comment 22 Eran Kuris 2019-07-11 15:04:36 UTC
(In reply to Slawek Kaplonski from comment #21)
> @Lon, yes, I think that this one is required for beta. And it's fixed with
> https://review.opendev.org/#/c/668378/ which is already in stable/stein and
> in OSP-15.
> Title of this BZ was misleading and that's why I think Tomer set it as
> depended on BZ 1729007 (which isn't beta blocker for sure).
> This one can now be verified by QA and should be good to go IMO.

Slawek,

According to Tomer's check, test_connectivity_through_2_routers fails 100% of the time with the latest puddle. We will execute the test a few more times and gather statistics on successes/failures.

Comment 23 Bernard Cafarelli 2019-07-11 15:13:51 UTC
It does work on several jobs; as Slawek pointed out, the other lingering failures are separate from the DNAT issue itself. Let's keep them separate.

Comment 27 Tomer 2019-07-17 13:15:14 UTC
It was verified on -
Puddle - RHOS_TRUNK-15.0-RHEL-8-20190714.n.0

RPM - openstack-neutron-14.0.3-0.20190704180411.9f4e596.el8ost.noarch.rpm

Comment 30 errata-xmlrpc 2019-09-21 11:23:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811

