Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1671776

Summary: [OVN] Traffic is not distributed from VM1 FIP to VM2 FIP even if DVR is enabled
Product: Red Hat OpenStack
Component: openvswitch
Version: 14.0 (Rocky)
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: high
Priority: high
Target Milestone: z4
Target Release: 14.0 (Rocky)
Keywords: Triaged, ZStream
Reporter: Daniel Alvarez Sanchez <dalvarez>
Assignee: lorenzo bianconi <lorenzo.bianconi>
QA Contact: Eran Kuris <ekuris>
Docs Contact:
CC: apevec, chrisw, fhallal, lmartins, lorenzo.bianconi, njohnston, rheslop, rhos-maint, rsafrono, tredaelli, tvignaud
Whiteboard:
Fixed In Version: openvswitch2.11-2.11.0-9.el7fdp
Doc Type: Bug Fix
Doc Text:
Previously, two VMs with floating IPs (FIPs) on different compute nodes would send traffic through the geneve tunnel, even with distributed virtual routing (DVR) enabled. With this patch, direct FIP-to-FIP communication is enabled with DVR, improving network efficiency.
Story Points: ---
Clone Of:
Clones: 1701183 (view as bug list)
Environment:
Last Closed: 2019-11-06 16:49:59 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1701183

Description Daniel Alvarez Sanchez 2019-02-01 15:57:31 UTC
Description of problem:

When two VMs, each with a floating IP, are located on different compute nodes and talk to each other through the floating IP addresses, the traffic is expected not to go through any controller/network node. However, the traffic is pushed via the tunnel interface.

On the other hand, if we try to reach a VM's FIP from the external network, the traffic is fully distributed.

Expected behavior:

When VM1 pings VM2's FIP, all traffic should flow directly from compute node to compute node and never through the tunnel interface.

$ ovn-nbctl find nat type=dnat_and_snat
_uuid               : dc792041-9b4c-4943-9d50-537beabf974c
external_ids        : {"neutron:fip_external_mac"="fa:16:3e:cf:1a:33", "neutron:fip_id"="3c958c0c-ca81-4f9b-86df-024cc8470a6d", "neutron:fip_port_id"="9b8bb3bf-fab3-42ca-8db1-a428270b632e", "neutron:revision_number"="1", "neutron:router_name"="neutron-701cf876-fc3e-4958-b7af-5498199d8677"}
external_ip         : "172.24.4.106"
external_mac        : "fa:16:3e:cf:1a:33"
logical_ip          : "10.0.0.21"
logical_port        : "9b8bb3bf-fab3-42ca-8db1-a428270b632e"
type                : dnat_and_snat

_uuid               : 0c27232d-059e-40a6-b0f9-5333c938169b
external_ids        : {}
external_ip         : "172.24.4.170"
external_mac        : "fa:16:3e:14:2c:0e"
logical_ip          : "10.0.0.18"
logical_port        : "c2be57b3-c443-4968-b4df-4c6ed4263cb8"
type                : dnat_and_snat
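For context: OVN applies a dnat_and_snat rule on the compute node hosting the VM (i.e. in a distributed fashion) only when the record carries both external_mac and logical_port, as the two records above do. A minimal sketch that checks this from `ovn-nbctl find nat` output (the parser and helper names are illustrative, not an OVN API):

```python
# Check that every dnat_and_snat NAT record carries the fields OVN
# needs to handle a FIP on the compute node: external_mac and
# logical_port. Assumes output in the format shown above.

def parse_nat_records(output):
    """Split `ovn-nbctl find nat` output into per-record field dicts.

    Records are separated by blank lines; each field line is
    `key : value` and partition() splits only at the first colon,
    so MAC addresses in the value are preserved.
    """
    records = []
    for block in output.strip().split("\n\n"):
        rec = {}
        for line in block.splitlines():
            key, _, value = line.partition(":")
            rec[key.strip()] = value.strip().strip('"')
        records.append(rec)
    return records

def is_distributed(rec):
    """A FIP NAT entry is applied on the hosting compute node only
    when both external_mac and logical_port are set."""
    return bool(rec.get("external_mac")) and bool(rec.get("logical_port"))

sample = """\
_uuid               : 0c27232d-059e-40a6-b0f9-5333c938169b
external_ip         : "172.24.4.170"
external_mac        : "fa:16:3e:14:2c:0e"
logical_ip          : "10.0.0.18"
logical_port        : "c2be57b3-c443-4968-b4df-4c6ed4263cb8"
type                : dnat_and_snat"""

for rec in parse_nat_records(sample):
    print(rec["external_ip"], "distributed:", is_distributed(rec))
```

Since both records above pass this check, the NB database is configured for distributed FIPs; the bug is in how ovn-controller translated them, not in the configuration.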

From VM1 (10.0.0.21), I ping 172.24.4.106 and can see the traffic on the tunnel interface of the controller node:


$ sudo tcpdump -i genev_sys_6081 -vvnnS not port 3784
tcpdump: listening on genev_sys_6081, link-type EN10MB (Ethernet), capture size 262144 bytes
15:42:19.374091 IP (tos 0x0, ttl 63, id 55722, offset 0, flags [DF], proto ICMP (1), length 84)
    10.0.0.18 > 172.24.4.106: ICMP echo request, id 21880, seq 9, length 64
15:42:19.374711 IP (tos 0x0, ttl 62, id 2993, offset 0, flags [none], proto ICMP (1), length 84)
    172.24.4.106 > 10.0.0.18: ICMP echo reply, id 21880, seq 9, length 64
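
One way to turn the capture above into a pass/fail check is to scan the tunnel capture for the addresses involved; with DVR working, none of the FIP/VM traffic should appear there at all. A minimal sketch, assuming tcpdump text output like the capture shown (the helper and the address set are illustrative, not part of any tool):

```python
# Flag FIP/VM traffic that shows up on the tunnel interface.
# The watched set holds the internal IPs and FIPs of the two VMs
# in this report.

import re

WATCHED = {"10.0.0.18", "10.0.0.21", "172.24.4.106", "172.24.4.170"}

def tunnel_leaks(capture):
    """Return (src, dst) pairs from capture lines where both
    endpoints belong to the watched FIP/VM address set."""
    leaks = []
    for line in capture.splitlines():
        m = re.search(r"(\d+\.\d+\.\d+\.\d+) > (\d+\.\d+\.\d+\.\d+):", line)
        if m and m.group(1) in WATCHED and m.group(2) in WATCHED:
            leaks.append((m.group(1), m.group(2)))
    return leaks

capture = """\
15:42:19.374091 IP (tos 0x0, ttl 63, id 55722, offset 0, flags [DF], proto ICMP (1), length 84)
    10.0.0.18 > 172.24.4.106: ICMP echo request, id 21880, seq 9, length 64"""

# On the broken system this flags the leaked flow; once the fix is
# applied, a fresh capture should yield an empty list.
print(tunnel_leaks(capture))  # [('10.0.0.18', '172.24.4.106')]
```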


I am not sure whether this also happens in OSP13 (OVS 2.9).

Comment 10 Roman Safronov 2019-10-31 17:05:47 UTC
Verified on 14.0-RHEL-7/2019-10-25.1 with openvswitch2.11-2.11.0-21.el7fdp.x86_64

Verified according to the scenario in the description. The problem does not occur: there is no traffic on the tunnel when pinging between the FIPs of two VMs running on different compute nodes.

Comment 12 errata-xmlrpc 2019-11-06 16:49:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3752