Bug 1465178

Summary: Poor North-South Performance when using openvswitch firewall driver
Product: Red Hat OpenStack
Reporter: Sai Sindhur Malleni <smalleni>
Component: openstack-neutron
Assignee: Jakub Libosvar <jlibosva>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Toni Freger <tfreger>
Severity: unspecified
Priority: unspecified
Version: 10.0 (Newton)
CC: amuller, beagles, chrisw, jlibosva, nyechiel, smalleni, srevivo
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Last Closed: 2017-11-20 13:00:51 UTC
Type: Bug

Description Sai Sindhur Malleni 2017-06-26 22:27:41 UTC
Description of problem:
Comparing ML2/OVS with the iptables firewall driver against ML2/OVS with the openvswitch firewall driver, the latter performs better in almost every case except one: North-South traffic when the sender and receiver are on the same compute node (page 39 in [1]).

The test setup was as follows (a sketch of the corresponding neutron CLI calls is given after this list):
1. Run 1 to 8 pairs of VMs (sender-receiver) that all reside on the same hypervisor.
2. Each VM of a pair is on a separate Neutron network.
3. Each Neutron network is connected to a Neutron router whose gateway is on the external network (L3 North-South connectivity).
4. All VMs have Floating IPs.
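
For one sender-receiver pair, the topology can be recreated roughly as below. This is an illustrative sketch using the Newton-era neutron CLI; the network names, CIDRs, and IDs in angle brackets are placeholders, not the exact values used in the test. Repeat for the second VM of the pair, and for each additional pair:

  # One network + router per VM, with the router gateway on the external network:
  neutron net-create net-a
  neutron subnet-create net-a 10.0.1.0/24 --name subnet-a
  neutron router-create router-a
  neutron router-gateway-set router-a <external-net>
  neutron router-interface-add router-a subnet-a
  # Boot the VM on net-a, then attach a floating IP to its port:
  neutron floatingip-create <external-net>
  neutron floatingip-associate <floatingip-id> <vm-port-id>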

Although I agree this is a synthetic test, with the senders and receivers both having FIPs and residing on the same hypervisor, it gives us an estimate of the performance. If we moved away from hosting the senders and receivers within the OpenStack cloud, we would need one iperf server for every client sending traffic (an iperf server can't serve multiple clients), i.e. n iperf servers hosted somewhere externally to test the case where n clients are sending traffic.

Back to the results: in the North-South case where sender and receiver are on the same hypervisor, each with a FIP, ML2/OVS with the openvswitch firewall driver performs poorly compared to ML2/OVS with iptables in the TCP Download and TCP RR tests.

To give you numbers, the sum of the throughput/transactions from the 8 clients is as follows:

TCP Download (Mbps):
  iptables        openvswitch
  63870.19617     21848.51885

TCP RR (transactions per second):
  iptables        openvswitch
  85583.65        68512.77

When 1 client is used, the total throughput in the two cases is about the same; however, as the number of clients increases, the throughput difference keeps growing (the openvswitch firewall driver doesn't scale with client count the way iptables does from a throughput perspective). I wonder whether it has something to do with the number of conntrack zones or the number of times a packet passes through conntrack; some commands to inspect this are sketched below.
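
For anyone reproducing this, conntrack pressure and recirculation can be observed on the compute node while the test runs. A minimal sketch, assuming conntrack-tools and the standard OVS utilities are installed, and that br-int is the Neutron integration bridge:

  # Kernel conntrack entry count (both firewall drivers use the kernel table):
  conntrack -C
  # Dump the OVS datapath conntrack table, which includes the zone of each entry:
  ovs-appctl dpctl/dump-conntrack
  # Count ct() actions in the br-int flow table; each match forces a
  # conntrack lookup and a recirculation of the packet:
  ovs-ofctl -O OpenFlow13 dump-flows br-int | grep -c 'ct('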


Version-Release number of selected component (if applicable):
10

How reproducible:
100%

Steps to Reproduce:
1. Run iperf tests on the setup described above (example invocations below).
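
The original test harness isn't attached here; the following is a minimal hand-run equivalent, assuming iperf3 for the throughput tests and netperf for the TCP RR test, with <receiver-fip> as a placeholder for the receiver VM's floating IP:

  # On the receiver VM:
  iperf3 -s
  # On the sender VM -- TCP upload:
  iperf3 -c <receiver-fip> -t 60
  # TCP download (reverse mode; traffic flows receiver -> sender):
  iperf3 -c <receiver-fip> -t 60 -R
  # TCP RR (request/response) is a netperf test, not an iperf one:
  netserver                          # on the receiver VM
  netperf -H <receiver-fip> -t TCP_RR -l 60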

Actual results:
Performance with the openvswitch firewall driver is lower than with the iptables firewall driver.

Expected results:
Performance on par with or better than the iptables firewall driver.

Additional info:

[1]- https://docs.google.com/a/redhat.com/document/d/1coNcfUPA-MqOiPJH4nCnT98SIKvG-W0hxSmEFbXOyAY/edit?usp=sharing

Comment 4 Red Hat Bugzilla 2023-09-14 03:59:50 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days