Bug 1465178 - Poor North-South Performance when using openvswitch firewall driver
Summary: Poor North-South Performance when using openvswitch firewall driver
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Jakub Libosvar
QA Contact: Toni Freger
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-06-26 22:27 UTC by Sai Sindhur Malleni
Modified: 2023-09-14 03:59 UTC (History)
7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-20 13:00:51 UTC
Target Upstream Version:
Embargoed:



Description Sai Sindhur Malleni 2017-06-26 22:27:41 UTC
Description of problem:
Comparing ML2/OVS with iptables against ML2/OVS with the openvswitch firewall, the latter seems to perform better in almost all cases except one: the North-South traffic case in which the sender and receiver are on the same compute node (page 39 in [1]).

The test setup was as follows:
1. Run 1 to 8 pairs of VMs (sender-receiver) that all reside on the same hypervisor.
2. Each VM of a pair is on a separate Neutron network.
3. Each Neutron network is connected to a Neutron router whose gateway is on the external network (L3 North-South connectivity).
4. All VMs have Floating IPs.
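The steps above can be sketched as a small dry-run harness. The floating IPs below are hypothetical placeholders; in a real run each command would be executed inside the corresponding VM rather than echoed:

```shell
# Sketch of the pair test described above (hypothetical FIPs).
# For each of N pairs, the receiver runs an iperf server bound to
# its floating IP and the sender targets that FIP. Commands are
# printed as a dry run.
gen_pair_cmds() {
    npairs=$1
    i=1
    while [ "$i" -le "$npairs" ]; do
        fip="10.0.0.1$i"                     # hypothetical receiver FIP
        echo "receiver $i: iperf -s -B $fip"
        echo "sender   $i: iperf -c $fip -t 60"
        i=$((i + 1))
    done
}

gen_pair_cmds 8
```

Running the generated sender commands concurrently for 1 through 8 pairs reproduces the scaling sweep used in the results below.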

Although I agree this is a synthetic test, with the senders and receivers both having FIPs and residing on the same hypervisor, it gives us an estimate of the performance. If we move away from the model of hosting the senders and receivers within the OpenStack cloud, we would need one iperf server for every client sending traffic (an iperf server can't support multiple clients), so we would need n iperf servers hosted externally to test cases where n clients are sending traffic.

Back to the results: in the North-South case where the sender and receiver are on the same hypervisor, each with a FIP, ML2/OVS with the openvswitch firewall driver performs poorly compared to ML2/OVS with iptables in the TCP Download and TCP RR tests.

To give you numbers, the sum of the throughput/transactions from the 8 clients is as follows:

TCP Download (Mbps):
  iptables      openvswitch
  63870.19617   21848.51885

TCP RR (transactions per second):
  iptables      openvswitch
  85583.65      68512.77

When 1 client is used, the total throughput in the two cases is about the same; however, as the number of clients increases, the throughput gap keeps widening (the openvswitch firewall driver doesn't scale with client count the way iptables does from a throughput perspective). I wonder if it has something to do with the number of conntrack zones or the number of times a packet passes through conntrack.
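One rough way to check the conntrack-zone hypothesis is to dump the br-int flows and count the distinct zone values referenced by ct() actions. The flow lines below are illustrative samples, not real output from this deployment; on a live compute node the input would come from `ovs-ofctl -O OpenFlow13 dump-flows br-int` instead:

```shell
# Count distinct conntrack zones referenced in an OVS flow dump.
# The sample dump is illustrative; feed real output from
# "ovs-ofctl -O OpenFlow13 dump-flows br-int" on a compute node.
count_ct_zones() {
    grep -o 'zone=[^,)]*' | sort -u | wc -l
}

sample_flows='table=71, priority=70 actions=ct(commit,zone=1)
table=71, priority=70 actions=ct(commit,zone=2)
table=82, priority=50 actions=ct(zone=1,table=72)'

printf '%s\n' "$sample_flows" | count_ct_zones
```

Comparing the zone count (and the number of ct() recirculations a packet hits) between the 1-pair and 8-pair runs might show whether conntrack traversal is the bottleneck.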


Version-Release number of selected component (if applicable):
10

How reproducible:
100%

Steps to Reproduce:
1. Run iperf tests on setup mentioned above
2.
3.

Actual results:
Performance with openvswitch firewall driver is lower than with iptables firewall driver

Expected results:
On par or better performance than iptables firewall driver

Additional info:

[1]- https://docs.google.com/a/redhat.com/document/d/1coNcfUPA-MqOiPJH4nCnT98SIKvG-W0hxSmEFbXOyAY/edit?usp=sharing

Comment 4 Red Hat Bugzilla 2023-09-14 03:59:50 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

