Bug 1465178 - Poor North-South Performance when using openvswitch firewall driver
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-neutron
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Jakub Libosvar
QA Contact: Toni Freger
Depends On:
Reported: 2017-06-26 18:27 EDT by Sai Sindhur Malleni
Modified: 2017-11-20 08:00 EST
CC: 7 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2017-11-20 08:00:51 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
jlibosva: needinfo? (smalleni)

Attachments: None
Description Sai Sindhur Malleni 2017-06-26 18:27:41 EDT
Description of problem:
Comparing ML2/OVS with the iptables firewall driver against ML2/OVS with the openvswitch firewall driver, the latter performs better in almost all cases except one: North-South traffic when the sender and receiver are on the same compute node (page 39 in [1]).

The test setup was as follows:
1. Run 1 to 8 pairs of VMs (sender-receiver) that all reside on the same hypervisor.
2. Each VM of a pair is on a separate Neutron network.
3. Each Neutron network is connected to a Neutron router whose gateway is on the external network (L3 North-South connectivity).
4. All VMs have Floating IPs.

Although I agree this is a synthetic test, with the senders and receivers both having FIPs and residing on the same hypervisor, it gives us an estimate of the performance. If we moved away from hosting the senders and receivers within the OpenStack cloud, we would need one iperf server for every client sending traffic (an iperf server can't serve multiple clients), so we would have to host n iperf servers externally to test cases where n clients are sending traffic.

Back to the results: in the North-South case where the sender and receiver are on the same hypervisor, each with a FIP, ML2/OVS with the openvswitch firewall driver performs poorly compared to ML2/OVS with iptables in the TCP Download and TCP RR tests.

To give you numbers, the sum of the throughput/transactions from the 8 clients is as follows:

TCP Download (Mbps):
iptables        openvswitch
63870.19617     21848.51885

TCP RR (transactions per second):
iptables        openvswitch
85583.65        68512.77
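
For a sense of scale, the aggregates above can be reduced to ratios; a quick sketch using only the numbers reported in this bug:

```python
# Aggregate results reported above (sum across 8 concurrent client pairs).
iptables_mbps = 63870.19617
ovs_fw_mbps = 21848.51885

iptables_tps = 85583.65
ovs_fw_tps = 68512.77

# How much faster iptables is than the openvswitch firewall driver here.
throughput_ratio = iptables_mbps / ovs_fw_mbps   # ~2.92x for TCP Download
rr_ratio = iptables_tps / ovs_fw_tps             # ~1.25x for TCP RR

print("TCP Download gap: %.2fx" % throughput_ratio)
print("TCP RR gap: %.2fx" % rr_ratio)
```

So at 8 client pairs the download path is roughly a 2.9x regression, while the request/response path is closer to 1.25x.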

With 1 client, the total throughput in the two cases is about the same; however, as the number of clients increases, the throughput difference keeps growing (the openvswitch firewall driver doesn't scale with client count the way iptables does, from a throughput perspective). I wonder whether this is related to the number of conntrack zones or to the number of times each packet passes through conntrack.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Run iperf tests on setup mentioned above
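
The step above amounts to running a parallel iperf client against each receiver's floating IP. A minimal sketch of the per-pair command construction; the floating IPs below are placeholders, not values from the actual deployment:

```python
# Hypothetical reproduction sketch: the floating IPs below are placeholders,
# not values from the actual deployment.

def iperf_client_cmd(receiver_fip, duration=60):
    """Build the iperf3 client command for one sender/receiver pair."""
    return ["iperf3", "-c", receiver_fip, "-t", str(duration), "-J"]

# Eight sender/receiver pairs, as in the setup above; FIPs are illustrative.
floating_ips = ["10.0.0.%d" % (100 + i) for i in range(1, 9)]

for fip in floating_ips:
    cmd = iperf_client_cmd(fip)
    print("pair ->", " ".join(cmd))
    # On a real deployment, start "iperf3 -s" on each receiver VM first, then
    # launch each client from its sender VM in parallel (e.g. over SSH) so
    # that all 8 flows traverse the firewall at the same time.
```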

Actual results:
Performance with openvswitch firewall driver is lower than with iptables firewall driver

Expected results:
On par or better performance than iptables firewall driver

Additional info:

[1]- https://docs.google.com/a/redhat.com/document/d/1coNcfUPA-MqOiPJH4nCnT98SIKvG-W0hxSmEFbXOyAY/edit?usp=sharing
