Bug 1412007 - [RFE] Support balanced-tcp/slb bonding modes for ovs-dpdk based networking
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: openvswitch
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: pre-dev-freeze
Assignee: Franck Baudin
QA Contact: Hekai Wang
URL:
Whiteboard:
Depends On:
Blocks: 1419948 1465537
 
Reported: 2017-01-11 00:28 UTC by hrushi
Modified: 2019-07-02 14:26 UTC (History)
CC: 19 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-07-02 14:25:32 UTC
Target Upstream Version:



Description hrushi 2017-01-11 00:28:14 UTC
Description of problem:

NFV workloads need both bandwidth aggregation and fault tolerance in the networking path. In OVS-DPDK networking, this is achieved by bonding the DPDK ports. With RHOSP10, the bonding mode appears to be hard-coded to active-standby. We need to provide options to customers to help meet their requirements. Secondly, in HPE's NFV System, our networking infrastructure (switch) is configured to support dynamic LACP (802.3ad), which is not utilized with the active-standby bonding mode. The requirement is to be able to push various bonding modes and their bonding options through OSP director, preferably:

Bonding modes:
balance-slb
balance-tcp

Bonding options:
lacp: active
bond-detect-mode: miimon
bond-miimon-interval: 200
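
For reference, the requested modes and options map onto the Port record of an OVS bond. A minimal sketch follows; the bridge name (br-link), bond name (dpdkbond0), interface names (dpdk0/dpdk1), and PCI addresses are placeholders, not taken from this report:

```shell
# Illustrative only: create a two-port DPDK bond with balance-tcp,
# active LACP, and the miimon settings requested above.
# Names and PCI addresses are assumptions for the example.
ovs-vsctl add-bond br-link dpdkbond0 dpdk0 dpdk1 \
    bond_mode=balance-tcp \
    lacp=active \
    other_config:bond-detect-mode=miimon \
    other_config:bond-miimon-interval=200 \
    -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:05:00.0 \
    -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:05:00.1
```

In a director-based deployment these settings would be expressed in the network configuration templates rather than run by hand; the sketch only shows the underlying OVS configuration the RFE asks to expose.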

Comment 2 Flavio Leitner 2017-01-12 19:14:37 UTC
Hi,

Those bonding modes were initially tested but we noticed that they had problems like packet loss or upstream switches shutting down ports.

Perhaps one option would be to use bond PMD? Have you thought about that?

Thanks!

Comment 3 hrushi 2017-01-12 21:51:23 UTC
Thanks Flavio. We did test balance-tcp with our own OpenStack distro using OVS 2.6 and did not see such behavior. I'm not clear on what a bond PMD is; can you please elaborate?

Comment 4 Flavio Leitner 2017-01-12 22:38:30 UTC
DPDK has a PMD driver that provides the bond functionality as well.
So, in theory you could have the PMD doing the bonding logic while OVS sees that as a single DPDK port.

Comment 5 hrushi 2017-01-23 17:12:12 UTC
(In reply to Flavio Leitner from comment #4)
> DPDK has a PMD driver that provides the bond functionality as well.
> So, in theory you could have the PMD doing the bonding logic while OVS sees
> that as a single DPDK port.

If it serves the same purpose, great; however, I would prefer the traditional approach from a troubleshooting and standard-deployment perspective.

Comment 29 Jing.C.Zhang 2018-07-09 11:33:08 UTC
Can you please explain why the ticket is closed with "NOTABUG"?

I have been told by your support team the lacp bond packet loss issue will be addressed in OSP 13.

https://access.redhat.com/support/cases/#/case/01983658

In short, this is the closing statement for the above ticket, “While the bug has been fixed, it is still not recommended using OSP10 enabling LACP and Balance-TCP in Open vSwitch. It is a Red Hat goal to support this configuration by OSP13.”

Comment 30 Christian Trautman 2018-07-10 11:59:48 UTC
Reopening due to comment 29. We have test coverage in Platform QE. I will defer to the OSP team on the questions in comment 29.

Comment 32 Franck Baudin 2019-07-02 14:25:32 UTC
balance-slb is fully supported; balance-tcp is in tech preview. An update to the OpenStack documentation will follow.

