Bug 1817606 - [OVN] QoS rules share the same Openflow meter
Summary: [OVN] QoS rules share the same Openflow meter
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: ovn2.13
Version: FDP 20.A
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: lorenzo bianconi
QA Contact: ying xu
URL:
Whiteboard:
Depends On:
Blocks: 1848818
Reported: 2020-03-26 16:06 UTC by Maciej Józefczyk
Modified: 2020-06-19 03:13 UTC
CC: 5 users

Fixed In Version: ovn2.13-2.13.0-22.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1848818
Environment:
Last Closed: 2020-05-26 14:07:17 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2020:2317 (last updated 2020-05-26 14:07:38 UTC)

Description Maciej Józefczyk 2020-03-26 16:06:54 UTC
Description of problem:

QoS rules seem to share the same 'meter' configured in OVS when the burst and rate values of those rules are identical.

Version-Release number of selected component (if applicable):
OVS/OVN master

Example environment:

- Logical Switch with 3 Logical Ports that are VMs:
   * 1 LSP from which I test QoS with iperf3
   * 2 LSPs that have a QoS policy set (a setup sketch follows below).
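
A minimal sketch of how such a topology can be created (the switch and port names below are hypothetical, not the ones from this environment):

ovn-nbctl ls-add ls-qos
ovn-nbctl lsp-add ls-qos lsp-client   # LSP used to run iperf3
ovn-nbctl lsp-add ls-qos lsp-vm1      # LSP that will get a QoS policy
ovn-nbctl lsp-add ls-qos lsp-vm2      # LSP that will get a QoS policy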


Configured QoS rules:
--------------------------------------------------------------------------------------------------------
stack@mjozefcz-devstack-qos:~$ ovn-nbctl list qos
_uuid               : 7ad43edb-ed2a-4279-8373-f925a6591508
action              : {}
bandwidth           : {burst=10000, rate=10000}
direction           : from-lport
external_ids        : {}
match               : "inport == \"0dbccc4f-5c36-406e-a629-70d49d52e391\""
priority            : 2002

_uuid               : 8ecac46b-1ec0-4e76-a9e0-0b3063fc79e0
action              : {}
bandwidth           : {burst=10000, rate=10000}
direction           : from-lport
external_ids        : {}
match               : "inport == \"cad88274-feea-4ddb-b8c1-af49ca8833cf\""
priority            : 2002
stack@mjozefcz-devstack-qos:~$
--------------------------------------------------------------------------------------------------------
Please note that the rules have the same bandwidth configuration.
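For reference, rules of this shape can be created with 'ovn-nbctl qos-add'; a minimal sketch, reusing the hypothetical switch name ls-qos from above (rate in kbps and burst in kilobits, matching the bandwidth column):

ovn-nbctl qos-add ls-qos from-lport 2002 'inport == "0dbccc4f-5c36-406e-a629-70d49d52e391"' rate=10000 burst=10000
ovn-nbctl qos-add ls-qos from-lport 2002 'inport == "cad88274-feea-4ddb-b8c1-af49ca8833cf"' rate=10000 burst=10000
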
Those QoS rules map to the following two logical flows:

-----------------------------------------------------------------------------------------------------------------------------------------------
stack@mjozefcz-devstack-qos:~$ ovn-sbctl list logical_flow 7ae15276-6869-40ac-be1d-b4707dcf5dc7
_uuid               : 7ae15276-6869-40ac-be1d-b4707dcf5dc7
actions             : "set_meter(10000, 10000); next;"
external_ids        : {source="ovn-northd.c:5451", stage-hint="8ecac46b", stage-name=ls_in_qos_meter}
logical_datapath    : 9a1af1f9-7b42-43c2-ab0b-f4796d209e63
match               : "inport == \"cad88274-feea-4ddb-b8c1-af49ca8833cf\""
pipeline            : ingress
priority            : 2002
table_id            : 8
hash                : 0
stack@mjozefcz-devstack-qos:~$ ovn-sbctl list logical_flow f541520a-ef70-4038-8ee5-b5b609fc3883
_uuid               : f541520a-ef70-4038-8ee5-b5b609fc3883
actions             : "set_meter(10000, 10000); next;"
external_ids        : {source="ovn-northd.c:5451", stage-hint="7ad43edb", stage-name=ls_in_qos_meter}
logical_datapath    : 9a1af1f9-7b42-43c2-ab0b-f4796d209e63
match               : "inport == \"0dbccc4f-5c36-406e-a629-70d49d52e391\""
pipeline            : ingress
priority            : 2002
table_id            : 8
hash                : 0
stack@mjozefcz-devstack-qos:~$
-----------------------------------------------------------------------------------------------------------------------------------------------
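
A shorter way to pull just these flows, as a sketch assuming the usual ovsdb 'find' syntax of ovn-sbctl:

ovn-sbctl --columns=match,actions find logical_flow external_ids:stage-name=ls_in_qos_meter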

The problem is that those two rules use the same meter (meter id 2):
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
stack@mjozefcz-devstack-qos:~$ sudo ovs-ofctl -O OpenFlow13 dump-flows br-int | grep meter
 cookie=0xf541520a, duration=4215.163s, table=16, n_packets=12497, n_bytes=15463221, priority=2002,reg14=0x4,metadata=0x1 actions=meter:2,resubmit(,17)
 cookie=0x7ae15276, duration=4215.163s, table=16, n_packets=13789, n_bytes=33132305, priority=2002,reg14=0x5,metadata=0x1 actions=meter:2,resubmit(,17)
stack@mjozefcz-devstack-qos:~$ sudo ovs-ofctl -O OpenFlow13 dump-meters br-int
OFPST_METER_CONFIG reply (OF1.3) (xid=0x2):
meter=2 kbps burst stats bands=
type=drop rate=10000 burst_size=10000
stack@mjozefcz-devstack-qos:~$ sudo ovs-ofctl -O OpenFlow13 meter-stats br-int
OFPST_METER reply (OF1.3) (xid=0x2):
meter:2 flow_count:2 packet_in_count:21607 byte_in_count:40746921 duration:4158.558s bands:
0: packet_count:2010 byte_count:6486212
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
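
A quick way to spot the sharing, as a sketch that just counts how many OpenFlow rules reference each meter id:

# prints "2 meter:2" for the flows above, i.e. two rules on one meter
sudo ovs-ofctl -O OpenFlow13 dump-flows br-int | grep -o 'meter:[0-9]*' | sort | uniq -c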

So if more than one Logical Switch Port from the same Logical Switch is bound on the chassis with the same QoS bandwidth-limit settings, those ports also share the same meter.
As a result, the bandwidth limit is split across those Logical Switch Ports.
I tested this with iperf3; the results follow:

If only one LSP consumes the limit:
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
stack@mjozefcz-devstack-qos:~$ sudo ip netns exec ovnmeta-9a1af1f9-7b42-43c2-ab0b-f4796d209e63 iperf3 -R -O 1 -c 10.1.0.20
Connecting to host 10.1.0.20, port 5201
Reverse mode, remote host 10.1.0.20 is sending
[  4] local 10.1.0.2 port 57206 connected to 10.1.0.20 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  3.49 MBytes  29.3 Mbits/sec  (omitted)
[  4]   0.00-1.00   sec  1.06 MBytes  8.86 Mbits/sec
[  4]   1.00-2.00   sec  1.37 MBytes  11.5 Mbits/sec
[  4]   2.00-3.00   sec  1.16 MBytes  9.71 Mbits/sec
[  4]   3.00-4.00   sec  1.17 MBytes  9.80 Mbits/sec
[  4]   4.00-5.00   sec  1.17 MBytes  9.84 Mbits/sec
[  4]   5.00-6.00   sec  1.13 MBytes  9.46 Mbits/sec
[  4]   6.00-7.00   sec  1.04 MBytes  8.76 Mbits/sec
[  4]   7.00-8.00   sec  1.26 MBytes  10.6 Mbits/sec
[  4]   8.00-9.00   sec  1.06 MBytes  8.88 Mbits/sec
[  4]   9.00-10.00  sec  1.33 MBytes  11.2 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  11.7 MBytes  9.78 Mbits/sec  2554             sender
[  4]   0.00-10.00  sec  11.7 MBytes  9.85 Mbits/sec                   receiver

iperf Done.
stack@mjozefcz-devstack-qos:~$
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

If two VMs are using the meter (two iperf3 tests running at the same time):
stack@mjozefcz-devstack-qos:~$ sleep 1; sudo ip netns exec ovnmeta-9a1af1f9-7b42-43c2-ab0b-f4796d209e63 iperf3 -R -c 10.1.0.16
Connecting to host 10.1.0.16, port 5201
Reverse mode, remote host 10.1.0.16 is sending
[  4] local 10.1.0.2 port 56874 connected to 10.1.0.16 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  3.39 MBytes  28.5 Mbits/sec
[  4]   1.00-2.00   sec  69.6 KBytes   570 Kbits/sec
[  4]   2.00-3.00   sec   456 KBytes  3.74 Mbits/sec
[  4]   3.00-4.00   sec   290 KBytes  2.38 Mbits/sec
[  4]   4.00-5.00   sec   534 KBytes  4.37 Mbits/sec
[  4]   5.00-6.00   sec   324 KBytes  2.65 Mbits/sec
[  4]   6.00-7.00   sec  16.8 KBytes   137 Kbits/sec
[  4]   7.00-8.00   sec   474 KBytes  3.89 Mbits/sec
[  4]   8.00-9.00   sec   400 KBytes  3.27 Mbits/sec
[  4]   9.00-10.00  sec   750 KBytes  6.15 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  6.92 MBytes  5.80 Mbits/sec  867             sender
[  4]   0.00-10.00  sec  6.63 MBytes  5.56 Mbits/sec                  receiver

iperf Done.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
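
For completeness, a sketch of how the two concurrent client runs can be driven from the metadata namespace (the transcript above shows only the second client; the -t 10 duration and the backgrounding are assumptions, not the exact commands used):

NS=ovnmeta-9a1af1f9-7b42-43c2-ab0b-f4796d209e63
sudo ip netns exec $NS iperf3 -R -c 10.1.0.20 -t 10 &
sleep 1; sudo ip netns exec $NS iperf3 -R -c 10.1.0.16 -t 10
wait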


Questions:
- Can we create a separate meter for each QoS row, so that rules which are identical except for the match do not share a meter?
- If QoS for Port Groups is introduced, can we also use separate meters there?
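
If a separate meter were allocated per QoS row, a hypothetical way to verify it would be to re-run the dumps above and expect two meters, each referenced by exactly one flow:

sudo ovs-ofctl -O OpenFlow13 dump-meters br-int    # expect two meter entries (sketch)
sudo ovs-ofctl -O OpenFlow13 meter-stats br-int    # expect flow_count:1 on each (sketch)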

Comment 9 errata-xmlrpc 2020-05-26 14:07:17 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2317

