Bug 2120247
| Summary: | [dpdkbond]LACP PDU packet lose when send high rate traffic to dpdkbond port | | | |
|---|---|---|---|---|
| Product: | Red Hat Enterprise Linux Fast Datapath | Reporter: | mhou <mhou> | |
| Component: | openvswitch3.2 | Assignee: | Robin Jarry <rjarry> | |
| Status: | CLOSED EOL | QA Contact: | mhou <mhou> | |
| Severity: | high | Docs Contact: | | |
| Priority: | medium | | | |
| Version: | FDP 22.G | CC: | amorenoz, ctrautma, dmarchan, fleitner, jhsiao, ktraynor, kzhang, mhou, ralongi, rjarry | |
| Target Milestone: | --- | | | |
| Target Release: | --- | | | |
| Hardware: | x86_64 | | | |
| OS: | Linux | | | |
| Whiteboard: | | | | |
| Fixed In Version: | openvswitch3.2-3.2.0-0.2.el9fdp | Doc Type: | If docs needed, set a value | |
| Doc Text: | | Story Points: | --- | |
| Clone Of: | | | | |
| : | 2149301 (view as bug list) | Environment: | | |
| Last Closed: | 2024-10-08 17:49:14 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | | |
| Verified Versions: | | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | | |
| Cloudforms Team: | --- | Target Upstream Version: | | |
| Embargoed: | | | | |
| Bug Depends On: | | | | |
| Bug Blocks: | 2149301 | | | |
|
Description
mhou
2022-08-22 10:14:38 UTC
Hello Robin,
I tried switching between the cx5 and e810 NICs, but I can still hit the port-down issue. The key steps are adding 'ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-isolate=false' and assigning specific CPUs to the rx queues.
Here is my reproducer:
# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
node 0 size: 63866 MB
node 0 free: 31080 MB
node 1 cpus: 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39
node 1 size: 62402 MB
node 1 free: 33346 MB
node distances:
node 0 1
0: 10 21
1: 21 10
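As a side note, the pmd-cpu-mask and dpdk-lcore-mask values used in the script below are bitmasks over core IDs. A small illustrative sketch (not part of the reproducer) of how the masks map to cores:

```python
def cpu_mask(cores):
    """Build a hex CPU mask (one bit per core ID), as passed to
    other_config:pmd-cpu-mask and other_config:dpdk-lcore-mask."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return hex(mask)

print(cpu_mask([17, 18]))  # 0x60000 -> pmd-cpu-mask (PMD threads on cores 17 and 18)
print(cpu_mask([19]))      # 0x80000 -> dpdk-lcore-mask (lcore thread on core 19)
```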
systemctl start openvswitch
ovs-appctl vlog/set console:lacp:dbg syslog:lacp:dbg file:lacp:dbg
ovs-appctl vlog/set console:bond:dbg syslog:bond:dbg file:bond:dbg
ovs-vsctl set Open_vSwitch . 'other_config={}'
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=0,4096
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x60000 # 2cpu
ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x80000
ovs-vsctl set Open_vSwitch . other_config:vhost-iommu-support=false
ovs-vsctl set Open_vSwitch . other_config:userspace-tso-enable=false
ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-isolate=false
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl --if-exists del-br ovsbr0
ovs-vsctl --may-exist add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-bond ovsbr0 balance-tcp ens3f0 ens3f1 lacp=active bond_mode=balance-tcp \
-- set Interface ens3f0 type=dpdk options:dpdk-devargs=0000:13:00.0 options:n_rxq=1 options:n_rxq_desc=2048 options:n_txq_desc=2048 mtu_request=9200 options:dpdk-lsc-interrupt=true options:rx-steering=rss+lacp \
-- set Interface ens3f1 type=dpdk options:dpdk-devargs=0000:13:00.1 options:n_rxq=1 options:n_rxq_desc=2048 options:n_txq_desc=2048 mtu_request=9200 options:dpdk-lsc-interrupt=true options:rx-steering=rss+lacp \
-- set Port balance-tcp other_config:lb-output-action=true
mkdir -p /tmp/vhostuser/
chmod 777 /tmp/vhostuser/
chown -R qemu:hugetlbfs /tmp/vhostuser/
chcon -R -u system_u -t svirt_tmp_t /tmp/vhostuser/
restorecon -Rv /tmp/vhostuser
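The per-queue CPU pinning mentioned above ("assigning specific CPUs to the rx queues") is not shown in the script itself; it would presumably be done with pmd-rxq-affinity, together with pmd-rxq-assign=group, which the vswitchd log later reports as a prerequisite for pmd-rxq-isolate=false. A sketch under those assumptions (the queue-to-core values shown are guesses matching the pinning seen in the logs):

```shell
# pmd-rxq-isolate=false is only honoured with the group assignment algorithm
ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-assign=group
# pin rx queue 0 of each bond member to a dedicated PMD core (assumed values)
ovs-vsctl set Interface ens3f0 other_config:pmd-rxq-affinity="0:17"
ovs-vsctl set Interface ens3f1 other_config:pmd-rxq-affinity="0:18"
```

With rx-steering=rss+lacp each port also gets an extra rx queue 1 for LACP traffic, which is not covered by the affinity list above.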
# ovs-appctl bond/show
---- balance-tcp ----
bond_mode: balance-tcp
bond may use recirculation: yes, Recirc-ID : 1
bond-hash-basis: 0
lb_output action: enabled, bond-id: 1
updelay: 0 ms
downdelay: 0 ms
next rebalance: 7299 ms
lacp_status: configured
lacp_fallback_ab: false
active-backup primary: <none>
active member mac: 00:00:00:00:00:00(none)
member ens1f0np0: disabled
may_enable: false
member ens1f1np1: disabled
may_enable: false
# ovs-appctl lacp/show
---- balance-tcp ----
status: active
sys_id: b4:96:91:a5:d0:e6
sys_priority: 65534
aggregation key: 3
lacp_time: slow
member: ens3f0: defaulted detached
port_id: 4
port_priority: 65535
may_enable: false
actor sys_id: b4:96:91:a5:d0:e6
actor sys_priority: 65534
actor port_id: 4
actor port_priority: 65535
actor key: 3
actor state: activity aggregation collecting distributing defaulted
partner sys_id: 00:00:00:00:00:00
partner sys_priority: 0
partner port_id: 0
partner port_priority: 0
partner key: 0
partner state:
member: ens3f1: defaulted detached
port_id: 3
port_priority: 65535
may_enable: false
actor sys_id: b4:96:91:a5:d0:e6
actor sys_priority: 65534
actor port_id: 3
actor port_priority: 65535
actor key: 3
actor state: activity aggregation collecting distributing defaulted
partner sys_id: 00:00:00:00:00:00
partner sys_priority: 0
partner port_id: 0
partner port_priority: 0
partner key: 0
partner state:
# ovs-appctl lacp/show-status
"lacp/show-status" is not a valid command (use "list-commands" to see a list of valid commands)
ovs-appctl: ovs-vswitchd: server returned an error
[root@hp-dl388g10-03 ~]# ovs-appctl lacp/show-stats
---- balance-tcp statistics ----
member: ens3f0:
TX PDUs: 8
RX PDUs: 0
RX Bad PDUs: 0
RX Marker Request PDUs: 0
Link Expired: 0
Link Defaulted: 2
Carrier Status Changed: 1
member: ens3f1:
TX PDUs: 9
RX PDUs: 0
RX Bad PDUs: 0
RX Marker Request PDUs: 0
Link Expired: 0
Link Defaulted: 2
Carrier Status Changed: 1
2023-10-24T06:37:29.717Z|00123|bridge|INFO|bridge ovsbr0: added interface vhost0 on port 5
2023-10-24T06:37:29.717Z|00124|bridge|INFO|bridge ovsbr0: using datapath ID 0000b49691a5d0e6
2023-10-24T06:37:29.717Z|00125|connmgr|INFO|ovsbr0: added service controller "punix:/var/run/openvswitch/ovsbr0.mgmt"
2023-10-24T06:37:29.717Z|00126|dpif_netdev|INFO|Core 17 on numa node 0 is pinned with port 'vhost0' rx queue 0 (measured processing cycles 0)
2023-10-24T06:37:29.717Z|00127|dpif_netdev|INFO|Core 18 on numa node 0 is pinned with port 'vhost1' rx queue 0 (measured processing cycles 0)
2023-10-24T06:37:29.717Z|00128|dpif_netdev|INFO|Core 17 on numa node 0 is pinned with port 'ens3f0' rx queue 0 (measured processing cycles 0)
2023-10-24T06:37:29.717Z|00129|dpif_netdev|INFO|Core 18 on numa node 0 is pinned with port 'ens3f1' rx queue 0 (measured processing cycles 0)
2023-10-24T06:37:29.717Z|00130|dpif_netdev|INFO|Performing pmd to rx queue assignment using cycles algorithm.
2023-10-24T06:37:29.717Z|00131|dpif_netdev|WARN|No non-isolated pmd on any numa available for port 'ens3f1' rx queue 1 (measured processing cycles 0). This rx queue will not be polled.
2023-10-24T06:37:29.717Z|00132|dpif_netdev|WARN|No non-isolated pmd on any numa available for port 'ens3f0' rx queue 1 (measured processing cycles 0). This rx queue will not be polled.
2023-10-24T06:37:29.718Z|00133|timeval|WARN|Unreasonably long 6916ms poll interval (3236ms user, 2619ms system)
2023-10-24T06:37:29.718Z|00134|timeval|WARN|faults: 24901 minor, 0 major
2023-10-24T06:37:29.718Z|00135|timeval|WARN|disk: 0 reads, 32 writes
2023-10-24T06:37:29.718Z|00136|timeval|WARN|context switches: 1314 voluntary, 14119 involuntary
2023-10-24T06:37:29.718Z|00137|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour, hash=31311377:
2023-10-24T06:37:29.718Z|00138|coverage|INFO|netlink_sent 0.0/sec 0.000/sec 0.0000/sec total: 54
2023-10-24T06:37:29.718Z|00139|coverage|INFO|netlink_recv_jumbo 0.0/sec 0.000/sec 0.0000/sec total: 9
2023-10-24T06:37:29.718Z|00140|coverage|INFO|netlink_received 0.0/sec 0.000/sec 0.0000/sec total: 55
2023-10-24T06:37:29.718Z|00141|coverage|INFO|netdev_get_ethtool 0.0/sec 0.000/sec 0.0000/sec total: 2
2023-10-24T06:37:29.718Z|00142|coverage|INFO|netdev_set_policing 0.0/sec 0.000/sec 0.0000/sec total: 1
2023-10-24T06:37:29.718Z|00143|coverage|INFO|util_xalloc 19.4/sec 1.617/sec 0.0269/sec total: 20813
2023-10-24T06:37:29.718Z|00144|coverage|INFO|stream_open 0.0/sec 0.000/sec 0.0000/sec total: 1
2023-10-24T06:37:29.718Z|00145|coverage|INFO|pstream_open 0.0/sec 0.000/sec 0.0000/sec total: 3
2023-10-24T06:37:29.718Z|00146|coverage|INFO|seq_change 0.4/sec 0.033/sec 0.0006/sec total: 14715
2023-10-24T06:37:29.718Z|00147|coverage|INFO|poll_zero_timeout 0.0/sec 0.000/sec 0.0000/sec total: 2
2023-10-24T06:37:29.718Z|00148|coverage|INFO|poll_create_node 9.0/sec 0.750/sec 0.0125/sec total: 4643
2023-10-24T06:37:29.718Z|00149|coverage|INFO|netdev_get_stats 0.0/sec 0.000/sec 0.0000/sec total: 5
2023-10-24T06:37:29.718Z|00150|coverage|INFO|hmap_shrink 0.0/sec 0.000/sec 0.0000/sec total: 4
2023-10-24T06:37:29.718Z|00151|coverage|INFO|hmap_expand 0.2/sec 0.017/sec 0.0003/sec total: 564
2023-10-24T06:37:29.718Z|00152|coverage|INFO|hmap_pathological 0.0/sec 0.000/sec 0.0000/sec total: 5
2023-10-24T06:37:29.718Z|00153|coverage|INFO|miniflow_malloc 0.0/sec 0.000/sec 0.0000/sec total: 26
2023-10-24T06:37:29.718Z|00154|coverage|INFO|flow_extract 0.0/sec 0.000/sec 0.0000/sec total: 4
2023-10-24T06:37:29.718Z|00155|coverage|INFO|dpif_port_add 0.0/sec 0.000/sec 0.0000/sec total: 5
2023-10-24T06:37:29.718Z|00156|coverage|INFO|dpif_flow_put_error 0.0/sec 0.000/sec 0.0000/sec total: 1
2023-10-24T06:37:29.718Z|00157|coverage|INFO|dpif_flow_put 0.0/sec 0.000/sec 0.0000/sec total: 25
2023-10-24T06:37:29.718Z|00158|coverage|INFO|dpif_flow_get 0.0/sec 0.000/sec 0.0000/sec total: 24
2023-10-24T06:37:29.718Z|00159|coverage|INFO|dpif_flow_flush 0.0/sec 0.000/sec 0.0000/sec total: 1
2023-10-24T06:37:29.718Z|00160|coverage|INFO|dpif_flow_del 0.0/sec 0.000/sec 0.0000/sec total: 24
2023-10-24T06:37:29.718Z|00161|coverage|INFO|dpif_execute 0.0/sec 0.000/sec 0.0000/sec total: 6
2023-10-24T06:37:29.718Z|00162|coverage|INFO|cmap_shrink 0.0/sec 0.000/sec 0.0000/sec total: 26
2023-10-24T06:37:29.718Z|00163|coverage|INFO|cmap_expand 0.0/sec 0.000/sec 0.0000/sec total: 53
2023-10-24T06:37:29.718Z|00164|coverage|INFO|ccmap_shrink 0.0/sec 0.000/sec 0.0000/sec total: 24
2023-10-24T06:37:29.718Z|00165|coverage|INFO|rev_flow_table 0.0/sec 0.000/sec 0.0000/sec total: 1
2023-10-24T06:37:29.718Z|00166|coverage|INFO|ofproto_update_port 0.0/sec 0.000/sec 0.0000/sec total: 5
2023-10-24T06:37:29.718Z|00167|coverage|INFO|ofproto_flush 0.0/sec 0.000/sec 0.0000/sec total: 1
2023-10-24T06:37:29.718Z|00168|coverage|INFO|bridge_reconfigure 0.0/sec 0.000/sec 0.0000/sec total: 1
2023-10-24T06:37:29.718Z|00169|coverage|INFO|125 events never hit
2023-10-24T06:37:29.718Z|00170|poll_loop|INFO|wakeup due to [POLLIN] on fd 84 (FIFO pipe:[139485]) at ../lib/ovs-rcu.c:236 (84% CPU usage)
2023-10-24T06:37:29.720Z|00171|poll_loop|INFO|wakeup due to [POLLIN] on fd 14 (NETLINK_ROUTE<->NETLINK_ROUTE) at ../lib/netlink-socket.c:1409 (84% CPU usage)
2023-10-24T06:37:29.720Z|00172|poll_loop|INFO|wakeup due to [POLLIN] on fd 15 (<->/var/run/openvswitch/db.sock) at ../lib/stream-fd.c:157 (84% CPU usage)
2023-10-24T06:37:29.721Z|00173|poll_loop|INFO|wakeup due to [POLLIN] on fd 84 (FIFO pipe:[139485]) at ../lib/ovs-rcu.c:236 (84% CPU usage)
2023-10-24T06:37:29.721Z|00174|dpif_netdev|WARN|pmd-rxq-isolate can only be set false when using pmd-rxq-assign=group
2023-10-24T06:37:29.721Z|00175|poll_loop|INFO|wakeup due to [POLLIN] on fd 84 (FIFO pipe:[139485]) at ../vswitchd/bridge.c:421 (84% CPU usage)
2023-10-24T06:37:29.721Z|00176|bridge|INFO|ovs-vswitchd (Open vSwitch) 3.2.0
2023-10-24T06:37:29.722Z|00177|poll_loop|INFO|wakeup due to 0-ms timeout at ../lib/dpif-netdev.c:6862 (84% CPU usage)
2023-10-24T06:37:29.722Z|00178|poll_loop|INFO|wakeup due to [POLLIN] on fd 84 (FIFO pipe:[139485]) at ../lib/ovs-rcu.c:236 (84% CPU usage)
2023-10-24T06:37:29.722Z|00179|poll_loop|INFO|wakeup due to [POLLIN] on fd 84 (FIFO pipe:[139485]) at ../lib/ovs-rcu.c:236 (84% CPU usage)
2023-10-24T06:37:29.723Z|00180|poll_loop|INFO|wakeup due to [POLLIN] on fd 84 (FIFO pipe:[139485]) at ../vswitchd/bridge.c:421 (84% CPU usage)
2023-10-24T06:37:34.385Z|00181|memory|INFO|253748 kB peak resident set size after 11.6 seconds
2023-10-24T06:37:34.385Z|00182|memory|INFO|handlers:1 idl-cells-Open_vSwitch:302 ports:5 revalidators:1 rules:5
2023-10-24T06:40:21.500Z|00183|bridge|INFO|bridge ovsbr0: deleted interface ens3f1 on port 3
2023-10-24T06:40:21.500Z|00184|bridge|INFO|bridge ovsbr0: deleted interface ens3f0 on port 4
2023-10-24T06:40:21.500Z|00185|dpif_netdev|INFO|Core 17 on numa node 0 is pinned with port 'vhost0' rx queue 0 (measured processing cycles 0)
2023-10-24T06:40:21.500Z|00186|dpif_netdev|INFO|Core 18 on numa node 0 is pinned with port 'vhost1' rx queue 0 (measured processing cycles 0)
2023-10-24T06:40:21.500Z|00187|dpif_netdev|INFO|Core 17 on numa node 0 is pinned with port 'ens3f0' rx queue 0 (measured processing cycles 0)
2023-10-24T06:40:21.500Z|00188|dpif_netdev|INFO|Performing pmd to rx queue assignment using cycles algorithm.
2023-10-24T06:40:21.500Z|00189|dpif_netdev|WARN|No non-isolated pmd on any numa available for port 'ens3f0' rx queue 1 (measured processing cycles 0). This rx queue will not be polled.
2023-10-24T06:40:21.517Z|00190|dpif_netdev|INFO|Core 17 on numa node 0 is pinned with port 'vhost0' rx queue 0 (measured processing cycles 0)
2023-10-24T06:40:21.517Z|00191|dpif_netdev|INFO|Core 18 on numa node 0 is pinned with port 'vhost1' rx queue 0 (measured processing cycles 0)
2023-10-24T06:40:21.554Z|00192|dpif_netdev|WARN|pmd-rxq-isolate can only be set false when using pmd-rxq-assign=group
2023-10-24T06:40:21.554Z|00193|bridge|INFO|bridge ovsbr0: using datapath ID 0000d6f719fcd94c
2023-10-24T06:40:21.555Z|00194|dpif_netdev|WARN|pmd-rxq-isolate can only be set false when using pmd-rxq-assign=group
If I do not set 'pmd-rxq-isolate=false' and 'pmd-rxq-affinity', the balance-tcp bond comes up fine:
driverctl set-override 0000:13:00.0 vfio-pci
driverctl set-override 0000:13:00.1 vfio-pci
systemctl start openvswitch
ovs-appctl vlog/set console:lacp:dbg syslog:lacp:dbg file:lacp:dbg
ovs-appctl vlog/set console:bond:dbg syslog:bond:dbg file:bond:dbg
ovs-vsctl set Open_vSwitch . 'other_config={}'
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=4096,4096
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x60000 # 0x70000 3cpu
ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x80000
ovs-vsctl set Open_vSwitch . other_config:vhost-iommu-support=false
ovs-vsctl set Open_vSwitch . other_config:userspace-tso-enable=false
#ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-isolate=false
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl --if-exists del-br ovsbr0
ovs-vsctl --may-exist add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-bond ovsbr0 balance-tcp ens3f0 ens3f1 lacp=active bond_mode=balance-tcp \
-- set Interface ens3f0 type=dpdk options:dpdk-devargs=0000:13:00.0 options:n_rxq=1 options:n_rxq_desc=2048 options:n_txq_desc=2048 mtu_request=9200 options:dpdk-lsc-interrupt=true options:rx-steering=rss+lacp \
-- set Interface ens3f1 type=dpdk options:dpdk-devargs=0000:13:00.1 options:n_rxq=1 options:n_rxq_desc=2048 options:n_txq_desc=2048 mtu_request=9200 options:dpdk-lsc-interrupt=true options:rx-steering=rss+lacp \
-- set Port balance-tcp other_config:lb-output-action=true
# ovs-appctl bond/show
---- balance-tcp ----
bond_mode: balance-tcp
bond may use recirculation: yes, Recirc-ID : 1
bond-hash-basis: 0
lb_output action: enabled, bond-id: 1
updelay: 0 ms
downdelay: 0 ms
next rebalance: 7102 ms
lacp_status: negotiated
lacp_fallback_ab: false
active-backup primary: <none>
active member mac: b4:96:91:a5:d0:e7(ens3f1)
member ens3f0: enabled
may_enable: true
member ens3f1: enabled
active member
may_enable: true
# ovs-appctl lacp/show
---- balance-tcp ----
status: active negotiated
sys_id: b4:96:91:a5:d0:e6
sys_priority: 65534
aggregation key: 1
lacp_time: slow
member: ens3f0: current attached
port_id: 2
port_priority: 65535
may_enable: true
actor sys_id: b4:96:91:a5:d0:e6
actor sys_priority: 65534
actor port_id: 2
actor port_priority: 65535
actor key: 1
actor state: activity aggregation synchronized collecting distributing
partner sys_id: b0:c5:3c:f6:36:c7
partner sys_priority: 32768
partner port_id: 268
partner port_priority: 32768
partner key: 56
partner state: aggregation synchronized collecting distributing
member: ens3f1: current attached
port_id: 1
port_priority: 65535
may_enable: true
actor sys_id: b4:96:91:a5:d0:e6
actor sys_priority: 65534
actor port_id: 1
actor port_priority: 65535
actor key: 1
actor state: activity aggregation synchronized collecting distributing
partner sys_id: b0:c5:3c:f6:36:c7
partner sys_priority: 32768
partner port_id: 267
partner port_priority: 32768
partner key: 56
partner state: aggregation synchronized collecting distributing
# ovs-appctl lacp/show-stats
---- balance-tcp statistics ----
member: ens3f0:
TX PDUs: 8
RX PDUs: 6
RX Bad PDUs: 0
RX Marker Request PDUs: 0
Link Expired: 0
Link Defaulted: 1
Carrier Status Changed: 1
member: ens3f1:
TX PDUs: 9
RX PDUs: 6
RX Bad PDUs: 0
RX Marker Request PDUs: 0
Link Expired: 0
Link Defaulted: 1
Carrier Status Changed: 1
2023-10-24T07:09:12.252Z|00014|dpdk|INFO|EAL: Detected CPU lcores: 40
2023-10-24T07:09:12.252Z|00015|dpdk|INFO|EAL: Detected NUMA nodes: 2
2023-10-24T07:09:12.252Z|00016|dpdk|INFO|EAL: Detected static linkage of DPDK
2023-10-24T07:09:12.282Z|00017|dpdk|INFO|EAL: rte_mem_virt2phy(): cannot open /proc/self/pagemap: Permission denied
2023-10-24T07:09:12.282Z|00018|dpdk|INFO|EAL: Selected IOVA mode 'VA'
2023-10-24T07:09:12.282Z|00019|dpdk|WARN|EAL: No free 2048 kB hugepages reported on node 0
2023-10-24T07:09:12.282Z|00020|dpdk|WARN|EAL: No free 2048 kB hugepages reported on node 1
2023-10-24T07:09:12.283Z|00021|dpdk|INFO|EAL: VFIO support initialized
2023-10-24T07:09:14.519Z|00022|dpdk|INFO|EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:12:00.0 (socket 0)
2023-10-24T07:09:14.883Z|00023|dpdk|INFO|EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:12:00.1 (socket 0)
2023-10-24T07:09:15.362Z|00024|dpdk|INFO|EAL: Using IOMMU type 1 (Type 1)
2023-10-24T07:09:15.692Z|00025|dpdk|INFO|EAL: Probe PCI driver: net_ice (8086:159b) device: 0000:13:00.0 (socket 0)
2023-10-24T07:09:16.508Z|00026|dpdk|INFO|ice_load_pkg_type(): Active package is: 1.3.30.0, ICE OS Default Package (double VLAN mode)
2023-10-24T07:09:16.781Z|00027|dpdk|INFO|EAL: Probe PCI driver: net_ice (8086:159b) device: 0000:13:00.1 (socket 0)
2023-10-24T07:09:17.178Z|00028|dpdk|INFO|ice_load_pkg_type(): Active package is: 1.3.30.0, ICE OS Default Package (double VLAN mode)
2023-10-24T07:09:17.250Z|00029|dpdk|WARN|TELEMETRY: No legacy callbacks, legacy socket not created
2023-10-24T07:09:17.250Z|00030|netdev_dpdk|INFO|Per port memory for DPDK devices disabled.
2023-10-24T07:09:17.250Z|00031|netdev_dpdk|INFO|No vhost-sock-dir provided - defaulting to /var/run/openvswitch
2023-10-24T07:09:17.250Z|00032|netdev_dpdk|INFO|IOMMU support for vhost-user-client disabled.
2023-10-24T07:09:17.250Z|00033|netdev_dpdk|INFO|POSTCOPY support for vhost-user-client disabled.
2023-10-24T07:09:17.251Z|00034|dpdk|INFO|DPDK Enabled - initialized
2023-10-24T07:09:17.252Z|00035|timeval|WARN|Unreasonably long 5005ms poll interval (1490ms user, 2598ms system)
2023-10-24T07:09:17.252Z|00036|timeval|WARN|faults: 16303 minor, 0 major
2023-10-24T07:09:17.252Z|00037|timeval|WARN|context switches: 1301 voluntary, 87 involuntary
2023-10-24T07:09:17.252Z|00038|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour, hash=ad35a750:
2023-10-24T07:09:17.252Z|00039|coverage|INFO|nln_changed 0.0/sec 0.117/sec 0.0042/sec total: 15
2023-10-24T07:09:17.252Z|00040|coverage|INFO|netlink_sent 0.0/sec 2.950/sec 0.1119/sec total: 403
2023-10-24T07:09:17.252Z|00041|coverage|INFO|netlink_recv_jumbo 0.0/sec 0.167/sec 0.0061/sec total: 22
2023-10-24T07:09:17.252Z|00042|coverage|INFO|netlink_received 0.0/sec 3.350/sec 0.1278/sec total: 460
2023-10-24T07:09:17.252Z|00043|coverage|INFO|util_xalloc 598.4/sec 89.583/sec 4.4561/sec total: 16042
2023-10-24T07:09:17.252Z|00044|coverage|INFO|unixctl_replied 0.4/sec 0.033/sec 0.0006/sec total: 2
2023-10-24T07:09:17.252Z|00045|coverage|INFO|unixctl_received 0.4/sec 0.033/sec 0.0006/sec total: 2
2023-10-24T07:09:17.252Z|00046|coverage|INFO|stream_open 0.0/sec 0.000/sec 0.0003/sec total: 1
2023-10-24T07:09:17.252Z|00047|coverage|INFO|pstream_open 0.0/sec 0.000/sec 0.0003/sec total: 1
2023-10-24T07:09:17.252Z|00048|coverage|INFO|seq_change 3.4/sec 4.533/sec 0.1628/sec total: 586
2023-10-24T07:09:17.252Z|00049|coverage|INFO|poll_create_node 101.8/sec 13.033/sec 0.3478/sec total: 1252
2023-10-24T07:09:17.252Z|00050|coverage|INFO|txn_success 0.0/sec 0.000/sec 0.0006/sec total: 2
2023-10-24T07:09:17.252Z|00051|coverage|INFO|txn_incomplete 1.4/sec 0.117/sec 0.0025/sec total: 9
2023-10-24T07:09:17.252Z|00052|coverage|INFO|txn_unchanged 1.2/sec 0.350/sec 0.0100/sec total: 36
2023-10-24T07:09:17.252Z|00053|coverage|INFO|hmap_expand 47.4/sec 5.867/sec 0.2611/sec total: 940
2023-10-24T07:09:17.252Z|00054|coverage|INFO|hmap_pathological 0.0/sec 0.000/sec 0.0014/sec total: 5
2023-10-24T07:09:17.252Z|00055|coverage|INFO|miniflow_malloc 0.0/sec 1.167/sec 0.0428/sec total: 154
2023-10-24T07:09:17.252Z|00056|coverage|INFO|cmap_shrink 0.0/sec 0.667/sec 0.0222/sec total: 80
2023-10-24T07:09:17.252Z|00057|coverage|INFO|cmap_expand 0.2/sec 0.683/sec 0.0256/sec total: 92
2023-10-24T07:09:17.252Z|00058|coverage|INFO|bridge_reconfigure 1.2/sec 0.183/sec 0.0039/sec total: 14
2023-10-24T07:09:17.252Z|00059|coverage|INFO|136 events never hit
2023-10-24T07:09:17.253Z|00060|poll_loop|INFO|wakeup due to [POLLIN] on fd 15 (<->/var/run/openvswitch/db.sock) at ../lib/stream-fd.c:157 (67% CPU usage)
2023-10-24T07:09:17.254Z|00061|poll_loop|INFO|wakeup due to [POLLIN] on fd 15 (<->/var/run/openvswitch/db.sock) at ../lib/stream-fd.c:157 (67% CPU usage)
2023-10-24T07:09:17.268Z|00062|poll_loop|INFO|wakeup due to [POLLIN] on fd 15 (<->/var/run/openvswitch/db.sock) at ../lib/stream-fd.c:157 (67% CPU usage)
2023-10-24T07:09:17.268Z|00063|pmd_perf|INFO|DPDK provided TSC frequency: 2100000 KHz
2023-10-24T07:09:17.269Z|00064|dpif_netdev_extract|INFO|Default MFEX Extract implementation is scalar.
2023-10-24T07:09:17.270Z|00065|dpif_netdev_impl|INFO|Default DPIF implementation is dpif_scalar.
2023-10-24T07:09:17.274Z|00066|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports recirculation
2023-10-24T07:09:17.274Z|00067|ofproto_dpif|INFO|netdev@ovs-netdev: VLAN header stack length probed as 1
2023-10-24T07:09:17.274Z|00068|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label stack length probed as 3
2023-10-24T07:09:17.274Z|00069|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports truncate action
2023-10-24T07:09:17.274Z|00070|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports unique flow ids
2023-10-24T07:09:17.274Z|00071|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports clone action
2023-10-24T07:09:17.274Z|00072|ofproto_dpif|INFO|netdev@ovs-netdev: Max sample nesting level probed as 10
2023-10-24T07:09:17.274Z|00073|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports eventmask in conntrack action
2023-10-24T07:09:17.274Z|00074|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_clear action
2023-10-24T07:09:17.274Z|00075|ofproto_dpif|INFO|netdev@ovs-netdev: Max dp_hash algorithm probed to be 1
2023-10-24T07:09:17.274Z|00076|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports check_pkt_len action
2023-10-24T07:09:17.274Z|00077|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports timeout policy in conntrack action
2023-10-24T07:09:17.274Z|00078|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_zero_snat
2023-10-24T07:09:17.274Z|00079|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports add_mpls action
2023-10-24T07:09:17.274Z|00080|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_state
2023-10-24T07:09:17.274Z|00081|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_zone
2023-10-24T07:09:17.274Z|00082|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_mark
2023-10-24T07:09:17.274Z|00083|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_label
2023-10-24T07:09:17.274Z|00084|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_state_nat
2023-10-24T07:09:17.274Z|00085|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_orig_tuple
2023-10-24T07:09:17.274Z|00086|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports ct_orig_tuple6
2023-10-24T07:09:17.274Z|00087|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports IPv6 ND Extensions
2023-10-24T07:09:17.274Z|00088|ofproto_dpif_upcall|INFO|Setting n-handler-threads to 1, setting n-revalidator-threads to 1
2023-10-24T07:09:17.274Z|00089|ofproto_dpif_upcall|INFO|Starting 2 threads
2023-10-24T07:09:17.278Z|00090|dpif_netdev|INFO|pmd-rxq-affinity isolates PMD core
2023-10-24T07:09:17.278Z|00091|dpif_netdev|INFO|PMD auto load balance interval set to 1 mins
2023-10-24T07:09:17.278Z|00092|dpif_netdev|INFO|PMD auto load balance improvement threshold set to 25%
2023-10-24T07:09:17.278Z|00093|dpif_netdev|INFO|PMD auto load balance load threshold set to 95%
2023-10-24T07:09:17.278Z|00094|dpif_netdev|INFO|PMD auto load balance is disabled.
2023-10-24T07:09:17.278Z|00095|dpif_netdev|INFO|PMD max sleep request is 0 usecs.
2023-10-24T07:09:17.278Z|00096|dpif_netdev|INFO|PMD load based sleeps are disabled.
2023-10-24T07:09:17.280Z|00097|bridge|INFO|bridge ovsbr0: added interface ovsbr0 on port 65534
2023-10-24T07:09:17.280Z|00098|bridge|INFO|bridge ovsbr0: using datapath ID 0000269c9978c446
2023-10-24T07:09:17.281Z|00099|connmgr|INFO|ovsbr0: added service controller "punix:/var/run/openvswitch/ovsbr0.mgmt"
2023-10-24T07:09:17.281Z|00100|poll_loop|INFO|wakeup due to [POLLIN] on fd 22 (FIFO pipe:[146609]) at ../lib/ovs-rcu.c:236 (67% CPU usage)
2023-10-24T07:09:17.281Z|00101|poll_loop|INFO|wakeup due to [POLLIN] on fd 22 (FIFO pipe:[146609]) at ../lib/ovs-rcu.c:236 (67% CPU usage)
2023-10-24T07:09:17.282Z|00102|poll_loop|INFO|wakeup due to [POLLIN] on fd 14 (NETLINK_ROUTE<->NETLINK_ROUTE) at ../lib/netlink-socket.c:1409 (67% CPU usage)
2023-10-24T07:09:17.282Z|00103|poll_loop|INFO|wakeup due to [POLLIN] on fd 22 (FIFO pipe:[146609]) at ../lib/ovs-rcu.c:236 (67% CPU usage)
2023-10-24T07:09:17.283Z|00104|poll_loop|INFO|wakeup due to 0-ms timeout at ../lib/dpif-netdev.c:6862 (67% CPU usage)
2023-10-24T07:09:17.283Z|00105|poll_loop|INFO|wakeup due to [POLLIN] on fd 22 (FIFO pipe:[146609]) at ../lib/ovs-rcu.c:236 (67% CPU usage)
2023-10-24T07:09:17.283Z|00106|poll_loop|INFO|wakeup due to [POLLIN] on fd 22 (FIFO pipe:[146609]) at ../vswitchd/bridge.c:421 (67% CPU usage)
2023-10-24T07:09:17.307Z|00107|dpif_netdev|INFO|PMD thread on numa_id: 0, core id: 17 created.
2023-10-24T07:09:17.311Z|00108|dpif_netdev|INFO|PMD thread on numa_id: 0, core id: 18 created.
2023-10-24T07:09:17.311Z|00109|dpif_netdev|INFO|There are 2 pmd threads on numa node 0
2023-10-24T07:09:17.311Z|00110|dpdk|INFO|Device with port_id=3 already stopped
2023-10-24T07:09:17.336Z|00001|dpdk(pmd-c18/id:8)|INFO|PMD thread uses DPDK lcore 0.
2023-10-24T07:09:17.336Z|00001|dpdk(pmd-c17/id:9)|INFO|PMD thread uses DPDK lcore 1.
2023-10-24T07:09:17.438Z|00111|dpdk|INFO|ice_set_rx_function(): Using AVX2 OFFLOAD Vector Rx (port 3).
2023-10-24T07:09:17.438Z|00112|dpdk|INFO|ice_set_tx_function(): Using AVX2 OFFLOAD Vector Tx (port 3).
2023-10-24T07:09:18.485Z|00113|netdev_dpdk|INFO|Port 3: b4:96:91:a5:d0:e7
2023-10-24T07:09:18.589Z|00114|netdev_dpdk|INFO|ens3f1: rx-steering: redirected lacp traffic to rx queue 1
2023-10-24T07:09:18.605Z|00115|netdev_dpdk|INFO|ens3f1: rx-steering: redirected other traffic to rx queue 0
2023-10-24T07:09:18.605Z|00116|dpif_netdev|INFO|Performing pmd to rx queue assignment using cycles algorithm.
2023-10-24T07:09:18.605Z|00117|dpif_netdev|INFO|Core 17 on numa node 0 assigned port 'ens3f1' rx queue 0 (measured processing cycles 0).
2023-10-24T07:09:18.605Z|00118|dpif_netdev|INFO|Core 18 on numa node 0 assigned port 'ens3f1' rx queue 1 (measured processing cycles 0).
2023-10-24T07:09:18.605Z|00119|bridge|INFO|bridge ovsbr0: added interface ens3f1 on port 1
2023-10-24T07:09:18.606Z|00120|dpdk|INFO|Device with port_id=2 already stopped
2023-10-24T07:09:18.649Z|00121|dpdk|INFO|ice_set_rx_function(): Using AVX2 OFFLOAD Vector Rx (port 2).
2023-10-24T07:09:18.649Z|00122|dpdk|INFO|ice_set_tx_function(): Using AVX2 OFFLOAD Vector Tx (port 2).
2023-10-24T07:09:19.684Z|00123|netdev_dpdk|INFO|Port 2: b4:96:91:a5:d0:e6
2023-10-24T07:09:19.696Z|00124|netdev_dpdk|INFO|ens3f0: rx-steering: redirected lacp traffic to rx queue 1
2023-10-24T07:09:19.697Z|00125|netdev_dpdk|INFO|ens3f0: rx-steering: redirected other traffic to rx queue 0
2023-10-24T07:09:19.697Z|00126|dpif_netdev|INFO|Performing pmd to rx queue assignment using cycles algorithm.
2023-10-24T07:09:19.697Z|00127|dpif_netdev|INFO|Core 17 on numa node 0 assigned port 'ens3f1' rx queue 0 (measured processing cycles 0).
2023-10-24T07:09:19.697Z|00128|dpif_netdev|INFO|Core 18 on numa node 0 assigned port 'ens3f1' rx queue 1 (measured processing cycles 0).
2023-10-24T07:09:19.697Z|00129|dpif_netdev|INFO|Core 18 on numa node 0 assigned port 'ens3f0' rx queue 0 (measured processing cycles 0).
2023-10-24T07:09:19.697Z|00130|dpif_netdev|INFO|Core 17 on numa node 0 assigned port 'ens3f0' rx queue 1 (measured processing cycles 0).
2023-10-24T07:09:19.697Z|00131|bridge|INFO|bridge ovsbr0: added interface ens3f0 on port 2
2023-10-24T07:09:19.697Z|00132|bridge|INFO|bridge ovsbr0: using datapath ID 0000b49691a5d0e6
2023-10-24T07:09:19.697Z|00133|timeval|WARN|Unreasonably long 2395ms poll interval (2384ms user, 9ms system)
2023-10-24T07:09:19.697Z|00134|timeval|WARN|faults: 5522 minor, 0 major
2023-10-24T07:09:19.697Z|00135|timeval|WARN|context switches: 0 voluntary, 78 involuntary
2023-10-24T07:09:19.697Z|00136|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour, hash=8e621725:
2023-10-24T07:09:19.697Z|00137|coverage|INFO|nln_changed 0.0/sec 0.117/sec 0.0042/sec total: 22
2023-10-24T07:09:19.697Z|00138|coverage|INFO|netlink_sent 0.0/sec 2.950/sec 0.1119/sec total: 493
2023-10-24T07:09:19.697Z|00139|coverage|INFO|netlink_recv_jumbo 0.0/sec 0.167/sec 0.0061/sec total: 42
2023-10-24T07:09:19.697Z|00140|coverage|INFO|netlink_received 0.0/sec 3.350/sec 0.1278/sec total: 567
2023-10-24T07:09:19.697Z|00141|coverage|INFO|netdev_get_ethtool 0.0/sec 0.000/sec 0.0000/sec total: 3
2023-10-24T07:09:19.697Z|00142|coverage|INFO|netdev_set_hwaddr 0.0/sec 0.000/sec 0.0000/sec total: 2
2023-10-24T07:09:19.697Z|00143|coverage|INFO|netdev_set_policing 0.0/sec 0.000/sec 0.0000/sec total: 2
2023-10-24T07:09:19.697Z|00144|coverage|INFO|util_xalloc 598.4/sec 89.583/sec 4.4561/sec total: 21919
2023-10-24T07:09:19.697Z|00145|coverage|INFO|unixctl_replied 0.4/sec 0.033/sec 0.0006/sec total: 2
2023-10-24T07:09:19.697Z|00146|coverage|INFO|unixctl_received 0.4/sec 0.033/sec 0.0006/sec total: 2
2023-10-24T07:09:19.697Z|00147|coverage|INFO|stream_open 0.0/sec 0.000/sec 0.0003/sec total: 1
2023-10-24T07:09:19.697Z|00148|coverage|INFO|pstream_open 0.0/sec 0.000/sec 0.0003/sec total: 3
2023-10-24T07:09:19.697Z|00149|coverage|INFO|seq_change 3.4/sec 4.533/sec 0.1628/sec total: 24097
2023-10-24T07:09:19.697Z|00150|coverage|INFO|poll_zero_timeout 0.0/sec 0.000/sec 0.0000/sec total: 4
2023-10-24T07:09:19.697Z|00151|coverage|INFO|poll_create_node 101.8/sec 13.033/sec 0.3478/sec total: 1677
2023-10-24T07:09:19.697Z|00152|coverage|INFO|txn_success 0.0/sec 0.000/sec 0.0006/sec total: 4
2023-10-24T07:09:19.697Z|00153|coverage|INFO|txn_incomplete 1.4/sec 0.117/sec 0.0025/sec total: 18
2023-10-24T07:09:19.697Z|00154|coverage|INFO|txn_unchanged 1.2/sec 0.350/sec 0.0100/sec total: 37
2023-10-24T07:09:19.697Z|00155|coverage|INFO|netdev_get_stats 0.0/sec 0.000/sec 0.0000/sec total: 3
2023-10-24T07:09:19.697Z|00156|coverage|INFO|hmap_shrink 0.0/sec 0.000/sec 0.0000/sec total: 2
2023-10-24T07:09:19.697Z|00157|coverage|INFO|hmap_expand 47.4/sec 5.867/sec 0.2611/sec total: 1170
2023-10-24T07:09:19.697Z|00158|coverage|INFO|hmap_pathological 0.0/sec 0.000/sec 0.0014/sec total: 5
2023-10-24T07:09:19.697Z|00159|coverage|INFO|miniflow_malloc 0.0/sec 1.167/sec 0.0428/sec total: 194
2023-10-24T07:09:19.697Z|00160|coverage|INFO|flow_extract 0.0/sec 0.000/sec 0.0000/sec total: 4
2023-10-24T07:09:19.697Z|00161|coverage|INFO|dpif_port_add 0.0/sec 0.000/sec 0.0000/sec total: 3
2023-10-24T07:09:19.697Z|00162|coverage|INFO|dpif_flow_put_error 0.0/sec 0.000/sec 0.0000/sec total: 1
2023-10-24T07:09:19.697Z|00163|coverage|INFO|dpif_flow_put 0.0/sec 0.000/sec 0.0000/sec total: 25
2023-10-24T07:09:19.697Z|00164|coverage|INFO|dpif_flow_get 0.0/sec 0.000/sec 0.0000/sec total: 24
2023-10-24T07:09:19.697Z|00165|coverage|INFO|dpif_flow_flush 0.0/sec 0.000/sec 0.0000/sec total: 1
2023-10-24T07:09:19.697Z|00166|coverage|INFO|dpif_flow_del 0.0/sec 0.000/sec 0.0000/sec total: 24
2023-10-24T07:09:19.697Z|00167|coverage|INFO|dpif_execute 0.0/sec 0.000/sec 0.0000/sec total: 6
2023-10-24T07:09:19.697Z|00168|coverage|INFO|cmap_shrink 0.0/sec 0.667/sec 0.0222/sec total: 122
2023-10-24T07:09:19.697Z|00169|coverage|INFO|cmap_expand 0.2/sec 0.683/sec 0.0256/sec total: 149
2023-10-24T07:09:19.697Z|00170|coverage|INFO|ccmap_shrink 0.0/sec 0.000/sec 0.0000/sec total: 24
2023-10-24T07:09:19.697Z|00171|coverage|INFO|rev_flow_table 0.0/sec 0.000/sec 0.0000/sec total: 1
2023-10-24T07:09:19.697Z|00172|coverage|INFO|rev_reconfigure 0.0/sec 0.000/sec 0.0000/sec total: 3
2023-10-24T07:09:19.697Z|00173|coverage|INFO|ofproto_update_port 0.0/sec 0.000/sec 0.0000/sec total: 4
2023-10-24T07:09:19.697Z|00174|coverage|INFO|ofproto_flush 0.0/sec 0.000/sec 0.0000/sec total: 1
2023-10-24T07:09:19.697Z|00175|coverage|INFO|bridge_reconfigure 1.2/sec 0.183/sec 0.0039/sec total: 17
2023-10-24T07:09:19.697Z|00176|coverage|INFO|117 events never hit
2023-10-24T07:09:23.699Z|00177|poll_loop|INFO|Dropped 28 log messages in last 6 seconds (most recently, 1 seconds ago) due to excessive rate
2023-10-24T07:09:23.699Z|00178|poll_loop|INFO|wakeup due to [POLLIN] on fd 22 (FIFO pipe:[146609]) at ../vswitchd/bridge.c:421 (70% CPU usage)
2023-10-24T07:09:27.253Z|00179|memory|INFO|peak resident set size grew 55% in last 237.3 seconds, from 161120 kB to 249412 kB
2023-10-24T07:09:27.253Z|00180|memory|INFO|handlers:1 idl-cells-Open_vSwitch:188 idl-outstanding-txns-Open_vSwitch:1 ports:3 revalidators:1 rules:5
2023-10-24T07:09:29.710Z|00181|bond|DBG|bond balance-tcp:
2023-10-24T07:09:31.204Z|00182|bond|INFO|member ens3f1: link state up
2023-10-24T07:09:31.205Z|00183|bond|INFO|member ens3f1: enabled
2023-10-24T07:09:31.205Z|00184|bond|INFO|member ens3f0: link state up
2023-10-24T07:09:31.205Z|00185|bond|INFO|member ens3f0: enabled
2023-10-24T07:09:31.205Z|00186|bond|INFO|bond balance-tcp: active member is now ens3f1
2023-10-24T07:09:39.711Z|00187|bond|DBG|bond balance-tcp: ens3f1 0kB, ens3f0 0kB
Hi @mhou, could you try removing the following settings to see if it fixes the LACP member flapping?

ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x80000  <------ this should not be used anymore
ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-isolate=false    <------ this has nothing to do with the current feature and may cause issues
options:dpdk-lsc-interrupt=true                                    <------ this has nothing to do with the current feature and may cause issues

Hello Robin,

Please see comment 14. The key issue is that these commands were added; if I remove the commands below, LACP works fine. I will remove them and re-run the performance test.

ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-isolate=false
ovs-vsctl set Interface ens3f0 other_config:pmd-rxq-affinity="0:17"
ovs-vsctl set Interface ens3f1 other_config:pmd-rxq-affinity="0:18"

Please note that rx-steering=rss+lacp is a new feature, so QE needs to evaluate whether to write new test cases covering its compatibility with other parameters. If you have a moment, could you kindly share some tips?

This new feature creates a new rxq dedicated to receiving LACP traffic:

2023-10-24T07:09:19.696Z|00124|netdev_dpdk|INFO|ens3f0: rx-steering: redirected lacp traffic to rx queue 1  <------ this is the new queue
2023-10-24T07:09:19.697Z|00125|netdev_dpdk|INFO|ens3f0: rx-steering: redirected other traffic to rx queue 0

If you use pmd-rxq-affinity="0:17" you are explicitly telling OVS not to process rxq 1. If you really want to use this option, you would need to change it to:

pmd-rxq-affinity="0:17,1:17"

But I would advise not using it for this test.

Hello Robin,

Many thanks for your tips. May I share what I am thinking? If this feature is to be used in a customer scenario, the number of PMD CPUs is usually specified explicitly. Here is my test plan:

1. First, verify whether options:rx-steering=rss+lacp works correctly without any other configuration.
2. Second, QE designed a performance scenario that also covers functional verification. Could you please take a look?

ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-isolate=false
ovs-vsctl set Interface ens3f0 other_config:pmd-rxq-affinity="0:17,1:19"
ovs-vsctl set Interface ens3f1 other_config:pmd-rxq-affinity="0:18,1:19"

ovs-appctl dpif-netdev/pmd-rxq-show
Displaying last 60 seconds pmd usage %
pmd thread numa_id 0 core_id 17:
  isolated : false
  port: ens1f0np0  queue-id:  0 (enabled)  pmd usage:  0 %
  port: vhost1     queue-id:  0 (enabled)  pmd usage:  0 %
  overhead:  0 %
pmd thread numa_id 0 core_id 18:
  isolated : false
  port: ens1f1np1  queue-id:  0 (enabled)  pmd usage:  0 %
  port: vhost0     queue-id:  0 (enabled)  pmd usage:  0 %
pmd thread numa_id 0 core_id 19:
  port: ens1f0np0  queue-id:  1 (enabled)  pmd usage:  0 %
  port: ens1f1np1  queue-id:  1 (enabled)  pmd usage:  0 %

I don't think option 2 should be an issue, but I also don't think it is necessary to run any performance or functional test. I am not sure what side effects these options can have. @ktraynor, what would you recommend?

This bug did not meet the criteria for automatic migration and is being closed. If the issue remains, please open a new ticket in https://issues.redhat.com/browse/FDP
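Pulling the discussion above together, here is a minimal sketch of a configuration that follows Robin's advice: enable the LACP rx-steering feature on both bond members and, if pmd-rxq-affinity is used at all, make it cover the extra LACP queue (queue 1) so it is actually polled. The interface names (ens3f0/ens3f1) and core ids (17/18) are taken from the reproducer in this bug and are assumptions; adjust them for the local machine. This is untested here and only illustrates the intended setup, not an official procedure.

```shell
# Enable the dedicated LACP rx queue on each DPDK member port
# (rx-steering=rss+lacp is the new feature discussed in this bug).
ovs-vsctl set Interface ens3f0 options:rx-steering=rss+lacp
ovs-vsctl set Interface ens3f1 options:rx-steering=rss+lacp

# If rxq affinity is pinned explicitly, it must include queue 1 as well;
# pinning only "0:17" would leave the LACP queue unpolled and the bond
# members would flap, as seen in the logs above.
ovs-vsctl set Interface ens3f0 other_config:pmd-rxq-affinity="0:17,1:17"
ovs-vsctl set Interface ens3f1 other_config:pmd-rxq-affinity="0:18,1:18"

# Verify queue-to-PMD placement and that LACP negotiation stays stable.
ovs-appctl dpif-netdev/pmd-rxq-show
ovs-appctl lacp/show
ovs-appctl bond/show
```

The pmd-rxq-show output should then list both queue 0 and queue 1 of each member under a PMD thread, and lacp/show should report the members as current/attached even under high-rate traffic.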