Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
The FDP team is no longer accepting new bugs in Bugzilla. Please report your issues under the FDP project in Jira. Thanks.

Bug 1892815

Summary: [OVS-DPDK][bnxt] Ping failed over OVS-dpdk/bnxt using openvswitch-2.9.7-1.el7fdn under 7.9
Product: Red Hat Enterprise Linux Fast Datapath
Component: openvswitch
Sub component: ovs-dpdk
Version: RHEL 7.7
Reporter: Jean-Tsung Hsiao <jhsiao>
Assignee: Timothy Redaelli <tredaelli>
QA Contact: Jean-Tsung Hsiao <jhsiao>
CC: ctrautma, fleitner, gcase, jhsiao, ktraynor, qding, tli, tredaelli
Status: CLOSED EOL
Severity: unspecified
Priority: unspecified
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Type: Bug
Regression: ---
Last Closed: 2024-03-11 19:27:48 UTC

Description Jean-Tsung Hsiao 2020-10-29 17:47:41 UTC
Description of problem: [OVS-DPDK][bnxt] Ping failed over OVS-dpdk/bnxt using openvswitch-2.9.7-1.el7fdn under 7.9

One of the failed automation jobs: https://beaker.engineering.redhat.com/jobs/4690877

The test bed is netqe10/bnxt <-> netqe29/i40e. As a workaround, we need to log in to netqe10 and run "systemctl restart openvswitch" to make ping work.

The same issue occurs on another test bed: netqe10/bnxt <-> netqe30/nfp


Version-Release number of selected component (if applicable):

openvswitch-2.9.7-1.el7fdn
RHEL 7.9

How reproducible: Reproducible


Steps to Reproduce:
1. Use one of the test beds mentioned above.

Actual results:

Ping over the OVS-DPDK bnxt port fails until openvswitch is restarted.

Expected results:

Ping succeeds without restarting openvswitch.


Additional info:

Comment 1 liting 2020-11-02 02:50:01 UTC
The PVP OVS-DPDK performance case also failed on the bnxt_en card.
beaker job:
https://beaker.engineering.redhat.com/jobs/4694206 
https://beaker.engineering.redhat.com/jobs/4692789

Comment 2 Jean-Tsung Hsiao 2021-04-16 19:53:21 UTC
This same issue happened again under 2.9.9.
It can be reproduced easily using the following loop:
[root@netqe10 home]# for i in {1..5}; do echo Test $i; date; systemctl stop openvswitch; systemctl start openvswitch; sh ovs-dpdk.sh; done
Test 1
Fri Apr 16 15:21:12 EDT 2021
default via 10.19.15.254 dev em3 proto dhcp metric 100
10.19.15.0/24 dev em3 proto kernel scope link src 10.19.15.47 metric 100
192.168.9.0/24 dev ovsbr0 proto kernel scope link src 192.168.9.106
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
link_state          : down
PING 192.168.9.105 (192.168.9.105) 56(84) bytes of data.
From 192.168.9.106 icmp_seq=1 Destination Host Unreachable
From 192.168.9.106 icmp_seq=2 Destination Host Unreachable
From 192.168.9.106 icmp_seq=3 Destination Host Unreachable
From 192.168.9.106 icmp_seq=4 Destination Host Unreachable
64 bytes from 192.168.9.105: icmp_seq=5 ttl=64 time=0.526 ms
64 bytes from 192.168.9.105: icmp_seq=6 ttl=64 time=0.071 ms
64 bytes from 192.168.9.105: icmp_seq=7 ttl=64 time=0.134 ms
64 bytes from 192.168.9.105: icmp_seq=8 ttl=64 time=0.089 ms
64 bytes from 192.168.9.105: icmp_seq=9 ttl=64 time=0.131 ms
64 bytes from 192.168.9.105: icmp_seq=10 ttl=64 time=0.129 ms

--- 192.168.9.105 ping statistics ---
10 packets transmitted, 6 received, +4 errors, 40% packet loss, time 9000ms
rtt min/avg/max/mdev = 0.071/0.180/0.526/0.156 ms, pipe 4
Test 2
Fri Apr 16 15:21:32 EDT 2021
default via 10.19.15.254 dev em3 proto dhcp metric 100
10.19.15.0/24 dev em3 proto kernel scope link src 10.19.15.47 metric 100
192.168.9.0/24 dev ovsbr0 proto kernel scope link src 192.168.9.106
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
link_state          : up
PING 192.168.9.105 (192.168.9.105) 56(84) bytes of data.
From 192.168.9.106 icmp_seq=1 Destination Host Unreachable
From 192.168.9.106 icmp_seq=2 Destination Host Unreachable
From 192.168.9.106 icmp_seq=3 Destination Host Unreachable
From 192.168.9.106 icmp_seq=4 Destination Host Unreachable
From 192.168.9.106 icmp_seq=5 Destination Host Unreachable
From 192.168.9.106 icmp_seq=6 Destination Host Unreachable
From 192.168.9.106 icmp_seq=7 Destination Host Unreachable
From 192.168.9.106 icmp_seq=8 Destination Host Unreachable
64 bytes from 192.168.9.105: icmp_seq=9 ttl=64 time=0.554 ms
64 bytes from 192.168.9.105: icmp_seq=10 ttl=64 time=0.089 ms

--- 192.168.9.105 ping statistics ---
10 packets transmitted, 2 received, +8 errors, 80% packet loss, time 9001ms
rtt min/avg/max/mdev = 0.089/0.321/0.554/0.233 ms, pipe 4
Test 3
Fri Apr 16 15:21:51 EDT 2021
default via 10.19.15.254 dev em3 proto dhcp metric 100
10.19.15.0/24 dev em3 proto kernel scope link src 10.19.15.47 metric 100
192.168.9.0/24 dev ovsbr0 proto kernel scope link src 192.168.9.106
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
link_state          : up
PING 192.168.9.105 (192.168.9.105) 56(84) bytes of data.
64 bytes from 192.168.9.105: icmp_seq=1 ttl=64 time=1001 ms
64 bytes from 192.168.9.105: icmp_seq=2 ttl=64 time=2.33 ms
64 bytes from 192.168.9.105: icmp_seq=3 ttl=64 time=0.154 ms
64 bytes from 192.168.9.105: icmp_seq=4 ttl=64 time=0.070 ms
64 bytes from 192.168.9.105: icmp_seq=5 ttl=64 time=0.160 ms
64 bytes from 192.168.9.105: icmp_seq=6 ttl=64 time=0.083 ms
64 bytes from 192.168.9.105: icmp_seq=7 ttl=64 time=0.082 ms
64 bytes from 192.168.9.105: icmp_seq=8 ttl=64 time=0.083 ms
64 bytes from 192.168.9.105: icmp_seq=9 ttl=64 time=0.068 ms
64 bytes from 192.168.9.105: icmp_seq=10 ttl=64 time=0.083 ms

--- 192.168.9.105 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9000ms
rtt min/avg/max/mdev = 0.068/100.508/1001.968/300.487 ms, pipe 2
Test 4
Fri Apr 16 15:22:11 EDT 2021
default via 10.19.15.254 dev em3 proto dhcp metric 100
10.19.15.0/24 dev em3 proto kernel scope link src 10.19.15.47 metric 100
192.168.9.0/24 dev ovsbr0 proto kernel scope link src 192.168.9.106
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
link_state          : up
PING 192.168.9.105 (192.168.9.105) 56(84) bytes of data.
64 bytes from 192.168.9.105: icmp_seq=1 ttl=64 time=1001 ms
64 bytes from 192.168.9.105: icmp_seq=2 ttl=64 time=2.32 ms
64 bytes from 192.168.9.105: icmp_seq=3 ttl=64 time=0.150 ms
64 bytes from 192.168.9.105: icmp_seq=4 ttl=64 time=0.087 ms
64 bytes from 192.168.9.105: icmp_seq=5 ttl=64 time=0.096 ms
64 bytes from 192.168.9.105: icmp_seq=6 ttl=64 time=0.082 ms
64 bytes from 192.168.9.105: icmp_seq=7 ttl=64 time=0.082 ms
64 bytes from 192.168.9.105: icmp_seq=8 ttl=64 time=0.095 ms
64 bytes from 192.168.9.105: icmp_seq=9 ttl=64 time=0.085 ms
64 bytes from 192.168.9.105: icmp_seq=10 ttl=64 time=0.083 ms

--- 192.168.9.105 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9000ms
rtt min/avg/max/mdev = 0.082/100.481/1001.736/300.419 ms, pipe 2
Test 5
Fri Apr 16 15:22:31 EDT 2021
default via 10.19.15.254 dev em3 proto dhcp metric 100
10.19.15.0/24 dev em3 proto kernel scope link src 10.19.15.47 metric 100
192.168.9.0/24 dev ovsbr0 proto kernel scope link src 192.168.9.106
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
link_state          : up
PING 192.168.9.105 (192.168.9.105) 56(84) bytes of data.
64 bytes from 192.168.9.105: icmp_seq=1 ttl=64 time=1002 ms
64 bytes from 192.168.9.105: icmp_seq=2 ttl=64 time=3.51 ms
64 bytes from 192.168.9.105: icmp_seq=3 ttl=64 time=0.149 ms
64 bytes from 192.168.9.105: icmp_seq=4 ttl=64 time=0.137 ms
64 bytes from 192.168.9.105: icmp_seq=5 ttl=64 time=0.142 ms
64 bytes from 192.168.9.105: icmp_seq=6 ttl=64 time=0.065 ms
64 bytes from 192.168.9.105: icmp_seq=7 ttl=64 time=0.080 ms
64 bytes from 192.168.9.105: icmp_seq=8 ttl=64 time=0.094 ms
64 bytes from 192.168.9.105: icmp_seq=9 ttl=64 time=0.085 ms
64 bytes from 192.168.9.105: icmp_seq=10 ttl=64 time=0.084 ms

--- 192.168.9.105 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9000ms
rtt min/avg/max/mdev = 0.065/100.714/1002.787/300.692 ms, pipe 2

*** ovs-dpdk.sh ***
[root@netqe10 home]# cat ovs-dpdk.sh
ovs-vsctl set Open_vSwitch . other_config={}
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x002002
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="4096,4096"
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xa00a00
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

ovs-vsctl --if-exists del-br ovsbr0
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 dpdk-10 -- set interface dpdk-10 type=dpdk ofport_request=10 \
    options:dpdk-devargs=0000:83:00.0 options:n_rxq=4

ip addr add 192.168.9.106/24 dev ovsbr0
ip link set ovsbr0 up
ip route
ovs-vsctl list interface dpdk-10 | grep link_state
ping 192.168.9.105 -c 10
[root@netqe10 home]#
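
The script above pings immediately after bringing the bridge up, and in the Test 1 run link_state was still down when the ping started. A minimal sketch of a retry helper that a test could use to separate a slow link-up from a true datapath failure; the `wait_for` function and the commented `dpdk-10` usage line are illustrative additions, not part of the original ovs-dpdk.sh:

```shell
# wait_for: retry a command until it succeeds or the retry budget runs out.
# Hypothetical helper, not part of the original ovs-dpdk.sh.
wait_for() {
    tries=$1; shift
    n=0
    while [ "$n" -lt "$tries" ]; do
        "$@" && return 0
        n=$((n + 1))
        sleep 1
    done
    return 1
}

# On the test bed this could gate the ping on the DPDK port's link, e.g.:
#   wait_for 30 sh -c 'ovs-vsctl get interface dpdk-10 link_state | grep -qx up' \
#       && ping 192.168.9.105 -c 10
```

If the ping still fails after link_state reports up (as in the Test 2 run), the problem is not merely a slow link-up.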

Comment 3 Gary Case 2021-05-21 21:12:45 UTC
Please let me know if we need to have Broadcom take a look at this bug.

Comment 9 Red Hat Bugzilla 2024-07-10 04:25:02 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days