Bug 2220900 - The i40e port goes down when it is connected to a bnxt card and the bnxt port is bound to vfio-pci on RHEL 9.2
Keywords:
Status: NEW
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: DPDK
Version: FDP 23.E
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kevin Traynor
QA Contact: liting
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-07-06 13:16 UTC by liting
Modified: 2023-07-26 03:18 UTC (History)
4 users (show)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker FD-2991 0 None None None 2023-07-06 13:17:44 UTC

Description liting 2023-07-06 13:16:44 UTC
Description of problem:


Version-Release number of selected component (if applicable):
kernel-5.14.0-284.21.1.el9_2

How reproducible:


Steps to Reproduce:
netqe22 10G bnxt <--directly connected--> netqe32 10G i40e
1. Install RHEL 9.2 on netqe22 and RHEL 8.4 on netqe32.
2. Check the i40e ports on netqe32; both are up:
12: ens3f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 3c:fd:fe:ad:7b:4c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3efd:feff:fead:7b4c/64 scope link 
       valid_lft forever preferred_lft forever
13: ens3f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 3c:fd:fe:ad:7b:4d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3efd:feff:fead:7b4d/64 scope link tentative 
       valid_lft forever preferred_lft forever


3. On netqe22, bind the two bnxt ports to vfio-pci:
[root@netqe22 ~]# driverctl -v set-override 0000:82:00.0 vfio-pci
driverctl: setting driver override for 0000:82:00.0: vfio-pci
driverctl: loading driver vfio-pci
driverctl: unbinding previous driver bnxt_en
driverctl: reprobing driver for 0000:82:00.0
driverctl: saving driver override for 0000:82:00.0
[root@netqe22 ~]# driverctl -v set-override 0000:82:00.1 vfio-pci
driverctl: setting driver override for 0000:82:00.1: vfio-pci
driverctl: loading driver vfio-pci
driverctl: unbinding previous driver bnxt_en
driverctl: reprobing driver for 0000:82:00.1
driverctl: saving driver override for 0000:82:00.1

4. Check the i40e ports on netqe32; both are now down:
12: ens3f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:ad:7b:4c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3efd:feff:fead:7b4c/64 scope link 
       valid_lft forever preferred_lft forever
13: ens3f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:ad:7b:4d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3efd:feff:fead:7b4d/64 scope link 
       valid_lft forever preferred_lft forever
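
The carrier loss in step 4 shows up as the NO-CARRIER flag replacing LOWER_UP in the `ip link` output. A minimal shell sketch that classifies an interface line by that flag; the helper name and sample lines are illustrative, and the format assumed is the iproute2 flag field captured above:

```shell
# Classify one line of `ip link show` output by carrier state.
# NO-CARRIER in the flags field means the link has no physical carrier,
# which is what the i40e ports report after the bnxt peer is unbound.
check_carrier() {
    case "$1" in
        *NO-CARRIER*) echo down ;;
        *)            echo up ;;
    esac
}

# Against the flag strings captured in this report:
check_carrier '12: ens3f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500'   # up
check_carrier '12: ens3f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500' # down
```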

Actual results:
The i40e ports go down after the bnxt ports are bound to vfio-pci on RHEL 9.2/9.0. RHEL 8.6 and RHEL 8.4 do not have this issue.

Expected results:
The i40e ports should remain up after the bnxt ports are bound to vfio-pci on RHEL 9.2/9.0.

Additional info:
https://beaker.engineering.redhat.com/jobs/8040156
https://beaker.engineering.redhat.com/jobs/8040335
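
As a cross-check on step 3, the driver bound to each PCI function can be read directly from the sysfs driver symlink, independently of driverctl. A hedged sketch; the optional sysfs-root parameter exists only so the helper can be exercised against a fake tree and is not needed on a real system:

```shell
# Print the driver currently bound to a PCI device, e.g. "vfio-pci" or
# "bnxt_en". $1 is the PCI address; $2 optionally overrides the sysfs
# root (defaults to /sys), which allows testing against a fake tree.
bound_driver() {
    root="${2:-/sys}"
    basename "$(readlink "$root/bus/pci/devices/$1/driver")"
}

# On netqe22 after step 3, this would be expected to print vfio-pci:
#   bound_driver 0000:82:00.0
```

To revert the binding when done, driverctl's `unset-override` subcommand (e.g. `driverctl unset-override 0000:82:00.0`) removes the saved override and reprobes the default driver.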

Comment 1 liting 2023-07-12 09:25:41 UTC
I tried the same steps on RHEL 8.6 and RHEL 8.4; both also have this issue.

Comment 4 liting 2023-07-14 09:37:24 UTC
RHEL 8.6 with OVS 3.1 also has this issue.
https://beaker.engineering.redhat.com/jobs/8064501

Comment 5 liting 2023-07-21 08:17:51 UTC
RHEL 9.2 also has this issue.
https://beaker.engineering.redhat.com/jobs/8078952

Comment 6 liting 2023-07-26 03:18:37 UTC
With a 25G bnxt_en card (BCM57414), this issue does not occur.
https://beaker.engineering.redhat.com/jobs/8106163

