Bug 1921462

Summary: i40e/ice driver: IPv6 ping fails when an SR-IOV VF is added to an OVS bridge
Product: Red Hat Enterprise Linux Fast Datapath
Component: openvswitch2.13
Version: FDP 21.A
Status: NEW
Severity: unspecified
Priority: unspecified
Reporter: liting <tli>
Assignee: David Marchand <dmarchan>
QA Contact: liting <tli>
CC: ctrautma, fleitner, jhsiao, ralongi, sassmann
Hardware: Unspecified
OS: Unspecified

Description liting 2021-01-28 02:48:30 UTC
Description of problem:
i40e driver: IPv6 ping fails when an SR-IOV VF is added to an OVS bridge

Version-Release number of selected component (if applicable):
[root@netqe22 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux release 8.3 (Ootpa)
[root@netqe22 ~]# uname -a
Linux netqe22.knqe.lab.eng.bos.redhat.com 4.18.0-240.el8.x86_64 #1 SMP Wed Sep 23 05:13:10 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@netqe22 ~]# rpm -qa|grep dpdk
dpdk-19.11.3-1.el8.x86_64
dpdk-tools-19.11.3-1.el8.x86_64

[root@netqe22 ~]# rpm -qa|grep openv
openvswitch-selinux-extra-policy-1.0-23.el8fdp.noarch
openvswitch2.13-2.13.0-77.el8fdp.x86_64

[root@netqe22 ~]# ethtool -i enp3s0f1
driver: i40e
version: 2.8.20-k
firmware-version: 6.01 0x800035cf 1.1747.0
expansion-rom-version: 
bus-info: 0000:03:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

How reproducible:


Steps to Reproduce:
The netqe22 i40e NIC is connected directly to the netqe32 i40e NIC.

Both servers are built with the same topology. On netqe22:
1. Create two VFs for PF enp3s0f0.
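The report does not show the command used; a common way to create the two VFs is through sysfs:
 # create two SR-IOV VFs on the PF; the VF PCI functions appear afterwards
 echo 2 > /sys/class/net/enp3s0f0/device/sriov_numvfs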

2. Bind VF 1 to DPDK, and add dpdk0 to the OVS bridge.
 /usr/share/dpdk/usertools/dpdk-devbind.py -b vfio-pci 0000:03:02.1
 systemctl restart openvswitch
 ovs-vsctl set Open_vSwitch . 'other_config={}'
 ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
 ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=1024,1024
 ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x500000500000
 ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
 ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:02.1
 ovs-vsctl add-port ovsbr0 dpdkvhostuserclient0 -- set Interface dpdkvhostuserclient0 type=dpdkvhostuserclient -- set Interface dpdkvhostuserclient0 options:vhost-server-path=/tmp/dpdkvhostuserclient0
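As a sanity check (not shown in the original report), the vfio-pci binding can be verified with the DPDK bind tool; 0000:03:02.1 should appear under the devices using a DPDK-compatible driver:
 /usr/share/dpdk/usertools/dpdk-devbind.py --status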

3. Set the MAC address of VF 1 to match the guest's MAC:
ip link set enp3s0f0 vf 1 mac $guest_mac
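The resulting VF MAC can be checked from the PF side (a verification step assumed here, not taken from the report):
 # the output lists one "vf N link/ether ..." line per VF
 ip link show enp3s0f0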

4. Inside the guest, configure IPv4 and IPv6 addresses on eth0:
ip addr add 20.0.0.2/24 dev eth0
ip addr add 2001:5c0:9168::2/24 dev eth0

5. On the netqe32 system, run the same commands as above.

[root@netqe22 ~]# ovs-vsctl show
0f33bdd1-3a4d-4b05-ba46-55413d87c26d
    Bridge ovsbr0
        datapath_type: netdev
        Port ovsbr0
            Interface ovsbr0
                type: internal
        Port dpdk0
            Interface dpdk0
                type: dpdk
                options: {dpdk-devargs="0000:03:02.1"}
        Port dpdkvhostuserclient0
            Interface dpdkvhostuserclient0
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/dpdkvhostuserclient0"}
    ovs_version: "2.13.2"
6. From the netqe32 guest, ping the addresses of the netqe22 guest.


Actual results:
On netqe32, the IPv4 ping succeeds but the IPv6 ping fails. With trust configured on for VF 1, the IPv6 ping still fails.
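The trust setting mentioned above is presumably enabled with the standard iproute2 command (the exact invocation is not shown in the report):
 ip link set enp3s0f0 vf 1 trust on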

Expected results:
The IPv6 ping succeeds.

Additional info:
https://beaker.engineering.redhat.com/jobs/5035715

Comment 1 liting 2021-08-06 14:35:34 UTC
The issue still exists on openvswitch2.15-2.15.0-26.
[root@netqe22 ~]# uname -r
4.18.0-305.el8.x86_64
[root@netqe22 ~]# rpm -qa|grep openvswitch
openvswitch2.15-2.15.0-26.el8fdp.x86_64
openvswitch-selinux-extra-policy-1.0-28.el8fdp.noarch
Beaker job link:
https://beaker.engineering.redhat.com/jobs/5677369

Comment 2 liting 2021-08-06 14:38:41 UTC
The issue also exists on the ice card.
https://beaker.engineering.redhat.com/jobs/5613952

Comment 3 liting 2021-08-09 06:29:58 UTC
After disabling spoof checking and turning trust on, the IPv6 ping succeeds.
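The workaround described here presumably corresponds to the standard iproute2 VF settings (the exact commands are not shown in the comment):
 # disable source MAC spoof checking and mark the VF as trusted
 ip link set enp3s0f0 vf 1 spoofchk off
 ip link set enp3s0f0 vf 1 trust on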

Comment 4 liting 2022-05-23 09:39:49 UTC
The ice card also has this issue: the IPv6 ping fails.
https://beaker.engineering.redhat.com/jobs/6644207

Comment 5 liting 2022-06-13 10:17:05 UTC
For fdp22.E, RHEL 9.0, ice card, this issue still exists.
https://beaker.engineering.redhat.com/jobs/6713629

Comment 6 liting 2022-07-15 02:54:05 UTC
For fdp22.F, ice card, this issue still exists.
https://beaker.engineering.redhat.com/jobs/6815954

Comment 7 liting 2022-08-30 12:24:32 UTC
For fdp22.G, ice card, this issue still exists.
https://beaker.engineering.redhat.com/jobs/6956356

Comment 8 Stefan Assmann 2022-09-12 13:57:58 UTC
What happens if you use the iavf driver from the kernel instead of dpdk?

Comment 9 liting 2022-10-09 10:44:16 UTC
(In reply to Stefan Assmann from comment #8)
> What happens if you use the iavf driver from the kernel instead of dpdk?

I ran the same cases with the kernel iavf driver, and the IPv6 ping succeeds. The IPv6 ping failure only occurs in the DPDK case.
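For reference, switching the VF from vfio-pci back to the kernel iavf driver can be done with the same bind tool (device address taken from the report; this step is assumed, not quoted):
 /usr/share/dpdk/usertools/dpdk-devbind.py -b iavf 0000:03:02.1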

Comment 10 liting 2022-11-11 06:48:03 UTC
For fdp22.K, this issue still exists.
i40e:
https://beaker.engineering.redhat.com/jobs/7210502

Comment 11 liting 2022-12-08 07:10:34 UTC
For fdp22.L, ice still has this issue.
https://beaker.engineering.redhat.com/jobs/7316969
https://beaker.engineering.redhat.com/jobs/7316975

Comment 12 liting 2023-02-15 07:14:49 UTC
For fdp23.A, ice still has this issue.
https://beaker.engineering.redhat.com/jobs/7529038

Comment 13 liting 2023-04-10 03:41:15 UTC
For fdp23.C, i40e no longer has this issue, but ice still does.
ice rhel9.2 job:
https://beaker.engineering.redhat.com/jobs/7719135
ice rhel8.6 job:
https://beaker.engineering.redhat.com/jobs/7715761