Bug 1949738 - The 1queue ovs dpdk pvp case didn't receive any packet on bnxt_en nic on openvswitch-2.9.9-1.el7
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: openvswitch
Version: FDP 19.C
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Timothy Redaelli
QA Contact: qding
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-04-15 00:24 UTC by liting
Modified: 2021-08-06 13:06 UTC
5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-06 13:06:37 UTC
Target Upstream Version:
Embargoed:



Description liting 2021-04-15 00:24:55 UTC
Description of problem:
The 1-queue OVS-DPDK PVP case does not receive any packets on a bnxt_en NIC with openvswitch-2.9.9-1.el7. This appears to be the same issue as bug 1655858.

Version-Release number of selected component (if applicable):
[root@netqe22 ~]# rpm -qa|grep openv
kernel-kernel-networking-openvswitch-common-2.0-122.noarch
openvswitch-selinux-extra-policy-1.0-18.el7fdp.noarch
openvswitch-2.9.9-1.el7fdp.x86_64
[root@netqe22 ~]# uname -a
Linux netqe22.knqe.lab.eng.bos.redhat.com 3.10.0-1160.el7.x86_64 #1 SMP Tue Aug 18 14:50:17 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux


How reproducible:


Steps to Reproduce:
Run the 1queue/2pmd or 1queue/4pmd case; detailed case steps follow.
1. On the host, run the following commands.
rm -rf /var/run/openvswitch/
rm -rf /etc/openvswitch/
 /usr/bin/ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
 /usr/sbin/ovsdb-server --remote=punix:/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile=/var/run/openvswitch/ovsdb-server.pid --overwrite-pidfile
 /usr/bin/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
 /usr/bin/ovs-vsctl --no-wait set Open_vSwitch . other_config:vhost-iommu-support=true
 /usr/bin/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x2
 /usr/bin/ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=1024,1024
 /bin/bash -c "sudo -E /usr/sbin/ovs-vswitchd --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --overwrite-pidfile --log-file=/tmp/vswitchd.log"
 /usr/bin/ovs-vsctl --timeout 10 add-br br0 -- set bridge br0 datapath_type=netdev
 /usr/bin/ovs-vsctl --timeout 10 set Open_vSwitch . other_config:max-idle=30000
 /usr/bin/ovs-vsctl --timeout 10 set Open_vSwitch . other_config:pmd-cpu-mask=0xa00000a00000
 /usr/bin/ovs-vsctl --timeout 10 add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:82:00.0 options:n_rxq=1
 /usr/bin/ovs-vsctl --timeout 10 add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:82:00.1 options:n_rxq=1
 /usr/bin/ovs-vsctl --timeout 10 add-port br0 dpdkvhostuserclient0 -- set Interface dpdkvhostuserclient0 type=dpdkvhostuserclient -- set Interface dpdkvhostuserclient0 options:vhost-server-path=/var/run/openvswitch/dpdkvhostuserclient0
 /usr/bin/ovs-vsctl --timeout 10 add-port br0 dpdkvhostuserclient1 -- set Interface dpdkvhostuserclient1 type=dpdkvhostuserclient -- set Interface dpdkvhostuserclient1 options:vhost-server-path=/var/run/openvswitch/dpdkvhostuserclient1
 /usr/bin/ovs-ofctl -O OpenFlow13 --timeout 10 del-flows br0
 /usr/bin/ovs-ofctl -O OpenFlow13 --timeout 10 add-flow br0 in_port=1,idle_timeout=0,action=output:3
 /usr/bin/ovs-ofctl -O OpenFlow13 --timeout 10 add-flow br0 in_port=3,idle_timeout=0,action=output:1
 /usr/bin/ovs-ofctl -O OpenFlow13 --timeout 10 add-flow br0 in_port=4,idle_timeout=0,action=output:2
 /usr/bin/ovs-ofctl -O OpenFlow13 --timeout 10 add-flow br0 in_port=2,idle_timeout=0,action=output:4
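Before starting the guest, it may help to confirm that each single rx queue is actually assigned to and polled by a PMD thread. This is a verification sketch, not part of the original test steps; the port names match the setup above.
 # rxq-to-PMD assignment: dpdk0, dpdk1 and both vhost ports should each be listed once
 /usr/bin/ovs-appctl dpif-netdev/pmd-rxq-show
 # per-PMD packet and cycle counters; the PMDs polling dpdk0/dpdk1 should show rx activity under load
 /usr/bin/ovs-appctl dpif-netdev/pmd-stats-show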

2. Start guest
sudo -E taskset -c 3,5,29 /usr/libexec/qemu-kvm -m 8192 -smp 3 -cpu host,migratable=off -drive if=ide,file=rhel7.6-vsperf-1Q-viommu.qcow2 -boot c --enable-kvm -monitor unix:/tmp/vm0monitor,server,nowait -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc -nographic -vnc :0 -name Client0 -snapshot -net none -no-reboot -M q35,kernel-irqchip=split -device intel-iommu,device-iotlb=on,intremap,caching-mode=true -device pcie-root-port,id=root.1,slot=1 -device pcie-root-port,id=root.2,slot=2 -device pcie-root-port,id=root.3,slot=3 -device pcie-root-port,id=root.4,slot=4 -chardev socket,id=char0,path=/var/run/openvswitch/dpdkvhostuserclient0,server -netdev type=vhost-user,id=net1,chardev=char0,vhostforce,queues=1 -device virtio-net-pci,mac=00:00:00:00:00:01,iommu_platform=on,ats=on,bus=root.2,netdev=net1,csum=off,mrg_rxbuf=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,rx_queue_size=1024,mq=on,vectors=4 -chardev socket,id=char1,path=/var/run/openvswitch/dpdkvhostuserclient1,server -netdev type=vhost-user,id=net2,chardev=char1,vhostforce,queues=1 -device virtio-net-pci,mac=00:00:00:00:00:02,iommu_platform=on,ats=on,bus=root.3,netdev=net2,csum=off,mrg_rxbuf=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,rx_queue_size=1024,mq=on,vectors=4
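Before moving on to the guest-side DPDK setup, the vswitchd log can confirm that the dpdkvhostuserclient ports connected to the sockets created by QEMU. This is a hedged check; the log path comes from the ovs-vswitchd invocation in step 1.
 # vhost-user connection/negotiation messages for dpdkvhostuserclient0/1 should appear here
 grep -i vhost /tmp/vswitchd.log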

3. On the guest, run the following commands.
sysctl vm.nr_hugepages=1
 mkdir -p /dev/hugepages
 mount -t hugetlbfs hugetlbfs /dev/hugepages
 rpm -ivh /root/dpdkrpms/1711-15/dpdk*.rpm
 modprobe vfio
 modprobe vfio-pci
 /usr/share/dpdk/usertools/dpdk-devbind.py -b vfio-pci 02:00.0 
 /usr/share/dpdk/usertools/dpdk-devbind.py -b vfio-pci 03:00.0
 /usr/share/dpdk/usertools/dpdk-devbind.py --status
 cd /usr/bin
 ./testpmd -l 0,1,2 -n 4 --socket-mem 1024 -- --burst=64 -i --txqflags=0xf00 --disable-hw-vlan --nb-cores=2 --txq=1 --rxq=1 --forward-mode=mac --auto-start
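Once traffic is started in step 4, testpmd's own counters on the guest can show whether the virtio queues receive anything. This is a verification sketch run from the interactive testpmd prompt started above.
 testpmd> show port info 0        # confirms 1 rx queue / 1 tx queue are configured
 testpmd> show port stats all     # RX-packets/TX-packets on both virtio ports should increase under load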

4. Send RFC 2544 traffic using Xena.

Actual results:
In the 1queue/2pmd and 1queue/4pmd cases, the dpdk ports did not receive any packets and the measured throughput was 0. The 2queue and 4queue cases work well.

[root@netqe22 conf]# ovs-ofctl dump-flows br0
 cookie=0x0, duration=305.278s, table=0, n_packets=1288, n_bytes=77348, in_port=1 actions=output:3
 cookie=0x0, duration=305.253s, table=0, n_packets=1270, n_bytes=76268, in_port=3 actions=output:1
 cookie=0x0, duration=305.228s, table=0, n_packets=1288, n_bytes=77348, in_port=4 actions=output:2
 cookie=0x0, duration=305.204s, table=0, n_packets=1270, n_bytes=76268, in_port=2 actions=output:4
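To narrow down where the packets stop, the bridge port counters can be compared with the per-interface statistics. This is a diagnostic sketch using the port names from the reproduction steps; it was not part of the original report.
 # per-OpenFlow-port rx/tx counters on the bridge
 ovs-ofctl -O OpenFlow13 dump-ports br0
 # driver-level counters for the bnxt_en port and the vhost port
 ovs-vsctl get Interface dpdk0 statistics
 ovs-vsctl get Interface dpdkvhostuserclient0 statistics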
Expected results:
The dpdk ports should receive packets and the 1queue case should reach non-zero throughput, as the 2queue and 4queue cases do.

Additional info:
https://beaker.engineering.redhat.com/jobs/5272091

Comment 1 Jean-Tsung Hsiao 2021-04-22 13:28:54 UTC
Looks like the culprit is ovs-dpdk bridge:
[root@netqe10 jhsiao]# rpm -q openvswitch
openvswitch-2.9.9-1.el7fdp.x86_64
[root@netqe10 jhsiao]#

With n_rxq=1 for all four interfaces packets from trex got dropper at dpdk interfaces.

[root@netqe10 jhsiao]# ovs-vsctl show
576d8f08-9348-451b-a3a6-4e90b4d4993d
    Bridge "ovsbr0"
        Port "vhost0"
            Interface "vhost0"
                type: dpdkvhostuserclient
                options: {n_rxq="1", vhost-server-path="/tmp/vhost0"}
        Port "vhost1"
            Interface "vhost1"
                type: dpdkvhostuserclient
                options: {n_rxq="1", vhost-server-path="/tmp/vhost1"}
        Port "dpdk-10"
            Interface "dpdk-10"
                type: dpdk
                options: {dpdk-devargs="0000:83:00.0", n_rxq="1"}
        Port "dpdk-11"
            Interface "dpdk-11"
                type: dpdk
                options: {dpdk-devargs="0000:83:00.1", n_rxq="1"}
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
    ovs_version: "2.9.9"
[root@netqe10 jhsiao]#
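Since the 2-queue and 4-queue configurations work, one way to confirm the dependency on n_rxq would be to raise the queue count on the same ports and re-check the PMD assignment. This is a comparison sketch using the port names from the ovs-vsctl output above, not a fix.
 ovs-vsctl set Interface dpdk-10 options:n_rxq=2
 ovs-vsctl set Interface dpdk-11 options:n_rxq=2
 # both rx queues of each dpdk port should now be listed and polled
 ovs-appctl dpif-netdev/pmd-rxq-show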

Comment 2 Jean-Tsung Hsiao 2021-04-22 13:35:59 UTC
(In reply to Jean-Tsung Hsiao from comment #1)
> Looks like the culprit is ovs-dpdk bridge:
> [root@netqe10 jhsiao]# rpm -q openvswitch
> openvswitch-2.9.9-1.el7fdp.x86_64
> [root@netqe10 jhsiao]#
> 
> With n_rxq=1 for all four interfaces packets from trex got dropper at dpdk
> interfaces.
... got dropped at dpdk interfaces

Comment 3 Christian Trautman 2021-07-28 15:10:54 UTC
As 2.9 is no longer being updated,  should we just close this bug?

Comment 4 Franck Baudin 2021-08-06 13:06:37 UTC
(In reply to Christian Trautman from comment #3)
> As 2.9 is no longer being updated,  should we just close this bug?

Yes, let me close it.

