Description of problem:

Version-Release number of selected component (if applicable):
[root@dell-per730-56 ~]# rpm -qa|grep openv
kernel-kernel-networking-openvswitch-common-2.0-67.noarch
openvswitch2.11-2.11.0-53.20200327gita4efc59.el8fdp.x86_64
openvswitch-selinux-extra-policy-1.0-22.el8fdp.noarch
[root@dell-per730-56 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.2 (Ootpa)
[root@dell-per730-56 ~]# uname -a
Linux dell-per730-56.rhts.eng.pek2.redhat.com 4.18.0-193.el8.x86_64 #1 SMP Fri Mar 27 14:35:58 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

How reproducible:

Steps to Reproduce:
1. Build the OVS PVP kernel datapath topology
ip link set dev enp130s0f0 up
/usr/bin/ovs-vsctl --timeout 10 add-port br0 enp130s0f0
ip link set dev enp130s0f1 up
/usr/bin/ovs-vsctl --timeout 10 add-port br0 enp130s0f1
ip tuntap del tap0 mode tap multi_queue
ip tuntap add tap0 mode tap multi_queue
ip link set dev tap0 up
/usr/bin/ovs-vsctl --timeout 10 add-port br0 tap0
ip tuntap del tap1 mode tap multi_queue
ip tuntap add tap1 mode tap multi_queue
ip link set dev tap1 up
/usr/bin/ovs-vsctl --timeout 10 add-port br0 tap1
/usr/bin/ovs-ofctl -O OpenFlow13 --timeout 10 del-flows br0
/usr/bin/ovs-ofctl -O OpenFlow13 --timeout 10 add-flow br0 idle_timeout=0,in_port=1,action=output:3
/usr/bin/ovs-ofctl -O OpenFlow13 --timeout 10 add-flow br0 idle_timeout=0,in_port=3,action=output:1
/usr/bin/ovs-ofctl -O OpenFlow13 --timeout 10 add-flow br0 idle_timeout=0,in_port=4,action=output:2
/usr/bin/ovs-ofctl -O OpenFlow13 --timeout 10 add-flow br0 idle_timeout=0,in_port=2,action=output:4
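The flows above assume enp130s0f0, enp130s0f1, tap0 and tap1 come up as OpenFlow ports 1-4 in the order they were added, and that br0 already exists (e.g. from a prior "ovs-vsctl add-br br0", which is not shown in the steps). A quick sanity check of that port mapping before sending traffic:

/usr/bin/ovs-ofctl -O OpenFlow13 --timeout 10 show br0
/usr/bin/ovs-ofctl -O OpenFlow13 --timeout 10 dump-flows br0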
2. Start the guest

1 queue case:
/bin/bash -c "sudo -E taskset -c 3,5,29 /usr/libexec/qemu-kvm -m 8192 -smp 3 -cpu host,migratable=off -drive if=ide,file=rhel8.2-vsperf-1Q-noviommu.qcow2 -boot c --enable-kvm -monitor unix:/tmp/vm0monitor,server,nowait -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc -nographic -vnc :0 -name Client0 -snapshot -net none -no-reboot -M q35,kernel-irqchip=split -device intel-iommu,device-iotlb=on,intremap,caching-mode=true -device pcie-root-port,id=root.1,slot=1 -device pcie-root-port,id=root.2,slot=2 -device pcie-root-port,id=root.3,slot=3 -device pcie-root-port,id=root.4,slot=4 -netdev type=tap,id=eth0,script=no,downscript=no,ifname=tap0,vhost=on -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=eth0,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off -netdev type=tap,id=eth1,script=no,downscript=no,ifname=tap1,vhost=on -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=eth1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"

2 queue case:
/bin/bash -c "sudo -E taskset -c 3,5,29,7,31 /usr/libexec/qemu-kvm -m 8192 -smp 5 -cpu host,migratable=off -drive if=ide,file=rhel8.2-vsperf-2Q-noviommu.qcow2 -boot c --enable-kvm -monitor unix:/tmp/vm0monitor,server,nowait -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc -nographic -vnc :0 -name Client0 -snapshot -net none -no-reboot -M q35,kernel-irqchip=split -device intel-iommu,device-iotlb=on,intremap,caching-mode=true -device pcie-root-port,id=root.1,slot=1 -device pcie-root-port,id=root.2,slot=2 -device pcie-root-port,id=root.3,slot=3 -device pcie-root-port,id=root.4,slot=4 -netdev type=tap,id=eth0,queues=2,script=no,downscript=no,ifname=tap0,vhost=on -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=eth0,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mq=on,vectors=6 -netdev type=tap,id=eth1,queues=2,script=no,downscript=no,ifname=tap1,vhost=on -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=eth1,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mq=on,vectors=6"

3. Run the following commands inside the guest
ip link add br0 type bridge
ip link set dev enp0s6 master br0
ip link set dev enp0s7 master br0
ip addr add 1.1.1.5/16 dev br0
ip link set dev br0 up
arp -s 1.1.1.10 3c:fd:fe:ad:bf:c4
arp -s 1.1.2.10 3c:fd:fe:ad:bf:c5
sysctl -w net.ipv4.ip_forward=1
tuned-adm profile network-latency
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.eth0.rp_filter=0
sysctl -w net.ipv4.conf.eth1.rp_filter=0

4. Use TRex to send the traffic

Actual results:
The 1 queue case got 0.6 Mpps; the 2 queue case got 0.3-0.4 Mpps. 2 queue performance is lower than 1 queue.

Expected results:
The 2 queue case should get higher performance than the 1 queue case.

Additional info:
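For the 2 queue case it is worth confirming that both virtio queue pairs are actually enabled in the guest and that the vhost threads land on different host CPUs; a minimal check, using the guest NIC names from step 3:

Inside the guest:
ethtool -l enp0s6
ethtool -l enp0s7
# enable both combined queue pairs if fewer are currently active
ethtool -L enp0s6 combined 2
ethtool -L enp0s7 combined 2

On the host:
ps -eLo pid,psr,comm | grep vhost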
Thanks for reporting the bug. OVS 2.11 is EOL, so it only receives critical fixes at this point. Could you confirm whether this issue also happens with OVS 3.1? If yes, please update the component so we can prioritize it properly.

Thanks,
fbl