Bug 1973016

Summary: Poor network performance with Win2022
Product: Red Hat Enterprise Linux 8
Component: virtio-win
virtio-win sub component: virtio-win-prewhql
Reporter: Quan Wenli <wquan>
Assignee: Meirav Dean <mdean>
QA Contact: Quan Wenli <wquan>
Docs Contact:
Status: CLOSED DUPLICATE
Severity: high
Priority: high
Flags: pm-rhel: mirror+
Version: 8.5
Target Milestone: beta
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-06-17 06:09:00 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Quan Wenli 2021-06-17 06:06:39 UTC
Description of problem:
Comparing network performance between Win2022 and Win2019 with virtio-win-prewhql-0.1-199, we see two performance issues:

1. For TCP_STREAM tests:

  1.1 TX: there is about a 17%-60% performance gap compared with Win2019 [1]
  1.2 RX: almost no difference between Win2022 and Win2019 [1]

2. For the TCP_RR test, there is about a 30% performance drop compared with Win2019 [2] (see the gap computation sketch after the references)

[1] TCP_STREAM: http://10.73.60.69/results/regression/2021-6-10-network-Win2022/netperf.with_jumbo.host_guest.html

[2] TCP_RR: http://10.73.60.69/results/regression/2021-6-10-network-Win2022/netperf.default.host_guest.html
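
The gap percentages above are throughput differences relative to Win2019. A minimal sketch of the computation (the throughput values below are placeholders, not numbers from the linked reports):

# hypothetical TCP_STREAM TX throughputs in Mbit/s; substitute the values from [1]
win2019=9500
win2022=7600
awk -v a="$win2019" -v b="$win2022" 'BEGIN { printf "gap: %.1f%%\n", (a - b) / a * 100 }'
# prints "gap: 20.0%" for these placeholder values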

Version-Release number of selected component (if applicable):

userspace:  qemu-kvm-6.0.50-18.scrmod+el8.5.0+11348+c852f1ac.wrb210609.x86_64
host kernel:  4.18.0-310.el8.x86_64
virtio-win-prewhql: virtio-win-prewhql-0.1-199


How reproducible:


Steps to Reproduce:
1. Boot a VM with virtio-net/vhost like:

numactl \
    -m 1  /usr/libexec/qemu-kvm \
    -S  \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 4096 \
    -object memory-backend-ram,size=4096M,id=mem-machine_mem  \
    -smp 4,maxcpus=4,cores=2,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server-noTSX',hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0xfff,hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \
    -chardev socket,server=on,id=qmp_id_qmpmonitor1,path=/tmp/avocado_63wgacu1/monitor-qmpmonitor1-20210610-033654-ki0n0WVw,wait=off  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,server=on,id=qmp_id_catch_monitor,path=/tmp/avocado_63wgacu1/monitor-catch_monitor-20210610-033654-ki0n0WVw,wait=off  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idVGnkeH \
    -chardev socket,server=on,id=chardev_serial0,path=/tmp/avocado_63wgacu1/serial-serial0-20210610-033654-ki0n0WVw,wait=off \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20210610-033654-ki0n0WVw,path=/tmp/avocado_63wgacu1/seabios-20210610-033654-ki0n0WVw,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20210610-033654-ki0n0WVw,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/root/avocado/data/avocado-vt/vl_avocado-vt-vm1_image1.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-2,addr=0x0 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:be:3e:64:42:96,id=id57ApcZ,netdev=idq84UnN,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=idq84UnN,vhost=on,vhostfd=20,fd=16 \
    -device rtl8139,mac=9a:37:37:37:37:7e,id=idpNfodd,netdev=idLwmFfj,bus=pcie-pci-bridge-0,addr=0x1  \
    -netdev tap,id=idLwmFfj,fd=21 \
    -blockdev node-name=file_cd1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/iso/windows/winutils.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
    -device ide-cd,id=cd1,drive=drive_cd1,bootindex=1,write-cache=on,bus=ide.0,unit=0  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5

2. Run the netperf-2.6.0 server (netserver) in the guest.
3. Run "netperf -H 192.168.58.73 (guest IP) -l 67.5 -t TCP_STREAM -- -m 65535" on an external host.
4. Run "netperf -D 1 -H 192.168.58.73 -l 67.5 -t TCP_RR -v 1 -- -r 64,64" on the external host (see the sketch after these steps).

Actual results:

Win2022's performance is worse than Win2019 for TCP_STREAM TX and TCP_RR.

Expected results:

Win2022's performance should not be worse than Win2019's.

Additional info:

Comment 1 Quan Wenli 2021-06-17 06:09:00 UTC
Closing as a duplicate of Bug 1973017.

*** This bug has been marked as a duplicate of bug 1973017 ***