Bug 1855331

Summary: [CNV] Windows 10 is showing 300-400% slowness in Disk IO, latency, and transfer rate on KVM
Product: Red Hat Enterprise Linux 7
Component: virtio-win
Sub component: distribution
Version: 7.7
Hardware: x86_64
OS: Windows
Status: CLOSED NOTABUG
Severity: high
Priority: high
Target Milestone: rc
Target Release: ---
Reporter: Jonathan Edwards <joedward>
Assignee: Vadim Rozenfeld <vrozenfe>
QA Contact: Yanhui Ma <yama>
CC: ailan, akamra, bbenshab, chayang, coli, danken, fdeutsch, iheim, ipinto, jhopper, jinzhao, juzhang, mdean, mimehta, mprivozn, ncredi, oyahud, rmohr, stefanha, vkuznets, vrozenfe, yama
Flags: areis: needinfo? (joedward)
Type: Bug
Regression: ---
Last Closed: 2020-10-26 14:07:01 UTC
Attachments:
  Windows 10 KVM v ESX disk IO test (flags: none)

Description Jonathan Edwards 2020-07-09 15:05:43 UTC
Created attachment 1700460 [details]
Windows 10 KVM v ESX disk IO test

Description of problem:
Observing excessive storage IO slowness with a Windows 10 guest on RHEL 7 KVM compared to a similar configuration on ESX: on the order of 300-400% worse on KVM.


Version-Release number of selected component (if applicable):
qemu-img-rhev-2.12.0-33.el7_7.4.x86_64
qemu-kvm-common-rhev-2.12.0-33.el7_7.4.x86_64
libvirt-daemon-driver-qemu-4.5.0-23.el7_7.1.x86_64
qemu-kvm-rhev-2.12.0-33.el7_7.4.x86_64
ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch

Windows Qemu Package
    c:\Program Files\Qemu-ga>qemu-ga.exe -V
    QEMU Guest Agent 100.0.0

Backing Store
  Local disk
    File system type: xfs
    /dev/sda5      xfs       6.9T  1.6T  5.4T  23% /mnt/datastore

    Partition 5 was created with:
    sgdisk -n 1:2048:+${uefi_size}M \
       -n 2:0:+${bootsz}M \
       -n 3:0:+${biosbootsz}M \
       -n 4:0:+${rootsz}M \
       -n 5:0:0 \
       -t 1:EF00 \
       -t 2:0700 \
       -t 3:EF02 \
       -t 4:8E00 \
       -t 5:8E00 /dev/${bid}
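    # GPT type codes used above: EF00 = EFI system partition, 0700 = Microsoft
    # basic data, EF02 = BIOS boot partition, 8E00 = Linux LVM.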

    Hardware (HPE) Controller:
    Smart Array P440ar
    RAID 5

ESX controller emulation
  LSI Logic SAS


How reproducible:
100%

Steps to reproduce:
1. Run iometer, fio, or winsat disk (pick your favorite disk IO testing tool) inside the Windows 10 guest on both the KVM and ESX hosts and compare the results; a sample fio invocation is sketched below.
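
As a rough illustration only (not the reporter's exact test), a random-read fio run inside the guest might look like the following; the file name fio.dat, the 4G size, and the queue depth are arbitrary choices, and the windowsaio ioengine assumes the native Windows build of fio:

    fio --name=randread --ioengine=windowsaio --direct=1 ^
        --filename=fio.dat --size=4G --rw=randread --bs=4k ^
        --iodepth=32 --numjobs=1 --runtime=60 --time_based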

Results:
https://docs.google.com/spreadsheets/d/1ZP8V_Q0-peIqJJO5aigGmzGn6YJwNXf6HkI1iUJ_x_s

Comment 45 Fabian Deutsch 2020-10-23 21:39:44 UTC
Jenifer, I suppose we know the root cause (hyperv) and can close this bug?
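
(For context: missing Hyper-V enlightenments are a common cause of poor Windows guest performance on KVM. A minimal sketch of the relevant libvirt domain XML follows, assuming a libvirt-managed guest; the exact set of enlightenments worth enabling depends on the qemu-kvm-rhev and Windows versions, so treat this as illustrative rather than the fix applied here.)

    <!-- Hyper-V enlightenments: paravirtual hints that reduce costly VM exits
         for Windows guests (illustrative set, not the change made in this bug) -->
    <features>
      <acpi/>
      <apic/>
      <hyperv>
        <relaxed state='on'/>
        <vapic state='on'/>
        <spinlocks state='on' retries='8191'/>
        <synic state='on'/>
        <stimer state='on'/>
      </hyperv>
    </features>
    <clock offset='localtime'>
      <timer name='hypervclock' present='yes'/>
    </clock>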