Bug 1023894 - [virtio-win][viostor] Write/Randwrite IOPS is poor when block size is 256k and iodepth is 64
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: virtio-win
Version: 7.0
Hardware: x86_64 Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 7.0
Assigned To: Vadim Rozenfeld
QA Contact: Yanhui Ma
Depends On:
Blocks: 1288337
Reported: 2013-10-28 05:23 EDT by Xiaomei Gao
Modified: 2017-11-09 02:38 EST (History)
CC: 14 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-11-09 02:38:36 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Xiaomei Gao 2013-10-28 05:23:50 EDT
Description of problem:
On the Win2008R2/Win2012 platforms, the write/randwrite IOPS of the virtio_blk driver degrade by ~8%-17% compared to the ide driver, but only when the block size is 256k and the iodepth is 64.

Version-Release number of selected component (if applicable):
qemu-kvm-0.12.1.2-2.405.el6.x86_64
kernel-2.6.32-420.el6.x86_64
virtio-win-prewhql-0.1-72

How reproducible:
100%

Steps to Reproduce:
1. Setup
   - Benchmark: fio (direct I/O)
   - Storage backend: raw SSD
   - Block I/O elevator: deadline (see the sketch below)
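
   A minimal sketch of selecting the deadline elevator for the host SSD,
   assuming the /dev/sdb backing device from the qemu command in step 2:

      echo deadline > /sys/block/sdb/queue/scheduler
      cat /sys/block/sdb/queue/scheduler   # the active elevator is shown in brackets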

2. Boot up a Win2008R2/Win2012 guest with the raw SSD
    /usr/libexec/qemu-kvm  \
    -name 'virt-tests-vm1' \
    -nodefaults \
    -drive file='/usr/local/autotest/tests/virt/shared/data/images/win2008r2-64.raw',index=0,if=none,id=drive-virtio-disk1,media=disk,cache=none,snapshot=off,format=raw,aio=native \
    -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk1,bootindex=0 \
    -drive file='/dev/sdb',index=2,if=none,id=drive-virtio-disk2,media=disk,cache=none,snapshot=off,format=raw,aio=native \
    -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,bootindex=1 \
    -device rtl8139,netdev=idmnve6x,mac='9a:37:37:37:37:8e',bus=pci.0,addr=0x6,id='idosVDhd' \
    -netdev tap,id=idmnve6x \
    -m 4096 \
    -smp 2,maxcpus=2,cores=1,threads=1,sockets=2 \
    -cpu 'Westmere' \
    -M rhel6.5.0 \
    -drive file='/usr/local/autotest/tests/virt/shared/data/isos/windows/winutils.iso',index=1,if=none,id=drive-ide0-0-0,media=cdrom,format=raw \
    -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0 \
    -vnc :0 \
    -vga cirrus \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -device sga \
    -enable-kvm
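
   (Comment 7 below asks about retesting with the multi-queue feature. On a
   newer QEMU that supports virtio-blk multi-queue, the data-disk device line
   could hypothetically be extended as follows; the num-queues property and
   the queue count of 4 are assumptions, not part of the original setup:)

     -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,num-queues=4,bootindex=1 \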

3. Run fio in the Windows guest (--rw=write/randwrite denotes two separate runs; both are spelled out below)
   C:\fio-2.0.15-x64\fio.exe --rw=write/randwrite --bs=256k --iodepth=64 --runtime=1m --direct=1 --filename=\\.\PHYSICALDRIVE1 --name=job1 --ioengine=windowsaio --thread --group_reporting --numjobs=16 --size=512MB --time_based
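
   The two concrete invocations behind the write/randwrite shorthand:
   C:\fio-2.0.15-x64\fio.exe --rw=write --bs=256k --iodepth=64 --runtime=1m --direct=1 --filename=\\.\PHYSICALDRIVE1 --name=job1 --ioengine=windowsaio --thread --group_reporting --numjobs=16 --size=512MB --time_based
   C:\fio-2.0.15-x64\fio.exe --rw=randwrite --bs=256k --iodepth=64 --runtime=1m --direct=1 --filename=\\.\PHYSICALDRIVE1 --name=job1 --ioengine=windowsaio --thread --group_reporting --numjobs=16 --size=512MB --time_based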

Actual results:
- Window2008r2 platform
  +-----------+------------+---------+----------+----------------+---------+
  | Category  | block_size | iodepth | IDE IOPS | VIRTIO_BLK IOPS| VS      |
  +-----------+------------+---------+----------+----------------+---------+
  | Write     |    256K    |   64    |   359    | 296            |-17.549% |
  +-----------+------------+---------+----------+----------------+---------+
  | Randwrite |    256K    |   64    |   330    | 276            |-16.330% |
  +-----------+------------+---------+----------+----------------+---------+

- Win2012.x86_64 platform
  +-----------+------------+---------+----------+----------------+---------+
  | Category  | block_size | iodepth | IDE IOPS | VIRTIO_BLK IOPS| VS      |
  +-----------+------------+---------+----------+----------------+---------+
  | Write     |    256K    |   64    |   347    | 316            |-8.934%  |
  +-----------+------------+---------+----------+----------------+---------+
  | Randwrite |    256K    |   64    |   324    | 276            |-14.815% |
  +-----------+------------+---------+----------+----------------+---------+
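
  (The VS column appears to be (VIRTIO_BLK - IDE) / IDE, e.g. for the
  Win2008r2 write row: (296 - 359) / 359 = -17.549%.)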

Expected results:
The performance of the virtio_blk driver should be better than that of the ide driver.

Additional info:
Comment 2 Ronen Hod 2014-01-02 06:03:18 EST
In all the other cases there was no regression (actually an improvement). Deferring to 7.1.
Comment 7 Vadim Rozenfeld 2017-08-17 23:16:43 EDT
Can we give it a try on a 7.5 host (especially with the multi-queue feature enabled)?

It will also be quite useful to compare ide vs virtio-blk vs virtio-scsi performance data again.
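
For reference, a minimal virtio-scsi counterpart to the virtio-blk data-disk
stanza in the reproduction command above might look like this (the controller
id and the reuse of addr=0x5 are assumptions):

    -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x5 \
    -device scsi-hd,bus=scsi0.0,drive=drive-virtio-disk2,bootindex=1 \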

Thanks,
Vadim
Comment 8 lijin 2017-08-17 23:29:51 EDT
Hi maya,

Could you help handle comment #7?
Comment 9 Yanhui Ma 2017-08-18 01:40:29 EDT
(In reply to Vadim Rozenfeld from comment #7)
> Can we give it a try on a 7.5 host (especially with the multi-queue feature enabled)?
> 

Hello Vadim,

We don't yet have a tree, qemu, or kernel for 7.5 downstream. Should I use upstream, or try it on 7.4?

Thanks,
Yanhui
> It will also be quite useful to compare ide vs virtio-blk vs virtio-scsi
> performance data again.
> 
> Thanks,
> Vadim
Comment 10 Vadim Rozenfeld 2017-08-18 02:42:34 EDT
(In reply to Yanhui Ma from comment #9)
> (In reply to Vadim Rozenfeld from comment #7)
> > Can we give it a try on a 7.5 host (especially with the multi-queue feature enabled)?
> > 
> 
> Hello Vadim,
> 
> We don't yet have a tree, qemu, or kernel for 7.5 downstream. Should I use
> upstream, or try it on 7.4?
> 

upstream should be fine.

Thanks,
Vadim.


> Thanks,
> Yanhui
> > It will also be quite useful to compare ide vs virtio-blk vs virtio-scsi
> > performance data again.
> > 
> > Thanks,
> > Vadim
Comment 11 Yanhui Ma 2017-08-24 04:26:24 EDT
(In reply to Vadim Rozenfeld from comment #7)
> Can we give it a try on a 7.5 host (especially with the multi-queue feature enabled)?
> 
> It will also be quite useful to compare ide vs virtio-blk vs virtio-scsi
> performance data again.
> 

Hello Vadim,
Here are the results comparing ide vs virtio_blk. Even without multi-queue, there is an obvious improvement for virtio_blk compared with ide; no performance regression was found.

http://kvm-perf.englab.nay.redhat.com/results/request/bug1023894/idevsblk/raw.ide.*.Win2012.x86_64.html

host qemu: qemu-2.10.0-rc3 (./configure --enable-kvm --enable-linux-aio --enable-tcmalloc --enable-spice --target-list=x86_64-softmmu)
host kernel: kernel-3.10.0-702.el7.x86_64
virtio_win driver: virtio-win-1.9.3-1.el7

virtio_blk vs virtio_scsi:
http://kvm-perf.englab.nay.redhat.com/results/request/bug1023894/blkvsscsi/raw.ide.*.Win2012.x86_64.html

No obvious performance difference between virtio_blk and virtio_scsi.


> Thanks,
> Vadim
Comment 12 Vadim Rozenfeld 2017-08-24 04:43:32 EDT
(In reply to Yanhui Ma from comment #11)
> (In reply to Vadim Rozenfeld from comment #7)
> > Can we give it a try on a 7.5 host (especially with the multi-queue feature enabled)?
> > 
> > It will also be quite useful to compare ide vs virtio-blk vs virtio-scsi
> > performance data again.
> > 
> 
> Hello Vadim,
> Here are the results comparing ide vs virtio_blk. Even without multi-queue,
> there is an obvious improvement for virtio_blk compared with ide; no
> performance regression was found.
> 
> http://kvm-perf.englab.nay.redhat.com/results/request/bug1023894/idevsblk/
> raw.ide.*.Win2012.x86_64.html
> 
> host qemu: qemu-2.10.0-rc3(./configure --enable-kvm --enable-linux-aio
> --enable-tcmalloc --enable-spice --target-list=x86_64-softmmu)
> host kernel: kernel-3.10.0-702.el7.x86_64
> virtio_win driver: virtio-win-1.9.3-1.el7
> 
> virtio_blk vs virtio_scsi:
> http://kvm-perf.englab.nay.redhat.com/results/request/bug1023894/blkvsscsi/
> raw.ide.*.Win2012.x86_64.html
> 
> No obvious performance difference between virtio_blk and virtio_scsi.
> 
> 
> > Thanks,
> > Vadim

Thanks a lot Yanhui.

It really doesn't look bad now.
We will probably close this bug soon.

All the best,
Vadim.
Comment 13 Vadim Rozenfeld 2017-11-09 02:38:36 EST
Closing the issue, based on the above results.
