
Bug 1327613

Summary: AIO is processing only 1 request, even if >1 requests in virtio
Product: Red Hat Enterprise Linux 7
Reporter: Marcel Kolaja <mkolaja>
Component: qemu-kvm-rhev
Assignee: Stefan Hajnoczi <stefanha>
Status: CLOSED ERRATA
QA Contact: Virtualization Bugs <virt-bugs>
Severity: high
Docs Contact:
Priority: high
Version: 7.1
CC: atheurer, dshaks, huding, jen, jherrman, juzhang, knoel, mst, pm-eus, psuriset, sherold, snagar, stefanha, virt-maint, xfu, yama
Target Milestone: rc
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: qemu-kvm-rhev-2.3.0-31.el7_2.14
Doc Type: Bug Fix
Doc Text:
When QEMU was configured with the aio=native parameter, KVM virtual machines were slowed significantly. With this update, asynchronous I/O (AIO) can correctly process more than one request at a time, and using aio=native no longer has a negative impact on guest performance.
Story Points: ---
Clone Of: 1243548
Environment:
Last Closed: 2016-06-29 16:21:50 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1243548
Bug Blocks:

Description Marcel Kolaja 2016-04-15 13:21:35 UTC
This bug has been copied from bug #1243548 and has been proposed
to be backported to 7.2 z-stream (EUS).

Comment 3 Stefan Hajnoczi 2016-04-20 10:59:50 UTC
Patch sent to rhvirt-list.

Comment 4 Miroslav Rezanina 2016-05-16 10:16:47 UTC
Fix included in qemu-kvm-rhev-2.3.0-31.el7_2.14

Comment 6 Yanhui Ma 2016-05-24 03:18:39 UTC
Hi Stefan,

Could you check the following steps and results?

Best regards,
Yanhui

verify the bug:

host:
qemu-kvm-rhev-2.3.0-31.el7_2.14.x86_64
kernel-3.10.0-327.8.1.el7.x86_64

guest:
kernel-3.10.0-327.10.1.el7.x86_64

steps:
1. boot a rhel7.2.z guest with a virtio_blk data disk
/usr/libexec/qemu-kvm  \
    -S \
    -name 'virt-tests-vm1' \
    -nodefaults \
    -chardev socket,id=hmp_id_humanmonitor1,path=/tmp/monitor-humanmonitor1-20160518-053630-gSzcfLJy,server,nowait \
    -mon chardev=hmp_id_humanmonitor1,mode=readline \
    -chardev socket,id=serial_id_serial1,path=/tmp/serial-serial1-20160518-053630-gSzcfLJy,server,nowait \
    -device isa-serial,chardev=serial_id_serial1 \
    -chardev socket,id=seabioslog_id_20160518-053630-gSzcfLJy,path=/tmp/seabios-20160518-053630-gSzcfLJy,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20160518-053630-gSzcfLJy,iobase=0x402 \
    -device ich9-usb-uhci1,id=usb1,bus=pci.0,addr=0x3 \
    -drive file='/home/RHEL-Server-7.2-64-virtio.qcow2',index=0,if=none,id=drive-ide0-0-0,media=disk,cache=none,snapshot=off,format=qcow2 \
    -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,bootindex=0 \
    -drive file='/mnt/storage2.qcow2',index=2,if=none,id=drive-virtio-disk2,media=disk,cache=none,snapshot=off,format=qcow2,aio=native \
    -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,bootindex=1 \
    -device e1000,netdev=idHsDhbB,bus=pci.0,addr=0x4,id='id4NBOCl' \
    -netdev tap,id=idHsDhbB \
    -m 4096 \
    -smp 2,cores=1,threads=1,sockets=2 \
    -cpu 'SandyBridge' \
    -M pc \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -vnc :0 \
    -vga cirrus \
    -rtc base=utc,clock=host,driftfix=slew  \
    -device sga \
    -enable-kvm -monitor stdio

2. in guest (/dev/vda is the virtio_blk data disk):
fio --rw=read --bs=4k --iodepth=8 --runtime=1m --direct=1 --filename=/dev/vda --name=job1 --size=512MB --time_based --ioengine=libaio --numjobs=16 --thread

3. in host:
stap test.stp

[root@localhost tapset]# cat test.stp 
probe qemu.kvm.virtio_queue_notify
{
	printf("virtio_queue_notify:\n")
	printf("ts=%d,vdev=%d,n=%d,vq=%d\n",gettimeofday_ns(),vdev,n,vq) 
}
probe qemu.kvm.virtio_blk_handle_read 
{
	printf("virtio_blk_handle_read:\n")
	printf("ts=%d,req=%d,sector=%d,nsectors=%d\n",gettimeofday_ns(),req,sector,nsectors)
}

results:

[root@localhost home]# stap test.stp 
virtio_queue_notify:
ts=1464056891074017712,vdev=139685315698496,n=0,vq=139685357248512
virtio_blk_handle_read:
ts=1464056891074034653,req=139685346762752,sector=0,nsectors=8
virtio_blk_handle_read:
ts=1464056891074041757,req=139685346820096,sector=0,nsectors=8
virtio_blk_handle_read:
ts=1464056891074057607,req=139685346877440,sector=8,nsectors=8
virtio_blk_handle_read:
ts=1464056891074065414,req=139685353873408,sector=8,nsectors=8
virtio_blk_handle_read:
ts=1464056891074071246,req=139685357174784,sector=16,nsectors=8
virtio_blk_handle_read:
ts=1464056891074088795,req=139685353570304,sector=16,nsectors=8
virtio_blk_handle_read:
ts=1464056891074104425,req=139685332164608,sector=24,nsectors=8
virtio_blk_handle_read:
ts=1464056891074122717,req=139685357101056,sector=24,nsectors=8
virtio_blk_handle_read:
ts=1464056891074136825,req=139685335105536,sector=32,nsectors=8
virtio_blk_handle_read:
ts=1464056891074153388,req=139685354610688,sector=32,nsectors=8
virtio_blk_handle_read:
ts=1464056891074173917,req=139685338923008,sector=40,nsectors=8
virtio_blk_handle_read:
ts=1464056891074188823,req=139685342470144,sector=40,nsectors=8
virtio_blk_handle_read:
ts=1464056891074206011,req=139685409775616,sector=48,nsectors=8
virtio_blk_handle_read:
ts=1464056891074220796,req=139685355134976,sector=56,nsectors=8
virtio_blk_handle_read:
ts=1464056891074234468,req=139685355192320,sector=48,nsectors=8
virtio_blk_handle_read:
ts=1464056891074250236,req=139685356855296,sector=56,nsectors=8
virtio_blk_handle_read:
ts=1464056891074268646,req=139685356912640,sector=0,nsectors=8
virtio_blk_handle_read:
ts=1464056891074282998,req=139685355823104,sector=0,nsectors=8
virtio_blk_handle_read:
ts=1464056891074299022,req=139685355880448,sector=8,nsectors=8
virtio_blk_handle_read:
ts=1464056891074314186,req=139685353406464,sector=8,nsectors=8
virtio_blk_handle_read:
ts=1464056891074327963,req=139685353463808,sector=16,nsectors=8
virtio_blk_handle_read:
ts=1464056891074344009,req=139685334843392,sector=16,nsectors=8
virtio_blk_handle_read:
ts=1464056891074361200,req=139685334900736,sector=24,nsectors=8
virtio_blk_handle_read:
ts=1464056891074376812,req=139685352349696,sector=24,nsectors=8

Comment 7 Stefan Hajnoczi 2016-06-02 01:14:17 UTC
(In reply to Yanhui Ma from comment #6)
> [root@localhost home]# stap test.stp 
> virtio_queue_notify:
> ts=1464056891074017712,vdev=139685315698496,n=0,vq=139685357248512
> virtio_blk_handle_read:
> ts=1464056891074034653,req=139685346762752,sector=0,nsectors=8
> virtio_blk_handle_read:
> ts=1464056891074041757,req=139685346820096,sector=0,nsectors=8
...

This trace shows that multiple requests are handled per virtqueue notify, but this BZ is about something different.  It's about the io_submit(2) syscalls made by QEMU with -drive aio=native.

With this patch applied virtio-blk should submit multiple I/O requests to the host kernel per io_submit(2) call.  Typically this means one virtio_queue_notify is followed by multiple virtio_blk_handle_read, and then finally there is one io_submit(2) syscall.

So you should see io_submit(_, nr, _) where nr > 1.  Without the patch nr == 1.  This can be used to verify the BZ.
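The nr > 1 criterion can also be checked mechanically instead of by eyeballing the trace. A minimal sketch (Python; the helper names are illustrative, not part of any tool in this BZ) that scans stap output for lines of the form "name=io_submit,argstr=<ctx>, <nr>, <iocbpp>" and extracts the nr argument of each io_submit(2) call:

```python
import re

# Matches stap syscall.io_submit output lines such as:
#   name=io_submit,argstr=140004778954752, 1, 0x7f557287b640
# The second argstr field is nr, the number of iocbs submitted in one call.
IO_SUBMIT_RE = re.compile(r"name=io_submit,argstr=\d+, (\d+), 0x[0-9a-f]+")

def io_submit_batch_sizes(trace_text):
    """Return the nr value of every io_submit(2) call found in the trace."""
    return [int(m.group(1)) for m in IO_SUBMIT_RE.finditer(trace_text)]

def is_batching(trace_text):
    """True if any io_submit call submitted more than one request (the fixed behavior)."""
    return any(nr > 1 for nr in io_submit_batch_sizes(trace_text))
```

On an unfixed build, io_submit_batch_sizes() should return only 1s; with the fix, values greater than 1 should appear.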

Comment 8 Yanhui Ma 2016-06-02 05:57:13 UTC
Modified test.stp from comment 6 according to comment 7:
[root@localhost linux]# cat /home/test.stp 
probe qemu.kvm.virtio_queue_notify
{
	printf("virtio_queue_notify:\n")
	printf("ts=%d,vdev=%d,n=%d,vq=%d\n",gettimeofday_ns(),vdev,n,vq) 
}
probe qemu.kvm.virtio_blk_handle_read 
{
	printf("virtio_blk_handle_read:\n")
	printf("ts=%d,req=%d,sector=%d,nsectors=%d\n",gettimeofday_ns(),req,sector,nsectors)
}
probe syscall.io_submit     
{
	printf("name=%s,argstr=%s\n",name,argstr)
}	      


reproduce the issue:
host:
qemu-kvm-rhev-2.3.0-31.el7.x86_64
kernel-3.10.0-327.8.1.el7.x86_64
guest:
kernel-3.10.0-327.10.1.el7.x86_64

steps:
Steps are the same as in comment 6.

results:
virtio_queue_notify:
ts=1464845967457170914,vdev=140004803706688,n=0,vq=140004845256704
virtio_blk_handle_read:
ts=1464845967457179687,req=140004836147200,sector=72,nsectors=8
virtio_blk_handle_read:
ts=1464845967457187453,req=140004836261888,sector=72,nsectors=8
virtio_blk_handle_read:
ts=1464845967457194253,req=140004823080960,sector=72,nsectors=8
virtio_blk_handle_read:
ts=1464845967457201233,req=140004823023616,sector=80,nsectors=8
virtio_blk_handle_read:
ts=1464845967457208027,req=140004844527616,sector=88,nsectors=8
name=io_submit,argstr=140004778954752, 1, 0x7f557287b640
name=io_submit,argstr=140004778954752, 1, 0x7f557477b640
name=io_submit,argstr=140004778954752, 1, 0x7f557277b640
virtio_queue_notify:
ts=1464845967457391884,vdev=140004803706688,n=0,vq=140004845256704
virtio_blk_handle_read:
ts=1464845967457398504,req=140004843773952,sector=96,nsectors=8
virtio_blk_handle_read:
ts=1464845967457404119,req=140004844355584,sector=80,nsectors=8
virtio_blk_handle_read:
ts=1464845967457409391,req=140004822908928,sector=88,nsectors=8
virtio_blk_handle_read:
ts=1464845967457414576,req=140004844584960,sector=96,nsectors=8
virtio_blk_handle_read:
ts=1464845967457419631,req=140004842545152,sector=104,nsectors=8
virtio_blk_handle_read:
ts=1464845967457424898,req=140004844642304,sector=64,nsectors=8
name=io_submit,argstr=140004778954752, 1, 0x7f557447b640
name=io_submit,argstr=140004778954752, 1, 0x7f557267b640
name=io_submit,argstr=140004778954752, 1, 0x7f557257b640

We can see nr == 1 from "argstr=140004778954752, 1, 0x7f557257b640".


============================================================
verify the issue:
host:
qemu-kvm-rhev-2.3.0-31.el7_2.14.x86_64
kernel-3.10.0-327.8.1.el7.x86_64
guest:
kernel-3.10.0-327.10.1.el7.x86_64

steps:
Steps are the same as above.

results:

virtio_blk_handle_read:
ts=1464845158879968164,req=140137711788032,sector=56,nsectors=8
name=io_submit,argstr=140137482285056, 100, 0x7ffe44744720
virtio_queue_notify:
ts=1464845158885842670,vdev=140137514778432,n=0,vq=140137556328448
virtio_blk_handle_read:
ts=1464845158885850415,req=140137553969152,sector=64,nsectors=8
virtio_blk_handle_read:
ts=1464845158885856321,req=140137789497344,sector=64,nsectors=8
virtio_blk_handle_read:
ts=1464845158885861458,req=140137534095360,sector=72,nsectors=8
virtio_blk_handle_read:
ts=1464845158885866577,req=140137544908800,sector=80,nsectors=8
virtio_blk_handle_read:
ts=1464845158885871690,req=140137545023488,sector=88,nsectors=8
virtio_blk_handle_read:
ts=1464845158885881970,req=140137544794112,sector=64,nsectors=8
virtio_blk_handle_read:
ts=1464845158885887025,req=140137553084416,sector=96,nsectors=8
virtio_blk_handle_read:
ts=1464845158885892096,req=140137553141760,sector=104,nsectors=8
name=io_submit,argstr=140137482285056, 3, 0x7ffe44744720
virtio_queue_notify:
ts=1464845158887842112,vdev=140137514778432,n=0,vq=140137556328448
virtio_blk_handle_read:
ts=1464845158887851342,req=140137553199104,sector=112,nsectors=8
virtio_blk_handle_read:
ts=1464845158887858969,req=140137532096512,sector=64,nsectors=8


We can see nr > 1 from "argstr=140137482285056, 100, 0x7ffe44744720".
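The per-notify batching visible in the traces above can be summarized the same way: count how many virtio_blk_handle_read events fall between consecutive virtio_queue_notify events. A small sketch (Python; the function name is illustrative) over the stap output lines:

```python
def reads_per_notify(trace_lines):
    """Count virtio_blk_handle_read events between virtio_queue_notify events.

    Returns one count per notify; counts > 1 mean several requests were
    drained from the virtqueue for a single guest notification.
    """
    counts = []
    current = None  # None until the first notify is seen
    for line in trace_lines:
        if line.startswith("virtio_queue_notify"):
            if current is not None:
                counts.append(current)
            current = 0
        elif line.startswith("virtio_blk_handle_read") and current is not None:
            current += 1
    if current is not None:
        counts.append(current)
    return counts
```

Feeding in the fixed-build trace above yields counts well above 1 per notify, consistent with the batched io_submit calls.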

Hi Stefan,

Based on the above results, I think the bug has been fixed. Would you check them again?

Best regards,
Yanhui

Comment 9 Stefan Hajnoczi 2016-06-03 18:30:33 UTC
(In reply to Yanhui Ma from comment #8)
> We can see the nr >1 from "argstr=140137482285056, 100, 0x7ffe44744720".
> 
> Hi, stefan
> 
> Based on the above results, I think the bug has been fixed. Would you check
> them again?

Excellent, that looks correct and you can set the BZ to VERIFIED.

Comment 11 errata-xmlrpc 2016-06-29 16:21:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1371