Bug 885681 - IO throttling result is not accurate when using the fio tool with iodepth=100
Summary: IO throttling result is not accurate when using the fio tool with iodepth=100
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Fam Zheng
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-12-10 12:12 UTC by juzhang
Modified: 2014-02-24 01:50 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-02-24 01:50:07 UTC
Target Upstream Version:
Embargoed:



Description juzhang 2012-12-10 12:12:37 UTC
Description of problem:
Boot a guest with bps=512000 set on one drive, then run "fio --filename=/dev/sdc --direct=1 --rw=randread --bs=1M --size=10M --name=test --iodepth=100 --ioengine=libaio" in the guest. The resulting 8441KB/s is far bigger than the configured bps=512000 (512KB/s).

Version-Release number of selected component (if applicable):
qemu-img-1.2.0-20.el7.x86_64
Host kernel:
# uname -r
3.6.0-0.29.el7.x86_64
Guest kernel:
# uname -r
2.6.32-343.el6.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Boot the guest with bps=512000 set on one virtio-scsi disk:
# /usr/libexec/qemu-kvm -cpu Opteron_G3 -m 2048 -smp 2,sockets=1,cores=2,threads=1 -enable-kvm -name rhel64 -smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 -k en-us -rtc base=localtime,clock=host,driftfix=slew  -drive file=/root/zhangjunyi/cdrom.qcow2,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=3 -monitor stdio -qmp tcp:0:6666,server,nowait -boot menu=on -bios /usr/share/seabios/bios.bin -drive file=/home/rhel6u4_mazhang.qcow2,if=none,id=drive-scsi-disk,format=qcow2,cache=none,werror=stop,rerror=stop -device virtio-scsi-pci,id=scsi0,addr=0x5 -device scsi-disk,drive=drive-scsi-disk,bus=scsi0.0,scsi-id=0,lun=0,id=scsi-disk,bootindex=1 -netdev tap,id=hostnet0,downscript=no -device e1000,netdev=hostnet0,id=net0,mac=00:1a:4a:2e:28:1a,bus=pci.0,addr=0x4,bootindex=2 -chardev socket,path=/tmp/isa-serial,server,nowait,id=isa1 -device isa-serial,chardev=isa1,id=isa-serial1 -vnc :10 -balloon virtio -smbios type=0,vendor=DW,version=0.1,date=2011-11-22,release=0.1 -drive file=/root/zhangjunyi/floopy.qcow2,if=none,id=drive-fdc0-0-0,format=qcow2,cache=none -global isa-fdc.driveA=drive-fdc0-0-0 -drive file=/root/zhangjunyi/cdrom_scsi.qcow2,if=none,media=cdrom,readonly=on,format=qcow2,id=cdrom1 -device scsi-cd,bus=scsi0.0,drive=cdrom1,id=scsi0-0 -device usb-ehci,id=ehci -drive file=/root/zhangjunyi/usb.qcow2,if=none,id=drive-usb-2-0,media=disk,format=qcow2,cache=none -device usb-storage,drive=drive-usb-2-0,id=usb-0-0,removable=on,bus=ehci.0,port=1 -drive file=/root/zhangjunyi/ide.qcow2,if=none,id=block-ide,format=qcow2,werror=stop,rerror=stop,cache=none -device ide-drive,drive=block-ide,id=block-ide -drive file=/root/zhangjunyi/virtio.qcow2,format=qcow2,if=none,id=block-virtio,cache=none,werror=stop,rerror=stop  -device virtio-blk-pci,bus=pci.0,addr=0x8,drive=block-virtio,id=block-virtio -device sga -chardev socket,id=serial0,path=/var/test1,server,nowait -device isa-serial,chardev=serial0 -drive file=/root/zhangjunyi/test_scsi.qcow2,if=none,id=drive-scsi-disk_test,format=qcow2,cache=none,werror=stop,rerror=stop,bps=512000 -device scsi-disk,drive=drive-scsi-disk_test,bus=scsi0.0,scsi-id=0,lun=1,id=scsi-disk_test
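For readability, the throttling-relevant part of the command line above is just the last drive/device pair (everything else elided):

/usr/libexec/qemu-kvm ... \
  -device virtio-scsi-pci,id=scsi0,addr=0x5 \
  -drive file=/root/zhangjunyi/test_scsi.qcow2,if=none,id=drive-scsi-disk_test,format=qcow2,cache=none,werror=stop,rerror=stop,bps=512000 \
  -device scsi-disk,drive=drive-scsi-disk_test,bus=scsi0.0,scsi-id=0,lun=1,id=scsi-disk_test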
2. Check the bps value via the HMP monitor:
(qemu) info block
.....
drive-scsi-disk_test: removable=0 io-status=ok file=/root/zhangjunyi/test_scsi.qcow2 ro=0 drv=qcow2 encrypted=0 bps=512000 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0
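For reference, the limit can also be adjusted at runtime from the same monitor; a sketch assuming the block_set_io_throttle HMP command of this qemu generation (positional arguments: device bps bps_rd bps_wr iops iops_rd iops_wr):

(qemu) block_set_io_throttle drive-scsi-disk_test 512000 0 0 0 0 0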

3. Run fio in the guest:
fio --filename=/dev/sdc --direct=1 --rw=randread --bs=1M --size=10M --name=test --iodepth=100 --ioengine=libaio
test: (g=0): rw=randread, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=100
fio-2.0.10
Starting 1 process
Jobs: 1 (f=1)
test: (groupid=0, jobs=1): err= 0: pid=20504: Mon Dec 10 14:48:23 2012
  read : io=10240KB, bw=8441.9KB/s, iops=8 , runt=  1213msec
    slat (usec): min=226 , max=1525 , avg=808.00, stdev=507.02
    clat (msec): min=1204 , max=1211 , avg=1207.61, stdev= 2.24
     lat (msec): min=1205 , max=1212 , avg=1208.42, stdev= 2.34
    clat percentiles (msec):
     |  1.00th=[ 1205],  5.00th=[ 1205], 10.00th=[ 1205], 20.00th=[ 1205],
     | 30.00th=[ 1205], 40.00th=[ 1205], 50.00th=[ 1205], 60.00th=[ 1205],
     | 70.00th=[ 1205], 80.00th=[ 1205], 90.00th=[ 1205], 95.00th=[ 1205],
     | 99.00th=[ 1205], 99.50th=[ 1205], 99.90th=[ 1205], 99.95th=[ 1205],
     | 99.99th=[ 1205]
    bw (KB/s)  : min=  844, max=  844, per=10.00%, avg=844.00, stdev= 0.00
    lat (msec) : 2000=100.00%
  cpu          : usr=0.00%, sys=0.66%, ctx=10, majf=0, minf=403
  IO depths    : 1=10.0%, 2=20.0%, 4=40.0%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=10/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=10240KB, aggrb=8441KB/s, minb=8441KB/s, maxb=8441KB/s, mint=1213msec, maxt=1213msec

Disk stats (read/write):
  sdc: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%

  
Actual results:
8441KB/s, far bigger than the configured bps=512000 (512KB/s).
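A quick sanity check of these numbers (assuming the throttle is meant to cap the aggregate read rate at the configured 512000 bytes/s):

  expected runtime: 10240 KiB * 1024 B/KiB / 512000 B/s ≈ 20.5 s
  observed runtime: runt=1213 msec, i.e. 10240 KiB / 1.213 s ≈ 8441 KiB/s, roughly 17x the limit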

Expected results:
On average, the throughput should be as close to 512KB/s as possible.
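A longer, time-based run would average out any start-up burst and make the comparison fairer (a suggested check, not part of the original report; --time_based and --runtime are standard fio options):

fio --filename=/dev/sdc --direct=1 --rw=randread --bs=1M --name=test --iodepth=100 --ioengine=libaio --time_based --runtime=60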

Additional info:
If I set iodepth=1 (fio's default), the result is much better:
fio --filename=/dev/sdc --direct=1 --rw=randread --bs=1M --size=10M --name=test --ioengine=libaio
test: (g=0): rw=randread, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=1
fio-2.0.10
Starting 1 process
Jobs: 1 (f=1): [r] [81.8% done] [0K/0K/0K /s] [0 /0 /0  iops] [eta 00m:04s]   
test: (groupid=0, jobs=1): err= 0: pid=20501: Mon Dec 10 14:48:11 2012
  read : io=10240KB, bw=600112 B/s, iops=0 , runt= 17473msec
    slat (usec): min=188 , max=987 , avg=311.60, stdev=239.73
    clat (msec): min=18 , max=6109 , avg=1746.88, stdev=2244.59
     lat (msec): min=18 , max=6109 , avg=1747.20, stdev=2244.55
    clat percentiles (msec):
     |  1.00th=[   19],  5.00th=[   19], 10.00th=[   19], 20.00th=[   20],
     | 30.00th=[   20], 40.00th=[   21], 50.00th=[   23], 60.00th=[ 1057],
     | 70.00th=[ 2057], 80.00th=[ 4047], 90.00th=[ 4080], 95.00th=[ 6128],
     | 99.00th=[ 6128], 99.50th=[ 6128], 99.90th=[ 6128], 99.95th=[ 6128],
     | 99.99th=[ 6128]
    bw (KB/s)  : min=  498, max=  967, per=100.00%, avg=592.80, stdev=209.19
    lat (msec) : 20=30.00%, 50=20.00%, 2000=10.00%, >=2000=40.00%
  cpu          : usr=0.00%, sys=0.02%, ctx=11, majf=0, minf=284
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=10/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=10240KB, aggrb=586KB/s, minb=586KB/s, maxb=586KB/s, mint=17473msec, maxt=17473msec
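The same sanity check for this iodepth=1 run: 10240 KiB * 1024 B/KiB / 17.473 s ≈ 600112 B/s ≈ 586 KiB/s, within about 17% of the 512000 B/s limit. The residual overshoot is plausibly a start-up burst before the throttle engages (an assumption, not confirmed in this bz).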

Comment 4 juzhang 2014-02-21 05:57:10 UTC
Hi Fam,

Please have a look at comment 3. Have there been any improvements to this issue since qemu 1.2?

Best Regards,
Junyi

Comment 5 Fam Zheng 2014-02-21 14:47:16 UTC
I think there has been no change to IO throttling since then.

Fam

Comment 6 Fam Zheng 2014-02-21 15:29:07 UTC
Since this cannot be reproduced, I suggest closing this bug. Junyi, would you confirm?

Comment 7 juzhang 2014-02-24 01:50:07 UTC
(In reply to Fam Zheng from comment #6)
> Since this cannot be reproduced, I suggest closing this bug. Junyi, would
> you confirm?

Sure. Closing this bz as CURRENTRELEASE. If further testing turns anything up, feel free to update the bz.

Best Regards,
Junyi

