Bug 1476188 - [GSS] The "read_iops_sec" and "write_iops_sec" limits do not take effect when running fio in read/write mode
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Assignee: Fam Zheng
QA Contact: Gu Nini
 
Reported: 2017-07-28 08:33 UTC by liuwei
Modified: 2020-12-14 09:16 UTC
CC: 11 users

Last Closed: 2017-08-22 07:03:03 UTC



Description liuwei 2017-07-28 08:33:48 UTC
Description of problem:

Define the disk I/O limits in the VM XML file. For example, the configuration is below:

<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/rhel7.0-1.qcow2'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <iotune>
        <read_iops_sec>300</read_iops_sec>
        <write_iops_sec>200</write_iops_sec>
      </iotune>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
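
As a cross-check, the limits that libvirt actually applied can be read back (and changed) at runtime with virsh blkdeviotune; a sketch, assuming the guest domain is named "rhel7":

# virsh blkdeviotune rhel7 vdb
# virsh blkdeviotune rhel7 vdb --read-iops-sec 300 --write-iops-sec 200

The first command prints the current iotune values for vdb; the second sets the same limits without editing the XML.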

The "read_iops_sec" and "write_iops_sec" items can't become effective 	when run fio tools with read/write mode.

The output is below:

read mode:

[root@test ~]# fio -filename=/dev/vdb -direct=1 -iodepth 32 -thread -rw=read -ioengine=libaio -bs=4k -size=10G -numjobs=10 -runtime=200 -group_reporting -name=mytestfionew
mytestfionew: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
...
fio-2.2.8
Starting 10 threads
Jobs: 10 (f=10): [R(10)] [0.4% done] [1756KB/0KB/0KB /s] [439/0/0 iops] [eta 14h:37m:44s]
mytestfionew: (groupid=0, jobs=10): err= 0: pid=2620: Fri Jul 28 03:18:58 2017
  read : io=438468KB, bw=2189.2KB/s, iops=547, runt=200304msec   <<<here
    slat (usec): min=3, max=1146.4K, avg=17451.83, stdev=47769.78
    clat (msec): min=2, max=2910, avg=566.98, stdev=257.62
     lat (msec): min=2, max=2910, avg=584.44, stdev=264.20
    clat percentiles (msec):
     |  1.00th=[  212],  5.00th=[  243], 10.00th=[  269], 20.00th=[  351],
     | 30.00th=[  416], 40.00th=[  469], 50.00th=[  529], 60.00th=[  586],
     | 70.00th=[  652], 80.00th=[  750], 90.00th=[  898], 95.00th=[ 1045],
     | 99.00th=[ 1385], 99.50th=[ 1565], 99.90th=[ 1991], 99.95th=[ 2114],
     | 99.99th=[ 2573]
    bw (KB  /s): min=    7, max=  644, per=10.16%, avg=222.30, stdev=98.19
    lat (msec) : 4=0.01%, 50=0.01%, 100=0.02%, 250=5.79%, 500=39.52%
    lat (msec) : 750=34.98%, 1000=13.50%, 2000=6.08%, >=2000=0.09%
  cpu          : usr=0.02%, sys=0.06%, ctx=37969, majf=0, minf=346
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.7%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=109617/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=438468KB, aggrb=2189KB/s, minb=2189KB/s, maxb=2189KB/s, mint=200304msec, maxt=200304msec

Disk stats (read/write):
  vdb: ios=109512/0, merge=0/0, ticks=25083763/0, in_queue=25100984, util=100.00%

write mode:

[root@test ~]# fio -filename=/dev/vdb -direct=1 -iodepth 32 -thread -rw=write -ioengine=libaio -bs=4k -size=10G -numjobs=10 -runtime=200 -group_reporting -name=mytestfionew
mytestfionew: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
...
fio-2.2.8
Starting 10 threads
Jobs: 10 (f=10): [W(10)] [0.2% done] [0KB/1072KB/0KB /s] [0/268/0 iops] [eta 01d:03h:09m:52s]
mytestfionew: (groupid=0, jobs=10): err= 0: pid=2690: Fri Jul 28 03:30:06 2017
  write: io=254808KB, bw=1270.1KB/s, iops=317, runt=200495msec   <<here
    slat (usec): min=2, max=1524.9K, avg=28936.79, stdev=79231.09
    clat (usec): min=591, max=4334.6K, avg=977451.30, stdev=500041.46
     lat (usec): min=594, max=4395.3K, avg=1006388.66, stdev=514918.41
    clat percentiles (msec):
     |  1.00th=[  347],  5.00th=[  396], 10.00th=[  420], 20.00th=[  461],
     | 30.00th=[  652], 40.00th=[  783], 50.00th=[  906], 60.00th=[ 1029],
     | 70.00th=[ 1172], 80.00th=[ 1352], 90.00th=[ 1647], 95.00th=[ 1926],
     | 99.00th=[ 2474], 99.50th=[ 2769], 99.90th=[ 3425], 99.95th=[ 3720],
     | 99.99th=[ 4080]
    bw (KB  /s): min=    2, max=  441, per=10.40%, avg=132.07, stdev=76.35
    lat (usec) : 750=0.03%, 1000=0.01%
    lat (msec) : 2=0.01%, 10=0.01%, 20=0.01%, 50=0.01%, 100=0.04%
    lat (msec) : 250=0.06%, 500=22.23%, 750=14.53%, 1000=20.72%, 2000=38.33%
    lat (msec) : >=2000=4.02%
  cpu          : usr=0.01%, sys=0.04%, ctx=26788, majf=0, minf=21
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=99.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=63702/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=254808KB, aggrb=1270KB/s, minb=1270KB/s, maxb=1270KB/s, mint=200495msec, maxt=200495msec

Disk stats (read/write):
  vdb: ios=59/63654, merge=0/0, ticks=98/25188054, in_queue=25206098, util=100.00%

With randread and randwrite mode, however, the limits do take effect. Please check below:

1. randread mode:

[root@test ~]# fio -filename=/dev/vdb -direct=1 -iodepth 32 -thread -rw=randread -ioengine=libaio -bs=4k -size=10G -numjobs=10 -runtime=200 -group_reporting -name=mytestfio
mytestfio: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
...
fio-2.2.8
Starting 10 threads
Jobs: 10 (f=10): [r(10)] [0.2% done] [1140KB/0KB/0KB /s] [285/0/0 iops] [eta 01d:02h:24m:23s]
mytestfio: (groupid=0, jobs=10): err= 0: pid=2607: Fri Jul 28 03:14:48 2017
  read : io=240680KB, bw=1200.7KB/s, iops=300, runt=200460msec  <<<<<here
    slat (usec): min=3, max=2826.8K, avg=31946.99, stdev=87470.79
    clat (msec): min=3, max=4720, avg=1032.53, stdev=451.24
     lat (msec): min=3, max=4853, avg=1064.48, stdev=462.51
    clat percentiles (msec):
     |  1.00th=[  412],  5.00th=[  429], 10.00th=[  429], 20.00th=[  668],
     | 30.00th=[  775], 40.00th=[  881], 50.00th=[  963], 60.00th=[ 1074],
     | 70.00th=[ 1205], 80.00th=[ 1352], 90.00th=[ 1631], 95.00th=[ 1860],
     | 99.00th=[ 2376], 99.50th=[ 2606], 99.90th=[ 3556], 99.95th=[ 3720],
     | 99.99th=[ 4424]
    bw (KB  /s): min=    1, max=  384, per=10.32%, avg=123.85, stdev=62.91
    lat (msec) : 4=0.01%, 250=0.05%, 500=11.63%, 750=16.72%, 1000=24.58%
    lat (msec) : 2000=43.58%, >=2000=3.45%
  cpu          : usr=0.01%, sys=0.04%, ctx=22506, majf=0, minf=337
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=99.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=60170/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=240680KB, aggrb=1200KB/s, minb=1200KB/s, maxb=1200KB/s, mint=200460msec, maxt=200460msec

Disk stats (read/write):
  vdb: ios=60128/0, merge=0/0, ticks=24995191/0, in_queue=25009153, util=100.00%


2. randwrite mode:

fio -filename=/dev/vdb -direct=1 -iodepth 32 -thread -rw=randwrite -ioengine=libaio -bs=4k -size=10G -numjobs=10 -runtime=200 -group_reporting -name=mytestfionew
mytestfionew: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
...
fio-2.2.8
Starting 10 threads
Jobs: 10 (f=10): [w(10)] [0.1% done] [0KB/780KB/0KB /s] [0/195/0 iops] [eta 02d:01h:28m:30s]
mytestfionew: (groupid=0, jobs=10): err= 0: pid=2675: Fri Jul 28 03:25:38 2017
  write: io=160632KB, bw=819628B/s, iops=200, runt=200685msec   <<<here
    slat (usec): min=3, max=2719.1K, avg=44665.25, stdev=122235.56
    clat (usec): min=287, max=6039.7K, avg=1550783.26, stdev=801664.41
     lat (usec): min=313, max=6039.7K, avg=1595449.12, stdev=828195.16
    clat percentiles (msec):
     |  1.00th=[  635],  5.00th=[  644], 10.00th=[  644], 20.00th=[  644],
     | 30.00th=[  963], 40.00th=[ 1287], 50.00th=[ 1483], 60.00th=[ 1713],
     | 70.00th=[ 1926], 80.00th=[ 2212], 90.00th=[ 2606], 95.00th=[ 2999],
     | 99.00th=[ 3752], 99.50th=[ 4146], 99.90th=[ 5145], 99.95th=[ 5538],
     | 99.99th=[ 5866]
    bw (KB  /s): min=    1, max=  240, per=10.40%, avg=83.21, stdev=52.47
    lat (usec) : 500=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 50=0.01%, 100=0.01%, 250=0.03%, 500=0.09%
    lat (msec) : 750=26.44%, 1000=4.22%, 2000=42.27%, >=2000=26.92%
  cpu          : usr=0.01%, sys=0.03%, ctx=20792, majf=0, minf=17
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=99.2%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=40158/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: io=160632KB, aggrb=800KB/s, minb=800KB/s, maxb=800KB/s, mint=200685msec, maxt=200685msec

Disk stats (read/write):
  vdb: ios=59/40136, merge=0/0, ticks=96/25225303, in_queue=25236880, util=100.00%



Version-Release number of selected component (if applicable):

cat sosreport-20170725-084016/overcloud-compute-14.localdomain/installed-rpms | grep libvirt-
libvirt-2.0.0-10.el7_3.4.x86_64                             Tue Feb 28 16:39:19 2017
libvirt-client-2.0.0-10.el7_3.4.x86_64                      Tue Feb 28 16:30:29 2017
libvirt-daemon-2.0.0-10.el7_3.4.x86_64                      Tue Feb 28 16:30:30 2017
libvirt-daemon-config-network-2.0.0-10.el7_3.4.x86_64       Tue Feb 28 16:30:33 2017
libvirt-daemon-config-nwfilter-2.0.0-10.el7_3.4.x86_64      Tue Feb 28 16:30:33 2017
libvirt-daemon-driver-interface-2.0.0-10.el7_3.4.x86_64     Tue Feb 28 16:30:32 2017
libvirt-daemon-driver-lxc-2.0.0-10.el7_3.4.x86_64           Tue Feb 28 16:30:33 2017
libvirt-daemon-driver-network-2.0.0-10.el7_3.4.x86_64       Tue Feb 28 16:30:30 2017
libvirt-daemon-driver-nodedev-2.0.0-10.el7_3.4.x86_64       Tue Feb 28 16:30:32 2017
libvirt-daemon-driver-nwfilter-2.0.0-10.el7_3.4.x86_64      Tue Feb 28 16:30:31 2017
libvirt-daemon-driver-qemu-2.0.0-10.el7_3.4.x86_64          Tue Feb 28 16:32:12 2017
libvirt-daemon-driver-secret-2.0.0-10.el7_3.4.x86_64        Tue Feb 28 16:30:32 2017
libvirt-daemon-driver-storage-2.0.0-10.el7_3.4.x86_64       Tue Feb 28 16:32:12 2017
libvirt-daemon-kvm-2.0.0-10.el7_3.4.x86_64                  Tue Feb 28 16:39:03 2017
libvirt-python-2.0.0-2.el7.x86_64                           Tue Feb 28 16:30:31 2017

cat sosreport-20170725-084016/overcloud-compute-14.localdomain/installed-rpms | grep qemu
ipxe-roms-qemu-20160127-5.git6366fa7a.el7.noarch            Tue Feb 28 16:27:19 2017
libvirt-daemon-driver-qemu-2.0.0-10.el7_3.4.x86_64          Tue Feb 28 16:32:12 2017
qemu-img-rhev-2.6.0-28.el7_3.6.x86_64                       Tue Feb 28 16:30:50 2017
qemu-kvm-common-rhev-2.6.0-28.el7_3.6.x86_64                Tue Feb 28 16:31:54 2017
qemu-kvm-rhev-2.6.0-28.el7_3.6.x86_64                       Tue Feb 28 16:38:24 2017

cat sosreport-20170725-084016/overcloud-compute-14.localdomain/installed-rpms |  grep kernel-
erlang-kernel-18.3.4.4-1.el7ost.x86_64                      Tue Feb 28 16:25:36 2017
kernel-3.10.0-514.6.2.el7.x86_64                            Tue Feb 28 15:23:44 2017
kernel-devel-3.10.0-514.6.2.el7.x86_64                      Tue Feb 28 16:47:12 2017
kernel-headers-3.10.0-514.6.2.el7.x86_64                    Tue Feb 28 16:46:58 2017
kernel-tools-3.10.0-514.6.2.el7.x86_64                      Tue Feb 28 15:26:41 2017
kernel-tools-libs-3.10.0-514.6.2.el7.x86_64                 Tue Feb 28 15:19:16 2017


How reproducible:

100% reproducible

Steps to Reproduce:
1. Define the VM XML file with the throttled disk:

<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/rhel7.0-1.qcow2'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <iotune>  <<<<here
        <read_iops_sec>300</read_iops_sec>  <<here
        <write_iops_sec>200</write_iops_sec>  <<here
      </iotune>  <<here
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>

2. Define and start the VM: virsh define <xml file>, then virsh start <domain>

3. Run the fio tests (read/write/randread/randwrite) as above

Actual results:

When testing the disk with fio, the "read_iops_sec" and "write_iops_sec" limits do not take effect in "read" and "write" mode.

Expected results:

The limits should take effect in every I/O mode (for example, randread/randwrite/write/read).

Additional info:

Comment 5 Ping Li 2017-08-01 15:30:52 UTC
Reproduced the issue with below packages:
Host:
kernel-3.10.0-693.el7
qemu-kvm-rhev-2.9.0-16.el7_4.3
Guest:
kernel-3.10.0-693.el7

Test steps:
1. Boot the guest with the options below (a QMP sketch for changing these limits at runtime follows the test output):
    -drive id=drive_image2,if=none,snapshot=off,aio=native,cache=none,format=qcow2,file=/home/testrun/diskfile/data.qcow2,iops_rd=300,iops_wr=200 \
    -device scsi-hd,id=image2,drive=drive_image2,bootindex=2 \

2. Run fio in read, write, random read, and random write modes
2.1 read mode
# fio -filename=/dev/sdb -direct=1 -iodepth 32 -thread -rw=read -ioengine=libaio -bs=4k -size=10G -numjobs=10 -runtime=200 -group_reporting -name=mytestfionew
mytestfionew: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-2.99
Starting 10 threads
Jobs: 10 (f=10): [R(10)][100.0%][r=3211KiB/s,w=0KiB/s][r=802,w=0 IOPS][eta 00m:00s]
mytestfionew: (groupid=0, jobs=10): err= 0: pid=6481: Tue Aug  1 10:19:22 2017
   read: IOPS=738, BW=2952KiB/s (3023kB/s)(578MiB/200433msec) ---> 738
    slat (nsec): min=1965, max=80093k, avg=617287.61, stdev=2010352.03
    clat (usec): min=181, max=859300, avg=432682.38, stdev=11957.36
     lat (usec): min=188, max=859306, avg=433300.16, stdev=11841.26
    clat percentiles (msec):
     |  1.00th=[  426],  5.00th=[  426], 10.00th=[  430], 20.00th=[  435],
     | 30.00th=[  435], 40.00th=[  435], 50.00th=[  435], 60.00th=[  435],
     | 70.00th=[  435], 80.00th=[  435], 90.00th=[  435], 95.00th=[  435],
     | 99.00th=[  439], 99.50th=[  439], 99.90th=[  464], 99.95th=[  584],
     | 99.99th=[  802]
   bw (  KiB/s): min=  255, max=  514, per=10.01%, avg=295.41, stdev=74.77, samples=3999
   iops        : min=   63, max=  128, avg=73.83, stdev=18.66, samples=3999
    lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%
    lat (msec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.02%, 250=0.03%
    lat (msec) : 500=99.86%, 750=0.04%, 1000=0.02%
  cpu          : usr=0.05%, sys=0.12%, ctx=51103, majf=0, minf=346
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.8%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=147943,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=2952KiB/s (3023kB/s), 2952KiB/s-2952KiB/s (3023kB/s-3023kB/s), io=578MiB (606MB), run=200433-200433msec

Disk stats (read/write):
  sdb: ios=60149/0, merge=87779/0, ticks=25892118/0, in_queue=25897376, util=100.00%

2.2 write mode
# fio -filename=/dev/sdb -direct=1 -iodepth 32 -thread -rw=write -ioengine=libaio -bs=4k -size=10G -numjobs=10 -runtime=200 -group_reporting -name=mytestfionew
mytestfionew: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-2.99
Starting 10 threads
Jobs: 10 (f=10): [W(10)][100.0%][r=0KiB/s,w=1864KiB/s][r=0,w=466 IOPS][eta 00m:00s]
mytestfionew: (groupid=0, jobs=10): err= 0: pid=6495: Tue Aug  1 10:24:27 2017
  write: IOPS=490, BW=1961KiB/s (2008kB/s)(384MiB/200651msec) ---> 490
    slat (usec): min=2, max=74151, avg=1242.72, stdev=4357.37
    clat (msec): min=59, max=1299, avg=650.84, stdev=25.58
     lat (msec): min=59, max=1299, avg=652.08, stdev=25.53
    clat percentiles (msec):
     |  1.00th=[  634],  5.00th=[  642], 10.00th=[  642], 20.00th=[  651],
     | 30.00th=[  651], 40.00th=[  651], 50.00th=[  651], 60.00th=[  651],
     | 70.00th=[  659], 80.00th=[  659], 90.00th=[  659], 95.00th=[  667],
     | 99.00th=[  684], 99.50th=[  693], 99.90th=[  986], 99.95th=[ 1167],
     | 99.99th=[ 1301]
   bw (  KiB/s): min=    6, max=  257, per=10.42%, avg=204.32, stdev=65.50, samples=3840
   iops        : min=    1, max=   64, avg=51.08, stdev=16.37, samples=3840
    lat (msec) : 100=0.04%, 250=0.03%, 500=0.08%, 750=99.64%, 1000=0.11%
    lat (msec) : 2000=0.10%
  cpu          : usr=0.04%, sys=0.10%, ctx=34640, majf=0, minf=20
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=99.7%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,98389,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=1961KiB/s (2008kB/s), 1961KiB/s-1961KiB/s (2008kB/s-2008kB/s), io=384MiB (403MB), run=200651-200651msec

Disk stats (read/write):
  sdb: ios=149/40148, merge=0/58121, ticks=449/25892795, in_queue=25894547, util=100.00%

2.3 random read mode
# fio -filename=/dev/sdb -direct=1 -iodepth 32 -thread -rw=randread -ioengine=libaio -bs=4k -size=10G -numjobs=10 -runtime=200 -group_reporting -name=mytestfio
mytestfio: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-2.99
Starting 10 threads
Jobs: 10 (f=10): [r(10)][100.0%][r=1201KiB/s,w=0KiB/s][r=300,w=0 IOPS][eta 00m:00s]
mytestfio: (groupid=0, jobs=10): err= 0: pid=16288: Tue Aug  1 10:28:34 2017
   read: IOPS=300, BW=1201KiB/s (1229kB/s)(235MiB/200459msec) ---> 300
    slat (usec): min=2, max=770139, avg=20306.09, stdev=115392.72
    clat (usec): min=222, max=1533.8k, avg=1044605.54, stdev=188238.21
     lat (usec): min=231, max=1789.0k, avg=1064912.00, stdev=154126.36
    clat percentiles (msec):
     |  1.00th=[  426],  5.00th=[  439], 10.00th=[  684], 20.00th=[ 1099],
     | 30.00th=[ 1099], 40.00th=[ 1099], 50.00th=[ 1099], 60.00th=[ 1116],
     | 70.00th=[ 1116], 80.00th=[ 1116], 90.00th=[ 1116], 95.00th=[ 1116],
     | 99.00th=[ 1116], 99.50th=[ 1116], 99.90th=[ 1301], 99.95th=[ 1435],
     | 99.99th=[ 1536]
   bw (  KiB/s): min=    8, max=  313, per=11.88%, avg=142.52, stdev=116.77, samples=3360
   iops        : min=    2, max=   78, avg=35.63, stdev=29.19, samples=3360
    lat (usec) : 250=0.01%, 500=0.04%, 750=0.01%
    lat (msec) : 10=0.01%, 250=0.01%, 500=6.73%, 750=3.93%, 1000=0.33%
    lat (msec) : 2000=88.94%
  cpu          : usr=0.01%, sys=0.03%, ctx=5991, majf=0, minf=343
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=99.5%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=60171,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=1201KiB/s (1229kB/s), 1201KiB/s-1201KiB/s (1229kB/s-1229kB/s), io=235MiB (246MB), run=200459-200459msec

Disk stats (read/write):
  sdb: ios=60149/0, merge=1/0, ticks=28844777/0, in_queue=28854146, util=100.00%

2.4 random write mode
# fio -filename=/dev/sdb -direct=1 -iodepth 32 -thread -rw=randwrite -ioengine=libaio -bs=4k -size=10G -numjobs=10 -runtime=200 -group_reporting -name=mytestfionew
mytestfionew: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-2.99
Starting 10 threads
Jobs: 10 (f=10): [w(10)][100.0%][r=0KiB/s,w=816KiB/s][r=0,w=204 IOPS][eta 00m:00s]
mytestfionew: (groupid=0, jobs=10): err= 0: pid=1564: Tue Aug  1 11:28:43 2017
  write: IOPS=200, BW=800KiB/s (819kB/s)(157MiB/200595msec) ---> 200
    slat (usec): min=2, max=1681.9k, avg=30358.08, stdev=171227.94
    clat (msec): min=47, max=2748, avg=1567.51, stdev=259.92
     lat (msec): min=47, max=2813, avg=1597.87, stdev=206.50
    clat percentiles (msec):
     |  1.00th=[  642],  5.00th=[  659], 10.00th=[ 1502], 20.00th=[ 1636],
     | 30.00th=[ 1636], 40.00th=[ 1636], 50.00th=[ 1636], 60.00th=[ 1653],
     | 70.00th=[ 1653], 80.00th=[ 1653], 90.00th=[ 1653], 95.00th=[ 1653],
     | 99.00th=[ 1787], 99.50th=[ 1804], 99.90th=[ 2601], 99.95th=[ 2601],
     | 99.99th=[ 2601]
   bw (  KiB/s): min=    7, max=  248, per=16.22%, avg=129.74, stdev=117.05, samples=2455
   iops        : min=    1, max=   62, avg=32.43, stdev=29.26, samples=2455
    lat (msec) : 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%, 750=5.83%
    lat (msec) : 1000=1.83%, 2000=91.97%, >=2000=0.34%
  cpu          : usr=0.01%, sys=0.02%, ctx=3550, majf=0, minf=18
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=99.2%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,40125,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=800KiB/s (819kB/s), 800KiB/s-800KiB/s (819kB/s-819kB/s), io=157MiB (164MB), run=200595-200595msec

Disk stats (read/write):
  sdb: ios=40/40092, merge=0/1, ticks=66/28876152, in_queue=28891489, util=100.00%
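
The limits given on the -drive option in step 1 can also be changed at runtime through QMP with the block_set_io_throttle command; a sketch, assuming the guest was additionally started with -qmp unix:/tmp/qmp.sock,server,nowait (all six throttle fields are mandatory in this QEMU version):

{"execute": "qmp_capabilities"}
{"execute": "block_set_io_throttle", "arguments": {"device": "drive_image2",
 "bps": 0, "bps_rd": 0, "bps_wr": 0, "iops": 0, "iops_rd": 300, "iops_wr": 200}}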

Comment 6 Fam Zheng 2017-08-22 07:03:03 UTC
In comment 0, the read/write requests are throttled _after_ request merging in the virtio-blk virtual device, so sequential loads slightly exceed the iops limit because adjacent reads/writes are coalesced into one request and completed together. This is the expected behavior of the current code.

Random reads/writes are rarely merged, so their IOPS match the specified limits.

In comment 5 the IOPS numbers are within the specified limits at the device level: in the sequential read test, the guest block layer merged 87779 requests away, so the 147943 requests fio issued reached the virtual disk as only 60149 requests in about 200 s, i.e. roughly 300 IOPS, exactly the iops_rd limit, even though fio itself reported 738 IOPS.

The fact that requests are merged before being throttled has no effect on bps limits, since merging reduces the request count but not the number of bytes transferred. So only iops limits show this behavior.

I hope this makes it clear. If there is still confusion, please let me know.

It is technically possible to throttle according to unmerged requests, which would make the output exactly match the specified limits, but I don't see much point in doing so. In a typical workload, requests are presumably merged by the guest kernel anyway, even before being submitted to the virtqueue.
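
The guest-side part of this merging can be observed while fio runs; a rough check, assuming the sysstat package is installed in the guest:

# iostat -x 1 /dev/sdb

The rrqm/s and wrqm/s columns count reads/writes merged away per second, while r/s and w/s show the post-merge request rate, which is roughly what reaches the host-side throttle. In the comment 5 read test these correspond to the 87779 merges and the ~300 requests/s noted above.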

