Bug 1476188
Summary: | [GSS] The "read_iops_sec" and "write_iops_sec" limits do not take effect when running fio in read/write mode | | |
---|---|---|---|
Product: | Red Hat Enterprise Linux 7 | Reporter: | liuwei <wliu> |
Component: | qemu-kvm-rhev | Assignee: | Fam Zheng <famz> |
Status: | CLOSED NOTABUG | QA Contact: | Gu Nini <ngu> |
Severity: | low | Docs Contact: | |
Priority: | low | ||
Version: | 7.3 | CC: | aliang, chayang, coli, juzhang, knoel, michen, ngu, pingl, shuang, virt-maint, xuwei |
Target Milestone: | rc | ||
Target Release: | --- | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2017-08-22 07:03:03 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
liuwei
2017-07-28 08:33:48 UTC
Reproduced the issue with the following packages:

Host:
- kernel-3.10.0-693.el7
- qemu-kvm-rhev-2.9.0-16.el7_4.3

Guest:
- kernel-3.10.0-693.el7

Test steps:

1. Boot the guest with the following options:

```
-drive id=drive_image2,if=none,snapshot=off,aio=native,cache=none,format=qcow2,file=/home/testrun/diskfile/data.qcow2,iops_rd=300,iops_wr=200 \
-device scsi-hd,id=image2,drive=drive_image2,bootindex=2 \
```

2. Run fio in read, write, random-read, and random-write mode.

2.1 read mode

```
# fio -filename=/dev/sdb -direct=1 -iodepth 32 -thread -rw=read -ioengine=libaio -bs=4k -size=10G -numjobs=10 -runtime=200 -group_reporting -name=mytestfionew
mytestfionew: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-2.99
Starting 10 threads
Jobs: 10 (f=10): [R(10)][100.0%][r=3211KiB/s,w=0KiB/s][r=802,w=0 IOPS][eta 00m:00s]
mytestfionew: (groupid=0, jobs=10): err= 0: pid=6481: Tue Aug 1 10:19:22 2017
  read: IOPS=738, BW=2952KiB/s (3023kB/s)(578MiB/200433msec)  ---> 738
    slat (nsec): min=1965, max=80093k, avg=617287.61, stdev=2010352.03
    clat (usec): min=181, max=859300, avg=432682.38, stdev=11957.36
     lat (usec): min=188, max=859306, avg=433300.16, stdev=11841.26
    clat percentiles (msec):
     |  1.00th=[  426],  5.00th=[  426], 10.00th=[  430], 20.00th=[  435],
     | 30.00th=[  435], 40.00th=[  435], 50.00th=[  435], 60.00th=[  435],
     | 70.00th=[  435], 80.00th=[  435], 90.00th=[  435], 95.00th=[  435],
     | 99.00th=[  439], 99.50th=[  439], 99.90th=[  464], 99.95th=[  584],
     | 99.99th=[  802]
   bw (KiB/s): min=255, max=514, per=10.01%, avg=295.41, stdev=74.77, samples=3999
   iops      : min=63, max=128, avg=73.83, stdev=18.66, samples=3999
  lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%
  lat (msec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.02%, 250=0.03%
  lat (msec) : 500=99.86%, 750=0.04%, 1000=0.02%
  cpu        : usr=0.05%, sys=0.12%, ctx=51103, majf=0, minf=346
  IO depths  : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.8%, >=64=0.0%
     submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=147943,0,0, short=0,0,0, dropped=0,0,0
     latency : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=2952KiB/s (3023kB/s), 2952KiB/s-2952KiB/s (3023kB/s-3023kB/s), io=578MiB (606MB), run=200433-200433msec

Disk stats (read/write):
  sdb: ios=60149/0, merge=87779/0, ticks=25892118/0, in_queue=25897376, util=100.00%
```

2.2 write mode

```
# fio -filename=/dev/sdb -direct=1 -iodepth 32 -thread -rw=write -ioengine=libaio -bs=4k -size=10G -numjobs=10 -runtime=200 -group_reporting -name=mytestfionew
mytestfionew: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-2.99
Starting 10 threads
Jobs: 10 (f=10): [W(10)][100.0%][r=0KiB/s,w=1864KiB/s][r=0,w=466 IOPS][eta 00m:00s]
mytestfionew: (groupid=0, jobs=10): err= 0: pid=6495: Tue Aug 1 10:24:27 2017
  write: IOPS=490, BW=1961KiB/s (2008kB/s)(384MiB/200651msec)  ---> 490
    slat (usec): min=2, max=74151, avg=1242.72, stdev=4357.37
    clat (msec): min=59, max=1299, avg=650.84, stdev=25.58
     lat (msec): min=59, max=1299, avg=652.08, stdev=25.53
    clat percentiles (msec):
     |  1.00th=[  634],  5.00th=[  642], 10.00th=[  642], 20.00th=[  651],
     | 30.00th=[  651], 40.00th=[  651], 50.00th=[  651], 60.00th=[  651],
     | 70.00th=[  659], 80.00th=[  659], 90.00th=[  659], 95.00th=[  667],
     | 99.00th=[  684], 99.50th=[  693], 99.90th=[  986], 99.95th=[ 1167],
     | 99.99th=[ 1301]
   bw (KiB/s): min=6, max=257, per=10.42%, avg=204.32, stdev=65.50, samples=3840
   iops      : min=1, max=64, avg=51.08, stdev=16.37, samples=3840
  lat (msec) : 100=0.04%, 250=0.03%, 500=0.08%, 750=99.64%, 1000=0.11%
  lat (msec) : 2000=0.10%
  cpu        : usr=0.04%, sys=0.10%, ctx=34640, majf=0, minf=20
  IO depths  : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=99.7%, >=64=0.0%
     submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,98389,0, short=0,0,0, dropped=0,0,0
     latency : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=1961KiB/s (2008kB/s), 1961KiB/s-1961KiB/s (2008kB/s-2008kB/s), io=384MiB (403MB), run=200651-200651msec

Disk stats (read/write):
  sdb: ios=149/40148, merge=0/58121, ticks=449/25892795, in_queue=25894547, util=100.00%
```

2.3 random read mode

```
# fio -filename=/dev/sdb -direct=1 -iodepth 32 -thread -rw=randread -ioengine=libaio -bs=4k -size=10G -numjobs=10 -runtime=200 -group_reporting -name=mytestfio
mytestfio: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-2.99
Starting 10 threads
Jobs: 10 (f=10): [r(10)][100.0%][r=1201KiB/s,w=0KiB/s][r=300,w=0 IOPS][eta 00m:00s]
mytestfio: (groupid=0, jobs=10): err= 0: pid=16288: Tue Aug 1 10:28:34 2017
  read: IOPS=300, BW=1201KiB/s (1229kB/s)(235MiB/200459msec)  ---> 300
    slat (usec): min=2, max=770139, avg=20306.09, stdev=115392.72
    clat (usec): min=222, max=1533.8k, avg=1044605.54, stdev=188238.21
     lat (usec): min=231, max=1789.0k, avg=1064912.00, stdev=154126.36
    clat percentiles (msec):
     |  1.00th=[  426],  5.00th=[  439], 10.00th=[  684], 20.00th=[ 1099],
     | 30.00th=[ 1099], 40.00th=[ 1099], 50.00th=[ 1099], 60.00th=[ 1116],
     | 70.00th=[ 1116], 80.00th=[ 1116], 90.00th=[ 1116], 95.00th=[ 1116],
     | 99.00th=[ 1116], 99.50th=[ 1116], 99.90th=[ 1301], 99.95th=[ 1435],
     | 99.99th=[ 1536]
   bw (KiB/s): min=8, max=313, per=11.88%, avg=142.52, stdev=116.77, samples=3360
   iops      : min=2, max=78, avg=35.63, stdev=29.19, samples=3360
  lat (usec) : 250=0.01%, 500=0.04%, 750=0.01%
  lat (msec) : 10=0.01%, 250=0.01%, 500=6.73%, 750=3.93%, 1000=0.33%
  lat (msec) : 2000=88.94%
  cpu        : usr=0.01%, sys=0.03%, ctx=5991, majf=0, minf=343
  IO depths  : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=99.5%, >=64=0.0%
     submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=60171,0,0, short=0,0,0, dropped=0,0,0
     latency : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=1201KiB/s (1229kB/s), 1201KiB/s-1201KiB/s (1229kB/s-1229kB/s), io=235MiB (246MB), run=200459-200459msec

Disk stats (read/write):
  sdb: ios=60149/0, merge=1/0, ticks=28844777/0, in_queue=28854146, util=100.00%
```

2.4 random write mode

```
# fio -filename=/dev/sdb -direct=1 -iodepth 32 -thread -rw=randwrite -ioengine=libaio -bs=4k -size=10G -numjobs=10 -runtime=200 -group_reporting -name=mytestfionew
mytestfionew: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-2.99
Starting 10 threads
Jobs: 10 (f=10): [w(10)][100.0%][r=0KiB/s,w=816KiB/s][r=0,w=204 IOPS][eta 00m:00s]
mytestfionew: (groupid=0, jobs=10): err= 0: pid=1564: Tue Aug 1 11:28:43 2017
  write: IOPS=200, BW=800KiB/s (819kB/s)(157MiB/200595msec)  ---> 200
    slat (usec): min=2, max=1681.9k, avg=30358.08, stdev=171227.94
    clat (msec): min=47, max=2748, avg=1567.51, stdev=259.92
     lat (msec): min=47, max=2813, avg=1597.87, stdev=206.50
    clat percentiles (msec):
     |  1.00th=[  642],  5.00th=[  659], 10.00th=[ 1502], 20.00th=[ 1636],
     | 30.00th=[ 1636], 40.00th=[ 1636], 50.00th=[ 1636], 60.00th=[ 1653],
     | 70.00th=[ 1653], 80.00th=[ 1653], 90.00th=[ 1653], 95.00th=[ 1653],
     | 99.00th=[ 1787], 99.50th=[ 1804], 99.90th=[ 2601], 99.95th=[ 2601],
     | 99.99th=[ 2601]
   bw (KiB/s): min=7, max=248, per=16.22%, avg=129.74, stdev=117.05, samples=2455
   iops      : min=1, max=62, avg=32.43, stdev=29.26, samples=2455
  lat (msec) : 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%, 750=5.83%
  lat (msec) : 1000=1.83%, 2000=91.97%, >=2000=0.34%
  cpu        : usr=0.01%, sys=0.02%, ctx=3550, majf=0, minf=18
  IO depths  : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=99.2%, >=64=0.0%
     submit  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,40125,0, short=0,0,0, dropped=0,0,0
     latency : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=800KiB/s (819kB/s), 800KiB/s-800KiB/s (819kB/s-819kB/s), io=157MiB (164MB), run=200595-200595msec

Disk stats (read/write):
  sdb: ios=40/40092, merge=0/1, ticks=66/28876152, in_queue=28891489, util=100.00%
```

In comment 0, the read/write requests are throttled _after_ request merging in the virtual block device, so sequential loads slightly exceed the iops limit: adjacent reads/writes are coalesced into one operation and completed together. This is the expected behavior of the current code. Random reads/writes are rarely merged, so their iops match the specified limits. In comment 5 the iops numbers are within the specified limits.

Because requests are merged before being throttled, bps limits are unaffected; only iops shows this behavior. I hope this makes it clear. If there is still confusion, please let me know.

It is technically possible to throttle according to unmerged requests, which would make the output exactly match the specified limits, but I don't see much point in doing so. In a typical workload, requests are presumably merged by the guest kernel anyway, even before being submitted to the virtqueue.
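The merge-then-throttle behavior described above can be sketched with a small simulation. This is illustrative only, not QEMU code: the `max_merge` window of 16 and the assumption that only physically adjacent requests coalesce are arbitrary simplifications, and the `iops_limit` of 300 matches the `iops_rd` value from the reproduction steps.

```python
import random

def throttled_ops(requests, max_merge=16):
    """Count the operations the throttle layer sees after merging.

    Adjacent requests (next offset == previous end) are assumed to
    coalesce into a single operation, up to max_merge per operation.
    """
    ops, in_op, prev_end = 0, 0, None
    for offset, length in requests:
        if offset != prev_end or in_op >= max_merge:
            ops += 1          # start a new merged operation
            in_op = 1
        else:
            in_op += 1        # coalesce with the previous request
        prev_end = offset + length
    return ops

def guest_visible_iops(requests, iops_limit=300):
    """Guest-side IOPS when the throttle admits iops_limit merged ops/s."""
    seconds = throttled_ops(requests) / iops_limit
    return len(requests) / seconds

# 1000 sequential 4 KiB requests merge heavily; random ones almost never do.
seq = [(i * 4096, 4096) for i in range(1000)]
random.seed(0)
rand = [(random.randrange(10**6) * 4096, 4096) for _ in range(1000)]

print(guest_visible_iops(seq))   # well above the 300 iops limit
print(guest_visible_iops(rand))  # ~300, matching the limit
```

Each merged sequential operation completes several guest requests while counting as one against the throttle, which is why the sequential fio runs report IOPS above the configured limit while the random runs sit at it; the exact overshoot depends on how aggressively the guest merges, so the numbers here will not match the fio output above.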