Bug 1415042 - The throttling options bps_wr_max/iops_wr_max/iops_rd_max do not work during bursts.
Summary: The throttling options bps_wr_max/iops_wr_max/iops_rd_max do not work during bursts.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.4
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Stefan Hajnoczi
QA Contact: Gu Nini
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-20 05:02 UTC by Yongxue Hong
Modified: 2017-08-01 07:18 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-01-24 11:39:25 UTC
Target Upstream Version:
Embargoed:



Description Yongxue Hong 2017-01-20 05:02:19 UTC
Description of problem:
The bps_wr_max option is set on the qemu command line to limit write throughput during bursts, but an fio test shows that the limit does not take effect.

Version-Release number of selected component (if applicable):
Host kernel: 3.10.0-543.el7.ppc64le
qemu: qemu-kvm-rhev-2.8.0-2.el7
slof: SLOF-20160223-6.gitdbbfda4.el7
Guest kernel: 3.10.0-543.el7.ppc64

How reproducible:
100%

Steps to Reproduce:
1. Boot a guest with a virtio-scsi disk and set the throttle values (the applied limits can be read back over QMP; see the sketch after these steps), e.g.:
/usr/libexec/qemu-kvm \
-name RHEL7-8398 \
-M pseries-rhel7.4.0 \
-m 64G \
-smp 4 \
-boot menu=on,order=c \
-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=06 \
-drive file=/home/hyx/os/RHEL-7.3-20161019.0-Server-ppc64-dvd1.iso,if=none,media=cdrom,id=image0 \
-device scsi-cd,id=scsi-cd0,drive=image0,channel=0,scsi-id=0,lun=0,bootindex=1 \
-drive file=/home/hyx/image/RHEL7-8398-2-25G.raw,if=none,media=disk,format=raw,cache=none,iops_rd=10000,iops_rd_max=40000,iops_wr=10000,iops_wr_max=40000,bps_rd=1024000,bps_rd_max=2048000,bps_wr=1024000,bps_wr_max=2048000,id=image2 \
-device scsi-hd,id=scsi-hd2,drive=image2,channel=0,scsi-id=0,lun=1 \
-drive file=/home/hyx/image/RHEL7-8398-20G.raw,if=none,media=disk,format=raw,id=image1 \
-device scsi-hd,id=scsi-hd1,drive=image1,channel=0,scsi-id=0,lun=2,bootindex=0 \
-device nec-usb-xhci,id=xhci \
-drive file=/home/hyx/image/ubs-1G.raw,if=none,id=stick,bps_rd=102400,iops_rd=100,bps_wr=102400,iops_wr=100 \
-device usb-storage,removable=on,bus=xhci.0,drive=stick \
-netdev tap,id=hostnet0,script=/etc/qemu-ifup \
-device spapr-vlan,netdev=hostnet0,id=virtio-net-pci0,mac=70:e2:84:14:0e:23 \
-rtc base=utc,clock=vm \
-monitor stdio \
-serial unix:./sock2,server,nowait \
-qmp tcp:0:3000,server,nowait \
-device usb-tablet \
-vnc :1

2. Run fio in the guest, e.g.:
fio --filename=/dev/sdb --direct=1 --rw=write --bs=64k --size=1000M --name=test --iodepth=1 --runtime=1
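
Optionally, the throttle limits that QEMU actually applied can be read back over the QMP socket opened by "-qmp tcp:0:3000" in step 1. The following is a minimal sketch (not part of the original report); it assumes the QMP socket is reachable on localhost:3000 and that the throttled drive is the one with id=image2:

#!/usr/bin/env python3
# Sketch: query the write-throttle limits QEMU applied to the throttled drive
# via the QMP socket from the reproducer command line (assumed reachable on
# localhost:3000).
import json, socket

sock = socket.create_connection(("localhost", 3000))
f = sock.makefile("rw")

def qmp(command):
    f.write(json.dumps({"execute": command}) + "\r\n")
    f.flush()
    while True:                      # skip the greeting and any events
        msg = json.loads(f.readline())
        if "return" in msg:
            return msg["return"]

qmp("qmp_capabilities")              # leave capabilities negotiation mode
for blk in qmp("query-block"):
    ins = blk.get("inserted") or {}
    if ins.get("bps_wr_max"):        # only the drive with burst limits set
        print(blk["device"],
              "bps_wr =", ins["bps_wr"],
              "bps_wr_max =", ins["bps_wr_max"])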

Actual results:
[root@dhcp112-199 ~]# fio --filename=/dev/sdb --direct=1 --rw=write --bs=64k --size=1000M --name=test --iodepth=1 --runtime=1
test: (g=0): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=sync, iodepth=1
fio-2.2.8
Starting 1 process
Jobs: 1 (f=1)
test: (groupid=0, jobs=1): err= 0: pid=6594: Thu Jan 19 22:38:25 2017
  write: io=3072.0KB, bw=3041.6KB/s, iops=47, runt=  1010msec
    clat (usec): min=486, max=104716, avg=21025.79, stdev=29149.76
     lat (usec): min=486, max=104717, avg=21026.56, stdev=29149.90
    clat percentiles (usec):
     |  1.00th=[  486],  5.00th=[ 1880], 10.00th=[ 1928], 20.00th=[ 2096],
     | 30.00th=[ 2192], 40.00th=[ 2352], 50.00th=[ 2576], 60.00th=[ 3152],
     | 70.00th=[23168], 80.00th=[63232], 90.00th=[64256], 95.00th=[67072],
     | 99.00th=[104960], 99.50th=[104960], 99.90th=[104960], 99.95th=[104960],
     | 99.99th=[104960]
    bw (KB  /s): min= 1005, max= 5120, per=100.00%, avg=3062.50, stdev=2909.74
    lat (usec) : 500=2.08%, 1000=2.08%
    lat (msec) : 2=8.33%, 4=50.00%, 10=4.17%, 50=8.33%, 100=22.92%
    lat (msec) : 250=2.08%
  cpu          : usr=0.00%, sys=0.00%, ctx=48, majf=0, minf=5
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=48/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=3072KB, aggrb=3041KB/s, minb=3041KB/s, maxb=3041KB/s, mint=1010msec, maxt=1010msec

Disk stats (read/write):
  sdb: ios=16/46, merge=0/0, ticks=10/890, in_queue=910, util=91.27%

Expected results:
The bandwidth (bw) reported by fio should be limited to 2048KB/s.

Additional info:

Comment 1 Yongxue Hong 2017-01-20 09:18:40 UTC
When testing the throttling of iops_wr_max/iops_rd_max during bursts, the limit does not take effect either.

Comment 6 Gu Nini 2017-01-24 11:39:25 UTC
This is the expected result.

You can test for longer than 1 second and check the result again.

When running for only 1 second, the I/O rate can reach 'bps + bps_max': with the leaky bucket algorithm, up to 'bps_max' of I/O can enter the bucket during that second while another 'bps' worth of I/O leaks out, so both contribute to the observed rate.
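
As a rough illustration of that arithmetic (a sketch added for clarity, not part of the original comment, using the bps_wr=1024000 and bps_wr_max=2048000 values from the reproducer command line):

# Rough arithmetic sketch of the leaky-bucket behaviour described above,
# using the write-throttle values from the reproducer command line.
bps_wr     = 1024000   # sustained write limit in bytes/s (the leak rate)
bps_wr_max = 2048000   # burst allowance in bytes (the bucket size)

# During a 1-second burst the bucket can admit up to bps_wr_max bytes while
# another bps_wr bytes leak out, so the observed rate can approach:
peak = bps_wr + bps_wr_max
print(peak / 1024, "KB/s")   # ~3000 KB/s, close to fio's reported 3041 KB/s

# Over a longer run the burst allowance is consumed once, so the average
# rate converges back to the sustained limit (e.g. fio --runtime=60):
runtime = 60
avg = (bps_wr * runtime + bps_wr_max) / runtime
print(avg / 1024, "KB/s")    # ~1033 KB/s, approaching bps_wr (1000 KB/s)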

You can refer to comment #6 through comment #12 in the following bug for details:
https://bugzilla.redhat.com/show_bug.cgi?id=1150403

