1. Feature Overview:
a) Name of feature: Support blkio SLA features
b) Feature Description: With this feature we can limit a VM's blkio resource consumption. What we'd like to set here is the following:
- weight (blkio.weight in the blkio cgroup)
- weight_device (blkio.weight_device in the blkio cgroup)
- total_bytes_sec (block_set_io_throttle command in the qemu monitor)
- read_bytes_sec (block_set_io_throttle command in the qemu monitor)
- write_bytes_sec (block_set_io_throttle command in the qemu monitor)
- total_iops_sec (block_set_io_throttle command in the qemu monitor)
- read_iops_sec (block_set_io_throttle command in the qemu monitor)
- write_iops_sec (block_set_io_throttle command in the qemu monitor)

2. Feature Details:
a) Architectures: 64-bit Intel EM64T/AMD64
b) Bugzilla Dependencies: None
c) Drivers or hardware dependencies: None
d) Upstream acceptance information: None
e) External links: None
f) Severity (H,M,L): High
g) Feature Needed by: 2013 4Q
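For reference, the throttle values listed above are typically expressed in libvirt domain XML through the <iotune> element of a <disk> definition. A minimal sketch, assuming an illustrative raw file-backed virtio disk (the path, device name, and limit are made up for this example; note that in libvirt the total_* limits cannot be combined with their read_*/write_* counterparts):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/path/to/disk.img'/>
  <target dev='vdb' bus='virtio'/>
  <iotune>
    <!-- illustrative value; IOPS limits are operations/sec,
         bytes limits would use e.g. <total_bytes_sec> -->
    <total_iops_sec>15</total_iops_sec>
  </iotune>
</disk>
```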
Patches worth mentioning/participating in: http://gerrit.ovirt.org/#/c/14636/ http://gerrit.ovirt.org/#/c/14394/
3.5 adds I/O throttling as defined in 1.b. We'd appreciate it if you could test our implementation. For future gaps in this functionality, please file a new RFE. Thanks.
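For context, the throttling is applied per device through the qemu monitor command named in 1.b. A minimal sketch of building the corresponding QMP payload, assuming the classic block_set_io_throttle arguments (the device name and limit values here are illustrative, and 0 means "no limit"):

```python
import json

def build_io_throttle_cmd(device, bps=0, bps_rd=0, bps_wr=0,
                          iops=0, iops_rd=0, iops_wr=0):
    """Build a QMP 'block_set_io_throttle' command dict.

    A value of 0 disables that particular limit. The total limits
    (bps/iops) cannot be combined with their read/write counterparts.
    """
    return {
        "execute": "block_set_io_throttle",
        "arguments": {
            "device": device,
            "bps": bps, "bps_rd": bps_rd, "bps_wr": bps_wr,
            "iops": iops, "iops_rd": iops_rd, "iops_wr": iops_wr,
        },
    }

# Example: cap a hypothetical second disk at 15 total IOPS
cmd = build_io_throttle_cmd("drive-virtio-disk1", iops=15)
print(json.dumps(cmd))
```

In practice this payload would be sent over the QMP socket; the sketch only shows the command shape.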
Fixed in vt3, moving to ON_QA. If you believe this bug isn't released in vt3, please report to rhev-integ.
Hi Martin,

I'm testing the BLKIO support feature and I encountered the following:

Created a new QoS rule for storage in which I set the total IOPS to 15.
Created a new storage profile for the relevant storage domain.
Then I created a new VM with 2 disks: one disk with the OS using the default profile, and the second disk using the new profile (which limits the total IOPS to 15).
I created an ext4 FS on the second disk and mounted it.
I copied a 1G file to the second disk, which is supposed to be limited to 15 IOPS (total).
I measured the IOPS with the iostat tool and got (the device is vdb):

Device:  rrqm/s  wrqm/s     r/s    w/s    rsec/s    wsec/s avgrq-sz avgqu-sz   await  svctm  %util
vda        0.00    0.00  337.76   0.00  86204.08      0.00   255.23     1.68    4.60   2.99 100.92
vdb        0.00    0.00    0.00  97.96      0.00  95681.63   976.75    70.26 1106.98  10.43 102.14

Here is the ioTune parameter as shown in the XML request in engine.log:
specParams={ioTune={total_iops_sec=15}}

Moving to ASSIGNED
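As an aside, the IOPS a device actually serviced can also be computed directly from two samples of the guest's /proc/diskstats (the "reads completed" and "writes completed" columns, i.e. the first and fifth stats after the device name), rather than relying on iostat's averaging. A minimal sketch with made-up counter values:

```python
def iops_from_diskstats(sample1, sample2, interval_sec):
    """Compute total IOPS for one device from two /proc/diskstats lines.

    Each sample is the whitespace-separated line for the device; after
    major, minor, and name, index 3 is reads completed and index 7 is
    writes completed.
    """
    def completed_ios(line):
        fields = line.split()
        return int(fields[3]) + int(fields[7])

    return (completed_ios(sample2) - completed_ios(sample1)) / interval_sec

# Hypothetical samples 10 seconds apart: 50 reads + 100 writes completed
s1 = "253 16 vdb 1000 0 8000 50 2000 0 16000 120 0 300 170"
s2 = "253 16 vdb 1050 0 8400 52 2100 0 16800 125 0 310 177"
print(iops_from_diskstats(s1, s2, 10))  # → 15.0
```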
Further to comment #9: this was tested after the VM was started, without any update to the limit while the VM was running, which, AFAIK, is not supported yet.
Elad,
I have a few questions:
1. What was the result of a limitation on specific reads and writes, for bytes and for IOPS?
2. Where did you run iostat?
(In reply to Doron Fediuck from comment #11)
> Elad,
> I have a few questions:
>
> 1. What was the result of a limitation on specific reads and writes, for
> bytes and for IOPS?

For both IOPS and throughput, the results were inconsistent, i.e., sometimes the limitation seemed to be enforced and sometimes not.

> 2. Where did you run iostat?

On the VM - RHEL 6.5
(In reply to Elad from comment #12)
> For both IOPS and throughput, the results were inconsistent, i.e.,
> sometimes the limitation seemed to be enforced and sometimes not.
>
> > 2. Where did you run iostat?
> On the VM - RHEL 6.5

Elad,
The guest is unaware of the limitation, which is enforced from the outside by libvirt, and timekeeping in guests is not accurate in general. Hence the inconsistencies above. The right approach should probably be to start measuring at the host level. I also suggest reaching out to libvirt QE to see how they test this part of the API. Once we have the 'right' numbers, I suggest opening a BZ for each issue, as each may end up with a different resolution or component.
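One way to check enforcement at the host level is to read the per-device counters the blkio cgroup exposes for the VM, e.g. the cgroup v1 blkio.throttle.io_serviced file. A minimal sketch of parsing its "MAJ:MIN Operation Value" format (the device numbers and counts below are illustrative):

```python
def parse_io_serviced(text, major_minor):
    """Parse a cgroup v1 blkio.throttle.io_serviced dump into an
    {operation: count} dict for a single MAJ:MIN device."""
    counts = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == major_minor:
            counts[parts[1]] = int(parts[2])
    return counts

# Illustrative dump for a device numbered 253:16
sample = """253:16 Read 120
253:16 Write 30
253:16 Sync 90
253:16 Async 60
253:16 Total 150
Total 150"""
print(parse_io_serviced(sample, "253:16")["Total"])  # → 150
```

Sampling this counter on the host twice over a known interval gives the serviced IOPS without depending on guest timekeeping.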
Opened a bug against libvirt: https://bugzilla.redhat.com/show_bug.cgi?id=1151957
Since Bug 1151957 wasn't reproduced, moving to ON_QA to verify this one.
verified (https://tcms.engineering.redhat.com/run/173682/)
Tested using:
vdsm-4.16.7.2-1.el7.x86_64
libvirt-daemon-1.1.1-29.el7_0.3.x86_64
qemu-kvm-rhev-1.5.3-60.el7_0.10.x86_64
If this bug requires doc text for errata release, please provide draft text in the doc text field in the following format:

Cause:
Consequence:
Fix:
Result:

The documentation team will review, edit, and approve the text. If this bug does not require doc text, please set the 'requires_doc_text' flag to -.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-0158.html