1. Feature Overview:
a) Name of feature:
Support blkio SLA features
b) Feature Description:
With this feature we can limit a VM's blkio resource consumption.
What we'd like to set up here is the following:
- weight (blkio.weight in blkio cgroup)
- weight_device (blkio.weight_device in blkio cgroup)
- total_bytes_sec (block_set_io_throttle command in qemu monitor)
- read_bytes_sec (block_set_io_throttle command in qemu monitor)
- write_bytes_sec (block_set_io_throttle command in qemu monitor)
- total_iops_sec (block_set_io_throttle command in qemu monitor)
- read_iops_sec (block_set_io_throttle command in qemu monitor)
- write_iops_sec (block_set_io_throttle command in qemu monitor)
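The settings above can be exercised from the host with virsh; a minimal sketch, assuming a domain named `myvm` with a disk whose guest-side target is `vdb` (both names are hypothetical):

```shell
# Relative blkio weight for the whole VM (maps to blkio.weight in the blkio cgroup)
virsh blkiotune myvm --weight 500

# Per-device weight (maps to blkio.weight_device); /dev/sda is a host-side path
virsh blkiotune myvm --device-weights /dev/sda,300

# I/O throttling on a single disk; libvirt issues block_set_io_throttle
# to the QEMU monitor under the hood
virsh blkdeviotune myvm vdb --total-iops-sec 15
```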
2. Feature Details:
a) Architecture:
64-bit Intel EM64T/AMD64
b) Bugzilla Dependencies:
c) Drivers or hardware dependencies:
d) Upstream acceptance information:
e) External links:
f) Severity (H,M,L):
g) Feature Needed by:
worth mentioning/participating in:
3.5 adds I/O throttling as defined in 1.b.
We'd appreciate it if you could test our implementation.
For any future gaps in this functionality, please file a new RFE.
Fixed in vt3, moving to ON_QA.
If you believe this bug isn't released in vt3, please report to firstname.lastname@example.org
I'm testing the BLKIO support feature and I encountered the following:
Created a new QoS rule for storage in which I set the total IOPS to 15. Created a new storage profile for the relevant storage domain.
Then I created a new VM with 2 disks: one OS disk with the default profile, and a second disk with the new profile (which limits the total IOPS to 15).
I created an ext4 FS on the second disk and mounted it.
I copied a 1G file to the second disk, which is supposed to be limited to 15 IOPS (total).
I measured the IOPS with the iostat tool and got (the device is vdb):
Device:  rrqm/s  wrqm/s     r/s     w/s    rsec/s    wsec/s  avgrq-sz  avgqu-sz    await  svctm   %util
vda        0.00    0.00  337.76    0.00  86204.08      0.00    255.23      1.68     4.60   2.99  100.92
vdb        0.00    0.00    0.00   97.96      0.00  95681.63    976.75     70.26  1106.98  10.43  102.14
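The steps above can be sketched as a shell sequence; a rough reconstruction, assuming the throttled disk appears as `/dev/vdb` in the guest and the mount point `/mnt/test` is hypothetical. Note that a plain `cp` goes through the guest page cache, so guest-side `iostat` may show bursts rather than a steady 15 IOPS; `oflag=direct` on a `dd` write bypasses the cache and gives a cleaner measurement:

```shell
# Create and mount a filesystem on the throttled disk
mkfs.ext4 /dev/vdb
mkdir -p /mnt/test && mount /dev/vdb /mnt/test

# Write 1G with direct I/O so page-cache buffering doesn't skew the numbers
dd if=/dev/zero of=/mnt/test/file bs=1M count=1024 oflag=direct

# Extended per-device statistics, refreshed every second
iostat -x 1 /dev/vdb
```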
Here is the ioTune parameter as shown in the XML request in engine.log:
Moving to ASSIGNED
Further to comment #9: it was tested after the VM was started, with no update applied to it while running, which, AFAIK, is not supported yet.
I have a few questions:
1. What was the result of a limitation on specific reads and writes, for bytes? For IOPS?
2. Where did you run iostat?
(In reply to Doron Fediuck from comment #11)
> I have a few questions;
> 1. What was the result of a limitation to specific reads and writes for
> bytes? IOPS?
For both IOPS and throughput, the results were inconsistent, i.e., sometimes the limitation seemed to be enforced and sometimes not.
> 2. Where did you run iostat?
On the VM (RHEL 6.5)
(In reply to Elad from comment #12)
> (In reply to Doron Fediuck from comment #11)
> > Elad,
> > I have a few questions;
> > 1. What was the result of a limitation to specific reads and writes for
> > bytes? IOPS?
> For both IOPS and throughput, the results were inconsistent, i.e., sometimes
> the limitation seemed to be enforced and sometimes not.
> > 2. Where did you run iostat?
> On the VM - rhel6.5
The guest is unaware of the limitation, which is enforced from the outside
by libvirt, and the concept of time is not accurate in guests in general.
Hence the above inconsistencies. The right approach should probably start at the
host level. I also suggest reaching out to libvirt QE to see how they test
this part of the API.
Once we have the 'right' numbers, I suggest opening a BZ for each issue,
as each may end up with a different resolution or component.
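One way to follow the suggestion above and measure at the host level, rather than inside the guest, is a sketch like the following (the domain name `myvm` and device target `vdb` are assumptions):

```shell
# Cumulative request/byte counters as libvirt sees them
virsh domblkstat myvm vdb

# Sample twice, 10 seconds apart, then divide the wr_req delta by 10
# to get the effective write IOPS actually enforced by the throttle
virsh domblkstat myvm vdb; sleep 10; virsh domblkstat myvm vdb
```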
Opened a bug against libvirt.
Since bug 1151957 wasn't reproduced, moving to ON_QA to verify this one.
If this bug requires doc text for errata release, please provide draft text in the doc text field in the following format:
The documentation team will review, edit, and approve the text.
If this bug does not require doc text, please set the 'requires_doc_text' flag to -.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.