Bug 906938 - PRD35 - [RFE] Support blkio SLA features
Summary: PRD35 - [RFE] Support blkio SLA features
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: RFEs
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assignee: Gilad Chaplik
QA Contact: Kevin Alon Goldblatt
URL:
Whiteboard: sla
Depends On: 1151957
Blocks: 1032276 1075672 1123960 rhev3.5beta 1156165
 
Reported: 2013-02-01 23:10 UTC by Satoru Moriya
Modified: 2016-02-10 20:16 UTC
CC List: 24 users

Fixed In Version: vt3
Doc Type: Enhancement
Doc Text:
With this update, support for storage quality of service has been added.
Clone Of:
Environment:
Last Closed: 2015-02-11 17:52:00 UTC
oVirt Team: SLA
sgrinber: Triaged+




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:0158 normal SHIPPED_LIVE Important: Red Hat Enterprise Virtualization Manager 3.5.0 2015-02-11 22:38:50 UTC
Red Hat Bugzilla 1032276 None None None Never

Internal Links: 1032276

Description Satoru Moriya 2013-02-01 23:10:57 UTC
1. Feature Overview:
   a) Name of feature:
      Support blkio SLA features

   b) Feature Description:
      With this feature we can limit a VM's blkio resource consumption.
      What we'd like to be able to set is the following:

       - weight (blkio.weight in blkio cgroup) 
       - weight_device (blkio.weight_device in blkio cgroup)
       - total_bytes_sec (block_set_io_throttle command in qemu monitor)
       - read_bytes_sec (block_set_io_throttle command in qemu monitor)
       - write_bytes_sec (block_set_io_throttle command in qemu monitor)
       - total_iops_sec (block_set_io_throttle command in qemu monitor)
       - read_iops_sec (block_set_io_throttle command in qemu monitor)
       - write_iops_sec (block_set_io_throttle command in qemu monitor)
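
      For reference, on the hypervisor side these settings map roughly onto
      libvirt's domain XML as follows (a hedged sketch: the element names are
      from libvirt's <blkiotune>/<iotune> schema, while the device path and
      values are purely illustrative):

      ```xml
      <domain type='kvm'>
        <!-- cgroup-based weights (blkio.weight / blkio.weight_device) -->
        <blkiotune>
          <weight>500</weight>
          <device>
            <path>/dev/sda</path>
            <weight>300</weight>
          </device>
        </blkiotune>
        <devices>
          <disk type='block' device='disk'>
            <!-- qemu-level throttling (block_set_io_throttle) -->
            <iotune>
              <total_bytes_sec>10485760</total_bytes_sec>
              <total_iops_sec>15</total_iops_sec>
            </iotune>
            <target dev='vdb' bus='virtio'/>
          </disk>
        </devices>
      </domain>
      ```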

2. Feature Details:
   a) Architectures:
         64-bit Intel EM64T/AMD64

   b) Bugzilla Dependencies:
      None

   c) Drivers or hardware dependencies:
      None

   d) Upstream acceptance information:
      None

   e) External links:
      None

   f) Severity (H,M,L):
      High

   g) Feature Needed by:
      2013 4Q

Comment 4 Itamar Heim 2013-07-08 15:20:21 UTC
worth mentioning/participating in:
http://gerrit.ovirt.org/#/c/14636/
http://gerrit.ovirt.org/#/c/14394/

Comment 7 Doron Fediuck 2014-06-18 13:18:19 UTC
3.5 adds I/O throttling as defined in 1.b.
We'd appreciate it if you could test our implementation.
For any future gaps in this functionality, please file a new RFE.
Thanks

Comment 8 Eyal Edri 2014-09-10 20:21:29 UTC
fixed in vt3, moving to on_qa.
if you believe this bug isn't released in vt3, please report to rhev-integ@redhat.com

Comment 9 Elad 2014-10-05 06:32:16 UTC
Hi Martin,

I'm testing the BLKIO support feature and I encountered the following:
Created a new storage QoS rule in which I set the total IOPS limit to 15. Created a new storage profile for the relevant storage domain.
Then I created a new VM with two disks: one OS disk with the default profile, and a second disk with the new profile (which limits total IOPS to 15).
I created an ext4 FS on the second disk and mounted it.
I copied a 1G file to the second disk, which is supposed to be limited to 15 IOPS (total).
I measured the IOPS with the iostat tool and got (the device is vdb):

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
vda               0.00     0.00  337.76    0.00 86204.08     0.00   255.23     1.68    4.60   2.99 100.92
vdb               0.00     0.00    0.00   97.96     0.00 95681.63   976.75    70.26 1106.98  10.43 102.14
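
Converting that vdb sample into absolute numbers (a sketch of the arithmetic only; iostat's rsec/wsec columns count 512-byte sectors) gives roughly 98 write IOPS and ~47 MiB/s, nowhere near the 15 IOPS cap:

```python
SECTOR_BYTES = 512  # iostat's rsec/s and wsec/s columns count 512-byte sectors

# vdb figures from the iostat sample above
w_per_sec = 97.96        # write requests completed per second
wsec_per_sec = 95681.63  # sectors written per second

write_mib_per_sec = wsec_per_sec * SECTOR_BYTES / (1024 * 1024)
print(round(w_per_sec))          # observed write IOPS
print(round(write_mib_per_sec))  # observed write throughput in MiB/s
```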



Here is the ioTune parameter as shown in the XML request in engine.log:

specParams={ioTune={total_iops_sec=15}}


Moving to ASSIGNED

Comment 10 Elad 2014-10-05 06:51:32 UTC
Further to comment #9: this was tested after the VM was started, without any update to it while running, which, AFAIK, is not supported yet.

Comment 11 Doron Fediuck 2014-10-12 09:53:49 UTC
Elad,
I have a few questions;

1. What was the result of a limitation to specific reads and writes for bytes? IOPS?
2. Where did you run iostat?

Comment 12 Elad 2014-10-12 10:09:32 UTC
(In reply to Doron Fediuck from comment #11)
> Elad,
> I have a few questions;
> 
> 1. What was the result of a limitation to specific reads and writes for
> bytes? IOPS?

For both IOPS and throughput, the results were inconsistent, i.e., sometimes the limitation seemed to be enforced and sometimes not.
> 2. Where did you run iostat?
On the VM - rhel6.5

Comment 13 Doron Fediuck 2014-10-12 14:07:15 UTC
(In reply to Elad from comment #12)
> (In reply to Doron Fediuck from comment #11)
> > Elad,
> > I have a few questions;
> > 
> > 1. What was the result of a limitation to specific reads and writes for
> > bytes? IOPS?
> 
> For both IOPS and throughput, the results were inconsistent, i.e., sometimes
> the limitation seemed to be enforced and sometimes not.
> > 2. Where did you run iostat?
> On the VM - rhel6.5

Elad,
The guest is unaware of the limitation, which is enforced from the outside
by libvirt, and timekeeping inside guests is not accurate in general.
Hence the inconsistencies above. Measurement should probably start at the
host level. I also suggest reaching out to libvirt QE to see how they test
this part of the API.

Once we have the 'right' numbers, I suggest opening a BZ for each issue,
as it may end up in a different resolution or component.
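
For example, if the 15 IOPS throttle from the QoS rule actually reached libvirt, `virsh dumpxml <vm>` on the host should show something like this for the throttled disk (a sketch; the device name is the one from comment #9):

```xml
<disk type='block' device='disk'>
  <iotune>
    <total_iops_sec>15</total_iops_sec>
  </iotune>
  <target dev='vdb' bus='virtio'/>
</disk>
```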

Comment 14 Elad 2014-10-13 08:01:43 UTC
Opened a bug to libvirt 
https://bugzilla.redhat.com/show_bug.cgi?id=1151957

Comment 15 Doron Fediuck 2014-11-02 21:18:20 UTC
Since bug 1151957 wasn't reproduced, moving to ON_QA to verify this one.

Comment 16 Aharon Canan 2014-11-03 05:27:10 UTC
verified (https://tcms.engineering.redhat.com/run/173682/)

Comment 17 Elad 2014-11-03 06:35:13 UTC
Tested Using:

vdsm-4.16.7.2-1.el7.x86_64
libvirt-daemon-1.1.1-29.el7_0.3.x86_64
qemu-kvm-rhev-1.5.3-60.el7_0.10.x86_64

Comment 18 Julie 2015-02-05 00:38:49 UTC
If this bug requires doc text for errata release, please provide draft text in the doc text field in the following format:

Cause:
Consequence:
Fix:
Result:

The documentation team will review, edit, and approve the text.

If this bug does not require doc text, please set the 'requires_doc_text' flag to -.

Comment 20 errata-xmlrpc 2015-02-11 17:52:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0158.html

