Bug 906938
Summary: | PRD35 - [RFE] Support blkio SLA features | ||
---|---|---|---|
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Satoru Moriya <smoriya> |
Component: | RFEs | Assignee: | Gilad Chaplik <gchaplik> |
Status: | CLOSED ERRATA | QA Contact: | Kevin Alon Goldblatt <kgoldbla> |
Severity: | high | Docs Contact: | |
Priority: | urgent | ||
Version: | unspecified | CC: | acanan, dfediuck, ebenahar, gchaplik, gklein, iheim, juwu, lpeer, ltroan, lwang, masaki.kimura.kz, mavital, mitsuhiro.tanino.gm, msivak, mtanino, noboru.obata.ar, rbalakri, satoru.moriya.br, scohen, seiji.aguchi.tr, sherold, takahiro.yasui.mp, tsekiyam, ylavi |
Target Milestone: | --- | Keywords: | FutureFeature |
Target Release: | 3.5.0 | Flags: | sgrinber: Triaged+ |
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | sla | ||
Fixed In Version: | vt3 | Doc Type: | Enhancement |
Doc Text: | With this update, support for storage quality of service has been added. |
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2015-02-11 17:52:00 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | SLA | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1151957 | ||
Bug Blocks: | 1032276, 1075672, 1123960, 1142923, 1156165 |
Description
Satoru Moriya
2013-02-01 23:10:57 UTC
worth mentioning/participating in:
http://gerrit.ovirt.org/#/c/14636/
http://gerrit.ovirt.org/#/c/14394/

3.5 adds I/O throttling as defined in 1.b. We'd appreciate it if you could test our implementation. For future gaps in this functionality, please file a new RFE. Thanks.

Fixed in vt3, moving to ON_QA. If you believe this bug isn't released in vt3, please report to rhev-integ.

Hi Martin,
I'm testing the BLKIO support feature and I encountered the following:
Created a new QoS rule for storage in which I set the total IOPS to 15. Created a new storage profile for the relevant storage domain. Then I created a new VM with 2 disks: one disk with the OS using the default profile, and the second disk set to use the new profile (which limits the total IOPS to 15). I created an ext4 FS on the second disk and mounted it. I copied a 1G file to the second disk, which is supposed to be limited to 15 IOPS (total). I measured the IOPS with the iostat tool and got (the device is vdb):

Device:  rrqm/s  wrqm/s     r/s    w/s    rsec/s    wsec/s  avgrq-sz  avgqu-sz    await  svctm   %util
vda        0.00    0.00  337.76   0.00  86204.08      0.00    255.23      1.68     4.60   2.99  100.92
vdb        0.00    0.00    0.00  97.96      0.00  95681.63    976.75     70.26  1106.98  10.43  102.14

Here is the ioTune parameter as shown in the XML request in engine.log:
specParams={ioTune={total_iops_sec=15}}

Moving to ASSIGNED.

Further to Comment #9, it was tested after the VM was started, without any update to it while it was running, which, AFAIK, is not supported yet.

Elad,
I have a few questions:
1. What was the result of a limitation to specific reads and writes for bytes? IOPS?
2. Where did you run iostat?

(In reply to Doron Fediuck from comment #11)
> Elad,
> I have a few questions;
>
> 1. What was the result of a limitation to specific reads and writes for
> bytes? IOPS?

For both IOPS and throughput, the results were inconsistent, i.e. sometimes the limitation seemed to be enforced and sometimes not.

> 2. Where did you run iostat?

On the VM - RHEL 6.5.

(In reply to Elad from comment #12)
> (In reply to Doron Fediuck from comment #11)
> > Elad,
> > I have a few questions;
> >
> > 1. What was the result of a limitation to specific reads and writes for
> > bytes? IOPS?
>
> For both IOPS and throughput, the results were inconsistent, i.e. sometimes
> the limitation seemed to be enforced and sometimes not.
>
> > 2. Where did you run iostat?
> On the VM - RHEL 6.5.

Elad,
The guest is unaware of the limitation, which is applied from the outside by libvirt, and the concept of time is not accurate in guests in general. Hence the inconsistencies above. The right way to measure should probably start at the host level. I also suggest reaching out to libvirt QE to see how they test this part of the API. Once we have the 'right' numbers, I suggest opening a BZ for each issue, as each may end up with a different resolution or component.

Opened a bug against libvirt:
https://bugzilla.redhat.com/show_bug.cgi?id=1151957

Since Bug 1151957 wasn't reproduced, moving to ON_QA to verify this one.

Tested using:
vdsm-4.16.7.2-1.el7.x86_64
libvirt-daemon-1.1.1-29.el7_0.3.x86_64
qemu-kvm-rhev-1.5.3-60.el7_0.10.x86_64

If this bug requires doc text for errata release, please provide draft text in the doc text field in the following format:
Cause:
Consequence:
Fix:
Result:
The documentation team will review, edit, and approve the text. If this bug does not require doc text, please set the 'requires_doc_text' flag to -.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHSA-2015-0158.html |
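
Following up on the suggestion in comment #13 to verify at the host level rather than inside the guest: below is a minimal sketch of how the applied throttle could be inspected directly through libvirt on the hypervisor. The domain name (rhel6-vm) and disk target (vdb) are placeholders, not values from this bug; only standard virsh subcommands are used.

```
# Run on the hypervisor hosting the VM (not inside the guest).

# List the guest's block devices to find the target name of the throttled disk.
virsh domblklist rhel6-vm

# Query the I/O tuning parameters libvirt has applied to that disk.
# With no limit options given, blkdeviotune prints the current settings;
# total_iops_sec should report 15 if the engine's ioTune specParams
# ({total_iops_sec=15}) reached libvirt.
virsh blkdeviotune rhel6-vm vdb

# The same values should appear in the live domain XML under <disk>/<iotune>.
virsh dumpxml rhel6-vm | grep -A 3 '<iotune>'
```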
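
Likewise, a hedged sketch of measuring the effective request rate from the host side, since guest-side iostat (comment #9) can report inconsistent numbers under throttling. virsh domblkstat exposes cumulative read/write request counters for a disk, so sampling it twice gives an approximate IOPS figure; domain and device names are again placeholders.

```
# Sample libvirt's cumulative request counters for the throttled disk twice,
# 10 seconds apart, while the copy workload runs inside the guest.
virsh domblkstat rhel6-vm vdb          # note rd_req and wr_req
sleep 10
virsh domblkstat rhel6-vm vdb          # note rd_req and wr_req again

# Approximate IOPS = (delta rd_req + delta wr_req) / 10.
# With total_iops_sec=15 applied, this should stay at or below roughly 15.

# Alternatively, watch the disk's backing device on the host with iostat
# (sysstat package); the device path can be taken from domblklist above.
iostat -x 1 /dev/mapper/<backing-device>
```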