Bug 1120246 - [RFE] Support IO QoS features
Summary: [RFE] Support IO QoS features
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: oVirt
Classification: Retired
Component: vdsm
Version: 3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.5.1
Assignee: Martin Sivák
QA Contact: Gil Klein
URL:
Whiteboard: sla
Depends On: 1085049
Blocks:
 
Reported: 2014-07-16 14:15 UTC by Martin Sivák
Modified: 2016-02-10 19:42 UTC
CC List: 21 users

Fixed In Version: ovirt-3.5.1_rc1
Clone Of: 1085049
Environment:
Last Closed: 2015-01-21 16:02:01 UTC
oVirt Team: SLA
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1162774 0 unspecified CLOSED CPU SLA policy is not updated in VDSM via JSON-RPC 2021-02-22 00:41:40 UTC
oVirt gerrit 28712 0 None None None Never
oVirt gerrit 28713 0 None None None Never
oVirt gerrit 28714 0 None None None Never
oVirt gerrit 28715 0 None None None Never
oVirt gerrit 28895 0 None None None Never
oVirt gerrit 28896 0 None None None Never
oVirt gerrit 29059 0 None None None Never
oVirt gerrit 29115 0 None None None Never
oVirt gerrit 30066 0 ovirt-3.5 MERGED Extract the DOM to Drive name, alias and path logic to reusable method Never
oVirt gerrit 30067 0 ovirt-3.5 MERGED tests: make api check support falsey values Never
oVirt gerrit 30068 0 ovirt-3.5 MERGED Fix the API definition for cpu tune methods Never
oVirt gerrit 30069 0 ovirt-3.5 MERGED Improve the _validateIoTuneParams so the params are passed as argument Never
oVirt gerrit 30070 0 ovirt-3.5 MERGED Refactor updateVmPolicy to use DOM manipulation Never
oVirt gerrit 30071 0 ovirt-3.5 MERGED Refactor XMLElement to virt.utils Never
oVirt gerrit 30072 0 ovirt-3.5 MERGED vm: fix getVmPolicy return type. Never
oVirt gerrit 30073 0 ovirt-3.5 MERGED tests: fix pep8 errors Never
oVirt gerrit 30074 0 ovirt-3.5 MERGED Add IO tunables support to updateVmPolicy Never
oVirt gerrit 30075 0 ovirt-3.5 MERGED Add setIoTune and getIoTunePolicy to the xml-rpc API Never
oVirt gerrit 30076 0 ovirt-3.5 MERGED Add API.VM.setIoTune Never
oVirt gerrit 30077 0 ovirt-3.5 MERGED Add API.VM.getIoTunePolicy Never
oVirt gerrit 30078 0 ovirt-3.5 MERGED Change method names in vmtune.py to the PEP8 style Never
oVirt gerrit 30099 0 ovirt-3.5 MERGED janitorial: move isVdsmImage into utils Never
oVirt gerrit 30175 0 ovirt-3.5 MERGED Collect current QoS settings for IO devices and report through RunningVmStats Never
oVirt gerrit 30233 0 ovirt-3.5 MERGED Support ioTune values >2^31 in getStats over xml-rpc Never

Internal Links: 1162774

Description Martin Sivák 2014-07-16 14:15:23 UTC
This RFE is about adding support for IO quality-of-service (QoS) tuning parameters.

These will be used by the oVirt 3.5 engine.

The actual parameters we want to control are:

- total_bytes_sec
- read_bytes_sec
- write_bytes_sec
- total_iops_sec
- read_iops_sec
- write_iops_sec

We will store them directly in the device description (VDSM already supports this when the disk is attached) and as upper and lower bounds in the metadata section for a future MoM routine to use.

We will add new API calls to the VM object for this - updateVmPolicy, setIoTune, and getIoTunePolicy - plus the necessary data collection in getStats.
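
For illustration, a minimal sketch of the kind of per-disk limits structure these verbs carry. The key names are the libvirt/qemu throttling parameters listed above, but the surrounding call shape is a hypothetical stand-in, not the exact VDSM signature (those are defined in the gerrit patches linked from this bug):

# Illustrative only: the shape of a per-disk ioTune limits entry, using
# the libvirt/qemu throttling keys listed above.
io_tune = {
    "total_bytes_sec": 0,            # 0 means unlimited
    "read_bytes_sec": 10 * 1024**2,  # cap reads at 10 MiB/s
    "write_bytes_sec": 5 * 1024**2,  # cap writes at 5 MiB/s
    "total_iops_sec": 0,
    "read_iops_sec": 400,
    "write_iops_sec": 200,
}

# A setIoTune-style verb would attach such a structure to a disk; the
# call below is a hypothetical shape, not the actual VDSM API:
#   vm.setIoTune([{"name": "vda", "ioTune": io_tune}])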

+++ This bug was initially created as a clone of Bug #1085049 +++

Feature Description:
With this feature we can limit a VM's blkio resource consumption.
We'd like to set up the following (see the sketch after the list):

- weight (blkio.weight in blkio cgroup) 
- weight_device (blkio.weight_device in blkio cgroup)
- total_bytes_sec (block_set_io_throttle command in qemu monitor)
- read_bytes_sec (block_set_io_throttle command in qemu monitor)
- write_bytes_sec (block_set_io_throttle command in qemu monitor)
- total_iops_sec (block_set_io_throttle command in qemu monitor)
- read_iops_sec (block_set_io_throttle command in qemu monitor)
- write_iops_sec (block_set_io_throttle command in qemu monitor)
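
As a rough sketch of where these tunables land, here is how the same knobs look through the libvirt-python bindings ("example-vm", "vda", and the limit values are placeholders, not taken from this bug):

# Sketch: weight goes through the blkio cgroup (blkiotune); the
# *_bytes_sec / *_iops_sec limits go through qemu's
# block_set_io_throttle (per-disk iotune).
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("example-vm")

# blkio cgroup weight for the whole domain:
dom.setBlkioParameters({"weight": 500}, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# Per-disk throttling, enforced by qemu:
dom.setBlockIoTune(
    "vda",
    {
        "total_bytes_sec": 0,            # 0 disables the combined cap
        "read_bytes_sec": 10 * 1024**2,
        "write_bytes_sec": 5 * 1024**2,
        "total_iops_sec": 0,
        "read_iops_sec": 400,
        "write_iops_sec": 200,
    },
    libvirt.VIR_DOMAIN_AFFECT_LIVE,
)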

--- Additional comment from Sven Kieske on 2014-04-08 03:49:14 EDT ---

Would this just work with virtio-blk or also with virtio-scsi?

--- Additional comment from Scott Herold on 2014-04-23 15:51:48 EDT ---

RE: Comment 1, I need to defer to Gilad on how this was implemented. The libvirt documentation leads me to believe that at the libvirt level the settings are controller agnostic (http://libvirt.org/formatdomain.html#elementsDisks) and are applied per individual disk, not per controller. The documentation states: "Any device that looks like a disk, be it a floppy, harddisk, cdrom, or paravirtualized driver is specified via the disk element." This leads me to believe it should also be compatible with paravirtualized virtio-scsi devices.

--- Additional comment from Doron Fediuck on 2014-05-13 22:16:40 EDT ---

(In reply to Scott Herold from comment #2)
> RE: Comment 1, I need to defer to Gilad on how this was implemented. 

The iotune element documentation states: "Currently, the only tuning
available is Block I/O throttling for qemu."

Eric, care to shed some light on the current status?
I.e., what happens when type='network' and the protocol is iscsi?

--- Additional comment from Eric Blake on 2014-07-03 08:47:39 EDT ---

(In reply to Doron Fediuck from comment #3)
> (In reply to Scott Herold from comment #2)
> > RE: Comment 1, I need to defer to Gilad on how this was implemented. 
> 
> The iotune element documentation states: "Currently, the only tuning
> available is Block I/O throttling for qemu."
>
> Eric, care to shed some light on the current status?
> I.e., what happens when type='network' and the protocol is iscsi?

At the libvirt level, there are two separate throttling points.

One is <blkiotune> at the top <domain> level, which can only throttle things via cgroups on the host, at the host block device level. Because it is done on host block devices, it is not very fine-grained (if a guest has more than one <disk> mapped as files, but both files live on the same block device, then you cannot throttle them independently), and it is limited in what it can throttle (a type='network' <disk> element has no corresponding block device on the host, so it can't be throttled).

The other is <iotune> at the <disk> level, which throttles solely based on qemu command line arguments. At this level, the throttling is enforced by qemu, and theoretically works on ANY guest device. But you'd have to ask the qemu folks to make sure it does the throttling you are interested in; also be aware that <blkiotune> was implemented first, and <iotune> later, so it may be a matter of which throttling points have been backported to the qemu/libvirt combo you are using.
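
Both throttling points are also visible from the libvirt-python bindings; a small sketch with placeholder names ("example-vm", "vda"):

# Query the two throttling points described above.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("example-vm")

# <blkiotune>: host-side cgroup throttling; only covers disks backed by
# host block devices.
print(dom.blkioParameters())

# <iotune>: enforced by qemu per disk, so it also covers disks without a
# host block device (e.g. type='network').
print(dom.blockIoTune("vda"))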

Comment 1 Sandro Bonazzola 2014-10-17 12:27:11 UTC
oVirt 3.5 has been released and should include the fix for this issue.

Comment 2 Dan Kenigsberg 2014-10-25 21:57:32 UTC
The relevant patches have NOT been taken into vdsm's ovirt-3.5 branch yet. This feature is NOT in 3.5.0.

Comment 3 Sandro Bonazzola 2015-01-15 14:25:31 UTC
This is an automated message:
This bug should be fixed in oVirt 3.5.1 RC1; moving to QA.

Comment 4 Sandro Bonazzola 2015-01-21 16:02:01 UTC
oVirt 3.5.1 has been released. If problems still persist, please make note of it in this bug report.

