Bug 1324919 - [z-stream clone - 3.6.5] Storage QoS is not applying on a Live VM/disk
Summary: [z-stream clone - 3.6.5] Storage QoS is not applying on a Live VM/disk
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: mom
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: medium
Target Milestone: ovirt-3.6.5
Assignee: Andrej Krejcir
QA Contact: meital avital
URL:
Whiteboard:
Depends On: 1201482
Blocks:
 
Reported: 2016-04-07 15:09 UTC by rhev-integ
Modified: 2019-12-16 05:37 UTC
CC List: 22 users

Fixed In Version: mom-0.5.3-1
Doc Type: Enhancement
Doc Text:
The Memory Overcommitment Manager (MOM) now knows how to read the IO Quality of Service settings from metadata and set the respective ioTune limits to a running virtual machine's disk. This feature allows proper support for disk hot plug and changes to disk QoS for an already-running virtual machine.
Clone Of: 1201482
Environment:
Last Closed: 2016-04-20 16:25:25 UTC
oVirt Team: SLA
Target Upstream Version:
Embargoed:
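
The Doc Text above describes MOM applying ioTune limits to a running virtual machine's disk. At the libvirt level this corresponds to the block I/O tuning API; the following is only an illustrative sketch of that call (not MOM's actual code), assuming libvirt-python, a running domain named "testvm", and a disk with target "vdb":

# Illustrative sketch only, not MOM's implementation.
# Assumes libvirt-python, a running domain "testvm" and a disk target "vdb".
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("testvm")

# All ioTune values must be integers; unset limits are expressed as 0.
ioTune = {"total_bytes_sec": 100000,  # 100 KB/s total throughput cap
          "read_bytes_sec": 0,
          "write_bytes_sec": 0,
          "total_iops_sec": 0,
          "read_iops_sec": 0,
          "write_iops_sec": 0}

# Apply the limits to the live domain only.
dom.setBlockIoTune("vdb", ioTune, libvirt.VIR_DOMAIN_AFFECT_LIVE)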


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:0657 0 normal SHIPPED_LIVE MOM enhancement and bug fix update 2016-04-20 20:24:00 UTC
oVirt gerrit 52743 0 master MERGED core: Enable live QoS change for cpu and IO 2016-04-11 11:31:09 UTC
oVirt gerrit 52746 0 master MERGED Apply storage QoS on running VM 2016-04-07 15:09:41 UTC
oVirt gerrit 52748 0 master MERGED Expose IO limits to policies 2016-04-07 15:09:41 UTC
oVirt gerrit 53438 0 master MERGED core: Refactor - created helper class for IoTune 2016-04-11 06:06:45 UTC
oVirt gerrit 54208 0 master MERGED Add MoM scripts to change storage QoS on running VM 2016-04-07 15:09:41 UTC
oVirt gerrit 55056 0 master MERGED core: Dao functions take lists of ids 2016-04-11 06:31:34 UTC
oVirt gerrit 55820 0 ovirt-3.6 MERGED Apply storage QoS on running VM 2016-04-08 09:02:15 UTC
oVirt gerrit 55821 0 ovirt-3.6 MERGED Add MoM scripts to change storage QoS on running VM 2016-04-08 09:02:22 UTC

Comment 2 Elad 2016-04-13 09:19:49 UTC
Andrej, Roy,
Is the fix applied for both BW and IOPS? 
And also, just to make sure: are the steps to reproduce the ones specified in the description? I.e.:
- create a VM with the default disk profile (no limitation) 
- create a QoS rule with write limitation and attach it to the storage domain
- while the VM is running, change disk profile to the one that includes the limitation. Limitation should be enforced.

Comment 3 Elad 2016-04-13 14:19:49 UTC
Roy, can you please reset the target release and bug status as we don't have the fix in engine merged in the latest build? 
Thanks

Comment 4 Elad 2016-04-14 05:48:25 UTC
Moving to ASSIGNED, as the bug is not fixed in the engine.

Comment 5 Yaniv Kaul 2016-04-14 07:42:52 UTC
Moving to 3.6.6.

Comment 6 Martin Sivák 2016-04-20 07:48:51 UTC
This is fixed in mom, and mom 0.5.3 has to be released; otherwise the vdsm part of the fix will break the whole vdsm configuration.

Reverting the version targeting and changing the component to mom. We will clone this for the engine part.

Comment 7 Elad 2016-04-20 10:10:46 UTC
Martin, doesn't this bug depend on the fix in the engine (bug 1328731)?

Comment 8 Martin Sivák 2016-04-20 10:36:32 UTC
No, it can be tested even without the engine using a command similar to the following:

# python
from vdsm import vdscli

# Fill in the IDs of the running VM and its disk image before running this.
vmID = ""
domainID = ""
poolID = ""
imageID = ""
volumeID = ""
ioTune = {"total_bytes_sec": 1e5}

vdscli.connect().updateVmPolicy({"vmId": vmID,
                                 "ioTune": [{"domainID": domainID,
                                             "poolID": poolID,
                                             "imageID": imageID,
                                             "volumeID": volumeID,
                                             "maximum": ioTune,
                                             "guaranteed": ioTune}]})


Obviously, you have to provide all the necessary IDs from the engine or the libvirt XML.
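
For reference, those IDs can be read off the disk's <source file=...> path in the domain XML (see the dumpxml in comment 9); a minimal sketch, assuming the standard /rhev/data-center/<poolID>/<domainID>/images/<imageID>/<volumeID> layout and a hypothetical helper name:

# Hypothetical helper, not part of vdsm: split the IDs out of the disk source
# path shown in the domain XML (layout assumed as described above).
def ids_from_source_path(path):
    parts = path.strip("/").split("/")
    # parts: ["rhev", "data-center", poolID, domainID, "images", imageID, volumeID]
    return parts[2], parts[3], parts[5], parts[6]

poolID, domainID, imageID, volumeID = ids_from_source_path(
    "/rhev/data-center/694fc69a-9309-4af2-a3a3-0b73ec3bf9bc/"
    "77cc08c0-ca53-4486-92a2-ba1253b08f5d/images/"
    "63394d1f-b913-444a-8e2a-325a8c69fdbe/38457c95-43f1-445b-bf5d-dd662467f9e4")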

Comment 9 Elad 2016-04-20 14:52:24 UTC
Updating ioTune while the VM is running is no longer ignored. The disk device is updated successfully, as seen here in the VM's dumpxml:

<disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
      <source file='/rhev/data-center/694fc69a-9309-4af2-a3a3-0b73ec3bf9bc/77cc08c0-ca53-4486-92a2-ba1253b08f5d/images/63394d1f-b913-444a-8e2a-325a8c69fdbe/38457c95-43f1-445b-bf5d-dd662467f9e4'>
        <seclabel model='selinux' labelskip='yes'/>
      </source>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <iotune>
        <total_bytes_sec>100000</total_bytes_sec>
      </iotune>


Steps:
1) Started a VM with an OS disk attached (balloon device enabled)
2) Created and attached a new disk to the VM, created a filesystem and mounted it in the guest
3) Tested write speed; got ~250 Mb/s
4) Updated the VM disk to limit writes to 100 KB/s (total bytes per sec) using the Python snippet suggested in comment #8
5) Checked the VM XML and tested write speed in the guest; write speed was reduced as expected

*Note: tested without engine*

Used:
RHEL7.2
vdsm-4.17.26-0.el7ev.noarch
mom-0.5.3-1.el7ev.noarch
libvirt-daemon-1.2.17-13.el7_2.4.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.10.x86_64
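
For completeness, the applied limits can also be read back from libvirt without dumping the full XML; a minimal sketch, assuming libvirt-python, a placeholder domain name "myvm", and the disk target "vdb" from the dumpxml above:

# Illustrative check, assuming libvirt-python; "myvm" is a placeholder domain name.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("myvm")

# Returns the current ioTune settings of the live domain,
# e.g. {'total_bytes_sec': 100000, 'read_bytes_sec': 0, ...}
print(dom.blockIoTune("vdb", libvirt.VIR_DOMAIN_AFFECT_LIVE))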

Comment 10 Martin Sivák 2016-04-20 14:54:59 UTC
Just for future reference:

All ioTune fields have to be provided, and the values have to be integers (1e5 is evaluated as a float by Python).

A corrected ioTune dictionary to replace the one from comment #8:

ioTune = {"total_bytes_sec": 100000, "read_bytes_sec": 0, "write_bytes_sec": 0, "total_iops_sec": 0, "read_iops_sec": 0, "write_iops_sec": 0}
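
Putting comments 8 and 10 together, a corrected end-to-end call would look roughly like this (same placeholder IDs as in comment 8, to be filled in by hand):

# Corrected variant of the comment 8 snippet: all six ioTune fields present,
# integer values only.
from vdsm import vdscli

vmID = ""
domainID = ""
poolID = ""
imageID = ""
volumeID = ""
ioTune = {"total_bytes_sec": 100000, "read_bytes_sec": 0, "write_bytes_sec": 0,
          "total_iops_sec": 0, "read_iops_sec": 0, "write_iops_sec": 0}

vdscli.connect().updateVmPolicy({"vmId": vmID,
                                 "ioTune": [{"domainID": domainID,
                                             "poolID": poolID,
                                             "imageID": imageID,
                                             "volumeID": volumeID,
                                             "maximum": ioTune,
                                             "guaranteed": ioTune}]})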

Comment 12 errata-xmlrpc 2016-04-20 16:25:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0657.html

