Bug 1324919 - [z-stream clone - 3.6.5] Storage QoS is not applying on a Live VM/disk
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: mom
Version: 3.5.0
Hardware/OS: Unspecified Unspecified
Priority: urgent  Severity: medium
Target Milestone: ovirt-3.6.5
Target Release: ---
Assigned To: Andrej Krejcir
QA Contact: meital avital
Keywords: ZStream
Depends On: 1201482
Blocks:
Reported: 2016-04-07 11:09 EDT by rhev-integ
Modified: 2018-08-02 03:44 EDT (History)
24 users

See Also:
Fixed In Version: mom-0.5.3-1
Doc Type: Enhancement
Doc Text:
The Memory Overcommitment Manager (MOM) now knows how to read the IO Quality of Service settings from metadata and set the respective ioTune limits to a running virtual machine's disk. This feature allows proper support for disk hot plug and changes to disk QoS for an already-running virtual machine.
Story Points: ---
Clone Of: 1201482
Environment:
Last Closed: 2016-04-20 12:25:25 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: SLA
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 52743 master MERGED core: Enable live QoS change for cpu and IO 2016-04-11 07:31 EDT
oVirt gerrit 52746 master MERGED Apply storage QoS on running VM 2016-04-07 11:09 EDT
oVirt gerrit 52748 master MERGED Expose IO limits to policies 2016-04-07 11:09 EDT
oVirt gerrit 53438 master MERGED core: Refactor - created helper class for IoTune 2016-04-11 02:06 EDT
oVirt gerrit 54208 master MERGED Add MoM scripts to change storage QoS on running VM 2016-04-07 11:09 EDT
oVirt gerrit 55056 master MERGED core: Dao functions take lists of ids 2016-04-11 02:31 EDT
oVirt gerrit 55820 ovirt-3.6 MERGED Apply storage QoS on running VM 2016-04-08 05:02 EDT
oVirt gerrit 55821 ovirt-3.6 MERGED Add MoM scripts to change storage QoS on running VM 2016-04-08 05:02 EDT
Red Hat Product Errata RHBA-2016:0657 normal SHIPPED_LIVE MOM enhancement and bug fix update 2016-04-20 16:24:00 EDT

Comment 2 Elad 2016-04-13 05:19:49 EDT
Andrej, Roy,
Is the fix applied for both BW and IOPS? 
And also, just to make sure, the steps to reproduce are the ones specified in the description? i.e.:
- create a VM with the default disk profile (no limitation)
- create a QoS rule with a write limitation and attach it to the storage domain
- while the VM is running, change the disk profile to the one that includes the limitation. The limitation should be enforced.
Comment 3 Elad 2016-04-13 10:19:49 EDT
Roy, can you please reset the target release and bug status, as we don't have the engine part of the fix merged in the latest build?
Thanks
Comment 4 Elad 2016-04-14 01:48:25 EDT
Moving to ASSIGNED as the bug is not fixed in the engine.
Comment 5 Yaniv Kaul 2016-04-14 03:42:52 EDT
Moving to 3.6.6.
Comment 6 Martin Sivák 2016-04-20 03:48:51 EDT
This is fixed in mom and mom 0.5.3 has to be released or the vdsm part of the fix will break the whole vdsm configuration.

Reverting the version targeting and changing the component to mom. We will clone this for the engine part.
Comment 7 Elad 2016-04-20 06:10:46 EDT
Martin, doesn't this bug depend on the fix in the engine (1328731)?
Comment 8 Martin Sivák 2016-04-20 06:36:32 EDT
No, it can be tested even without the engine using a command similar to the following:

# python
from vdsm import vdscli

vmID = ""
domainID = ""
poolID = ""
imageID = ""
volumeID = ""
ioTune = {"total_bytes_sec": 1e5}

vdscli.connect().updateVmPolicy({
    "vmId": vmID,
    "ioTune": [{
        "domainID": domainID,
        "poolID": poolID,
        "imageID": imageID,
        "volumeID": volumeID,
        "maximum": ioTune,
        "guaranteed": ioTune,
    }],
})


Obviously you have to provide all the necessary IDs from the engine or the libvirt XML.
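If you only have host access, the four storage IDs can also be read off the disk's source path, which in oVirt follows the layout /rhev/data-center/&lt;poolID&gt;/&lt;domainID&gt;/images/&lt;imageID&gt;/&lt;volumeID&gt;. A minimal sketch assuming that layout (parse_rhev_path is a hypothetical helper, not part of vdsm):

```python
# Hypothetical helper: derive the IDs needed by updateVmPolicy from a
# disk's source path, assuming the usual oVirt storage layout:
#   /rhev/data-center/<poolID>/<domainID>/images/<imageID>/<volumeID>
def parse_rhev_path(path):
    parts = path.strip("/").split("/")
    # parts: ['rhev', 'data-center', poolID, domainID, 'images', imageID, volumeID]
    return {
        "poolID": parts[2],
        "domainID": parts[3],
        "imageID": parts[5],
        "volumeID": parts[6],
    }
```

The result can be merged directly into the dict passed to updateVmPolicy above.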
Comment 9 Elad 2016-04-20 10:52:24 EDT
Updating ioTune while the VM is running is no longer ignored. The disk device is updated successfully, as seen here in the VM dumpxml:

<disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
      <source file='/rhev/data-center/694fc69a-9309-4af2-a3a3-0b73ec3bf9bc/77cc08c0-ca53-4486-92a2-ba1253b08f5d/images/63394d1f-b913-444a-8e2a-325a8c69fdbe/38457c95-43f1-445b-bf5d-dd662467f9e4'>
        <seclabel model='selinux' labelskip='yes'/>
      </source>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <iotune>
        <total_bytes_sec>100000</total_bytes_sec>
      </iotune>
    </disk>


Steps:
1) Started a VM with an OS disk attached (balloon device enabled)
2) Created and attached a new disk to the VM, created a filesystem and mounted it in the guest
3) Tested write speed, got ~250MB/s
4) Updated the VM disk for writing at 100KB/s (total bytes per sec) using the python script suggested in comment #8
5) Checked the VM XML and tested write speed in the guest. Write speed was reduced

*Note: tested without engine*

Used:
RHEL7.2
vdsm-4.17.26-0.el7ev.noarch
mom-0.5.3-1.el7ev.noarch
libvirt-daemon-1.2.17-13.el7_2.4.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.10.x86_64
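The enforced limit in the dumpxml output above can also be checked programmatically rather than by eye; a minimal sketch using Python's standard-library ElementTree, applied to the disk fragment quoted above:

```python
import xml.etree.ElementTree as ET

# Disk XML fragment as reported by `virsh dumpxml` in the verification above.
disk_xml = """
<disk type='file' device='disk' snapshot='no'>
  <target dev='vdb' bus='virtio'/>
  <iotune>
    <total_bytes_sec>100000</total_bytes_sec>
  </iotune>
</disk>
"""

disk = ET.fromstring(disk_xml)
# Read the enforced total_bytes_sec limit out of the <iotune> element.
limit = int(disk.findtext("iotune/total_bytes_sec"))
print(limit)  # -> 100000
```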
Comment 10 Martin Sivák 2016-04-20 10:54:59 EDT
Just for future reference:

All ioTune fields have to be provided and the values have to be integers (1e5 is a float literal in Python).

A corrected ioTune dict for the snippet in comment #8:

ioTune = {"total_bytes_sec": 100000, "read_bytes_sec": 0, "write_bytes_sec": 0, "total_iops_sec": 0, "read_iops_sec": 0, "write_iops_sec": 0}
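Comments 8 and 10 together imply a small normalization step before calling updateVmPolicy: fill in all six fields and coerce every value to int. A minimal sketch (normalize_iotune is a hypothetical helper, not part of vdsm):

```python
# Hypothetical helper (not part of vdsm): produce a complete ioTune dict
# with integer values, since partial dicts and floats are rejected.
IOTUNE_FIELDS = (
    "total_bytes_sec", "read_bytes_sec", "write_bytes_sec",
    "total_iops_sec", "read_iops_sec", "write_iops_sec",
)

def normalize_iotune(limits):
    """Return an ioTune dict covering all fields; missing ones default to 0."""
    return {field: int(limits.get(field, 0)) for field in IOTUNE_FIELDS}

# The float from comment #8 becomes a valid integer limit:
print(normalize_iotune({"total_bytes_sec": 1e5}))
```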
Comment 12 errata-xmlrpc 2016-04-20 12:25:25 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0657.html
