Bug 1324919

Summary: [z-stream clone - 3.6.5] Storage QoS is not applying on a Live VM/disk
Product: Red Hat Enterprise Virtualization Manager
Reporter: rhev-integ
Component: mom
Assignee: Andrej Krejcir <akrejcir>
Status: CLOSED ERRATA
QA Contact: meital avital <mavital>
Severity: medium
Docs Contact:
Priority: urgent
Version: 3.5.0
CC: acanan, akrejcir, amureini, dfediuck, iheim, istein, lbopf, lpeer, lsurette, mgoldboi, msivak, nashok, pcuzner, pstehlik, rbalakri, Rhev-m-bugs, sapandit, sherold, s.kieske, srevivo, usurse, ykaul
Target Milestone: ovirt-3.6.5
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: mom-0.5.3-1
Doc Type: Enhancement
Doc Text:
The Memory Overcommitment Manager (MOM) now knows how to read the IO Quality of Service settings from metadata and set the respective ioTune limits to a running virtual machine's disk. This feature allows proper support for disk hot plug and changes to disk QoS for an already-running virtual machine.
Story Points: ---
Clone Of: 1201482
Environment:
Last Closed: 2016-04-20 16:25:25 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: SLA
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1201482
Bug Blocks:

Comment 2 Elad 2016-04-13 09:19:49 UTC
Andrej, Roy,
Does the fix apply to both bandwidth (BW) and IOPS limits?
Also, just to make sure, are the steps to reproduce the ones specified in the description? i.e.:
- create a VM with the default disk profile (no limitation)
- create a QoS rule with a write limitation and attach it to the storage domain
- while the VM is running, change the disk profile to the one that includes the limitation. The limitation should be enforced.

Comment 3 Elad 2016-04-13 14:19:49 UTC
Roy, can you please reset the target release and bug status, as the engine fix is not merged in the latest build?
Thanks

Comment 4 Elad 2016-04-14 05:48:25 UTC
Moving to ASSIGNED, as the bug is not fixed in the engine.

Comment 5 Yaniv Kaul 2016-04-14 07:42:52 UTC
Moving to 3.6.6.

Comment 6 Martin Sivák 2016-04-20 07:48:51 UTC
This is fixed in mom, and mom 0.5.3 has to be released, or the vdsm part of the fix will break the whole vdsm configuration.

Reverting the version targeting and changing the component to mom. We will clone this for the engine part.

Comment 7 Elad 2016-04-20 10:10:46 UTC
Martin, doesn't this bug depend on the fix in the engine (bug 1328731)?

Comment 8 Martin Sivák 2016-04-20 10:36:32 UTC
No, it can be tested even without the engine using a command similar to the following:

# python
from vdsm import vdscli
vmID = ""
domainID = ""
poolID = ""
imageID = ""
volumeID = ""
ioTune = {"total_bytes_sec": 1e5}
vdscli.connect().updateVmPolicy({
    "vmId": vmID,
    "ioTune": [{
        "domainID": domainID,
        "poolID": poolID,
        "imageID": imageID,
        "volumeID": volumeID,
        "maximum": ioTune,
        "guaranteed": ioTune,
    }],
})


Obviously, you have to provide all the necessary IDs from the engine or the libvirt XML.
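For reference, the payload shape expected by updateVmPolicy in the snippet above can be sketched as a standalone helper (the function name and plain-dict construction are illustrative, not part of the vdsm API; note the limit is coerced to int, since vdsm expects integer ioTune values, as comment 10 points out):

```python
def build_io_tune_policy(vm_id, domain_id, pool_id, image_id, volume_id,
                         total_bytes_sec):
    """Build an updateVmPolicy payload limiting one disk's ioTune."""
    io_tune = {"total_bytes_sec": int(total_bytes_sec)}  # vdsm wants ints
    return {
        "vmId": vm_id,
        "ioTune": [{
            "domainID": domain_id,
            "poolID": pool_id,
            "imageID": image_id,
            "volumeID": volume_id,
            "maximum": io_tune,     # hard cap
            "guaranteed": io_tune,  # guaranteed floor
        }],
    }

policy = build_io_tune_policy("vm", "dom", "pool", "img", "vol", 1e5)
```

The returned dict is what would be passed to vdscli.connect().updateVmPolicy() once the real IDs are filled in.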

Comment 9 Elad 2016-04-20 14:52:24 UTC
Updating ioTune while the VM is running is no longer ignored. The disk device is updated successfully, as seen here in the VM's dumpxml:

<disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
      <source file='/rhev/data-center/694fc69a-9309-4af2-a3a3-0b73ec3bf9bc/77cc08c0-ca53-4486-92a2-ba1253b08f5d/images/63394d1f-b913-444a-8e2a-325a8c69fdbe/38457c95-43f1-445b-bf5d-dd662467f9e4'>
        <seclabel model='selinux' labelskip='yes'/>
      </source>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <iotune>
        <total_bytes_sec>100000</total_bytes_sec>
      </iotune>
</disk>
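The applied limit can also be checked programmatically by parsing the dumpxml output; a minimal sketch using a trimmed excerpt of the disk element above (the embedded XML string stands in for the output of virsh dumpxml):

```python
import xml.etree.ElementTree as ET

# Trimmed <disk> element as it appears in the VM's dumpxml after the
# policy update (device name and limit taken from the excerpt above).
disk_xml = """
<disk type='file' device='disk' snapshot='no'>
  <target dev='vdb' bus='virtio'/>
  <iotune>
    <total_bytes_sec>100000</total_bytes_sec>
  </iotune>
</disk>
"""

disk = ET.fromstring(disk_xml)
limit = int(disk.findtext("iotune/total_bytes_sec"))
print(limit)  # the total_bytes_sec cap set via updateVmPolicy
```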


Steps:
1) Started a VM with an OS disk attached (balloon device enabled)
2) Created and attached a new disk to the VM, created a FS and mounted it in the guest
3) Tested writing speed, got ~250 MB/s
4) Updated the VM disk to write at 100 KB/s (total bytes per sec) using the python script suggested in comment #8
5) Checked the VM xml and tested writing speed in the guest. Writing speed was reduced

*Note: tested without engine*
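The write-speed measurement in steps 3 and 5 can be done inside the guest with dd; a minimal sketch (the target path is a placeholder for a file on the mounted filesystem from step 2, and conv=fsync flushes the data to disk so the reported throughput is meaningful):

```shell
# Write 10 MiB and let dd report the achieved throughput on stderr.
# Point the output file at the filesystem mounted on the QoS-limited disk.
dd if=/dev/zero of=/tmp/qos-test.img bs=1M count=10 conv=fsync
```

With the 100 KB/s cap active, the same command should report throughput close to the configured total_bytes_sec limit.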

Used:
RHEL7.2
vdsm-4.17.26-0.el7ev.noarch
mom-0.5.3-1.el7ev.noarch
libvirt-daemon-1.2.17-13.el7_2.4.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.10.x86_64

Comment 10 Martin Sivák 2016-04-20 14:54:59 UTC
Just for future reference:

All ioTune fields have to be provided, and the values have to be integers (1e5 is parsed as a float by Python).

A corrected ioTune dict for the snippet in comment #8:

ioTune = {"total_bytes_sec": 100000, "read_bytes_sec": 0, "write_bytes_sec": 0, "total_iops_sec": 0, "read_iops_sec": 0, "write_iops_sec": 0}
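The two constraints above can be captured in a small helper that fills in the missing fields and coerces every value to int (the function name and the 0-as-unlimited default are illustrative, not part of mom or vdsm):

```python
IO_TUNE_FIELDS = ("total_bytes_sec", "read_bytes_sec", "write_bytes_sec",
                  "total_iops_sec", "read_iops_sec", "write_iops_sec")

def normalize_io_tune(partial):
    """Return a complete ioTune dict with all six fields as ints.

    Missing fields default to 0 (no limit); float literals such as
    1e5 are coerced to int so the values are accepted.
    """
    return {f: int(partial.get(f, 0)) for f in IO_TUNE_FIELDS}

ioTune = normalize_io_tune({"total_bytes_sec": 1e5})
```

Passing the result of normalize_io_tune() instead of a hand-written dict avoids both pitfalls from comment 8.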

Comment 12 errata-xmlrpc 2016-04-20 16:25:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0657.html