Bug 1324919
| Summary: | [z-stream clone - 3.6.5] Storage QoS is not applying on a Live VM/disk | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | rhev-integ |
| Component: | mom | Assignee: | Andrej Krejcir <akrejcir> |
| Status: | CLOSED ERRATA | QA Contact: | meital avital <mavital> |
| Severity: | medium | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 3.5.0 | CC: | acanan, akrejcir, amureini, dfediuck, iheim, istein, lbopf, lpeer, lsurette, mgoldboi, msivak, nashok, pcuzner, pstehlik, rbalakri, Rhev-m-bugs, sapandit, sherold, s.kieske, srevivo, usurse, ykaul |
| Target Milestone: | ovirt-3.6.5 | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | mom-0.5.3-1 | Doc Type: | Enhancement |
| Doc Text: | The Memory Overcommitment Manager (MOM) now knows how to read the IO Quality of Service settings from metadata and set the respective ioTune limits on a running virtual machine's disk. This feature allows proper support for disk hot plug and for changing the disk QoS of an already-running virtual machine. (See the illustrative sketch after this table.) | | |
| Story Points: | --- | | |
| Clone Of: | 1201482 | Environment: | |
| Last Closed: | 2016-04-20 16:25:25 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | SLA | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1201482 | | |
| Bug Blocks: | | | |
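The Doc Text above describes MOM applying ioTune limits to a live disk. As an illustration only, not MOM's actual implementation, here is a minimal sketch of what setting ioTune on a running domain's disk looks like at the libvirt level; it assumes libvirt-python, a hypothetical domain named "example-vm", and a disk attached as "vdb":

```python
# Illustration only: apply ioTune limits to a running domain's disk via
# libvirt-python. The domain name and target device are placeholder
# assumptions; MOM's own code path is not shown here.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("example-vm")  # placeholder domain name

# Cap total throughput of the live disk at 100 KB/s; 0 means "unlimited".
io_tune = {
    "total_bytes_sec": 100000,
    "read_bytes_sec": 0,
    "write_bytes_sec": 0,
    "total_iops_sec": 0,
    "read_iops_sec": 0,
    "write_iops_sec": 0,
}
dom.setBlockIoTune("vdb", io_tune, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# Read the limits back to confirm they took effect on the running VM.
print(dom.blockIoTune("vdb", libvirt.VIR_DOMAIN_AFFECT_LIVE))
```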
Comment 2
Elad
2016-04-13 09:19:49 UTC
Roy, can you please reset the target release and bug status, as we don't have the engine fix merged in the latest build? Thanks.

Moving to ASSIGNED as the bug is not fixed in the engine. Moving to 3.6.6.

This is fixed in MOM, and mom 0.5.3 has to be released or the vdsm part of the fix will break the whole vdsm configuration. Reverting the version targeting and changing the component to mom. We will clone this for the engine part.

Martin, doesn't this bug depend on the fix in the engine (1328731)?

No, it can be tested even without the engine, using a command similar to the following:

```python
# python
from vdsm import vdscli

vmID = ""
domainID = ""
poolID = ""
imageID = ""
volumeID = ""
ioTune = {"total_bytes_sec": 1e5}

vdscli.connect().updateVmPolicy({"vmId": vmID,
                                 "ioTune": [{"domainID": domainID,
                                             "poolID": poolID,
                                             "imageID": imageID,
                                             "volumeID": volumeID,
                                             "maximum": ioTune,
                                             "guaranteed": ioTune}]})
```

Obviously you have to provide all the necessary IDs, taken from the engine or from the libvirt XML.

Updating ioTune while the VM is running is no longer ignored. The disk device is updated successfully, as seen here in the VM dumpxml:

```xml
<disk type='file' device='disk' snapshot='no'>
  <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
  <source file='/rhev/data-center/694fc69a-9309-4af2-a3a3-0b73ec3bf9bc/77cc08c0-ca53-4486-92a2-ba1253b08f5d/images/63394d1f-b913-444a-8e2a-325a8c69fdbe/38457c95-43f1-445b-bf5d-dd662467f9e4'>
    <seclabel model='selinux' labelskip='yes'/>
  </source>
  <backingStore/>
  <target dev='vdb' bus='virtio'/>
  <iotune>
    <total_bytes_sec>100000</total_bytes_sec>
  </iotune>
```

Steps:
1) Started a VM with an OS disk attached (balloon device enabled).
2) Created and attached a new disk to the VM, created a filesystem and mounted it in the guest.
3) Tested writing speed, got ~250Mb/s.
4) Updated the VM disk to write at 100Kb/s (total bytes per sec) using the python script suggested in comment #8.
5) Checked the VM XML and tested writing speed in the guest. Writing speed was reduced.

*Note: tested without engine*

Used:
RHEL7.2
vdsm-4.17.26-0.el7ev.noarch
mom-0.5.3-1.el7ev.noarch
libvirt-daemon-1.2.17-13.el7_2.4.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.10.x86_64

Just for future reference: all ioTune fields have to be provided and the values have to be integers (1e5 is converted to a float by Python). A short snippet to update the one from comment #8:

```python
ioTune = {"total_bytes_sec": 100000,
          "read_bytes_sec": 0,
          "write_bytes_sec": 0,
          "total_iops_sec": 0,
          "read_iops_sec": 0,
          "write_iops_sec": 0}
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0657.html
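Putting the two snippets above together, here is a minimal sketch of the full verification call, assuming the vdscli usage from comment #8. The UUIDs are placeholders that must be replaced with real values taken from the engine or from the libvirt XML, and every ioTune field is given as an integer per the note above:

```python
# Minimal sketch combining the updateVmPolicy call from comment #8 with the
# integer-only ioTune dictionary. All UUIDs below are placeholders and must
# be replaced with the real IDs of the running VM and its disk.
from vdsm import vdscli

vmID = "00000000-0000-0000-0000-000000000000"      # placeholder VM UUID
domainID = "00000000-0000-0000-0000-000000000000"  # placeholder storage domain UUID
poolID = "00000000-0000-0000-0000-000000000000"    # placeholder data center (pool) UUID
imageID = "00000000-0000-0000-0000-000000000000"   # placeholder disk image UUID
volumeID = "00000000-0000-0000-0000-000000000000"  # placeholder volume UUID

# Limit the disk to 100 KB/s total throughput; 0 means "unlimited".
ioTune = {"total_bytes_sec": 100000,
          "read_bytes_sec": 0,
          "write_bytes_sec": 0,
          "total_iops_sec": 0,
          "read_iops_sec": 0,
          "write_iops_sec": 0}

vdscli.connect().updateVmPolicy({"vmId": vmID,
                                 "ioTune": [{"domainID": domainID,
                                             "poolID": poolID,
                                             "imageID": imageID,
                                             "volumeID": volumeID,
                                             "maximum": ioTune,
                                             "guaranteed": ioTune}]})
```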