Andrej, Roy, is the fix applied for both BW and IOPS? And just to make sure, are the steps to reproduce the ones specified in the description? i.e.:
- create a VM with the default disk profile (no limitation)
- create a QoS rule with a write limitation and attach it to the storage domain
- while the VM is running, change the disk profile to the one that includes the limitation; the limitation should be enforced.
Roy, can you please reset the target release and bug status, as the engine fix is not merged in the latest build? Thanks
Moving to ASSIGNED, as the bug is not fixed in the engine.
Moving to 3.6.6.
This is fixed in mom, and mom 0.5.3 has to be released, or the vdsm part of the fix will break the whole vdsm configuration. Reverting the version targeting and changing the component to mom. We will clone this bug for the engine part.
Martin, doesn't this bug depend on the fix in the engine (1328731)?
No, it can be tested even without the engine, using a command similar to the following:

# python
from vdsm import vdscli

vmID = ""
domainID = ""
poolID = ""
imageID = ""
volumeID = ""
ioTune = {"total_bytes_sec": 1e5}
vdscli.connect().updateVmPolicy({
    "vmId": vmID,
    "ioTune": [{"domainID": domainID,
                "poolID": poolID,
                "imageID": imageID,
                "volumeID": volumeID,
                "maximum": ioTune,
                "guaranteed": ioTune}],
})

Obviously you have to provide all the necessary IDs from the engine or the libvirt xml.
Updating iotune while the VM is running is no longer ignored. The disk device is updated successfully, as seen here in the VM dumpxml:

<disk type='file' device='disk' snapshot='no'>
  <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
  <source file='/rhev/data-center/694fc69a-9309-4af2-a3a3-0b73ec3bf9bc/77cc08c0-ca53-4486-92a2-ba1253b08f5d/images/63394d1f-b913-444a-8e2a-325a8c69fdbe/38457c95-43f1-445b-bf5d-dd662467f9e4'>
    <seclabel model='selinux' labelskip='yes'/>
  </source>
  <backingStore/>
  <target dev='vdb' bus='virtio'/>
  <iotune>
    <total_bytes_sec>100000</total_bytes_sec>
  </iotune>
</disk>

Steps:
1) Started a VM with an OS disk attached (balloon device enabled)
2) Created and attached a new disk to the VM, created a FS and mounted it in the guest
3) Tested writing speed, got ~250 MB/s
4) Updated the VM disk for writing at 100 KB/s (total bytes per sec) using the python script suggested in comment #8
5) Checked the VM xml and tested the writing speed in the guest; the writing speed was reduced

*Note: tested without the engine*

Used:
RHEL 7.2
vdsm-4.17.26-0.el7ev.noarch
mom-0.5.3-1.el7ev.noarch
libvirt-daemon-1.2.17-13.el7_2.4.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.10.x86_64
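As a quick sanity check on the figures above (back-of-the-envelope arithmetic, not part of the original verification; the 10 MiB write size is an illustrative assumption, only total_bytes_sec comes from the dumpxml):

```python
# Rough check of the throttle observed in the dumpxml above.
TOTAL_BYTES_SEC = 100000          # the <total_bytes_sec> value from the dumpxml
UNTHROTTLED_BPS = 250 * 1024**2   # ~250 MB/s measured before the update

write_size = 10 * 1024**2         # a hypothetical 10 MiB write

throttled_secs = write_size / TOTAL_BYTES_SEC
unthrottled_secs = write_size / UNTHROTTLED_BPS

print("throttled:   %.1f s" % throttled_secs)    # ~104.9 s
print("unthrottled: %.3f s" % unthrottled_secs)  # ~0.040 s
```

So a write that previously finished in a few hundredths of a second should take on the order of minutes once the limit is applied, which matches the observed slowdown.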
Just for future reference: all ioTune fields have to be provided, and the values have to be integers (1e5 is converted to a float by python). A short snippet to update the one from comment #8:

ioTune = {"total_bytes_sec": 100000,
          "read_bytes_sec": 0,
          "write_bytes_sec": 0,
          "total_iops_sec": 0,
          "read_iops_sec": 0,
          "write_iops_sec": 0}
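Putting the two comments together, a complete call would look like the sketch below. The helper names (build_io_tune, update_vm_policy) and the hard-coded 100000 are illustrative only; the updateVmPolicy payload shape is taken from the earlier comment, and the all-fields-as-integers dict from this one:

```python
# Hypothetical helpers combining the earlier vdscli snippet with the
# integer-only ioTune values from this comment.

def build_io_tune(total_bytes_sec):
    # Every field must be present and must be an int (1e5 would be a float).
    return {"total_bytes_sec": int(total_bytes_sec),
            "read_bytes_sec": 0,
            "write_bytes_sec": 0,
            "total_iops_sec": 0,
            "read_iops_sec": 0,
            "write_iops_sec": 0}

def update_vm_policy(vmID, domainID, poolID, imageID, volumeID, ioTune):
    # Imported here so build_io_tune can be exercised without vdsm installed.
    from vdsm import vdscli
    # The IDs must come from the engine or the libvirt xml.
    vdscli.connect().updateVmPolicy({
        "vmId": vmID,
        "ioTune": [{"domainID": domainID,
                    "poolID": poolID,
                    "imageID": imageID,
                    "volumeID": volumeID,
                    "maximum": ioTune,
                    "guaranteed": ioTune}],
    })

ioTune = build_io_tune(100000)
```

The int() cast in build_io_tune guards against the float pitfall described above.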
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0657.html