Bug 1328731 - Storage QoS is not applying on a Live VM/disk
Summary: Storage QoS is not applying on a Live VM/disk
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
urgent
medium
Target Milestone: ovirt-4.0.4
: 4.0.1
Assignee: Roman Mohr
QA Contact: Kevin Alon Goldblatt
URL:
Whiteboard:
Depends On: 1201482
Blocks: 1346754
 
Reported: 2016-04-20 07:52 UTC by Martin Sivák
Modified: 2022-07-09 09:34 UTC
CC: 28 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
This update ensures that storage Quality of Service (QoS) values sent to the VDSM service are used by VDSM and the Memory Overcommit Manager (MoM). As a result, QoS is applied live to running virtual machines; MoM-related virtual machine changes are only applied when the memory ballooning device is enabled on the virtual machine.
Clone Of: 1201482
Environment:
Last Closed: 2016-09-28 22:15:14 UTC
oVirt Team: SLA
Target Upstream Version:
Embargoed:


Attachments
vdsm server and engine logs (499.30 KB, application/x-gzip)
2016-05-19 16:18 UTC, Kevin Alon Goldblatt
vdsm server and engine logs (1.14 MB, application/x-gzip)
2016-05-19 16:30 UTC, Kevin Alon Goldblatt
Mom.log (2.00 MB, text/plain)
2016-05-22 12:31 UTC, Kevin Alon Goldblatt
vdsm server and engine logs (1.75 MB, application/x-gzip)
2016-05-22 14:05 UTC, Kevin Alon Goldblatt
output of 'virsh -r dumpxml <VM_NAME>' before and after disk profile change (2.62 KB, application/x-gzip)
2016-08-03 07:52 UTC, Elad


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-47384 0 None None None 2022-07-09 09:34:57 UTC
Red Hat Product Errata RHSA-2016:1967 0 normal SHIPPED_LIVE Moderate: org.ovirt.engine-root security, bug fix, and enhancement update 2016-09-29 01:02:10 UTC
oVirt gerrit 52743 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 52746 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 52748 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 53438 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 54208 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 55056 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 55820 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 55821 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 55834 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 55867 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 55899 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 55901 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 55930 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 55931 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 55943 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 55944 0 None None None 2016-04-20 07:52:33 UTC
oVirt gerrit 58857 0 master MERGED qos: Update Disk QoS on disk profile change 2016-07-20 10:28:42 UTC
oVirt gerrit 59128 0 ovirt-engine-4.0 MERGED qos: Update Disk QoS on disk profile change 2016-07-17 11:53:22 UTC
oVirt gerrit 60920 0 ovirt-engine-3.6 MERGED qos: Update Disk QoS on disk profile change 2016-08-08 08:27:12 UTC
oVirt gerrit 60921 0 ovirt-engine-4.0.1 ABANDONED qos: Update Disk QoS on disk profile change 2016-07-19 06:51:51 UTC

Comment 3 Martin Sivák 2016-04-20 07:55:40 UTC
This is the ovirt-engine clone for the disk live QoS update.

Comment 7 Kevin Alon Goldblatt 2016-05-19 16:09:01 UTC
Tested with the following code:
-------------------------------------------
rhevm-3.6.6.2-0.1.el6.noarch
vdsm-4.17.28-0.el7ev.noarch

Tested with the following scenario:
-------------------------------------------
1. Created a VM from a template with installed OS on a disk
2. Added 1 iSCSI disk and 1 NFS disk
3. Created a file system on both disks with "mkfs -t ext4 /dev/vda" and "mkfs -t ext4 /dev/vdc"
4. Mounted both file systems to mount points /mnt1 and /mnt2 and ran:
dd bs=1M count=100 if=/dev/zero of=/mnt1/test1 conv=fdatasync
The reported throughput was 65 MB/s

dd bs=1M count=100 if=/dev/zero of=/mnt2/test1 conv=fdatasync
The reported throughput was 77 MB/s

5. I added a QoS profile of 10 MB/s to both disks
The profile was saved to the iSCSI block disk but failed to save to the NFS disk

6. I then ran the dd writes again and the results were unchanged

Attaching logs and moving to ASSIGNED
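
As a sketch of one way to double-check which disk profile actually got attached to a disk after step 5, assuming the v3 REST endpoint used later in this bug returns the attached <disk_profile id="..."/> reference (engine address, credentials and IDs below are placeholders):

    # query the disk via the engine REST API and look for the disk_profile reference
    curl -k -u 'admin@internal:PASSWORD' \
         -H 'Accept: application/xml' \
         'https://engine_addr/api/vms/VM_ID/disks/DISK_ID' | grep disk_profile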

Comment 8 Kevin Alon Goldblatt 2016-05-19 16:18:34 UTC
Created attachment 1159537 [details]
vdsm server and engine logs

Comment 9 Kevin Alon Goldblatt 2016-05-19 16:30:25 UTC
Created attachment 1159540 [details]
vdsm server and engine logs

Adding an additional vdsm log that was not included before.

Comment 10 Roman Mohr 2016-05-20 08:01:06 UTC
(In reply to Kevin Alon Goldblatt from comment #7)
> Tested with the following code:
> -------------------------------------------
> rhevm-3.6.6.2-0.1.el6.noarch
> vdsm-4.17.28-0.el7ev.noarch
> 
> Tested with the following scenario:
> -------------------------------------------
> 1. Created a VM from a template with installed OS on a disk

I see two VMs in the vdsm logs 'vmft1' and 'vmft2' which are probably created from the template you mention here. They have no ballooning device enabled:

   <memballoon model="none"/>

MOM (which does all the QoS and ballooning work) currently ignores all VMs which do not have a ballooning device. Could you also attach the MOM logs?
That is a bug and I will file one now.

There is also another bug regarding templates in the engine. For instance, when you import a VM from Cinder and create a template out of it, all VMs created from that template have no ballooning device. I will file another bug report for that one.

For now make sure that the VMs have ballooning enabled (in the Resource Allocation tab).
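
A quick way to confirm on the host whether a running VM actually has the ballooning device (the VM name below is a placeholder):

    # model="none" means no ballooning device, so MOM will skip the VM
    virsh -r dumpxml VM_NAME | grep memballoon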


Comment 11 Kevin Alon Goldblatt 2016-05-22 12:31:13 UTC
Created attachment 1160293 [details]
Mom.log

Providing the requested mom.log

Comment 12 Kevin Alon Goldblatt 2016-05-22 13:57:53 UTC
(In reply to Roman Mohr from comment #10)
> (In reply to Kevin Alon Goldblatt from comment #7)
> > Tested with the following code:
> > -------------------------------------------
> > rhevm-3.6.6.2-0.1.el6.noarch
> > vdsm-4.17.28-0.el7ev.noarch
> > 
> > Tested with the following scenario:
> > -------------------------------------------
> > 1. Created a VM from a template with installed OS on a disk
> 
> I see two VMs in the vdsm logs 'vmft1' and 'vmft2' which are probably
> created from the template you mention here. They have no ballooning device
> enabled:
> 
>    <memballoon model="none"/>
> 
> MOM (which does all the QoS and ballooning work) ignores currently all VMs
> which do not have a ballooning device. Could you also attach the MOM logs?
> That is a bug and I will file one now.
> 
> There is also another bug regarding to templates in the engine. For instance
> when you import a VM from cinder and create a template out of it, all VMs
> created from that template have no ballooning device. I will file another
> bug report for that one.
> 
> For now make sure that the VMs have ballooning enabled (in the Ressource
> Allocation tab).
> 

I created a new VM and repeated the steps with the ballooning device enabled, but now adding the disk profile to any of the disks fails. Adding new logs.



Comment 13 Kevin Alon Goldblatt 2016-05-22 14:03:37 UTC
Adding log info. Note there is a 3-hour time difference between the engine and the host, as follows:
Engine: Sun May 22 14:02:55 IDT 2016
Host:   Sun May 22 17:02:51 IDT 2016

From engine.log
-----------------------------------
2016-05-22 13:21:35,951 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (ajp-/127.0.0.1:8702-8) [65be3947] FINISH, SetVmTicketVDSCommand, log id: 390411c2
2016-05-22 13:21:35,966 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp-/127.0.0.1:8702-8) [65be3947] Correlation ID: 65be3947, Call Stack: null, Custom Event ID: -1, Message: User admin@internal initiated console session for VM vmft3
2016-05-22 13:21:41,645 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-29) [4ae301dd] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: User admin@internal-authz is connected to VM vmft3.
2016-05-22 13:30:53,825 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (ajp-/127.0.0.1:8702-12) [2387cde5] Lock Acquired to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[deef9838-5ad9-480b-aa21-0780dd8e4071=<VM, ACTION_TYPE_FAILED_VM_IS_LOCKED>]'}'
2016-05-22 13:30:53,834 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (ajp-/127.0.0.1:8702-12) [2387cde5] Running command: UpdateVmDiskCommand internal: false. Entities affected :  ID: 1325882d-5cf3-4d70-8901-1d3f8d88e636 Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
2016-05-22 13:30:53,913 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (ajp-/127.0.0.1:8702-12) [2387cde5] Lock freed to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[deef9838-5ad9-480b-aa21-0780dd8e4071=<VM, ACTION_TYPE_FAILED_VM_IS_LOCKED>]'}'
2016-05-22 13:30:53,930 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp-/127.0.0.1:8702-12) [2387cde5] Correlation ID: 2387cde5, Call Stack: null, Custom Event ID: -1, Message: VM vmft3 vmft3_Disk1 disk was updated by admin@internal.
2016-05-22 13:31:09,801 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (ajp-/127.0.0.1:8702-1) [6578885f] Lock Acquired to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[deef9838-5ad9-480b-aa21-0780dd8e4071=<VM, ACTION_TYPE_FAILED_VM_IS_LOCKED>]'}'
2016-05-22 13:31:09,937 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (ajp-/127.0.0.1:8702-1) [6578885f] Running command: UpdateVmDiskCommand internal: false. Entities affected :  ID: 1325882d-5cf3-4d70-8901-1d3f8d88e636 Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
2016-05-22 13:31:09,998 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (ajp-/127.0.0.1:8702-1) [6578885f] Lock freed to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[deef9838-5ad9-480b-aa21-0780dd8e4071=<VM, ACTION_TYPE_FAILED_VM_IS_LOCKED>]'}'
2016-05-22 13:31:10,010 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp-/127.0.0.1:8702-1) [6578885f] Correlation ID: 6578885f, Call Stack: null, Custom Event ID: -1, Message: VM vmft3 vmft3_Disk1 disk was updated by admin@internal.
2016-05-22 13:31:30,021 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (ajp-/127.0.0.1:8702-6) [3f80c9e8] Lock Acquired to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[deef9838-5ad9-480b-aa21-0780dd8e4071=<VM, ACTION_TYPE_FAILED_VM_IS_LOCKED>]'}'
2016-05-22 13:31:30,165 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (ajp-/127.0.0.1:8702-6) [3f80c9e8] Running command: UpdateVmDiskCommand internal: false. Entities affected :  ID: 5b75b93a-7ac5-4075-a95a-e1a6b4152ab7 Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
2016-05-22 13:31:30,219 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (ajp-/127.0.0.1:8702-6) [3f80c9e8] Lock freed to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[deef9838-5ad9-480b-aa21-0780dd8e4071=<VM, ACTION_TYPE_FAILED_VM_IS_LOCKED>]'}'
2016-05-22 13:31:30,236 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp-/127.0.0.1:8702-6) [3f80c9e8] Correlation ID: 3f80c9e8, Call Stack: null, Custom Event ID: -1, Message: VM vmft3 vmft3_Disk2 disk was updated by admin@internal.
2016-05-22 13:44:40,513 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-27) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: User admin@internal-authz got disconnected from VM vmft3.


From vdsm.log
---------------------
Thread-145326::DEBUG::2016-05-22 16:14:29,127::core::51::virt.vm::(__init__) vmId=`deef9838-5ad9-480b-aa21-0780dd8e4071`::Ignoring param (target, 1048576) in Balloon
Thread-145326::INFO::2016-05-22 16:14:29,157::vm::1932::virt.vm::(_run) vmId=`deef9838-5ad9-480b-aa21-0780dd8e4071`::<?xml version="1.0" encoding="utf-8"?>
<domain type="kvm" xmlns:ovirt="http://ovirt.org/vm/tune/1.0">
        <name>vmft3</name>
        <uuid>deef9838-5ad9-480b-aa21-0780dd8e4071</uuid>
        <memory>1048576</memory>
        <currentMemory>1048576</currentMemory>
        <maxMemory slots="16">4294967296</maxMemory>
        <vcpu current="1">16</vcpu>
        <devices>
                <channel type="unix">
                        <target name="com.redhat.rhevm.vdsm" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/deef9838-5ad9-480b-aa21-0780dd8e4071.com.redhat.rhevm.vdsm"/>
                </channel>
                <channel type="unix">
                        <target name="org.qemu.guest_agent.0" type="virtio"/>
                        <source mode="bind" path="/var/lib/libvirt/qemu/channels/deef9838-5ad9-480b-aa21-0780dd8e4071.org.qemu.guest_agent.0"/>
                </channel>
                <input bus="ps2" type="mouse"/>
                <sound model="ich6">
                        <address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci"/>
                </sound>
                <memballoon model="virtio">
                        <address bus="0x00" domain="0x0000" function="0x0" slot="0x09" type="pci"/>
                </memballoon>
                <controller index="0" ports="16" type="virtio-serial">
                        <address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci"/>
                </controller>
                <video>
                        <address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci"/>
                        <model heads="1" ram="65536" type="qxl" vgamem="16384" vram="8192"/>
                </video>
                <graphics autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
                        <listen network="vdsm-ovirtmgmt" type="network"/>
                </graphics>
                <interface type="bridge">
                        <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"/>
                        <mac address="00:1a:4a:16:01:54"/>
                        <model type="virtio"/>
                        <source bridge="ovirtmgmt"/>
                        <filterref filter="vdsm-no-mac-spoofing"/>
                        <link state="up"/>
                        <bandwidth/>

Comment 14 Kevin Alon Goldblatt 2016-05-22 14:05:43 UTC
Created attachment 1160305 [details]
vdsm server and engine logs

Adding the new set of logs

Comment 15 Yaniv Kaul 2016-05-22 14:06:47 UTC
Kevin - where are the disks in the libvirt XML?

Comment 16 Roman Mohr 2016-05-23 08:33:14 UTC
(In reply to Roman Mohr from comment #10)
> (In reply to Kevin Alon Goldblatt from comment #7)
> > Tested with the following code:
> > -------------------------------------------
> > rhevm-3.6.6.2-0.1.el6.noarch
> > vdsm-4.17.28-0.el7ev.noarch
> > 
> > Tested with the following scenario:
> > -------------------------------------------
> > 1. Created a VM from a template with installed OS on a disk
> 
> I see two VMs in the vdsm logs 'vmft1' and 'vmft2' which are probably
> created from the template you mention here. They have no ballooning device
> enabled:
> 
>    <memballoon model="none"/>
> 
> MOM (which does all the QoS and ballooning work) ignores currently all VMs
> which do not have a ballooning device. Could you also attach the MOM logs?
> That is a bug and I will file one now.

https://bugzilla.redhat.com/show_bug.cgi?id=1337882


> 
> There is also another bug regarding to templates in the engine. For instance
> when you import a VM from cinder and create a template out of it, all VMs
> created from that template have no ballooning device. I will file another
> bug report for that one.
> 

https://bugzilla.redhat.com/show_bug.cgi?id=1338665


Comment 17 Roman Mohr 2016-05-23 08:37:11 UTC
(In reply to Kevin Alon Goldblatt from comment #12)
> (In reply to Roman Mohr from comment #10)
> > (In reply to Kevin Alon Goldblatt from comment #7)
> > > Tested with the following code:
> > > -------------------------------------------
> > > rhevm-3.6.6.2-0.1.el6.noarch
> > > vdsm-4.17.28-0.el7ev.noarch
> > > 
> > > Tested with the following scenario:
> > > -------------------------------------------
> > > 1. Created a VM from a template with installed OS on a disk
> > 
> > I see two VMs in the vdsm logs 'vmft1' and 'vmft2' which are probably
> > created from the template you mention here. They have no ballooning device
> > enabled:
> > 
> >    <memballoon model="none"/>
> > 
> > MOM (which does all the QoS and ballooning work) ignores currently all VMs
> > which do not have a ballooning device. Could you also attach the MOM logs?
> > That is a bug and I will file one now.
> > 
> > There is also another bug regarding to templates in the engine. For instance
> > when you import a VM from cinder and create a template out of it, all VMs
> > created from that template have no ballooning device. I will file another
> > bug report for that one.
> > 
> > For now make sure that the VMs have ballooning enabled (in the Ressource
> > Allocation tab).
> > 
> 
> I created a new VM and repeated the steps with the ballooning device enabled
> but now adding the disk profile to any of the disks fails. Adding new logs
> 

Sounds like https://bugzilla.redhat.com/show_bug.cgi?id=1297734.

I have exactly the same behaviour in a clean engine without any VMs at all. The disk profile selection is ignored. It is most likely not ballooning device related.

What you could do to still test if QoS is applied is to set quotas on the default disk profile and see if they are getting propagated.
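
One way to check whether the limits are actually being propagated to the host (run on the host; the VM name is a placeholder):

    # the engine-supplied limits should appear in the domain XML, either in the
    # ovirt tune metadata or in the disk's <iotune> element once MOM applies them
    virsh -r dumpxml VM_NAME | grep -E 'iotune|total_bytes_sec|ovirt:device'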


Comment 18 Kevin Alon Goldblatt 2016-05-23 12:04:03 UTC
(In reply to Yaniv Kaul from comment #15)
> Kevin - where are the disks in the libvirt XML?

It's not in the XML; that was only the VM.

Here's the disks:
----------------------------
vmft3_Disk1
----------------------
jsonrpc.Executor/2::DEBUG::2016-05-22 16:01:55,358::task::993::Storage.TaskManager.Task::(_decref) Task=`cc2768b1-19f7-4832-9d0d-4fcf51d40b5f`::ref 0 aborting False
jsonrpc.Executor/2::DEBUG::2016-05-22 16:01:55,359::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': '03a84de6-1315-4030-8525-c5789c689189', 'voltype': 'LEAF', 'description': '{"DiskAlias":"vmft3_Disk2","DiskDescription":""}', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '5b75b93a-7ac5-4075-a95a-e1a6b4152ab7', 'ctime': '1463922106', 'disktype': '2', 'legality': 'LEGAL', 'allocType': 'SPARSE', 'mtime': '0', 'apparentsize': '7516192768', 'children': [], 'pool': '', 'capacity': '7516192768', 'uuid': u'6a5f81de-0c25-4f81-98f4-8bea01c213b9', 'truesize': '0', 'type': 'SPARSE'}
jsonrpc.Executor/6::DEBUG::2016-05-22 16:01:56,389::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Task.clear' in bridge with {u'taskID': u'c36adc55-1411-478e-973d-478d7be9a831'}



vmft3_Disk2
----------------------
jsonrpc.Executor/2::DEBUG::2016-05-22 16:01:55,358::task::993::Storage.TaskManager.Task::(_decref) Task=`cc2768b1-19f7-4832-9d0d-4fcf51d40b5f`::ref 0 aborting False
jsonrpc.Executor/2::DEBUG::2016-05-22 16:01:55,359::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': '03a84de6-1315-4030-8525-c5789c689189', 'voltype': 'LEAF', 'description': '{"DiskAlias":"vmft3_Disk2","DiskDescription":""}', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '5b75b93a-7ac5-4075-a95a-e1a6b4152ab7', 'ctime': '1463922106', 'disktype': '2', 'legality': 'LEGAL', 'allocType': 'SPARSE', 'mtime': '0', 'apparentsize': '7516192768', 'children': [], 'pool': '', 'capacity': '7516192768', 'uuid': u'6a5f81de-0c25-4f81-98f4-8bea01c213b9', 'truesize': '0', 'type': 'SPARSE'}
jsonrpc.Executor/6::DEBUG::2016-05-22 16:01:56,389::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Task.clear' in bridge with {u'taskID': u'c36adc55-1411-478e-973d-478d7be9a831'}

Comment 19 Roman Mohr 2016-05-23 12:27:41 UTC
(In reply to Kevin Alon Goldblatt from comment #12)
> (In reply to Roman Mohr from comment #10)
> > (In reply to Kevin Alon Goldblatt from comment #7)
> > > Tested with the following code:
> > > -------------------------------------------
> > > rhevm-3.6.6.2-0.1.el6.noarch
> > > vdsm-4.17.28-0.el7ev.noarch
> > > 
> > > Tested with the following scenario:
> > > -------------------------------------------
> > > 1. Created a VM from a template with installed OS on a disk
> > 
> > I see two VMs in the vdsm logs 'vmft1' and 'vmft2' which are probably
> > created from the template you mention here. They have no ballooning device
> > enabled:
> > 
> >    <memballoon model="none"/>
> > 
> > MOM (which does all the QoS and ballooning work) ignores currently all VMs
> > which do not have a ballooning device. Could you also attach the MOM logs?
> > That is a bug and I will file one now.
> > 
> > There is also another bug regarding to templates in the engine. For instance
> > when you import a VM from cinder and create a template out of it, all VMs
> > created from that template have no ballooning device. I will file another
> > bug report for that one.
> > 
> > For now make sure that the VMs have ballooning enabled (in the Ressource
> > Allocation tab).
> > 
> 
> I created a new VM and repeated the steps with the ballooning device enabled
> but now adding the disk profile to any of the disks fails. Adding new logs
> 

I am moving this back to MODIFIED. @Kevin, can you proceed with testing by setting the quotas on the default disk profile? You could also use the REST API to change the disk profile.

Comment 22 Elad 2016-06-07 08:44:52 UTC
Hi Roman, 

I'm trying to change the disk profile using REST but no luck so far.
I'm sending to https://engine_addr/api/vms/%vm_id%/disks/%disk_id%

Using PUT with an application/xml header.
Body:

<disk>
<disk_profile id = "61a730fe-a424-4a4a-b3fa-c01f289987af">
</disk>

Response:

<usage_message>
<message>
Request syntactically incorrect. See the description below for the correct usage:


**61a730fe-a424-4a4a-b3fa-c01f289987af is the storage domain new disk profile id**

Please advise, thanks

Comment 23 Roman Mohr 2016-06-07 14:48:22 UTC
(In reply to Elad from comment #22)
> Hi Roman, 
> 
> I'm trying to change the disk profile using REST but no luck so far.
> I'm sending to https://engine_addr/api/vms/%vm_id%/disks/%disk_id%
> 
> Using PUT with application/xml header.
> Body:
> 
> <disk>
> <disk_profile id = "61a730fe-a424-4a4a-b3fa-c01f289987af">

You are missing a dash at the end

It should be 

<disk_profile id = "61a730fe-a424-4a4a-b3fa-c01f289987af"/>

It's an invalid XML request.

> </disk>
> 
> Response:
> 
> <usage_message>
> <message>
> Request syntactically incorrect. See the description below for the correct
> usage:

I did a PUT with application/xml and the content

<disk>
    <disk_profile id="56c8ca7a-e22a-4538-9968-363504721690"/>
</disk>

to the endpoint and it worked. 
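
For completeness, a minimal sketch of the same request from the command line (engine address, credentials and IDs are placeholders):

    # change the disk profile of a VM disk via the v3 REST API
    curl -k -u 'admin@internal:PASSWORD' \
         -X PUT \
         -H 'Content-Type: application/xml' \
         -d '<disk><disk_profile id="56c8ca7a-e22a-4538-9968-363504721690"/></disk>' \
         'https://engine_addr/api/vms/VM_ID/disks/DISK_ID'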


Comment 24 Roman Mohr 2016-06-07 14:51:34 UTC
(In reply to Roman Mohr from comment #23)
> (In reply to Elad from comment #22)
> > Hi Roman, 
> > 
> > I'm trying to change the disk profile using REST but no luck so far.
> > I'm sending to https://engine_addr/api/vms/%vm_id%/disks/%disk_id%
> > 
> > Using PUT with application/xml header.
> > Body:
> > 
> > <disk>
> > <disk_profile id = "61a730fe-a424-4a4a-b3fa-c01f289987af">
> 
> You are missing a dash at the end

s/dash/slash

Comment 25 Elad 2016-06-08 12:36:42 UTC
Tested the following:

1. Created a VM with 1 disk attached, installed OS
2. Added 2 disks, residing on NFS and iSCSI domains
3. Created FS on the 2 disks and mounted both.
4. Wrote to the disks with dd; writing speed to both was higher than 90 MB/s
5. Added a QoS rule of 10 MB/s to the DC and assigned it to both domains as disk profiles
6. While the VM was running, changed the disk profile of both disks via REST to the one assigned to the new QoS rule:

PUT to https://engine_addr/api/vms/%vm_id%/disks/%disk_id% with Content-Type: application/xml:

<disk>
<disk_profile id="%disk_profile_id%"/>
</disk>

Operation succeeded; also verified in the UI that the disk profile changed.

7. Wrote to the disks with dd. Writing speed was still higher than 90 MB/s, as it was before changing to the new disk profile.
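
For reference, the throughput comparison in steps 4 and 7 can be scripted; the mount points and file names below are illustrative:

    # write 100 MiB to each test mount with fdatasync and print dd's throughput summary line
    for target in /mnt1/test1 /mnt2/test1; do
        dd bs=1M count=100 if=/dev/zero of="$target" conv=fdatasync 2>&1 | tail -n1
    done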

Tested using:
rhevm-3.6.7.2-0.1.el6.noarch
rhevm-restapi-3.6.7.2-0.1.el6.noarch
vdsm-4.17.30-1.el7ev.noarch
mom-0.5.3-1.el7ev.noarch
libvirt-daemon-1.2.17-13.el7_2.5.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.15.x86_64

Attaching logs and moving to ASSIGNED.

Comment 28 Elad 2016-08-02 10:17:35 UTC
Hi Roman,

I'm working on verification according to the steps in comment #25.

After adding the disk profile to the disk, the writing speed is still above the limit I specified in the QoS rule.

I'm using latest build (rhevm-4.0.2.3-0.1.el7ev.noarch)

Comment 29 Roman Mohr 2016-08-02 22:41:21 UTC
(In reply to Elad from comment #28)
> Hi Roman,
> 
> I'm working on verification according to the steps comment #25
> 
> After adding the disk profile to the disk, the writing speed is still above
> the I specified in the QOS rule.
> 

Could you run `virsh -r dumpxml VMNAME` before and after the profile change and provide the output to the bug?
I think that you see the same thing described in bug 1201482.




Comment 30 Elad 2016-08-03 07:52:26 UTC
Created attachment 1186973 [details]
output of 'virsh -r dumpxml <VM_NAME>' before and after disk profile change

Attached output of 'virsh -r dumpxml <VM_NAME>' before and after disk profile change.

The disk profile I switched to has a total BW limitation of 5 Mb/s, and it seems that this limitation is not ignored by libvirt, as seen in the dumpxml output after the disk profile change (after_changing_to_qos_profile):



               <ovirt:device name="vdb" path="/rhev/data-center/ae84d8b3-773e-46e4-983a-6dc2ca2dde37/796b1c99-e831-4dd4-9380-93762dcc35eb/images/a6eb0f73-8a69-4735-b633-c3305e58f191/6b494ea7-553b-4559-b0c5-e393114b0801">
                        <ovirt:maximum>
                                <ovirt:total_bytes_sec>4194304</ovirt:total_bytes_sec>
                                <ovirt:total_iops_sec>0</ovirt:total_iops_sec>
                                <ovirt:read_bytes_sec>0</ovirt:read_bytes_sec>
                                <ovirt:read_iops_sec>0</ovirt:read_iops_sec>
                                <ovirt:write_bytes_sec>0</ovirt:write_bytes_sec>
                                <ovirt:write_iops_sec>0</ovirt:write_iops_sec>
                        </ovirt:maximum>
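
As a follow-up check (run as root on the host; the VM name and target below are placeholders), libvirt's block I/O throttle query shows whether the cap actually reached the device, as opposed to only appearing in the ovirt tune metadata above:

    # a non-zero total_bytes_sec here would mean the limit was applied to the vdb device
    virsh blkdeviotune VM_NAME vdb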

Comment 31 Roman Mohr 2016-08-10 07:41:50 UTC
Putting this to MODIFIED. We have to wait for bug 1201482 because of a new MOM issue. After that is released, we can proceed with QA for this one.

Comment 33 Elad 2016-08-22 07:21:06 UTC
Roy, Roman, this bug should not be ON_QA; please move it to ASSIGNED or RELEASE_PENDING, since it is blocking the 4.0 release.

Comment 36 Kevin Alon Goldblatt 2016-09-07 13:45:14 UTC
Tested with the following code:
----------------------------------------
rhevm-4.0.4-0.1.el7ev.noarch
vdsm-4.18.12-1.el7ev.x86_64

Tested with the following scenario:

Steps to Reproduce:
1. Created a VM with disks, started the VM and wrote to the disk with 'dd' as described in the description
dd bs=1M count=100 if=/dev/zero of=/100Ma2 conv=fdatasync
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 1.64193 s, 63.9 MB/s


2. Added a new QoS profile via the Data Center tab

3. Added this profile to the storage domain via the Storage tab

4. Via the VM tab, selected a disk from the VM's Disks sub-tab, pressed Edit, and changed the profile to the newly created profile with a 10 MB/s write limit

5. Wrote to the disk again and this time the limit IS applied:
 dd bs=1M count=100 if=/dev/zero of=/100Ma5 conv=fdatasync
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 10.3228 s, 10.2 MB/s
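
(As a sanity check: 104857600 bytes at a 10 MB/s cap should take roughly 10 seconds, which matches the observed 10.3 s and 10.2 MB/s, so the limit is in effect.)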


Actual results:
The new QoS profile with a 10 MB/s write limit was successfully applied.

Moving to VERIFIED!

Comment 38 errata-xmlrpc 2016-09-28 22:15:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-1967.html

