Bug 1346754 - [z-stream clone - 3.6.8] Storage QoS is not applying on a Live VM/disk
Summary: [z-stream clone - 3.6.8] Storage QoS is not applying on a Live VM/disk
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ovirt-3.6.9
Target Release: 3.6.9
Assignee: Roman Mohr
QA Contact: Kevin Alon Goldblatt
URL:
Whiteboard:
Depends On: 1201482 1328731
Blocks:
 
Reported: 2016-06-15 09:57 UTC by rhev-integ
Modified: 2022-07-09 08:50 UTC (History)
28 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1328731
Environment:
Last Closed: 2016-09-21 18:04:07 UTC
oVirt Team: SLA
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
vdsm, mom, server and engine logs (2.15 MB, application/x-gzip)
2016-08-29 16:49 UTC, Kevin Alon Goldblatt


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-47372 0 None None None 2022-07-09 08:50:51 UTC
Red Hat Product Errata RHSA-2016:1929 0 normal SHIPPED_LIVE Moderate: Red Hat Virtualization Manager (RHV) bug fix 3.6.9 2016-09-21 21:57:10 UTC
oVirt gerrit 60920 0 ovirt-engine-3.6 MERGED qos: Update Disk QoS on disk profile change 2016-08-08 08:27:12 UTC

Comment 3 Kevin Alon Goldblatt 2016-08-29 16:33:14 UTC
Tested with the following code:
-------------------------------------
rhevm-3.6.9-0.1.el6.noarch
vdsm-4.17.34-1.el7ev.noarch


Verified with the following scenario:
-------------------------------------
1. Created a VM with disks, started the VM and wrote to the disk with 'dd' as described in the description
dd bs=1M count=100 if=/dev/zero of=/mnt/1/aaa conv=fdatasync
105 MB written at 90 MB/s

2. Added a new profile via the Data Centre tab

3. Added this profile to the storage domain via the Storage tab

4. Via the VMs tab, selected a disk from the VM's Disks sub-tab, pressed Edit and changed the profile to the newly created profile with a 10 MB/s write limit

5. Wrote to the disk again, but the limit is NOT APPLIED!
dd bs=1M count=100 if=/dev/zero of=/100Mb conv=fdatasync
105 MB written at 86 MB/s


6. After shutting down the VM and starting it again, ran the write operation again. Now the limit is applied.
dd bs=1M count=100 if=/dev/zero of=/100Mf conv=fdatasync
105 MB written at 10.5 MB/s
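As an aside, the throughput figures above come from dd's own summary line. A minimal sketch of the measurement (the mount point and file name are illustrative, not taken from the report):

```shell
# Write 100 MiB of zeros to the QoS-limited disk. conv=fdatasync makes dd
# call fdatasync() once at the end, so the reported rate reflects actual
# storage throughput rather than page-cache speed.
dd bs=1M count=100 if=/dev/zero of=/mnt/1/qos-test conv=fdatasync
# dd prints its summary on stderr, e.g.:
#   104857600 bytes (105 MB) copied, 1.16 s, 90.0 MB/s
rm -f /mnt/1/qos-test
```

If the QoS write limit is enforced, the reported rate should sit at or just below the profile's write limit.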


MOVING TO ASSIGNED!

Comment 4 Kevin Alon Goldblatt 2016-08-29 16:49:13 UTC
Created attachment 1195437 [details]
vdsm, mom, server and engine logs

Adding logs

Comment 5 Doron Fediuck 2016-08-31 10:44:06 UTC
What is the mom version you used?

Comment 6 Kevin Alon Goldblatt 2016-09-05 14:01:53 UTC
(In reply to Doron Fediuck from comment #5)
> What is the mom  version you used?

mom-0.5.5-1.el7ev.noarch

Comment 7 Roman Mohr 2016-09-06 11:50:15 UTC
(In reply to Kevin Alon Goldblatt from comment #6)
> (In reply to Doron Fediuck from comment #5)
> > What is the mom  version you used?
> 
> mom-0.5.5-1.el7ev.noarch

mom 0.5.5 does not contain patch [1], which is the last of the patch series from Bug 1201482.

The issue here is that when you start the VM without a disk profile, or with the unlimited default disk profile, mom will not pick up the live changes from libvirt/vdsm.

To see if this is the issue, assign a disk profile which is not completely unlimited to a VM while it is down. After the VM is running, verify that the limit works. Then switch the profile and verify with dd that the new values are applied. If this scenario works, then the only thing missing is a new mom release.

[1] https://gerrit.ovirt.org/#/c/61947/
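The check described above can be sketched roughly as follows. The domain name and device target are hypothetical, and `virsh blkdeviotune` is the standard libvirt command for inspecting the per-disk I/O limits enforced on a running VM (this sketch is an editorial illustration, not part of the original comment):

```shell
# Hypothetical domain and device names; substitute your own.
VM=rhev-test-vm
DEV=vda

# 1. With the VM down, assign a limited disk profile in the engine UI,
#    then start the VM.
# 2. Inspect the limits libvirt is enforcing right now; a non-zero
#    write_bytes_sec means the profile was applied at startup:
virsh blkdeviotune "$VM" "$DEV"

# 3. Switch the disk profile on the running VM in the engine UI, then
#    re-check whether the new limit was pushed down to libvirt:
virsh blkdeviotune "$VM" "$DEV"

# 4. Cross-check the effective rate from inside the guest:
dd bs=1M count=100 if=/dev/zero of=/tmp/qos-test conv=fdatasync
rm -f /tmp/qos-test
```

If step 3 shows the old limit while step 2 showed the startup limit correctly, the live-update path (mom) is the missing piece rather than the engine or vdsm.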

Comment 8 Gil Klein 2016-09-07 08:07:04 UTC
(In reply to Roman Mohr from comment #7)
> (In reply to Kevin Alon Goldblatt from comment #6)
> > (In reply to Doron Fediuck from comment #5)
> > > What is the mom  version you used?
> > 
> > mom-0.5.5-1.el7ev.noarch
> 
> mom 0.5.5 does not contain patch [1] which is the last of the patch series
> from Bug 1201482.
But this is the last mom that was published to QE.

Please See: 
http://bob.eng.lab.tlv.redhat.com/builds/3.6/3.6.9-1/el7/3.6.9-1_version_info.html 

It clearly shows: 0.5.5-1.el7ev.noarch

I also cannot find a new mom erratum in the 3.6.9 errata batch:
https://errata.devel.redhat.com/batches/60

Can you please check why it was missed?

> 
> The issue here is that when you start the VM without or the unlimited
> default disk profile, mom will not pick up the live changes from
> libvirt/vdsm.
> 
> To see if this is the issue, assign a disk profile which is not completely
> unlimited to a VM while it is down. After the VM is running, verify that it
> works. Then switch the profile and do verify with dd if the new values are
> taken. If this scenario works, then the only thing missing is a new mom
> release.
> 
> [1] https://gerrit.ovirt.org/#/c/61947/

Comment 9 Martin Sivák 2016-09-07 10:31:50 UTC
The remaining issue with hotplug not working when no QoS was defined during VM start will disappear when mom-0.5.6 is released.

Comment 10 Gil Klein 2016-09-07 12:40:12 UTC
(In reply to Martin Sivák from comment #9)
> The remaining issue with hotplug not working when no QoS was defined during
> VM start will disappear when mom-0.5.6 is released.
What's the ETA for mom-0.5.6 to be released?
Would it make it into 3.6.9, or should we retarget this verification to 3.6.10?

Comment 11 Gil Klein 2016-09-07 12:47:57 UTC
Moving back to MODIFIED till we get a mom-0.5.6 erratum + d/s build for QE to consume

Comment 12 Martin Sivák 2016-09-08 15:27:42 UTC
Gil, mom-0.5.6 was released as part of yesterday's builds (both 3.6.9 and 4.0.4) so you should be fine.

Comment 14 Kevin Alon Goldblatt 2016-09-15 13:32:48 UTC
Tested with the following code:
----------------------------------------
rhevm-3.6.9.1-0.1.1.el6.noarch
vdsm-4.17.35-1.el7ev.noarch

Verified with the following scenario:
-------------------------------------
1. Created a VM with disks, started the VM and wrote to the disk with 'dd' as described in the description
dd bs=1M count=100 if=/dev/zero of=/mnt/1/aaa conv=fdatasync
105 MB written at 96 MB/s

2. Added a new profile via the Data Centre tab

3. Added this profile to the storage domain via the Storage tab

4. Via the VMs tab, selected a disk from the VM's Disks sub-tab, pressed Edit and changed the profile to the newly created profile with a 10 MB/s write limit

5. Wrote to the disk again and the limit is NOW APPLIED!
dd bs=1M count=100 if=/dev/zero of=/100Mb conv=fdatasync
105 MB written at 10 MB/s




Actual results:
QoS limit is NOW APPLIED!

Expected results:
QoS limit is applied on the live VM/disk.



Moving to VERIFIED!

Comment 16 errata-xmlrpc 2016-09-21 18:04:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-1929.html

