Bug 1346754 - [z-stream clone - 3.6.8] Storage QoS is not applying on a Live VM/disk
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.5.0
Hardware: Unspecified   OS: Unspecified
Priority: urgent   Severity: high
Target Milestone: ovirt-3.6.9
Target Release: 3.6.9
Assigned To: Roman Mohr
QA Contact: Kevin Alon Goldblatt
Keywords: ZStream
Depends On: 1201482 1328731
Blocks:
Reported: 2016-06-15 05:57 EDT by rhev-integ
Modified: 2016-09-21 14:04 EDT
CC: 28 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1328731
Environment:
Last Closed: 2016-09-21 14:04:07 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: SLA
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
vdsm, mom, server and engine logs (2.15 MB, application/x-gzip)
2016-08-29 12:49 EDT, Kevin Alon Goldblatt


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 60920 ovirt-engine-3.6 MERGED qos: Update Disk QoS on disk profile change 2016-08-08 04:27 EDT
Red Hat Product Errata RHSA-2016:1929 normal SHIPPED_LIVE Moderate: Red Hat Virtualization Manager (RHV) bug fix 3.6.9 2016-09-21 17:57:10 EDT

Comment 3 Kevin Alon Goldblatt 2016-08-29 12:33:14 EDT
Tested with the following code:
-------------------------------------
rhevm-3.6.9-0.1.el6.noarch
vdsm-4.17.34-1.el7ev.noarch


Verified with the following scenario:
-------------------------------------
Steps to reproduce:
1. Created a VM with disks, started the VM, and wrote to a disk with 'dd' as described in the description:
dd bs=1M count=100 if=/dev/zero of=/mnt/1/aaa conv=fdatasync
105 MB written at 90 MB/s

2. Added a new profile via the Data Center tab

3. Added this profile to the storage domain via the Storage tab

4. Via the VMs tab, selected a disk from the VM's Disks subtab, pressed Edit, and changed the profile to the newly created profile with a 10 MB/s write limit

5. Wrote to the disk again, but the limit is NOT APPLIED! (One way to confirm whether the limit reached libvirt is sketched after these steps.)
dd bs=1M count=100 if=/dev/zero of=/100Mb conv=fdatasync
105 MB written at 86 MB/s


6. After shutting down the VM and starting it again, ran the write operation once more; now the limit is applied.
dd bs=1M count=100 if=/dev/zero of=/100Mf conv=fdatasync
105 MB written at 10.5 MB/s
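
For reference, one way to see whether the new limit actually reached the hypervisor (independently of the dd timings) is to query libvirt on the host. This is only a sketch; the domain name 'vm1' and the device alias 'vda' are assumptions and must be replaced with the real values from 'virsh list' and the domain XML:

# List running domains (read-only connection) and find the VM
virsh -r list
# Show the current block I/O tuning for the VM's disk; write_bytes_sec
# should reflect the 10 MB/s limit once the profile change is applied live
virsh -r blkdeviotune vm1 vda
# Alternatively, inspect the <iotune> element in the live domain XML
virsh -r dumpxml vm1 | grep -A 6 '<iotune>'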


MOVING TO ASSIGNED!
Comment 4 Kevin Alon Goldblatt 2016-08-29 12:49 EDT
Created attachment 1195437 [details]
vdsm, mom, server and engine logs

Adding logs
Comment 5 Doron Fediuck 2016-08-31 06:44:06 EDT
What is the mom  version you used?
Comment 6 Kevin Alon Goldblatt 2016-09-05 10:01:53 EDT
(In reply to Doron Fediuck from comment #5)
> What is the mom  version you used?

mom-0.5.5-1.el7ev.noarch
Comment 7 Roman Mohr 2016-09-06 07:50:15 EDT
(In reply to Kevin Alon Goldblatt from comment #6)
> (In reply to Doron Fediuck from comment #5)
> > What is the mom  version you used?
> 
> mom-0.5.5-1.el7ev.noarch

mom 0.5.5 does not contain patch [1], which is the last of the patch series from Bug 1201482.

The issue here is that when you start the VM with no disk profile, or with the unlimited default disk profile, mom will not pick up the live changes from libvirt/vdsm.

To see if this is the issue, assign a disk profile that is not completely unlimited to a VM while it is down. After the VM is running, verify that the limit works. Then switch the profile and verify with dd that the new values are taken. If this scenario works, then the only thing missing is a new mom release. (A shell sketch of this check follows below.)

[1] https://gerrit.ovirt.org/#/c/61947/
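
A minimal shell sketch of that check from inside the guest, assuming the test disk is mounted at /mnt/1 as in comment 3 (the paths and the 10 MB/s figure are assumptions taken from the earlier comments):

# 1) Start the VM with a limited (non-default) disk profile already assigned,
#    then measure the write throughput
dd bs=1M count=100 if=/dev/zero of=/mnt/1/before conv=fdatasync   # expect ~10 MB/s

# 2) Switch the disk to another limited profile while the VM keeps running,
#    then write again and compare the reported throughput
dd bs=1M count=100 if=/dev/zero of=/mnt/1/after conv=fdatasync    # expect the new limit

# If both runs honour their respective limits, only a new mom release is missing.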
Comment 8 Gil Klein 2016-09-07 04:07:04 EDT
(In reply to Roman Mohr from comment #7)
> (In reply to Kevin Alon Goldblatt from comment #6)
> > (In reply to Doron Fediuck from comment #5)
> > > What is the mom  version you used?
> > 
> > mom-0.5.5-1.el7ev.noarch
> 
> mom 0.5.5 does not contain patch [1] which is the last of the patch series
> from Bug 1201482.
But this is the last mom that was published to QE.

Please See: 
http://bob.eng.lab.tlv.redhat.com/builds/3.6/3.6.9-1/el7/3.6.9-1_version_info.html 

It clearly shows: 0.5.5-1.el7ev.noarch

I also cannot find a new mom erratum in the 3.6.9 errata batch:
https://errata.devel.redhat.com/batches/60

Can you please check why it was missed?

> 
> The issue here is that when you start the VM with no disk profile, or with
> the unlimited default disk profile, mom will not pick up the live changes
> from libvirt/vdsm.
> 
> To see if this is the issue, assign a disk profile that is not completely
> unlimited to a VM while it is down. After the VM is running, verify that the
> limit works. Then switch the profile and verify with dd that the new values
> are taken. If this scenario works, then the only thing missing is a new mom
> release.
> 
> [1] https://gerrit.ovirt.org/#/c/61947/
Comment 9 Martin Sivák 2016-09-07 06:31:50 EDT
The remaining issue with hotplug not working when no QoS was defined during VM start will disappear when mom-0.5.6 is released.
Comment 10 Gil Klein 2016-09-07 08:40:12 EDT
(In reply to Martin Sivák from comment #9)
> The remaining issue with hotplug not working when no QoS was defined during
> VM start will disappear when mom-0.5.6 is released.
What's the ETA for mom-0.5.6 to be released?
Would it make it into 3.6.9, or should we retarget this verification to 3.6.10?
Comment 11 Gil Klein 2016-09-07 08:47:57 EDT
Moving back to MODIFIED until we get a mom-0.5.6 erratum plus a downstream build for QE to consume.
Comment 12 Martin Sivák 2016-09-08 11:27:42 EDT
Gil, mom-0.5.6 was released as part of yesterday's builds (both 3.6.9 and 4.0.4) so you should be fine.
Comment 14 Kevin Alon Goldblatt 2016-09-15 09:32:48 EDT
Tested with the following code:
----------------------------------------
rhevm-3.6.9.1-0.1.1.el6.noarch
vdsm-4.17.35-1.el7ev.noarch

Tested with the following scenario:

Steps to reproduce:
1. Created a VM with disks, started the VM, and wrote to a disk with 'dd' as described in the description:
dd bs=1M count=100 if=/dev/zero of=/mnt/1/aaa conv=fdatasync
105 MB written at 96 MB/s

2. Added a new profile via the Data Center tab

3. Added this profile to the storage domain via the Storage tab

4. Via the VMs tab, selected a disk from the VM's Disks subtab, pressed Edit, and changed the profile to the newly created profile with a 10 MB/s write limit

5. Wrote to the disk again and the limit is NOW APPLIED!
dd bs=1M count=100 if=/dev/zero of=/100Mb conv=fdatasync
105 MB written at 10 MB/s




Actual results:
QoS limit is NOW APPLIED!

Expected results:
QoS limit is applied on the live VM/disk.



Moving to VERIFIED!
Comment 16 errata-xmlrpc 2016-09-21 14:04:07 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-1929.html
