Bug 1651649 - [downstream clone - 4.2.8] Cannot set number of IO threads via the UI
Summary: [downstream clone - 4.2.8] Cannot set number of IO threads via the UI
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.2.8
Assignee: Andrej Krejcir
QA Contact: meital avital
URL:
Whiteboard:
Depends On: 1651255
Blocks:
 
Reported: 2018-11-20 13:44 UTC by RHV bug bot
Modified: 2019-08-28 13:17 UTC (History)
CC: 10 users

Fixed In Version: ovirt-engine-4.2.8.1
Doc Type: Enhancement
Doc Text:
This release allows the number of I/O threads to be set in the Administration Portal VM dialog. This enhancement complements the existing REST API to set the number of I/O threads, allowing users the option to use either the REST API or the Administration Portal to set the number of I/O threads.
Clone Of: 1651255
Environment:
Last Closed: 2019-01-22 12:44:51 UTC
oVirt Team: Virt
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0121 None None None 2019-01-22 12:44:59 UTC
oVirt gerrit 94272 master MERGED webadmin: Add textbox to choose the number of IO threads 2020-01-23 13:39:55 UTC
oVirt gerrit 94274 master MERGED webadmin: VM popup tabs validation is not overridden 2020-01-23 13:39:55 UTC
oVirt gerrit 94458 master ABANDONED core: Change Blank template type to Server 2020-01-23 13:39:55 UTC
oVirt gerrit 94760 master MERGED webadmin: Do not disable I/O threads when setting VM type to Desktop 2020-01-23 13:39:55 UTC
oVirt gerrit 95525 ovirt-engine-4.2 MERGED webadmin: Add textbox to choose the number of IO threads 2020-01-23 13:39:55 UTC
oVirt gerrit 95526 ovirt-engine-4.2 MERGED webadmin: VM popup tabs validation is not overridden 2020-01-23 13:39:56 UTC
oVirt gerrit 95527 ovirt-engine-4.2 MERGED webadmin: Do not disable I/O threads when setting VM type to Desktop 2020-01-23 13:39:56 UTC

Description RHV bug bot 2018-11-20 13:44:39 UTC
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1651255 +++
======================================================================

+++ This bug is an upstream to downstream clone. The original bug is: +++
+++   bug 1592990 +++
======================================================================

Description of problem:
In RHEV/oVirt 4.1 it was possible to enable I/O threads for a VM, and there was a field to define how many I/O threads to use.

In 4.2 this option is gone.

We've had good performance gains when adding more than one I/O thread for I/O-intensive VMs, such as Katello/Satellite 6.

Using the API to set the number of I/O threads is cumbersome. Please add the option for selecting the number of I/O threads back to the web UI.
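For reference, the REST API workaround involves a PUT to the VM resource with an `<io><threads>` element in the body. A minimal sketch of building that request body (the engine URL, VM id, and credentials in the comment are placeholders, not values from this bug):

```python
# Sketch of setting a VM's I/O thread count via the oVirt/RHV REST API.
# Only the XML body is built here; sending it requires a real engine.
import xml.etree.ElementTree as ET

def io_threads_body(count: int) -> bytes:
    """Build the XML request body for PUT /ovirt-engine/api/vms/{vm-id}."""
    vm = ET.Element("vm")
    io = ET.SubElement(vm, "io")
    threads = ET.SubElement(io, "threads")
    threads.text = str(count)
    return ET.tostring(vm)

# The body could then be sent with, for example:
#   curl -X PUT -H 'Content-Type: application/xml' \
#        -u admin@internal:password --data '<vm><io><threads>4</threads></io></vm>' \
#        https://engine.example.com/ovirt-engine/api/vms/{vm-id}
```

Having to hand-craft and send XML like this for every VM is the cumbersomeness the reporter is describing.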

Version-Release number of selected component (if applicable):
4.2

How reproducible:
every time

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

(Originally by sigbjorn)

(Originally by rhv-bugzilla-bot)

Comment 1 RHV bug bot 2018-11-20 13:44:45 UTC
To be noted that, as it is now (it is not possible to specify the number of I/O threads inside the GUI), in my opinion the documentation is a bit misleading here:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html/virtual_machine_management_guide/editing_io_threads
Reading the guide gave me the idea that it is possible to change the number, even though the default and recommended value (tested under which kinds of scenarios?) is to leave it at "1".

(Originally by gianluca.cecchi)

(Originally by rhv-bugzilla-bot)

Comment 7 RHV bug bot 2018-11-20 13:45:05 UTC
why?

(Originally by gianluca.cecchi)

(Originally by rhv-bugzilla-bot)

Comment 8 RHV bug bot 2018-11-20 13:45:10 UTC
Yes, please explain why you closed the request so arrogantly, without even an explanation.

(Originally by sigbjorn)

(Originally by rhv-bugzilla-bot)

Comment 9 RHV bug bot 2018-11-20 13:45:14 UTC
It makes the UI simpler, and ensures people don't set it to ridiculous values.
In most cases, apparently, a single I/O thread is enough.
For exceptional cases where more are beneficial, there's the REST API.

If you believe you have a good, widespread example of a VM/workload where more I/O threads make sense, then please share it. Also, patches are welcome, of course, if you can keep the UI simple to use.

(Originally by michal.skrivanek)

(Originally by rhv-bugzilla-bot)

Comment 10 RHV bug bot 2018-11-20 13:45:18 UTC
(In reply to Gianluca Cecchi from comment #1)
> To be noted that as it is now (not possible to specify inside the GUI the
> number of I/O threads), in my opinion the documentation is a bit misleading

thanks! cloned to a doc bug

(Originally by michal.skrivanek)

(Originally by rhv-bugzilla-bot)

Comment 11 RHV bug bot 2018-11-20 13:45:21 UTC
I can try to set up a test environment again using Oracle RDBMS and HammerDB as a stress-test utility, showing you the performance number changes.
But I remember that, with 3 data virtual disks storing Oracle datafiles, assigning 1 I/O thread for every virtual disk gave an I/O performance gain of between 10% and 15% during TPC-C environment creation, using 20 concurrent virtual users to create 200 stores for the tpcc db user.

I think that a clear tooltip and a warning about improper use of the setting would be sufficient.
I also think that the webadmin GUI will typically be used by expert admins with some seniority.
Alternatively, you could make the parameter editable only by users with particular admin rights, but it is not desirable to "penalize" the others by forcing them to use the REST API for this.
Just my 2 eurocents ;-)

(Originally by gianluca.cecchi)

(Originally by rhv-bugzilla-bot)

Comment 12 RHV bug bot 2018-11-20 13:45:25 UTC
Also, you could add a constraint that disallows setting a number greater than the total number of virtual disks of the VM.
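The suggested constraint could be sketched as a simple validation step; this is a hypothetical illustration, not code from the actual patches, and the function name and error messages are invented:

```python
# Hypothetical validation for the constraint suggested above: reject an
# I/O thread count larger than the VM's current number of virtual disks.
def validate_io_threads(requested: int, num_disks: int) -> None:
    if requested < 0:
        raise ValueError("I/O thread count must be non-negative")
    if requested > num_disks:
        raise ValueError(
            f"I/O thread count ({requested}) exceeds the number of "
            f"virtual disks ({num_disks})"
        )

validate_io_threads(3, 3)  # OK: one thread per disk
```

One caveat with such a rule: disks can be added or removed after the VM is created, so the check would also have to run (or be relaxed) on disk hot-plug and hot-unplug.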

(Originally by gianluca.cecchi)

(Originally by rhv-bugzilla-bot)

Comment 13 RHV bug bot 2018-11-20 13:45:30 UTC
I agree that a tooltip and a warning would be sufficient, as the webadmin GUI is meant to be used by expert admins and not novice end users.

We've seen the storage I/O wait time go down when increasing the number of I/O threads for a VM with several disks and lots of I/O.

Satellite 6 is an example: it uses 2 different databases, with a lot of file system activity going on at the same time. We separate databases from filesystems in different LVM volume groups on different disks assigned to the Satellite VM. Adding more than 1 I/O thread was required to bring the iowait down, going from double-digit % iowait to the current 0.14% iowait.

This has proven to be true for other disk-intensive workloads as well.

Please bring back the ability to specify the number of I/O threads in the webadmin UI.

Thanks.

(Originally by sigbjorn)

(Originally by rhv-bugzilla-bot)

Comment 14 RHV bug bot 2018-11-20 13:45:34 UTC
I guess you can take 5eb925f50c39 and partially revert that, with a different UI part. Or make it a predefined property instead.

(Originally by michal.skrivanek)

(Originally by rhv-bugzilla-bot)

Comment 18 RHV bug bot 2018-11-20 13:45:48 UTC
Verified:
vdsm-4.30.1-25.gitce9e416.el7.x86_64
ovirt-engine-4.3.0-0.0.master.20181023141116.gitc92ccb5.el7.noarch

(Originally by Meital Avital)

(Originally by rhv-bugzilla-bot)

Comment 20 meital avital 2018-12-17 09:34:43 UTC
Verified on: 4.2.8.1-0.1.el7ev

Comment 22 errata-xmlrpc 2019-01-22 12:44:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0121

Comment 23 Daniel Gur 2019-08-28 13:13:32 UTC
sync2jira

Comment 24 Daniel Gur 2019-08-28 13:17:45 UTC
sync2jira

