Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1651255

Summary: Cannot set number of IO threads via the UI
Product: Red Hat Enterprise Virtualization Manager
Reporter: RHV bug bot <rhv-bugzilla-bot>
Component: ovirt-engine
Assignee: Andrej Krejcir <akrejcir>
Status: CLOSED ERRATA
QA Contact: meital avital <mavital>
Severity: medium
Docs Contact:
Priority: unspecified
Version: unspecified
CC: abpatil, bugs, gianluca.cecchi, mavital, michal.skrivanek, rbarry, Rhev-m-bugs, trichard
Target Milestone: ovirt-4.3.0
Keywords: Rebase, Reopened, ZStream
Target Release: 4.3.0
Flags: lsvaty: testing_plan_complete-
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ovirt-engine-4.3.0_alpha
Doc Type: Enhancement
Doc Text:
You can now set the number of IO threads in the new/edit VM dialog in the Administration Portal, instead of just the REST API.
Story Points: ---
Clone Of: 1592990
Clones: 1651649
Environment:
Last Closed: 2019-05-08 12:39:01 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1592990    
Bug Blocks: 1651649    

Description RHV bug bot 2018-11-19 14:35:17 UTC
+++ This bug is an upstream to downstream clone. The original bug is: +++
+++   bug 1592990 +++
======================================================================

Description of problem:
In RHEV/oVirt 4.1 it was possible to enable I/O threads for a VM, and there was a field to define how many I/O threads to use.

In 4.2 this option is gone. 

We've seen good performance gains when adding more than 1 IO thread for IO-intensive VMs, such as Katello/Satellite 6.

Using the API to set the number of IO threads is cumbersome. Please add the option for selecting the number of IO threads back to the web UI.
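
For reference, until the option is back in the web UI, the thread count can be changed through the REST API. Below is a minimal sketch in Python; the engine URL, credentials and VM id are placeholders, and it relies on the VM resource's <io><threads> element:

# Minimal sketch: update a VM's IO thread count via the oVirt/RHV REST API.
# The engine URL, credentials and VM id below are placeholders.
import requests

ENGINE = "https://engine.example.com/ovirt-engine/api"  # placeholder engine URL
VM_ID = "123"                                           # placeholder VM id
AUTH = ("admin@internal", "password")                   # placeholder credentials

body = "<vm><io><threads>2</threads></io></vm>"         # request 2 IO threads

resp = requests.put(
    f"{ENGINE}/vms/{VM_ID}",
    data=body,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify="/etc/pki/ovirt-engine/ca.pem",              # engine CA; adjust to your setup
)
resp.raise_for_status()

As with other next-run VM properties, the new value is expected to take effect the next time the VM is started.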

Version-Release number of selected component (if applicable):
4.2

How reproducible:
every time

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

(Originally by sigbjorn)

Comment 1 RHV bug bot 2018-11-19 14:35:25 UTC
Note that, as it is now (it is not possible to specify the number of I/O threads in the GUI), in my opinion the documentation here is a bit misleading:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html/virtual_machine_management_guide/editing_io_threads
Reading the guide gave me the impression that it is possible to change the number, even though the default and recommended value (recommended based on which tested scenarios?) is "1".

(Originally by gianluca.cecchi)

Comment 6 RHV bug bot 2018-11-19 14:35:41 UTC
why?

(Originally by gianluca.cecchi)

Comment 7 RHV bug bot 2018-11-19 14:35:47 UTC
Yes, please explain why you so arrogantly closed the request without even giving a reason.

(Originally by sigbjorn)

Comment 8 RHV bug bot 2018-11-19 14:35:51 UTC
It makes the UI simpler, and ensures people don't set it to ridiculous values.
In most cases, a single IO thread is apparently enough.
For the exceptional cases where more are beneficial, there's the REST API.

If you believe you have a good, widespread example of a VM/workload where more threads make sense, then please share it. Patches are also welcome, of course, if you can keep the UI simple to use.

(Originally by michal.skrivanek)

Comment 9 RHV bug bot 2018-11-19 14:35:56 UTC
(In reply to Gianluca Cecchi from comment #1)
> Note that, as it is now (it is not possible to specify the number of I/O
> threads in the GUI), in my opinion the documentation here is a bit misleading

thanks! cloned to a doc bug

(Originally by michal.skrivanek)

Comment 10 RHV bug bot 2018-11-19 14:36:01 UTC
I can try to set up a test environment again using Oracle RDBMS and HammerDB as a stress-test utility, to show you how the performance numbers change.
But I remember that, with 3 data virtual disks storing the Oracle datafiles, assigning 1 I/O thread to every virtual disk gave a performance gain of between 10% and 15% in I/O performance during the TPC-C environment creation, using 20 concurrent virtual users to create 200 stores for the tpcc db user.

I think that a clear tooltip and a warning about improper use of the setting would be sufficient.
I also think that the webadmin GUI will typically be used by an expert admin with a senior level of experience.
If need be, you could make the parameter editable only by a user with particular admin rights, but it is not desirable to "penalize" the others by forcing them to use the REST API for this.
Just my 2 eurocent ;-)

(Originally by gianluca.cecchi)
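
To illustrate the "one I/O thread per virtual disk" setup described in the previous comment, here is a minimal sketch using the oVirt Python SDK (ovirtsdk4). The engine URL, credentials and VM name are placeholders, and the exact calls should be checked against the SDK version in use:

# Minimal sketch: set a VM's IO thread count to the number of attached disks.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholders: replace with the real engine URL, credentials and CA file.
connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="password",
    ca_file="/etc/pki/ovirt-engine/ca.pem",
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search="name=mydbvm")[0]   # hypothetical VM name
vm_service = vms_service.vm_service(vm.id)

# One IO thread per attached virtual disk.
disk_count = len(vm_service.disk_attachments_service().list())
vm_service.update(types.Vm(io=types.Io(threads=disk_count)))

connection.close()

This is the same io.threads attribute that the REST API exposes as <io><threads>, just set through the SDK types.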

Comment 11 RHV bug bot 2018-11-19 14:36:06 UTC
Also, you could add a constraint so that it is not possible to set a number greater than the total number of virtual disks of the VM.

(Originally by gianluca.cecchi)
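
The constraint suggested in the previous comment amounts to a simple validation check in the dialog. An illustrative sketch of the idea (not actual oVirt engine code; the function name is hypothetical):

def validate_io_threads(io_threads, disk_count):
    """Reject IO thread counts that exceed the number of virtual disks."""
    if io_threads < 0:
        raise ValueError("IO thread count cannot be negative")
    if io_threads > disk_count:
        raise ValueError(
            "IO threads (%d) cannot exceed the number of virtual disks (%d)"
            % (io_threads, disk_count)
        )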

Comment 12 RHV bug bot 2018-11-19 14:36:10 UTC
I agree that a tooltip and a warning would be sufficient, as the webadmin GUI is supposed to be used by expert admins and not novice end users.

We've seen the storage IO wait time go down when increasing the number of IO threads for a VM with several disks and a lot of IO.

Satellite 6 is an example, using 2 different databases and with a lot of file system activity going on at the same time. We separate the databases from the filesystems in different LVM volume groups on different disks assigned to the Satellite VM. Adding more than 1 IO thread was required to bring the iowait down, going from double-digit % iowait to the current 0.14% iowait.

This has proven to be true for other disk-intensive workloads as well.

Please bring back the ability to specify the number of IO threads in the webadmin UI.

Thanks.

(Originally by sigbjorn)

Comment 13 RHV bug bot 2018-11-19 14:36:15 UTC
I guess you can take 5eb925f50c39 and partially revert that, with a different UI part. Or make it a predefined property instead.

(Originally by michal.skrivanek)

Comment 17 RHV bug bot 2018-11-19 14:36:30 UTC
Verified:
vdsm-4.30.1-25.gitce9e416.el7.x86_64
ovirt-engine-4.3.0-0.0.master.20181023141116.gitc92ccb5.el7.noarch

(Originally by Meital Avital)

Comment 19 meital avital 2018-11-28 10:02:51 UTC
Verified again on:
ovirt-engine-4.3.0-0.2.master.20181121071050.gita8fcd23.el7.noarch

Comment 21 errata-xmlrpc 2019-05-08 12:39:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:1085

Comment 22 Daniel Gur 2019-08-28 13:11:40 UTC
sync2jira

Comment 23 Daniel Gur 2019-08-28 13:15:53 UTC
sync2jira