Description of problem:
In RHEV/oVirt 4.1 it was possible to enable I/O threads for a VM, and there was a field to define how many I/O threads to use. In 4.2 this option is gone. We have had good performance gains when adding more than one I/O thread for I/O-intensive VMs, such as Katello/Satellite 6. Using the API to set the number of I/O threads is cumbersome. Please add the option for selecting the number of I/O threads back to the web UI.

Version-Release number of selected component (if applicable):
4.2

How reproducible:
Every time
Note that, as things stand now (it is not possible to specify the number of I/O threads in the GUI), in my opinion the documentation here is a bit misleading: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html/virtual_machine_management_guide/editing_io_threads Reading the guide gave me the impression that it is possible to change the number, even though the default and recommended value (recommended based on which tested scenarios?) is to leave it at "1".
why?
Yes, please explain why you so arrogantly closed the request without even giving a reason.
It makes the UI simpler and ensures people don't set it to ridiculous values. In most cases a single I/O thread is apparently enough. For the exceptional cases where more are beneficial, there's the REST API. If you have a good, widespread example of a VM/workload where more I/O threads make sense, then please share it. Patches are also welcome, of course, as long as the UI stays simple to use.
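(For reference, setting the I/O thread count through the REST API can be scripted with the Python SDK. This is a minimal sketch, assuming ovirt-engine-sdk-python 4.x; the engine URL, credentials, CA file and VM name "myvm" are placeholders, and the thread count of 4 is just an example.)

# Minimal sketch: set the I/O thread count of an existing VM via the oVirt Python SDK.
# Engine URL, credentials, CA file and VM name are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Look up the VM by name and update its I/O thread count.
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
vms_service.vm_service(vm.id).update(
    types.Vm(io=types.Io(threads=4)),
)

connection.close()

(The change is a next-run configuration, so it normally takes effect the next time the VM is started.)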
(In reply to Gianluca Cecchi from comment #1)
> Note that, as things stand now (it is not possible to specify the number of
> I/O threads in the GUI), in my opinion the documentation here is a bit misleading

Thanks! Cloned to a documentation bug.
I can try to set up a test environment again using Oracle RDBMS and HammerDB as a stress-test utility and show you how the performance numbers change. But I remember that with 3 data virtual disks storing Oracle datafiles, assigning one I/O thread per virtual disk gave an I/O performance gain of between 10% and 15% during TPC-C environment creation, using 20 concurrent virtual users to create 200 stores for the tpcc database user. I think a clear tooltip and a warning about improper use of the setting would be sufficient. I also think the webadmin GUI is typically used by experienced, senior administrators. Alternatively, you could make the parameter editable only by users with particular admin rights, but it is not desirable to "penalize" everyone else by forcing them to use the REST API for this. Just my 2 eurocents ;-)
Also, you could add a constraint so that the number cannot be set higher than the total number of virtual disks of the VM.
I agree that a tooltip and warning would be sufficient, as the webadmin GUI is meant to be used by expert administrators and not novice end users. We've seen the storage I/O wait time go down when increasing the number of I/O threads for a VM with several disks and lots of I/O. Satellite 6 is an example, using two different databases with a lot of filesystem activity going on at the same time. We separate databases from filesystems into different LVM volume groups on different disks assigned to the Satellite VM. Adding more than one I/O thread was required to bring the iowait down, from double-digit percentage iowait to the current 0.14% iowait. This has proven to be true for other disk-intensive workloads as well. Please bring back the ability to specify the number of I/O threads in the webadmin UI. Thanks.
I guess you can take 5eb925f50c39 and partially revert that, with a different UI part. Or make it a predefined property instead.
Verified on:
vdsm-4.30.1-25.gitce9e416.el7.x86_64
ovirt-engine-4.3.0-0.0.master.20181023141116.gitc92ccb5.el7.noarch
QE verification bot: the bug was verified upstream
This bugzilla is included in the oVirt 4.2.8 release, published on January 22nd 2019. Since the problem described in this bug report should be resolved in the oVirt 4.2.8 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.