Bug 1688090 - [RFE] Provide a way to customize the poll-max-ns property of each iothread
Summary: [RFE] Provide a way to customize the poll-max-ns property of each iothread
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: Libvirt Maintainers
QA Contact: gaojianan
URL:
Whiteboard:
Duplicates: 1545732
Depends On: 1545732
Blocks: 1477664
 
Reported: 2019-03-13 05:57 UTC by Han Han
Modified: 2023-09-07 19:49 UTC
CC: 22 users

Fixed In Version: libvirt-5.0.0-1.el8
Doc Type: Enhancement
Doc Text:
Clone Of: 1545732
Environment:
Last Closed: 2019-05-29 16:05:30 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:




Links:
  Red Hat Product Errata RHBA-2019:1293 (last updated 2019-05-29 16:05:42 UTC)

Comment 2 gaojianan 2019-03-28 08:31:47 UTC
Verified on:
libvirt-5.0.0-7.virtcov.el8.x86_64
qemu-kvm-3.1.0-20.module+el8+2888+cdc893a8.x86_64

1. Prepare a VM with the following iothread configuration in its XML:
 <iothreads>3</iothreads>
  <iothreadids>
    <iothread id='1'/>
    <iothread id='4'/>
    <iothread id='2'/>
  </iothreadids>
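
(For reference only, not part of the verification run: a sketch of roughly how this XML maps onto the QEMU command line; the exact syntax varies by libvirt/QEMU version:

    -object iothread,id=iothread1 \
    -object iothread,id=iothread4 \
    -object iothread,id=iothread2

Each <iothread id='N'/> becomes an iothread object named "iothreadN", which is the object path used by the qom-set calls in the log below.)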

2. Start the guest and check the iothread parameters:
# virsh start rhel8.0
# virsh domstats rhel8.0 --iothread 
Domain: 'rhel8.0'
  iothread.count=3
  iothread.4.poll-max-ns=32768
  iothread.4.poll-grow=0
  iothread.4.poll-shrink=0
  iothread.1.poll-max-ns=32768
  iothread.1.poll-grow=0
  iothread.1.poll-shrink=0
  iothread.2.poll-max-ns=32768
  iothread.2.poll-grow=0
  iothread.2.poll-shrink=0

3. Set the following polling parameters for iothread 1 and check whether they take effect:
# virsh iothreadset rhel8.0 1 --poll-max-ns 2147483647 --poll-grow 2147483647 --poll-shrink 2147483647

# virsh domstats rhel8.0 --iothread 
Domain: 'rhel8.0'
  iothread.count=3
  iothread.4.poll-max-ns=32768
  iothread.4.poll-grow=0
  iothread.4.poll-shrink=0
  iothread.1.poll-max-ns=2147483647
  iothread.1.poll-grow=2147483647
  iothread.1.poll-shrink=2147483647
  iothread.2.poll-max-ns=32768
  iothread.2.poll-grow=0
  iothread.2.poll-shrink=0

log info:
 2019-03-27 07:43:44.337+0000: 8284: debug : qemuMonitorJSONCommandWithFd:304 : Send command '{"execute":"qom-set","arguments":{"path":"/objects/iothread1","property":"poll-max-ns","value":2147483647},"id":"libvirt-21"}' for write with FD -1
 2019-03-27 07:43:44.337+0000: 8284: info : qemuMonitorSend:1081 : QEMU_MONITOR_SEND_MSG: mon=0x7f8b08027120 msg={"execute":"qom-set","arguments":{"path":"/objects/iothread1","property":"poll-max-ns","value":2147483647},"id":"libvirt-21"}
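
For reference, a minimal sketch (not part of the verification above) of how the same property can be read back or set directly through the QEMU monitor, assuming the object path seen in the debug log; iothreadset remains the supported interface and this is only useful for debugging:

Read the current value:
# virsh qemu-monitor-command rhel8.0 '{"execute":"qom-get","arguments":{"path":"/objects/iothread1","property":"poll-max-ns"}}'

Set it (equivalent to what libvirt sends above):
# virsh qemu-monitor-command rhel8.0 '{"execute":"qom-set","arguments":{"path":"/objects/iothread1","property":"poll-max-ns","value":2147483647}}'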

Other negative tests have been included in our new test cases.

Can we move the status to VERIFIED? Any remaining performance concern would not be a libvirt problem.

Comment 3 John Ferlan 2019-03-28 21:36:30 UTC
The task was just to allow the values to be changed. Understanding how that impacts performance is the more difficult technical detail to describe. Those details are mostly left to those who understand QEMU IOThreads and the polling values. It's really dependent upon workload and sizing, so it's hard to just say 'use these numbers' here.

Comment 4 Xuesong Zhang 2019-03-29 08:23:12 UTC
(In reply to John Ferlan from comment #3)
> The task was just to allow the values to be changed. Understanding how that
> impacts performance is the more difficult technical detail to describe.
> Those details are mostly left to those who understand QEMU IOThreads and
> the polling values. It's really dependent upon workload and sizing, so it's
> hard to just say 'use these numbers' here.

Thanks for your answer, John. We'd like to needinfo the QEMU developers for their opinion, since there seems to be no QEMU BZ tracking the live update of poll-max-ns, and we are considering adding performance testing at the QEMU layer.

Hi, Stefan,

As mentioned above, live updating of poll-max-ns is supported and working well in libvirt, so we are changing the BZ status to VERIFIED now. But I'm not sure whether we need performance testing for live poll-max-ns updates at the QEMU layer.


This feature has been supported in QEMU since RHEL 7.4, and judging from your comments [1] it seems the performance gain is not very large.
So what do you think now? Do we need to add performance testing at the QEMU layer? If yes, could you please give us some reference data to help design the performance testing scenarios? Thanks.



[1] https://bugzilla.redhat.com/show_bug.cgi?id=1404303#c7
"so it's unlikely that much higher numbers will improve performance.
Most users should not need to set the poll-max-ns parameter."

Comment 5 Stefan Hajnoczi 2019-04-03 15:49:20 UTC
(In reply to Xuesong Zhang from comment #4)
> As mentioned above, live updating of poll-max-ns is supported and working
> well in libvirt, so we are changing the BZ status to VERIFIED now. But I'm
> not sure whether we need performance testing for live poll-max-ns updates
> at the QEMU layer.

If you want to test that poll-max-ns has an effect on I/O request latency, you could use null_blk with the completion_nsec= parameter and fio settings that fall within the poll-max-ns threshold.

Such a test would detect when poll-max-ns breaks or changes behavior, since that would show up as significantly different performance results.

It doesn't seem like a high-priority test case to me though.
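
For what it's worth, a rough sketch of such a test setup (illustrative values only; it assumes the null_blk device is exposed to the guest as /dev/vdb through an iothread-backed virtio-blk disk):

On the host, create a null_blk device whose completions take a fixed time well below poll-max-ns (irqmode=2 enables timer-based completion so completion_nsec is honoured):
# modprobe null_blk nr_devices=1 irqmode=2 completion_nsec=20000

Attach /dev/nullb0 to the guest as an iothread-backed virtio-blk disk, then run a latency-sensitive workload inside the guest:
# fio --name=polltest --filename=/dev/vdb --rw=randread --bs=4k --iodepth=1 --direct=1 --ioengine=libaio --runtime=30 --time_based

Comparing the average latency with poll-max-ns=0 (polling disabled) against the default or a larger value should show whether polling still takes effect after a live update.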

Comment 6 Xuesong Zhang 2019-04-04 02:38:01 UTC
*** Bug 1545732 has been marked as a duplicate of this bug. ***

Comment 13 errata-xmlrpc 2019-05-29 16:05:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1293

