Bug 1688090

Summary: [RFE] Provide a way to customize the poll-max-ns property of each iothread
Product: Red Hat Enterprise Linux Advanced Virtualization
Component: libvirt
Version: 8.0
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Reporter: Han Han <hhan>
Assignee: Libvirt Maintainers <libvirt-maint>
QA Contact: gaojianan <jgao>
CC: berrange, coli, dyuan, dzheng, fgarciad, gveitmic, hhan, jdenemar, jferlan, jiyan, jsuchane, knoel, laine, ldelouw, lhuang, michal.skrivanek, mkalinin, mtessun, slopezpa, stefanha, xuzhang, yafu
Target Milestone: rc
Target Release: 8.0
Keywords: Automation, FutureFeature
Hardware: Unspecified
OS: Unspecified
Fixed In Version: libvirt-5.0.0-1.el8
Doc Type: Enhancement
Type: Feature Request
Clone Of: 1545732
Last Closed: 2019-05-29 16:05:30 UTC
Bug Depends On: 1545732    
Bug Blocks: 1477664    

Comment 2 gaojianan 2019-03-28 08:31:47 UTC
Verified on:
libvirt-5.0.0-7.virtcov.el8.x86_64
qemu-kvm-3.1.0-20.module+el8+2888+cdc893a8.x86_64

1. Define a guest with the following iothread configuration in its XML:
 <iothreads>3</iothreads>
  <iothreadids>
    <iothread id='1'/>
    <iothread id='4'/>
    <iothread id='2'/>
  </iothreadids>
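For the polling values to matter for I/O, at least one disk should be served by one of these iothreads. A minimal sketch of such a disk element (the image path and target device are illustrative assumptions, not taken from this bug):

  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' iothread='1'/>  <!-- served by iothread 1 -->
    <source file='/var/lib/libvirt/images/rhel8.0.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>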

2. Start the guest and check the iothread polling parameters:
# virsh start rhel8.0
# virsh domstats rhel8.0 --iothread 
Domain: 'rhel8.0'
  iothread.count=3
  iothread.4.poll-max-ns=32768
  iothread.4.poll-grow=0
  iothread.4.poll-shrink=0
  iothread.1.poll-max-ns=32768
  iothread.1.poll-grow=0
  iothread.1.poll-shrink=0
  iothread.2.poll-max-ns=32768
  iothread.2.poll-grow=0
  iothread.2.poll-shrink=0
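
As a cross-check, the same property can be read directly from the QEMU monitor with qom-get, which should report the same 32768 shown by domstats; a sketch for iothread 1 (the object path matches the one visible in the libvirtd log under step 3):

# virsh qemu-monitor-command rhel8.0 '{"execute":"qom-get","arguments":{"path":"/objects/iothread1","property":"poll-max-ns"}}'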

3. Set the following polling parameters for iothread 1 and check that they take effect:
# virsh iothreadset rhel8.0 1 --poll-max-ns 2147483647 --poll-grow 2147483647 --poll-shrink 2147483647

# virsh domstats rhel8.0 --iothread 
Domain: 'rhel8.0'
  iothread.count=3
  iothread.4.poll-max-ns=32768
  iothread.4.poll-grow=0
  iothread.4.poll-shrink=0
  iothread.1.poll-max-ns=2147483647
  iothread.1.poll-grow=2147483647
  iothread.1.poll-shrink=2147483647
  iothread.2.poll-max-ns=32768
  iothread.2.poll-grow=0
  iothread.2.poll-shrink=0

Corresponding libvirtd debug log entries:
 2019-03-27 07:43:44.337+0000: 8284: debug : qemuMonitorJSONCommandWithFd:304 : Send command '{"execute":"qom-set","arguments":{"path":"/objects/iothread1","property":"poll-max-ns","value":2147483647},"id":"libvirt-21"}' for write with FD -1
 2019-03-27 07:43:44.337+0000: 8284: info : qemuMonitorSend:1081 : QEMU_MONITOR_SEND_MSG: mon=0x7f8b08027120 msg={"execute":"qom-set","arguments":{"path":"/objects/iothread1","property":"poll-max-ns","value":2147483647},"id":"libvirt-21"}
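
The polling values are applied to the live domain; to roll iothread 1 back to the defaults reported in step 2, the same command can be reused (a sketch with the default values shown above):

# virsh iothreadset rhel8.0 1 --poll-max-ns 32768 --poll-grow 0 --poll-shrink 0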

Other negative tests have been covered by our new test cases.

Can we move the status to VERIFIED? If there turns out to be a performance problem, it should not be a libvirt issue.

Comment 3 John Ferlan 2019-03-28 21:36:30 UTC
The task was just to allow the values to be changed. Understanding how they impact performance is a more difficult technical detail to describe; that is mostly left to those who understand QEMU IOThreads and the polling values. The effect really depends on workload and sizing, so it is hard to simply say "use these numbers" here.

Comment 4 Xuesong Zhang 2019-03-29 08:23:12 UTC
(In reply to John Ferlan from comment #3)
> The task was just to allow the values to be changed. Understanding how they
> impact performance is a more difficult technical detail to describe; that is
> mostly left to those who understand QEMU IOThreads and the polling values.
> The effect really depends on workload and sizing, so it is hard to simply
> say "use these numbers" here.

Thanks for your answer, John. We'd like to needinfo the QEMU developers for their opinion: since there seems to be no QEMU BZ tracking live updates of poll-max-ns, we are considering adding performance testing at the QEMU layer.

Hi, Stefan,

As mentioned above, live updating of poll-max-ns is supported and working well in libvirt, so we are changing the BZ status to VERIFIED now. However, I'm not sure whether we also need performance testing of live poll-max-ns updates at the QEMU layer.


This feature has been supported in QEMU since RHEL 7.4, and judging from your comments [1], it does not improve performance very much.
What do you think? Do we need to add performance testing at the QEMU layer? If so, could you please share some reference data to help us design the performance test scenarios? Thanks.



[1] https://bugzilla.redhat.com/show_bug.cgi?id=1404303#c7
"so it's unlikely that much higher numbers will improve performance.
Most users should not need to set the poll-max-ns parameter."

Comment 5 Stefan Hajnoczi 2019-04-03 15:49:20 UTC
(In reply to Xuesong Zhang from comment #4)
> As mentioned above, live updating of poll-max-ns is supported and working
> well in libvirt, so we are changing the BZ status to VERIFIED now. However,
> I'm not sure whether we also need performance testing of live poll-max-ns
> updates at the QEMU layer.

If you want to test that poll-max-ns has an effect on I/O request latency, you could use null_blk with the completion_nsec= parameter and fio settings such that completion times fall within the poll-max-ns threshold.
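
A sketch of such a setup, assuming the null_blk device is passed through to the guest as /dev/vdb via a disk pinned to an iothread (device names and values here are illustrative, not taken from this bug). On the host, create a null block device whose completions fire from a timer after 20000 ns, i.e. inside the default poll-max-ns window of 32768 ns:

# modprobe null_blk nr_devices=1 irqmode=2 completion_nsec=20000

In the guest, measure single-request random read latency:

# fio --name=poll-test --filename=/dev/vdb --rw=randread --bs=4k \
      --iodepth=1 --ioengine=libaio --direct=1 --runtime=30 --time_based

Re-running with polling disabled (virsh iothreadset ... --poll-max-ns 0) should show noticeably higher latency, since completions then have to wait for the interrupt path.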

Such a test would detect when poll-max-ns breaks or its behavior changes, since that would show up as significantly different performance results.

It doesn't seem like a high-priority test case to me though.

Comment 6 Xuesong Zhang 2019-04-04 02:38:01 UTC
*** Bug 1545732 has been marked as a duplicate of this bug. ***

Comment 13 errata-xmlrpc 2019-05-29 16:05:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1293