Bug 1827724

Summary: qemu-storage-daemon: add 100% CPU polling mode
Product: Red Hat Enterprise Linux 9
Reporter: Stefan Hajnoczi <stefanha>
Component: qemu-kvm
Assignee: Stefan Hajnoczi <stefanha>
qemu-kvm sub component: Storage
QA Contact: qing.wang <qinwang>
Status: CLOSED WONTFIX
Severity: medium
Priority: high
CC: chayang, coli, jinzhao, juzhang, kwolf, virt-maint, yama
Version: 9.0
Keywords: FutureFeature, Triaged
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Type: Feature Request
Last Closed: 2021-10-24 07:26:51 UTC
Bug Depends On: 1901323

Description Stefan Hajnoczi 2020-04-24 16:04:46 UTC
QEMU's event loop waits for file descriptors or timers to become ready, and this wait may cause the process to yield and the physical CPU to enter a low-power state.  Waking up has high latency and results in poor I/O emulation performance.
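For context, the trade-off can be illustrated (this is a sketch, not QEMU code) by the timeout argument to poll(2), which decides whether the thread sleeps or spins:

    /* Illustration only, not QEMU code: the poll(2) timeout decides
     * whether the thread sleeps (letting the CPU enter a low-power
     * state) or spins at 100% CPU with minimal wakeup latency. */
    #include <poll.h>

    /* Blocking wait: yields the CPU until an fd becomes ready; waking
     * up again adds latency to every request. */
    int wait_blocking(struct pollfd *fds, nfds_t nfds)
    {
        return poll(fds, nfds, -1);
    }

    /* Busy poll: returns immediately, so the caller can spin on
     * readiness and never pay the wakeup cost. */
    int check_ready(struct pollfd *fds, nfds_t nfds)
    {
        return poll(fds, nfds, 0);
    }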

QEMU's existing adaptive polling mode was designed for the one-QEMU-process-per-guest architecture.  When using qemu-storage-daemon (especially with the nvme:// block driver) there may be a single system-wide process serving all guests.
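Roughly, adaptive polling spins for a bounded time budget before falling back to a blocking wait, growing the budget when spinning pays off and shrinking it when it does not.  A simplified sketch, where now_ns(), check_events(), and blocking_wait() are hypothetical stand-ins rather than real QEMU functions:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical stand-in helpers, for illustration only: */
    extern int64_t now_ns(void);        /* monotonic clock in ns */
    extern bool check_events(void);     /* non-blocking readiness check */
    extern bool blocking_wait(void);    /* sleep until an event arrives */

    static int64_t poll_ns = 32000;              /* current spin budget */
    static const int64_t poll_max_ns = 32000000; /* upper bound */

    static bool event_wait_adaptive(void)
    {
        int64_t deadline = now_ns() + poll_ns;

        /* Spin first: if an event arrives within the budget, a whole
         * sleep/wakeup cycle was avoided. */
        while (now_ns() < deadline) {
            if (check_events()) {
                /* Polling paid off: grow the budget, up to the cap. */
                poll_ns = poll_ns * 2 > poll_max_ns ? poll_max_ns
                                                    : poll_ns * 2;
                return true;
            }
        }

        /* Spinning did not pay off: shrink the budget and block. */
        poll_ns /= 2;
        return blocking_wait();
    }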

Implement a 100% CPU polling mode so that a physical CPU can be dedicated to handling storage emulation for all guests.  This ensures that I/O latency is always optimal and that I/O emulation never yields.

This requires changes to QEMU's aio_poll() and related event loop infrastructure.
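As a hypothetical sketch of the idea (not the actual patches mentioned below), a thread could be pinned to one host CPU and spin in aio_poll() with blocking=false, reusing its existing non-blocking path; "stop_request" is an invented shutdown flag:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdbool.h>
    #include "block/aio.h"      /* AioContext, aio_poll() */

    static void busy_poll_run(AioContext *ctx, int cpu,
                              const volatile bool *stop_request)
    {
        cpu_set_t set;

        /* Dedicate one physical CPU to storage emulation so the spin
         * loop never competes with vCPU threads for time. */
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        /* 100% CPU mode: dispatch any ready handlers and poll again
         * immediately instead of sleeping in ppoll(). */
        while (!*stop_request) {
            aio_poll(ctx, false);
        }
    }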

I have throwaway patches that do this, but they need to be cleaned up and benchmarked for upstream submission.

This will allow qemu-storage-daemon to be used in an SPDK-style configuration.
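For illustration, such a deployment today might look like the following qemu-storage-daemon invocation (the PCI address, socket path, and ids are examples).  Only the existing adaptive-polling knob poll-max-ns is shown, since no flag for a dedicated 100% CPU polling mode exists:

    # Illustrative example: export a host NVMe namespace to guests over
    # vhost-user-blk, with adaptive polling tuned via poll-max-ns.
    qemu-storage-daemon \
        --object iothread,id=iothread0,poll-max-ns=100000 \
        --blockdev driver=nvme,node-name=nvme0,device=0000:01:00.0,namespace=1 \
        --export vhost-user-blk,id=export0,node-name=nvme0,iothread=iothread0,addr.type=unix,addr.path=/tmp/vhost-user-blk0.sock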

Comment 4 John Ferlan 2021-09-08 21:38:55 UTC
Move RHEL-AV bugs to RHEL9. If it is necessary to resolve this in RHEL8, clone it to the current RHEL8 release.

Comment 6 RHEL Program Management 2021-10-24 07:26:51 UTC
After evaluation, there are no plans to address this issue further or to fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.