Bug 2067126
Summary: Allow memory prealloc from multiple threads
Product: Red Hat Enterprise Linux 8
Component: libvirt
Version: 8.6
Status: CLOSED ERRATA
Severity: high
Priority: medium
Reporter: Nils Koenig <nkoenig>
Assignee: Michal Privoznik <mprivozn>
QA Contact: liang cong <lcong>
Docs Contact: Jiri Herrmann <jherrman>
CC: dzheng, jdenemar, jherrman, jmario, jsuchane, lmen, mprivozn, mtessun, toneata, virt-maint, yafu, yalzhang, ymankad, yuhuang
Target Milestone: rc
Target Release: ---
Keywords: FutureFeature, Triaged, Upstream, ZStream
Hardware: Unspecified
OS: Unspecified
Fixed In Version: libvirt-8.0.0-6.module+el8.7.0+15026+c30823f5
Doc Type: Enhancement
Doc Text:
.VM memory preallocation using multiple threads
You can now define multiple CPU threads for virtual machine (VM) memory allocation in the domain XML configuration, for example as follows:
----
<memoryBacking>
<allocation threads='8'/>
</memoryBacking>
----
This ensures that more than one thread is used for allocating memory pages when starting a VM. As a result, VMs with multiple allocation threads configured start significantly faster, especially if the VM has a large amount of RAM backed by hugepages.
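The `<memoryBacking>` fragment above can be sanity-checked programmatically. Below is a minimal sketch (not part of libvirt; the snippet and variable names are illustrative) that parses the fragment with Python's standard library and reads back the configured thread count:

```python
# Illustrative sketch: parse the memoryBacking fragment from the doc text
# and extract the configured allocation thread count.
import xml.etree.ElementTree as ET

snippet = """
<memoryBacking>
  <allocation threads='8'/>
</memoryBacking>
"""

root = ET.fromstring(snippet)
threads = int(root.find("allocation").get("threads"))
print(threads)  # -> 8
```

The same approach works on a full domain XML (e.g. the output of `virsh dumpxml`) by locating the `memoryBacking/allocation` element under the domain root.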
Story Points: ---
Cloned to: 2075569 (view as bug list)
Last Closed: 2022-11-08 09:19:55 UTC
Type: Feature Request
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Bug Depends On: 2064194
Bug Blocks: 2075569
Comment 7
Michal Privoznik
2022-04-04 12:35:35 UTC
Preverified with a scratch build on RHEL 8.7:

libvirt-8.0.0-6.el8_rc.af6ccc6655.x86_64
qemu-kvm-6.2.0-9.module+el8.7.0+14737+6552dcb8.x86_64

Test steps:
1. Set up hugepages on the host:
   sysctl vm.nr_hugepages=1024
2. Define a domain XML with the following memoryBacking element:
   <memoryBacking>
     <hugepages/>
     <allocation threads='8'/>
   </memoryBacking>
3. Start the guest VM.
4. Check that the guest VM boots up and works well.
5. Check the qemu command line to confirm the multi-thread setting is passed through the libvirt layer to qemu:
   -object {"qom-type":"memory-backend-file","id":"pc.ram","mem-path":"/dev/hugepages/libvirt/qemu/3-vm1","x-use-canonical-path-for-ramblock-id":false,"prealloc":true,"prealloc-threads":8,"size":2147483648}
6. Also checked other scenarios and found no issues:
   6.1 no hugepages
   6.2 no multi-thread setting
   6.3 working with memfd, anonymous, and file sources
   6.4 working with immediate and ondemand modes
   6.5 invalid thread settings: negative number, zero, not a number, too-large number, null
   6.6 working with a large thread number
   6.7 virsh define --validate

Verified with:

libvirt-8.0.0-6.module+el8.7.0+15026+c30823f5.x86_64
qemu-kvm-6.2.0-9.module+el8.7.0+14737+6552dcb8.x86_64

Test steps:
1. Set up hugepages on the host:
   sysctl vm.nr_hugepages=1024
2. Define a domain XML with the following memoryBacking element:
   <memoryBacking>
     <hugepages/>
     <allocation threads='4'/>
   </memoryBacking>
3. Start the guest VM.
4. Check that the guest VM boots up and works well.
5. Check the qemu command line to confirm the multi-thread setting is passed through the libvirt layer to qemu:
   -object {"qom-type":"memory-backend-file","id":"pc.ram","mem-path":"/dev/hugepages/libvirt/qemu/17-vm1","x-use-canonical-path-for-ramblock-id":false,"prealloc":true,"prealloc-threads":4,"size":2147483648}
6. Also checked other scenarios and found no issues:
   6.1 no hugepages
   6.2 no multi-thread setting
   6.3 working with memfd, anonymous, and file sources
   6.4 working with immediate and ondemand modes
   6.5 invalid thread settings: negative number, zero, not a number, too-large number, null
   6.6 working with a large thread number
   6.7 virsh define --validate

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Low: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7472
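The verification step that inspects the qemu command line can be automated. Below is a minimal sketch (illustrative, not a libvirt or QEMU tool; the JSON string is copied from the verification output above) that parses the `-object` argument and confirms the prealloc settings reached QEMU:

```python
# Illustrative sketch: parse the JSON argument of the qemu -object option
# quoted in the verification steps and check the prealloc configuration.
import json

object_arg = ('{"qom-type":"memory-backend-file","id":"pc.ram",'
              '"mem-path":"/dev/hugepages/libvirt/qemu/17-vm1",'
              '"x-use-canonical-path-for-ramblock-id":false,'
              '"prealloc":true,"prealloc-threads":4,"size":2147483648}')

backend = json.loads(object_arg)
assert backend["qom-type"] == "memory-backend-file"
assert backend["prealloc"] is True
print(backend["prealloc-threads"])  # -> 4
```

In practice the same check could be run against the live process arguments (e.g. from /proc/<pid>/cmdline of the qemu process), comparing `prealloc-threads` with the `threads` attribute configured in the domain XML.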