Bug 2014487
Summary: | [RHEL9] Enable virtio-mem as tech-preview on x86-64 - libvirt | |
---|---|---|---
Product: | Red Hat Enterprise Linux 9 | Reporter: | David Hildenbrand <dhildenb>
Component: | libvirt | Assignee: | Michal Privoznik <mprivozn>
libvirt sub component: | General | QA Contact: | Jing Qi <jinqi>
Status: | CLOSED ERRATA | Docs Contact: | Jiri Herrmann <jherrman>
Severity: | unspecified | Priority: | unspecified
Version: | 9.0 | Keywords: | AutomationBackLog, Triaged
Target Milestone: | rc | Target Release: | ---
Hardware: | x86_64 | OS: | Unspecified
Whiteboard: | | CC: | gfialova, jdenemar, jherrman, jsuchane, lcheng, lcong, lmen, mprivozn, pkrempa, virt-maint, xuzhang, yanghliu
Fixed In Version: | libvirt-7.9.0-1.el9 | Doc Type: | Technology Preview
Doc Text:

.`virtio-mem` is now available on AMD64, Intel 64, and ARM 64

As a Technology Preview, RHEL 9 introduces the `virtio-mem` feature on AMD64, Intel 64, and ARM 64 systems. Using `virtio-mem` makes it possible to dynamically add or remove host memory in virtual machines (VMs).

To use `virtio-mem`, define one or more `virtio-mem` memory devices in the XML configuration of a VM and use the `virsh update-memory-device` command to request memory device size changes while the VM is running. To see the current memory size exposed by such memory devices to a running VM, view the XML configuration of the VM. (A minimal usage sketch follows the metadata below.)

Note, however, that `virtio-mem` currently does not work on VMs that use a Windows operating system.
Story Points: | --- | Clone Of: |
---|---|---|---
Clones: | 2047271 (view as bug list) | Environment: |
Last Closed: | 2022-05-17 12:45:49 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: | 7.9.0
Embargoed: | | |
Bug Depends On: | 2014484, 2014492, 2047271, 2048435 | |
Bug Blocks: | 2014457, 2047797 | |
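For illustration, here is a minimal sketch of the workflow described in the Doc Text above. The domain name (`vm1`) and the sizes are hypothetical; as in the test comments below, the domain is assumed to already have `<maxMemory>` and a guest NUMA topology configured, which memory devices require.

```sh
# 1. Add a virtio-mem device to the domain XML (e.g. via 'virsh edit vm1'),
#    inside <devices>; values are illustrative, and <block> must divide
#    both <size> and <requested>:
#
#    <memory model='virtio-mem'>
#      <target>
#        <size unit='KiB'>4194304</size>            <!-- maximum pluggable size -->
#        <node>0</node>                             <!-- target guest NUMA node -->
#        <block unit='KiB'>2048</block>             <!-- plug/unplug granularity -->
#        <requested unit='KiB'>1048576</requested>  <!-- size requested right now -->
#      </target>
#    </memory>

# 2. Resize the device while the VM is running:
virsh update-memory-device vm1 --requested-size 2GiB

# 3. Inspect the live XML; the <current> element shows the size
#    actually exposed to the guest:
virsh dumpxml vm1 | grep -E '<(requested|current) '
```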
Description
David Hildenbrand
2021-10-15 11:42:00 UTC
Upstreamed in libvirt after: v7.8.0-9-gf931cb7f21

Tested with libvirt upstream v7.9.0-rc1-3-g775de86975 & qemu-kvm-6.1.0-9.fc36.x86_64:

1. Start a guest with a virtio-mem device:

```xml
<maxMemory slots='16' unit='KiB'>8388608</maxMemory>
<memory unit='KiB'>2097152</memory>
...
<cpu>
  <numa>
    <cell id='0' cpus='0' memory='1048576' unit='KiB' discard='yes'/>
  </numa>
</cpu>
...
<memory model='virtio-mem'>
  <source>
    <nodemask>0</nodemask>
    <pagesize unit='KiB'>2048</pagesize>
  </source>
  <target>
    <size unit='KiB'>4194304</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>1048576</requested>
  </target>
  <alias name='virtiomem0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</memory>
```

```
# virsh start pc
Domain 'pc' started
```

2. Update the requested size:

```
# virsh update-memory-device pc --requested-size 4GiB
```

3. Check the result:

```
# virsh dumpxml pc
```

```xml
<memory model='virtio-mem'>
  <source>
    <nodemask>0</nodemask>
    <pagesize unit='KiB'>2048</pagesize>
  </source>
  <target>
    <size unit='KiB'>4194304</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>4194304</requested>
    <current unit='KiB'>4194304</current>  <!-- check the current memory size -->
  </target>
  <alias name='virtiomem0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</memory>
```

4. Hot-plug a virtio-mem device:

mem.xml:

```xml
<memory model='virtio-mem'>
  <source>
    <nodemask>0</nodemask>
    <pagesize unit='KiB'>2048</pagesize>
  </source>
  <target>
    <size unit='KiB'>2097152</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>1048576</requested>
    <current unit='KiB'>1048576</current>
  </target>
</memory>
```

```
# virsh attach-device pc mem.xml
Device attached successfully
```

Verified with libvirt-8.0.0-1.el9.x86_64 & qemu-kvm-6.2.0-4.el9.x86_64 & kernel version 5.14.0-47.el9.x86_64:

S1: Start VM with virtio-mem device

Steps:

1. Configure the VM XML with one virtio-mem device:

```xml
<maxMemory slots='16' unit='KiB'>10485760</maxMemory>
<memory unit='KiB'>5373952</memory>
<currentMemory unit='KiB'>5373952</currentMemory>
...
<cpu mode='host-model' check='partial'>
  <feature policy='disable' name='vmx'/>
  <numa>
    <cell id='0' cpus='0-1' memory='5242880' unit='KiB'/>
  </numa>
</cpu>
...
<memory model='virtio-mem'>
  <source>
    <nodemask>0</nodemask>
    <pagesize unit='KiB'>2048</pagesize>
  </source>
  <target>
    <size unit='KiB'>131072</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>131072</requested>
  </target>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</memory>
```

2. Start the VM:

```
# virsh start rhel9
Domain 'rhel9' started
```

3. Run `virsh dumpxml rhel9`; below is the partial XML:

```xml
<maxMemory slots='16' unit='KiB'>10485760</maxMemory>
<memory unit='KiB'>5373952</memory>
<currentMemory unit='KiB'>5373952</currentMemory>
...
<memory model='virtio-mem'>
  <source>
    <nodemask>0</nodemask>
    <pagesize unit='KiB'>2048</pagesize>
  </source>
  <target>
    <size unit='KiB'>131072</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>131072</requested>
    <current unit='KiB'>131072</current>
  </target>
  <alias name='virtiomem0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</memory>
```

4. Check memory statistics:

```
# virsh dommemstat rhel9
actual 5242880
last_update 0
rss 516260
```
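As a side note, the `<current>` value can also be extracted from the live XML non-interactively. A small sketch, assuming `xmllint` (from libxml2) is installed and using the domain name from the steps above:

```sh
# Print the size (in KiB) currently exposed by the virtio-mem device;
# the XPath matches the <current> element shown in the dumpxml output above.
virsh dumpxml rhel9 | xmllint --xpath \
  "string(//memory[@model='virtio-mem']/target/current)" -
```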
S2: Attach a virtio-mem device

Steps:

5. Attach the device:

```
# virsh attach-device rhel9 virtio-mem.xml
Device attached successfully
```

virtio-mem.xml:

```xml
<memory model='virtio-mem'>
  <source>
    <nodemask>0</nodemask>
    <pagesize unit='KiB'>2048</pagesize>
  </source>
  <target>
    <size unit='KiB'>131072</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>81920</requested>
    <current unit='KiB'>90112</current>
  </target>
</memory>
```

6. Run `virsh dumpxml rhel9`; partial XML:

```xml
<memory unit='KiB'>5505024</memory>
<currentMemory unit='KiB'>5373952</currentMemory>
...
<memory model='virtio-mem'>
  <source>
    <nodemask>0</nodemask>
    <pagesize unit='KiB'>2048</pagesize>
  </source>
  <target>
    <size unit='KiB'>131072</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>81920</requested>
    <current unit='KiB'>0</current>
  </target>
  <alias name='virtiomem2'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</memory>
```

From the test above, the virtio-mem device can be attached, but it does not seem to work in that slot, so I tried the scenarios below:

7. Start the VM with the above virtio-mem device added to the domain XML:

```
# virsh start rhel9
error: Failed to start domain 'rhel9'
error: internal error: qemu unexpectedly closed the monitor: 2022-01-27T03:05:09.934604Z qemu-kvm: -device virtio-mem-pci,node=0,block-size=2097152,requested-size=83886080,memdev=memvirtiomem1,id=virtiomem1,bus=pci.1,addr=0x0: Bus 'pci.1' not found
```

8. Changed the virtio-mem device address to another one, and it works well:

```xml
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
```

9. Removed the above device from the domain XML and tried to attach it after the VM started:

```
# virsh attach-device rhel9 m.xml
error: Failed to attach device from m.xml
error: XML error: The device at PCI address 0000:00:03.0 requires hotplug capability, but the PCI controller with index='0' doesn't support hotplug
```

So, can you please help to confirm whether attaching a virtio-mem device works as expected?
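For reference, the error in step 9 can typically be avoided by omitting the fixed `<address>` element, letting libvirt assign a hotplug-capable slot itself. A sketch, with the file name hypothetical and the sizes taken from the test above:

```sh
# m-noaddr.xml: same device as in step 9 but with no <address> element,
# leaving PCI address assignment (and hotplug capability) to libvirt.
cat > m-noaddr.xml <<'EOF'
<memory model='virtio-mem'>
  <source>
    <nodemask>0</nodemask>
    <pagesize unit='KiB'>2048</pagesize>
  </source>
  <target>
    <size unit='KiB'>131072</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>81920</requested>
  </target>
</memory>
EOF
virsh attach-device rhel9 m-noaddr.xml
```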
-device '{"driver":"pci-bridge","chassis_nr":1,"id":"pci.1","bus":"pci.0","addr":"0x9"}' -device '{"driver":"piix3-usb-uhci","id":"usb","bus":"pci.0","addr":"0x1.0x2"}' -device '{"driver":"lsi","id":"scsi0","bus":"pci.0","addr":"0x5"}' -device '{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.0","addr":"0x7"}' Therefore, when QEMU starts up and see the first virtio-mem-pci device ("id":"ua-virtiomem") it will just create it and continue to the next one (ua-virtiomem2) where it sees "pci.1" bus which does not exist at that point yet. The bus is created (well would be) a few arguments later. Let me see if simple reorder fixes the problem (and think of all the implications). Alright, after looking at the code, virtio-pmem suffers from the same issue. Therefore, let's track it in a different bug. Tested S2 in comment 6 with version- libvirt-daemon-8.0.0-4.el9.x86_64 & qemu-kvm-6.2.0-7.el9.x86_64 S2.Attach device and dumpxml to check the memory device is attached correctly #virsh attach-device rhel_i virtio-mem.xml Device attached successfully virtio-mem.xml - <memory model='virtio-mem'> <source> <nodemask>0</nodemask> <pagesize unit='KiB'>2048</pagesize> </source> <target> <size unit='KiB'>131072</size> <node>0</node> <block unit='KiB'>2048</block> <requested unit='KiB'>81920</requested> <current unit='KiB'>81920</current> </target> <alias name='virtiomem1'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </memory> virsh dumpxml - <memory unit='KiB'>5505024</memory> <currentMemory unit='KiB'>5455872</currentMemory> .... <memory model='virtio-mem'> <source> <nodemask>0</nodemask> <pagesize unit='KiB'>2048</pagesize> </source> <target> <size unit='KiB'>131072</size> <node>0</node> <block unit='KiB'>2048</block> <requested unit='KiB'>81920</requested> <current unit='KiB'>81920</current> </target> <alias name='virtiomem1'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </memory> S3. Migrate vm with below virtio-mem device from rhel9 to rhel9 <memory model='virtio-mem'> <target> <size unit='KiB'>131072</size> <node>0</node> <block unit='KiB'>2048</block> <requested unit='KiB'>81920</requested> <current unit='KiB'>81920</current> </target> <alias name='virtiomem1'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </memory> #virsh migrate rhel_i qemu+ssh://**.redhat.com/system --live --p2p It can succeed. But there is still a bug 2048022 related to hugepage. Mark it to verified according to Comment 10 (In reply to Jing Qi from comment #10) > Tested S2 in comment 6 with version- libvirt-daemon-8.0.0-4.el9.x86_64 & > qemu-kvm-6.2.0-7.el9.x86_64 > > S2.Attach device and dumpxml to check the memory device is attached correctly > > #virsh attach-device rhel_i virtio-mem.xml > Device attached successfully > > virtio-mem.xml - > > > <memory model='virtio-mem'> > <source> > <nodemask>0</nodemask> > <pagesize unit='KiB'>2048</pagesize> > </source> > <target> > <size unit='KiB'>131072</size> > <node>0</node> > <block unit='KiB'>2048</block> > <requested unit='KiB'>81920</requested> > <current unit='KiB'>81920</current> > </target> > <alias name='virtiomem1'/> > <address type='pci' domain='0x0000' bus='0x01' slot='0x00' > function='0x0'/> > </memory> > > virsh dumpxml - > <memory unit='KiB'>5505024</memory> > <currentMemory unit='KiB'>5455872</currentMemory> > > .... 
(In reply to Jing Qi from comment #10)
> It can succeed. But there is still a bug 2048022 related to hugepages.

Cloned qemu-kvm BZ 2048022 to the libvirt TestOnly BZ 2054134; these issues will be tracked separately in BZ 2054134.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (new packages: libvirt), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2390