Bug 2014487 - [RHEL9] Enable virtio-mem as tech-preview on x86-64 - libvirt
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.0
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Michal Privoznik
QA Contact: Jing Qi
Docs Contact: Jiri Herrmann
URL:
Whiteboard:
Depends On: 2014484 2014492 2047271 2048435
Blocks: 2014457 2047797
 
Reported: 2021-10-15 11:42 UTC by David Hildenbrand
Modified: 2023-10-31 17:49 UTC (History)
12 users (show)

Fixed In Version: libvirt-7.9.0-1.el9
Doc Type: Technology Preview
Doc Text:
.`virtio-mem` is now available on AMD64, Intel 64, and ARM 64
As a Technology Preview, RHEL 9 introduces the `virtio-mem` feature on AMD64, Intel 64, and ARM 64 systems. Using `virtio-mem` makes it possible to dynamically add or remove host memory in virtual machines (VMs). To use `virtio-mem`, define `virtio-mem` memory devices in the XML configuration of a VM and use the `virsh update-memory-device` command to request memory device size changes while the VM is running. To see the current memory size exposed by such memory devices to a running VM, view the XML configuration of the VM. Note, however, that `virtio-mem` currently does not work on VMs that use a Windows operating system.
Clone Of:
Clones: 2047271 (view as bug list)
Environment:
Last Closed: 2022-05-17 12:45:49 UTC
Type: Bug
Target Upstream Version: 7.9.0
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-99975 0 None None None 2021-10-15 11:44:38 UTC
Red Hat Product Errata RHBA-2022:2390 0 None None None 2022-05-17 12:46:14 UTC

Description David Hildenbrand 2021-10-15 11:42:00 UTC
We want to enable virtio-mem as tech-preview in RHEL9.0 on x86-64.

The libvirt support was just recently merged upstream, for example via commit
f931cb7f216b ("conf: Introduce virtio-mem <memory/> model") and will be part of the v7.9.0 release.

Comment 1 Peter Krempa 2021-10-18 07:49:14 UTC
Upstreamed in libvirt after:

v7.8.0-9-gf931cb7f21

Comment 2 Jing Qi 2021-11-02 09:54:06 UTC
Tested with libvirt upstream v7.9.0-rc1-3-g775de86975 & qemu-kvm-6.1.0-9.fc36.x86_64


1. Start guest with virtio-mem device 
<maxMemory slots='16' unit='KiB'>8388608</maxMemory>
<memory unit='KiB'>2097152</memory>
…
<cpu> 
    <numa>
      <cell id='0' cpus='0' memory='1048576' unit='KiB' discard='yes'/>
    </numa>
  </cpu>
...

     <memory model='virtio-mem'>
      <source>
        <nodemask>0</nodemask>
        <pagesize unit='KiB'>2048</pagesize>
      </source>
      <target>
        <size unit='KiB'>4194304</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>1048576</requested>
      </target>
      <alias name='virtiomem0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </memory>

# virsh start pc
Domain 'pc' started

2. Update the requested-size  
# virsh update-memory-device pc  --requested-size 4GiB
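As an illustrative aside (not part of the original test log): the requested size handed to `virsh update-memory-device` has to be a multiple of the device's `<block>` size and must not exceed its `<size>`. A minimal sketch of that constraint, with a made-up helper name, using the values from this test:

```python
# Illustrative sanity check (hypothetical helper, not a libvirt API):
# a virtio-mem request must be block-aligned and must fit in <size>.
def is_valid_request(requested_kib, block_kib, size_kib):
    """Return True if the requested size is block-aligned and fits."""
    return requested_kib % block_kib == 0 and requested_kib <= size_kib

# Values from the test above: 4 GiB requested, 2048 KiB blocks, 4 GiB max.
print(is_valid_request(4 * 1024 * 1024, 2048, 4194304))  # True
```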

3. # virsh dumpxml  pc   
    
<memory model='virtio-mem'>
      <source>
        <nodemask>0</nodemask>
        <pagesize unit='KiB'>2048</pagesize>
      </source>
      <target>
        <size unit='KiB'>4194304</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>4194304</requested>
        <current unit='KiB'>4194304</current>      ==> check the current memory size
      </target>
      <alias name='virtiomem0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </memory>

4. Hot plug a virtio-mem device

   mem.xml-
   <memory model='virtio-mem'>
      <source>
        <nodemask>0</nodemask>
        <pagesize unit='KiB'>2048</pagesize>
      </source>
      <target>
        <size unit='KiB'>2097152</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>1048576</requested>
        <current unit='KiB'>1048576</current>
      </target>
      </memory>
    
# virsh attach-device pc mem.xml
Device attached successfully
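For illustration (not from the bug itself), the `<target>` values in a device file like mem.xml above can be sanity-checked before attaching, e.g. by parsing them with Python's standard XML module:

```python
import xml.etree.ElementTree as ET

# Illustrative check of a virtio-mem device definition like mem.xml above:
# parse the <target> values and verify block alignment before attaching.
MEM_XML = """
<memory model='virtio-mem'>
  <target>
    <size unit='KiB'>2097152</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>1048576</requested>
  </target>
</memory>
"""

target = ET.fromstring(MEM_XML).find('target')
size = int(target.findtext('size'))
block = int(target.findtext('block'))
requested = int(target.findtext('requested'))

assert requested % block == 0, "requested size must be block-aligned"
assert requested <= size, "requested size cannot exceed device size"
print("device definition looks sane")
```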

Comment 6 Jing Qi 2022-01-27 03:22:40 UTC
Verified with libvirt-8.0.0-1.el9.x86_64 & qemu-kvm-6.2.0-4.el9.x86_64 & kernel version 5.14.0-47.el9.x86_64 -

S1: Start VM with virtio-mem device

Steps:
1. Configure the VM XML with one virtio-mem device

 <maxMemory slots='16' unit='KiB'>10485760</maxMemory>
  <memory unit='KiB'>5373952</memory>
  <currentMemory unit='KiB'>5373952</currentMemory>
  ...
  <cpu mode='host-model' check='partial'>
    <feature policy='disable' name='vmx'/>
    <numa>
      <cell id='0' cpus='0-1' memory='5242880' unit='KiB'/>
    </numa>
  </cpu>
  ...
   <memory model='virtio-mem'>
      <source>
        <nodemask>0</nodemask>
        <pagesize unit='KiB'>2048</pagesize>
      </source>
      <target>
        <size unit='KiB'>131072</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>131072</requested>
      </target>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </memory>
2. Start vm
   # virsh start rhel9
   Domain 'rhel9' started
3. virsh dumpxml rhel9; partial XML below -

<maxMemory slots='16' unit='KiB'>10485760</maxMemory>
  <memory unit='KiB'>5373952</memory>
  <currentMemory unit='KiB'>5373952</currentMemory>
...
<memory model='virtio-mem'>
      <source>
        <nodemask>0</nodemask>
        <pagesize unit='KiB'>2048</pagesize>
      </source>
      <target>
        <size unit='KiB'>131072</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>131072</requested>
        <current unit='KiB'>131072</current>
      </target>
      <alias name='virtiomem0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </memory>

4. # virsh dommemstat  rhel9
actual 5242880
last_update 0
rss 516260
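As a side note on the numbers above (all in KiB): the domain's `<currentMemory>` is the NUMA cell memory plus the virtio-mem device's `<current>` size, which is consistent with the `dommemstat` output. A quick arithmetic check:

```python
# The sizes in the dump above are consistent (all values in KiB):
numa_cell = 5242880          # <cell ... memory='5242880'> and dommemstat "actual"
virtio_mem_current = 131072  # <current unit='KiB'> of the virtio-mem device
current_memory = 5373952     # <currentMemory unit='KiB'>

assert numa_cell + virtio_mem_current == current_memory
print(current_memory // 1024, "MiB total")  # 5248 MiB
```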

S2: Attach a virtio-mem device 
Steps:
5. virsh attach-device rhel9 virtio-mem.xml
Device attached successfully

 virtio-mem.xml -
  <memory model='virtio-mem'>
      <source>
        <nodemask>0</nodemask>
        <pagesize unit='KiB'>2048</pagesize>
      </source>
      <target>
        <size unit='KiB'>131072</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>81920</requested>
        <current unit='KiB'>90112</current>
      </target>
    </memory>
6. virsh dumpxml rhel9; partial XML -
      <memory unit='KiB'>5505024</memory>
  <currentMemory unit='KiB'>5373952</currentMemory>
 ...
   <memory model='virtio-mem'>
      <source>
        <nodemask>0</nodemask>
        <pagesize unit='KiB'>2048</pagesize>
      </source>
      <target>
        <size unit='KiB'>131072</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>81920</requested>
        <current unit='KiB'>0</current>
      </target>
      <alias name='virtiomem2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </memory>


From the test above, the virtio-mem device can be attached, but it does not seem to work at that slot, since I tried the scenario below:

7. Start the VM with the above virtio-mem device added to the domain XML -
 virsh start rhel9
error: Failed to start domain 'rhel9'
error: internal error: qemu unexpectedly closed the monitor: 2022-01-27T03:05:09.934604Z qemu-kvm: -device virtio-mem-pci,node=0,block-size=2097152,requested-size=83886080,memdev=memvirtiomem1,id=virtiomem1,bus=pci.1,addr=0x0: Bus 'pci.1' not found

8. Changed the virtio-mem device address to another one, and it works well.
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>

9. Removed the above device from the domain XML and tried to attach it after the VM started -
 
virsh attach-device rhel9 m.xml
error: Failed to attach device from m.xml
error: XML error: The device at PCI address 0000:00:03.0 requires hotplug capability, but the PCI controller with index='0' doesn't support hotplug

So, can you please help confirm whether attaching a virtio-mem device works as expected?

Comment 7 Michal Privoznik 2022-01-27 10:06:07 UTC
(In reply to Jing Qi from comment #6)
> Verified with libvirt-8.0.0-1.el9.x86_64 & qemu-kvm-6.2.0-4.el9.x86_64 &
> kernel version 5.14.0-47.el9.x86_64 -

> So, can you please help to confirm if the attach virtio-mem device works as
> expected?

Yeah, the failure is not expected, but it looks like a command-line argument ordering problem. I mean, when I configure virtio-mem to be on bus='0x01', the following command line is generated:

qemu-system-x86_64
-name guest=gentoo,debug-threads=on
-S
-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-gentoo/master-key.aes"}'
-machine pc-i440fx-7.0,usb=off,dump-guest-core=off \
...
-object '{"qom-type":"memory-backend-file","id":"memua-virtiomem","mem-path":"/hugepages2M/libvirt/qemu/1-gentoo","reserve":false,"size":4294967296}'
-device '{"driver":"virtio-mem-pci","node":0,"block-size":2097152,"memdev":"memua-virtiomem","prealloc":true,"id":"ua-virtiomem","bus":"pci.0","addr":"0x6"}'
-object '{"qom-type":"memory-backend-ram","id":"memua-virtiomem2","reserve":false,"size":4294967296}'
-device '{"driver":"virtio-mem-pci","node":0,"block-size":2097152,"memdev":"memua-virtiomem2","id":"ua-virtiomem2","bus":"pci.1","addr":"0x9"}'
...
-device '{"driver":"pci-bridge","chassis_nr":1,"id":"pci.1","bus":"pci.0","addr":"0x9"}'
-device '{"driver":"piix3-usb-uhci","id":"usb","bus":"pci.0","addr":"0x1.0x2"}'
-device '{"driver":"lsi","id":"scsi0","bus":"pci.0","addr":"0x5"}'
-device '{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.0","addr":"0x7"}'

Therefore, when QEMU starts up and sees the first virtio-mem-pci device ("id":"ua-virtiomem"), it just creates it and continues to the next one (ua-virtiomem2), where it sees the "pci.1" bus, which does not exist yet at that point; the bus would be created a few arguments later. Let me see if a simple reorder fixes the problem (and think through all the implications).
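The ordering problem Michal describes can be modeled in miniature (this is an illustrative sketch, not libvirt's actual fix): QEMU processes -device arguments left to right, so any device plugging into "pci.1" must appear after the pci-bridge that creates "pci.1". A reorder that respects those dependencies looks like:

```python
# Simplified model of the bus-ordering problem: reorder a device list so
# every referenced bus already exists when its device is emitted.
# Built-in buses such as "pci.0" are assumed to exist from the start.
def reorder_devices(devices, builtin_buses=("pci.0",)):
    """devices: list of dicts with 'id', 'bus', and optional 'creates'."""
    known = set(builtin_buses)
    ordered, pending = [], list(devices)
    while pending:
        progressed = False
        for dev in list(pending):
            if dev["bus"] in known:
                ordered.append(dev)
                known.add(dev.get("creates", dev["id"]))
                pending.remove(dev)
                progressed = True
        if not progressed:
            raise ValueError("unresolvable bus reference")
    return ordered

devs = [
    {"id": "ua-virtiomem2", "bus": "pci.1"},              # fails if emitted first
    {"id": "pci.1", "bus": "pci.0", "creates": "pci.1"},  # the pci-bridge
]
print([d["id"] for d in reorder_devices(devs)])  # ['pci.1', 'ua-virtiomem2']
```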

Comment 9 Michal Privoznik 2022-01-27 13:40:00 UTC
Alright, after looking at the code, virtio-pmem suffers from the same issue. Therefore, let's track it in a different bug.

Comment 10 Jing Qi 2022-02-11 07:02:36 UTC
Tested S2 from comment 6 with libvirt-daemon-8.0.0-4.el9.x86_64 & qemu-kvm-6.2.0-7.el9.x86_64

S2. Attach the device and dump the XML to check that the memory device is attached correctly

#virsh attach-device rhel_i  virtio-mem.xml
Device attached successfully

virtio-mem.xml -


  <memory model='virtio-mem'>
      <source>
        <nodemask>0</nodemask>
        <pagesize unit='KiB'>2048</pagesize>
      </source>
      <target>
        <size unit='KiB'>131072</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>81920</requested>
        <current unit='KiB'>81920</current>
      </target>
      <alias name='virtiomem1'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </memory>

 virsh dumpxml - 
 <memory unit='KiB'>5505024</memory>
  <currentMemory unit='KiB'>5455872</currentMemory>

....
    <memory model='virtio-mem'>
      <source>
        <nodemask>0</nodemask>
        <pagesize unit='KiB'>2048</pagesize>
      </source>
      <target>
        <size unit='KiB'>131072</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>81920</requested>
        <current unit='KiB'>81920</current>
      </target>
      <alias name='virtiomem1'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </memory>
 
S3. Migrate a VM with the below virtio-mem device from RHEL 9 to RHEL 9

<memory model='virtio-mem'>
      <target>
        <size unit='KiB'>131072</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>81920</requested>
        <current unit='KiB'>81920</current>
      </target>
      <alias name='virtiomem1'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </memory>
 

 #virsh migrate rhel_i qemu+ssh://**.redhat.com/system --live --p2p


It succeeds, but there is still bug 2048022, related to hugepages.

Comment 11 Jing Qi 2022-02-14 07:15:01 UTC
Marking it verified according to comment 10.

Comment 12 Xuesong Zhang 2022-02-14 09:27:04 UTC
(In reply to Jing Qi from comment #10)
> [quoted text of comment 10 trimmed]
> It can succeed. But there is still a bug 2048022 related to hugepage.

Cloned qemu-kvm BZ 2048022 to libvirt TestOnly BZ 2054134; these issues will be tracked separately in BZ 2054134.

Comment 18 errata-xmlrpc 2022-05-17 12:45:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (new packages: libvirt), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2390

