Bug 2044172 - [RHEL9] Enable virtio-mem as tech-preview on ARM64 libvirt
Summary: [RHEL9] Enable virtio-mem as tech-preview on ARM64 libvirt
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.1
Hardware: aarch64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Michal Privoznik
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 2044162
Blocks: 1924294 2047797
 
Reported: 2022-01-24 07:28 UTC by Guowen Shan
Modified: 2023-02-16 22:21 UTC (History)
10 users

Fixed In Version: libvirt-8.3.0-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-02-16 22:21:09 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-109307 0 None None None 2022-01-24 07:31:26 UTC

Description Guowen Shan 2022-01-24 07:28:16 UTC
This bug was opened to track the virtio-mem enablement work on ARM64. There are
two separate bugs tracking the preparatory work for QEMU and the guest kernel:

Bug 2044162 [RHEL9] Enable virtio-mem as tech-preview on ARM64 QEMU
Bug 2044155 [RHEL9] Enable virtio-mem as tech-preview on ARM64 kernel

Comment 1 Guowen Shan 2022-04-06 01:10:55 UTC
This depends on the work to enable virtio-mem in QEMU, which is tracked
by bug 2044162.

Comment 2 Luiz Capitulino 2022-04-12 20:16:40 UTC
Gavin, David, Michal,

Is virtio-mem support in libvirt arch specific? Does ARM need any special treatment? If not, then maybe this BZ should be TestOnly?

Comment 3 Michal Privoznik 2022-04-13 07:27:39 UTC
Good point. There's nothing arch specific in libvirt. And quick glance over qemu code does not show any arch specific signs either. Making this TestOnly then.

Comment 4 David Hildenbrand 2022-04-13 10:49:08 UTC
At least nothing specific for virtio-mem-pci I think, so it should be fine for aarch64. Once we want to support s390x via virtio-mem-ccw, we'll need libvirt extensions.

Comment 5 Guowen Shan 2022-05-27 02:16:24 UTC
Michal, could you help set ITM/DTM so that QA can schedule time to
verify? The dependent patchsets have been merged.

  Bug 2044162 - [RHEL9.1] Enable virtio-mem as tech-preview on ARM64 QEMU
  (Fixed in qemu-kvm-7.0.0-2.el9)
  Bug 2044155 - [RHEL9.1] Enable virtio-mem as tech-preview on ARM64 kernel
  (Fixed in kernel-5.14.0-99.el9)

Thanks,
Gavin

Comment 6 Michal Privoznik 2022-05-27 06:00:37 UTC
Perfect, so the QEMU part is all done then. Since this is a TestOnly bug, it can be switched to ON_QA. Let me do that.

Comment 7 Yiding Liu (Fujitsu) 2022-06-02 06:00:29 UTC
Env:

Host:
libvirt-8.3.0-1.el9.aarch64
qemu-kvm-7.0.0-4.el9.aarch64
kernel-5.14.0-101.el9.aarch64

Guest:
kernel-5.14.0-101.el9.aarch64

Preparation:
Add the kernel option "memhp_default_state=online_movable"
to the guest kernel command line before testing.
```
[root@localhost ~]# grubby --info=/boot/vmlinuz-5.14.0-101.el9.aarch64
index=0
kernel="/boot/vmlinuz-5.14.0-101.el9.aarch64"
args="ro console=tty0 console=ttyS0,115200 reboot=pci biosdevname=0 crashkernel=2G-:448M rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap memhp_default_state=online_movable"
root="/dev/mapper/rhel-root"
initrd="/boot/initramfs-5.14.0-101.el9.aarch64.img"
title="Red Hat Enterprise Linux (5.14.0-101.el9.aarch64) 9.1 (Plow)"
id="68e17a52352b4af48625845bda73c680-5.14.0-101.el9.aarch64"
```

Case 1. Basic test with virtio-mem devices
s1. Boot guest with two numa nodes and two virtio-mem devices:
```
  <maxMemory slots='32' unit='KiB'>20971520</maxMemory>
  <memory unit='KiB'>20971520</memory>
  <currentMemory unit='KiB'>7340032</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <numatune>
    <memory mode='strict' nodeset='0-3'/>
  </numatune>
---snip---
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' dies='1' cores='4' threads='1'/>
    <feature policy='require' name='sve'/>
    <numa>
      <cell id='0' cpus='0-1' memory='2097152' unit='KiB' memAccess='shared'/>
      <cell id='1' cpus='2-3' memory='2097152' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
---snip---
  <devices>
---snip---
    <memory model='virtio-mem'>
      <target>
        <size unit='KiB'>8388608</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>1048576</requested>
        <current unit='KiB'>1048576</current>
      </target>
      <alias name='virtiomem0'/>
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
    </memory>
    <memory model='virtio-mem'>
      <target>
        <size unit='KiB'>8388608</size>
        <node>1</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>2097152</requested>
        <current unit='KiB'>2097152</current>
      </target>
      <alias name='virtiomem1'/>
      <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
    </memory>
  </devices>
```
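As a quick sanity check (not part of the original test log), the sizes in the XML above are self-consistent: maxMemory covers the two NUMA cells plus the full size of both virtio-mem devices, and currentMemory covers the cells plus the requested amounts. A sketch in Python:

```python
# Sanity-check the memory accounting in the domain XML above (all values in KiB).
max_memory = 20971520                     # <maxMemory>
current_memory = 7340032                  # <currentMemory>
numa_cells = [2097152, 2097152]           # the two <cell> sizes
virtiomem_sizes = [8388608, 8388608]      # <size> of each virtio-mem device
virtiomem_requested = [1048576, 2097152]  # <requested> of each device

# maxMemory = boot (NUMA cell) memory + full size of all virtio-mem devices
assert sum(numa_cells) + sum(virtiomem_sizes) == max_memory

# currentMemory = boot memory + currently requested virtio-mem memory
assert sum(numa_cells) + sum(virtiomem_requested) == current_memory

print("accounting OK")
```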

s2. Start the guest and check the QEMU command line and memory devices
```
-device {"driver":"virtio-mem-pci","node":0,"block-size":2097152,"requested-size":1073741824,"memdev":"memvirtiomem0","id":"virtiomem0","bus":"pci.8","addr":"0x0"} -object {"qom-type":"memory-backend-file","id":"memvirtiomem1","mem-path":"/dev/shm/libvirt/qemu/2-fj-kvm-vm/virtiomem1","share":true,"reserve":false,"size":8589934592,"host-nodes":[0,1,2,3],"policy":"bind"} -device {"driver":"virtio-mem-pci","node":1,"block-size":2097152,"requested-size":2147483648,"memdev":"memvirtiomem1","id":"virtiomem1","bus":"pci.9","addr":"0x0"} 
```
```
# virsh qemu-monitor-command --hmp fj-kvm-vm info memory-devices
Memory device [virtio-mem]: "virtiomem0"
  memaddr: 0x140000000
  node: 0
  requested-size: 1073741824
  size: 1073741824
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem0
Memory device [virtio-mem]: "virtiomem1"
  memaddr: 0x340000000
  node: 1
  requested-size: 2147483648
  size: 2147483648
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem1
```
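The byte values reported by info memory-devices correspond to the KiB values in the domain XML (1 KiB = 1024 bytes); a small check, added here for illustration:

```python
KIB = 1024

# XML values (KiB) -> values reported by 'info memory-devices' (bytes)
assert 1048576 * KIB == 1073741824   # virtiomem0 <requested> -> requested-size
assert 2097152 * KIB == 2147483648   # virtiomem1 <requested> -> requested-size
assert 8388608 * KIB == 8589934592   # <size> -> max-size
assert 2048 * KIB == 2097152         # <block> -> block-size

# requested-size must be a multiple of the device block size
for requested in (1073741824, 2147483648):
    assert requested % 2097152 == 0

print("unit conversions OK")
```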

s3. Check guest numa info
```
# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1
node 0 size: 2996 MB
node 0 free: 2821 MB
node 1 cpus: 2 3
node 1 size: 3532 MB
node 1 free: 3330 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10 
```

s4. Resize virtio-mem, grow both to maximum

# virsh update-memory-device fj-kvm-vm --live --alias virtiomem0 --requested-size 8G

# virsh update-memory-device fj-kvm-vm --live --alias virtiomem1 --requested-size 8G

# virsh qemu-monitor-command --hmp fj-kvm-vm info memory-devices
Memory device [virtio-mem]: "virtiomem0"
  memaddr: 0x140000000
  node: 0
  requested-size: 8589934592
  size: 8589934592
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem0
Memory device [virtio-mem]: "virtiomem1"
  memaddr: 0x340000000
  node: 1
  requested-size: 8589934592
  size: 8589934592
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem1

s5. Check guest memoryinfo
[root@localhost ~]# free -h
               total        used        free      shared  buff/cache   available
Mem:            19Gi       481Mi        18Gi       8.0Mi       139Mi        18Gi
Swap:          1.0Gi          0B       1.0Gi
[root@localhost ~]# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1
node 0 size: 10164 MB
node 0 free: 9863 MB
node 1 cpus: 2 3
node 1 size: 9676 MB
node 1 free: 9356 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10 

s6. Exercise the hot-added memory with 'numactl -m <NUMA NODE> <application>'
for two purposes: (a) verify the hot-added memory can be accessed, and (b)
verify that data on the hot-added memory can be migrated in the hot-remove
(shrink) scenario.

[root@localhost ~]# mkdir -p /tmp/numa_test
[root@localhost ~]# numactl -m 0 dd if=/dev/urandom of=/tmp/numa_test/test bs=1k count=5242880
5242880+0 records in
5242880+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 106.959 s, 50.2 MB/s
[root@localhost ~]# rm -rf /tmp/numa_test/*
[root@localhost ~]# numactl -m 1 dd if=/dev/urandom of=/tmp/numa_test/test bs=1k count=5242880
5242880+0 records in
5242880+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 98.4798 s, 54.5 MB/s
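Each dd run pins about 5 GiB on the chosen node (bs=1k times count); a quick arithmetic check, added for illustration:

```python
# dd writes `count` blocks of `bs` bytes; here bs=1k (1024 bytes).
bs, count = 1024, 5242880
total = bs * count

# Matches the byte count dd reports in the output above.
assert total == 5368709120

print(f"{total / 2**30:.1f} GiB written per run")
```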


# shrink virtio-mem devices to 0
# virsh update-memory-device fj-kvm-vm --live --alias virtiomem0 --requested-size 0G
# virsh update-memory-device fj-kvm-vm --live --alias virtiomem1 --requested-size 0G
# virsh qemu-monitor-command --hmp fj-kvm-vm info memory-devices
Memory device [virtio-mem]: "virtiomem0"
  memaddr: 0x140000000
  node: 0
  requested-size: 0
  size: 0
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem0
Memory device [virtio-mem]: "virtiomem1"
  memaddr: 0x340000000
  node: 1
  requested-size: 0
  size: 0
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem1

[root@localhost ~]# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1
node 0 size: 2026 MB
node 0 free: 1880 MB
node 1 cpus: 2 3
node 1 size: 1447 MB
node 1 free: 1288 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10 
[root@localhost ~]# free -h
               total        used        free      shared  buff/cache   available
Mem:           3.4Gi       189Mi       3.1Gi       8.0Mi       112Mi       3.0Gi
Swap:          1.0Gi          0B       1.0Gi

Comment 8 Yiding Liu (Fujitsu) 2022-06-02 06:36:20 UTC
Case2. Test hotplug virtio-mem device
Preparation: Same as above, but remove all virtio-mem devices from the guest.

s1. Start guest

s2. Prepare two virtio-mem device XML files
# cat virtio-mem0.xml 
<memory model='virtio-mem'>
  <target>
    <size unit='KiB'>8388608</size>
    <node>0</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>1048576</requested>
    <current unit='KiB'>1048576</current>
  </target>
  <alias name='virtiomem0'/>
</memory>
# cat virtio-mem1.xml 
<memory model='virtio-mem'>
  <target>
    <size unit='KiB'>8388608</size>
    <node>1</node>
    <block unit='KiB'>2048</block>
    <requested unit='KiB'>1048576</requested>
    <current unit='KiB'>1048576</current>
  </target>
  <alias name='virtiomem1'/>
</memory>

s3. Attach virtio-mem devices
# virsh attach-device fj-kvm-vm --file virtio-mem0.xml --persistent
Device attached successfully

# virsh attach-device fj-kvm-vm --file virtio-mem1.xml --persistent
Device attached successfully

# virsh qemu-monitor-command --hmp fj-kvm-vm info memory-devices
Memory device [virtio-mem]: "virtiomem0"
  memaddr: 0x140000000
  node: 0
  requested-size: 1073741824
  size: 1073741824
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem0
Memory device [virtio-mem]: "virtiomem1"
  memaddr: 0x340000000
  node: 1
  requested-size: 1073741824
  size: 1073741824
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem1

# guest dmesg info
---snip---
[  496.605819] pcieport 0000:00:01.7: pciehp: Slot(0-7): Attention button pressed
[  496.608639] pcieport 0000:00:01.7: pciehp: Slot(0-7) Powering on due to button press
[  496.611702] pcieport 0000:00:01.7: pciehp: Slot(0-7): Card present
[  496.614165] pcieport 0000:00:01.7: pciehp: Slot(0-7): Link Up
[  496.772990] pci 0000:08:00.0: [1af4:1058] type 00 class 0x00ff00
[  496.775269] pci 0000:08:00.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
[  496.779390] pci 0000:08:00.0: BAR 4: assigned [mem 0x8000e00000-0x8000e03fff 64bit pref]
[  496.781849] pcieport 0000:00:01.7: PCI bridge to [bus 08]
[  496.783585] pcieport 0000:00:01.7:   bridge window [io  0x8000-0x8fff]
[  496.791015] pcieport 0000:00:01.7:   bridge window [mem 0x10e00000-0x10ffffff]
[  496.796624] pcieport 0000:00:01.7:   bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
[  496.806395] virtio-pci 0000:08:00.0: enabling device (0000 -> 0002)
[  496.834460] virtio_mem virtio7: start address: 0x140000000
[  496.836065] virtio_mem virtio7: region size: 0x200000000
[  496.837585] virtio_mem virtio7: device block size: 0x200000
[  496.839184] virtio_mem virtio7: nid: 0
[  496.840278] virtio_mem virtio7: memory block size: 0x8000000
[  496.841926] virtio_mem virtio7: subblock size: 0x1000000
[  496.843777] TECH PREVIEW: virtio_mem may not be fully supported.
               Please review provided documentation for limitations.
[  496.847419] virtio_mem virtio7: plugged size: 0x0
[  496.848772] virtio_mem virtio7: requested size: 0x40000000
[  496.886942] Built 2 zonelists, mobility grouping on.  Total pages: 892517
[  496.888953] Policy zone: Normal
[  501.778392] pcieport 0000:00:02.0: pciehp: Slot(0-8): Attention button pressed
[  501.781314] pcieport 0000:00:02.0: pciehp: Slot(0-8) Powering on due to button press
[  501.784912] pcieport 0000:00:02.0: pciehp: Slot(0-8): Card present
[  501.787276] pcieport 0000:00:02.0: pciehp: Slot(0-8): Link Up
[  501.943002] pci 0000:09:00.0: [1af4:1058] type 00 class 0x00ff00
[  501.945271] pci 0000:09:00.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
[  501.949366] pci 0000:09:00.0: BAR 4: assigned [mem 0x8001000000-0x8001003fff 64bit pref]
[  501.951838] pcieport 0000:00:02.0: PCI bridge to [bus 09]
[  501.953620] pcieport 0000:00:02.0:   bridge window [io  0x9000-0x9fff]
[  501.960109] pcieport 0000:00:02.0:   bridge window [mem 0x11000000-0x111fffff]
[  501.965271] pcieport 0000:00:02.0:   bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
[  501.974220] virtio-pci 0000:09:00.0: enabling device (0000 -> 0002)
[  501.983759] virtio_mem virtio8: start address: 0x340000000
[  501.985337] virtio_mem virtio8: region size: 0x200000000
[  501.986858] virtio_mem virtio8: device block size: 0x200000
[  501.988450] virtio_mem virtio8: nid: 1
[  501.989763] virtio_mem virtio8: memory block size: 0x8000000
[  501.991410] virtio_mem virtio8: subblock size: 0x1000000
[  501.993524] virtio_mem virtio8: plugged size: 0x0
[  501.994884] virtio_mem virtio8: requested size: 0x40000000
[  502.042476] Built 2 zonelists, mobility grouping on.  Total pages: 1154662
[  502.044819] Policy zone: Normal


s4. Resize virtio-mem, grow both to maximum
# virsh update-memory-device fj-kvm-vm --live --alias virtiomem0 --requested-size 8G

# virsh update-memory-device fj-kvm-vm --live --alias virtiomem1 --requested-size 8G

# virsh qemu-monitor-command --hmp fj-kvm-vm info memory-devices
Memory device [virtio-mem]: "virtiomem0"
  memaddr: 0x140000000
  node: 0
  requested-size: 8589934592
  size: 8589934592
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem0
Memory device [virtio-mem]: "virtiomem1"
  memaddr: 0x340000000
  node: 1
  requested-size: 8589934592
  size: 8589934592
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem1

[root@localhost ~]# free -h
               total        used        free      shared  buff/cache   available
Mem:            19Gi       481Mi        18Gi       8.0Mi       220Mi        18Gi
Swap:          1.0Gi          0B       1.0Gi
[root@localhost ~]# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1
node 0 size: 10164 MB
node 0 free: 9810 MB
node 1 cpus: 2 3
node 1 size: 9676 MB
node 1 free: 9328 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10 


s5. Exercise the hot-added memory with 'numactl -m <NUMA NODE> <application>'
for two purposes: (a) verify the hot-added memory can be accessed, and (b)
verify that data on the hot-added memory can be migrated in the hot-remove
(shrink) scenario.

[root@localhost ~]# numactl -m 0 dd if=/dev/urandom of=/tmp/numa_test/test bs=1k count=5242880
5242880+0 records in
5242880+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 104.339 s, 51.5 MB/s
[root@localhost ~]# rm -rf /tmp/numa_test/*
[root@localhost ~]# numactl -m 1 dd if=/dev/urandom of=/tmp/numa_test/test bs=1k count=5242880
5242880+0 records in
5242880+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 105.022 s, 51.1 MB/s
[root@localhost ~]# memhog -r10 4G --membind 0
[root@localhost ~]# memhog -r10 4G --membind 1

# virsh update-memory-device fj-kvm-vm --live --alias virtiomem0 --requested-size 0G

# virsh update-memory-device fj-kvm-vm --live --alias virtiomem1 --requested-size 0G

# virsh qemu-monitor-command --hmp fj-kvm-vm info memory-devices
Memory device [virtio-mem]: "virtiomem0"
  memaddr: 0x140000000
  node: 0
  requested-size: 0
  size: 134217728
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem0
Memory device [virtio-mem]: "virtiomem1"
  memaddr: 0x340000000
  node: 1
  requested-size: 0
  size: 4697620480
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem1
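Unlike in Case 1, the devices here did not shrink all the way to 0 after requesting 0: virtio-mem unplug is best-effort, and blocks that cannot be freed at that moment stay plugged. The leftover sizes are still whole multiples of the 2 MiB device block size, as this illustrative check (not part of the original log) shows:

```python
BLOCK_SIZE = 2097152  # block-size reported by 'info memory-devices' (bytes)

# Sizes left plugged after requesting 0 (from the output above).
leftover = {"virtiomem0": 134217728, "virtiomem1": 4697620480}

for name, size in leftover.items():
    blocks = size // BLOCK_SIZE
    # Leftover plugged memory is always a whole number of device blocks.
    assert blocks * BLOCK_SIZE == size
    print(f"{name}: {blocks} blocks ({size / 2**30:.2f} GiB) still plugged")
```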

[root@localhost ~]# free -h
               total        used        free      shared  buff/cache   available
Mem:           3.4Gi       227Mi       3.0Gi       8.0Mi       140Mi       3.0Gi
Swap:          1.0Gi          0B       1.0Gi
[root@localhost ~]# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1
node 0 size: 1988 MB
node 0 free: 1762 MB
node 1 cpus: 2 3
node 1 size: 1484 MB
node 1 free: 1365 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10

Comment 9 Yiding Liu (Fujitsu) 2022-06-02 06:56:42 UTC
Case3. Test with virtio-mem + vIOMMU

Emm... I don't know how to set virtio-mem + iommu in libvirt xml.

I tried common way to protest virtio-mem with iommu like
```
153     <memory model='virtio-mem'>
154       <target>
155         <size unit='KiB'>8388608</size>
156         <node>0</node>
157         <block unit='KiB'>2048</block>
158         <requested unit='KiB'>1048576</requested>
159       </target>
160       <driver iommu='on'>
161       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
162     </memory>
```
But it can't pass libvirt xml format check. Does the libvirt support set iommu
for memory device?

Comment 10 Yiding Liu (Fujitsu) 2022-06-02 06:58:54 UTC
(In reply to Yiding Liu (Fujitsu) from comment #9)
> I tried common way to protest virtio-mem with iommu like

s/protest/protect/g

The basic functions of virtio-mem work, so setting this to VERIFIED.

Comment 11 Michal Privoznik 2022-06-02 13:36:27 UTC
(In reply to Yiding Liu (Fujitsu) from comment #9)

> But it can't pass libvirt xml format check. Does the libvirt support set
> iommu
> for memory device?

No. I haven't implemented that. Is that needed?

Comment 12 Yiding Liu (Fujitsu) 2022-06-06 01:37:39 UTC
(In reply to Michal Privoznik from comment #11)
> (In reply to Yiding Liu (Fujitsu) from comment #9)
> 
> > But it can't pass libvirt xml format check. Does the libvirt support set
> > iommu
> > for memory device?
> 
> No. I haven't implemented that. Is that needed?

Hi, Michal.

qemu-kvm already supports it, so I think it is needed in libvirt too.
BTW, Fujitsu has no requirement for it; any priority is OK with us if you implement it in libvirt.

@Gavin, what do you think?

Comment 13 Yiding Liu (Fujitsu) 2022-06-08 03:03:20 UTC
Case3. virtio-iommu + virtio-mem test

I checked upstream libvirt, and upstream also doesn't support setting iommu
for virtio-mem devices.



1. Add a virtio-iommu device and virtio-mem devices
```
    <memory model='virtio-mem'>
      <target>         
        <size unit='KiB'>8388608</size>
        <node>0</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>1048576</requested>
      </target>        
      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
    </memory>          
    <memory model='virtio-mem'>
      <target>         
        <size unit='KiB'>8388608</size>
        <node>1</node>
        <block unit='KiB'>2048</block>
        <requested unit='KiB'>2097152</requested>
      </target>        
      <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
    </memory>          
    <iommu model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </iommu>    

```

2. Start guest
Check dmesg and pci device
```
[root@localhost ~]# lspci
--snip---
08:00.0 Unclassified device [00ff]: Red Hat, Inc. Device 1058 (rev 01)
09:00.0 Unclassified device [00ff]: Red Hat, Inc. Device 1058 (rev 01)

[root@localhost ~]# dmesg | grep iommu
---snip---
[    3.333485] virtio-pci 0000:08:00.0: Adding to iommu group 1
[    3.342953] virtio-pci 0000:09:00.0: Adding to iommu group 2
```

3. Test virtio-mem device
```
[root@hpe-apollo80-01-n01 ~]# virsh qemu-monitor-command --hmp fj-kvm-vm info memory-devices
Memory device [virtio-mem]: "virtiomem0"
  memaddr: 0x140000000
  node: 0
  requested-size: 1073741824
  size: 1073741824
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem0
Memory device [virtio-mem]: "virtiomem1"
  memaddr: 0x340000000
  node: 1
  requested-size: 2147483648
  size: 2147483648
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem1


[root@hpe-apollo80-01-n01 ~]# virsh update-memory-device fj-kvm-vm --live --alias virtiomem0 --requested-size 8G

[root@hpe-apollo80-01-n01 ~]# virsh update-memory-device fj-kvm-vm --live --alias virtiomem1 --requested-size 8G

[root@localhost ~]# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1
node 0 size: 3034 MB
node 0 free: 2817 MB
node 1 cpus: 2 3
node 1 size: 3495 MB
node 1 free: 3340 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10 
[root@localhost ~]# free -h
               total        used        free      shared  buff/cache   available
Mem:            19Gi       514Mi        18Gi       8.0Mi       141Mi        18Gi
Swap:          1.0Gi          0B       1.0Gi
[root@localhost ~]# mkdir -p /tmp/numa_test
[root@localhost ~]# numactl -m 0 dd if=/dev/urandom of=/tmp/numa_test/test bs=1k count=5242880
5242880+0 records in
5242880+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 110.641 s, 48.5 MB/s
[root@localhost ~]# rm -rf /tmp/numa_test/*
[root@localhost ~]# numactl -m 1 dd if=/dev/urandom of=/tmp/numa_test/test bs=1k count=5242880
5242880+0 records in
5242880+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 107.677 s, 49.9 MB/s
[root@localhost ~]# memhog -r10 4G --membind 0
[root@localhost ~]# memhog -r10 4G --membind 1

[root@hpe-apollo80-01-n01 ~]# virsh update-memory-device fj-kvm-vm --live --alias virtiomem0 --requested-size 0G

[root@hpe-apollo80-01-n01 ~]# virsh update-memory-device fj-kvm-vm --live --alias virtiomem1 --requested-size 0G

[root@hpe-apollo80-01-n01 ~]# virsh qemu-monitor-command --hmp fj-kvm-vm info memory-devices
Memory device [virtio-mem]: "virtiomem0"
  memaddr: 0x140000000
  node: 0
  requested-size: 0
  size: 2147483648
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem0
Memory device [virtio-mem]: "virtiomem1"
  memaddr: 0x340000000
  node: 1
  requested-size: 0
  size: 5368709120
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/memvirtiomem1

[root@localhost ~]# free -h
               total        used        free      shared  buff/cache   available
Mem:           3.6Gi       212Mi       2.6Gi       8.0Mi       742Mi       3.1Gi
Swap:          1.0Gi          0B       1.0Gi
[root@localhost ~]# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1
node 0 size: 2202 MB
node 0 free: 2001 MB
node 1 cpus: 2 3
node 1 size: 1447 MB
node 1 free: 693 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10 

```

Comment 14 Guowen Shan 2022-06-08 09:23:07 UTC
Yiding, thanks for your further testing and checking. The result
is as I expected. As we discussed in today's meeting,
let's create a separate bug to support virtio-iommu for
virtio-mem devices, if Michal agrees. Michal, please let Yiding
know your preference.

Comment 15 Michal Privoznik 2022-06-08 15:16:45 UTC
(In reply to Guowen Shan from comment #14)
> Yiding, thanks for your further testing and checking. The result
> is as I expected. As we discussed in today's meeting,
> let's create a separate bug to support virtio-iommu for
> virtio-mem devices, if Michal agrees. Michal, please let Yiding
> know your preference.

Yep, I agree. The basic functionality is in RHEL-9 and that's what this bug reflects.

Comment 16 Yiding Liu (Fujitsu) 2022-06-09 02:18:13 UTC
(In reply to Michal Privoznik from comment #15)
> (In reply to Guowen Shan from comment #14)
> > Yiding, thanks for your further testing and checking. The result
> > is as I expected. As we discussed in today's meeting,
> > let's create a separate bug to support virtio-iommu for
> > virtio-mem devices, if Michal agrees. Michal, please let Yiding
> > know your preference.
> 
> Yep, I agree. The basic functionality is in RHEL-9 and that's what this bug
> reflects.

Done.
https://bugzilla.redhat.com/show_bug.cgi?id=2095091

Comment 23 Yash Mankad 2023-02-16 22:21:09 UTC
Closing as CURRENTRELEASE, as RHEL 9.1 GA'ed in November 2022.

