Bug 2143838

Summary: vdpa device and vfio device should not share the locked memory
Product: Red Hat Enterprise Linux 9
Reporter: yalzhang <yalzhang>
Component: libvirt
Assignee: Jonathon Jongsma <jjongsma>
libvirt sub component: Networking
QA Contact: yalzhang <yalzhang>
Status: CLOSED ERRATA
Docs Contact:
Severity: unspecified
Priority: unspecified
CC: dzheng, jdenemar, jjongsma, jsuchane, lmen, lvivier, virt-maint, yicui
Version: 9.2
Keywords: Triaged
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: libvirt-8.10.0-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-05-09 07:27:15 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version: 8.10.0
Embargoed:

Description yalzhang@redhat.com 2022-11-18 03:34:05 UTC
Description of problem:
vdpa device and vfio device should not share the locked memory

Version-Release number of selected component (if applicable):
libvirt-8.9.0-2.el9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Start the vm with a vfio device (the vm has no vIOMMU setting), and check the memlock limit:
# virsh dumpxml rhel | grep -i currentmemory
  <currentMemory unit='KiB'>2097152</currentMemory>

# virsh domiflist rhel
 Interface   Type      Source   Model   MAC
-----------------------------------------------------------
 -           hostdev   -        -       52:54:00:fb:2c:b4

# virsh start rhel
Domain 'rhel' started

Check the memlock limit; as expected, it is 1 GiB + current memory (1 GiB + 2 GiB = 3 GiB):
# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

2. Hotplug a vdpa interface, and check the memlock limit:
# cat vdpa1.xml
<interface type='vdpa'>
  <mac address='00:11:22:33:44:11'/>
  <source dev='/dev/vhost-vdpa-1'/>
  <model type='virtio'/>
  <driver queues='8'/>
</interface>

# virsh attach-device rhel vdpa1.xml
Device attached successfully

Check that the memlock limit is not updated; it stays at 3 GiB:
# virsh domiflist rhel
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           hostdev   -                   -        52:54:00:fb:2c:b4
 -           vdpa      /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

Actual results:
In step 2, after hotplugging a vdpa interface, the memlock limit is not updated.

Expected results:
In step 2, the memlock limit should be updated to 5 GiB (the 2 GiB of guest memory must be locked separately for the vdpa device, on top of the existing 3 GiB).

Additional info:
This is confirmed in bug 2111317, comment 31.

Comment 1 Jonathon Jongsma 2022-11-18 16:47:07 UTC
fix posted upstream: https://listman.redhat.com/archives/libvir-list/2022-November/235825.html

Comment 2 Jaroslav Suchanek 2022-11-28 12:27:44 UTC
Fixed by:

commit 2a2d5860435909f5619725a6c29583db90aa789b
Author:     Jonathon Jongsma <jjongsma>
AuthorDate: Thu Nov 17 12:15:23 2022 -0600
Commit:     Jonathon Jongsma <jjongsma>
CommitDate: Mon Nov 21 15:37:41 2022 -0600

    qemu: fix memlock without vIOMMU
    
    When there is no vIOMMU, vfio devices don't need to lock the entire guest
    memory per-device, but they still need to lock the entire guest memory to
    share between all vfio devices. This memory accounting is not shared
    with vDPA devices, so it should be added to the memlock limit separately.
    
    Commit 8d5704e2 added support for multiple vfio/vdpa devices but
    calculated the limits incorrectly when there were both vdpa and vfio
    devices and no vIOMMU. In this case, the memory lock limit was not
    increased separately for the vfio devices.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2143838
    
    Signed-off-by: Jonathon Jongsma <jjongsma>
    Reviewed-by: Laine Stump <laine>

v8.9.0-277-g2a2d586043
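The corrected accounting can be sketched as follows. This is a minimal Python sketch inferred from the commit message and the limits observed in this report, not libvirt's actual code; `memlock_limit` is a hypothetical helper, and the no-vIOMMU case is assumed throughout:

```python
GiB = 1024 ** 3

def memlock_limit(mem_bytes, nvfio, nvdpa):
    """Expected MEMLOCK limit without a vIOMMU: each vDPA device locks
    the full guest memory on its own, all vfio devices together share a
    single such lock, and 1 GiB is added on top for IO space."""
    factor = nvdpa
    if nvfio > 0:
        factor += 1  # all vfio devices share one guest-memory lock
    if factor == 0:
        return None  # no passthrough devices: the default rlimit applies
    return factor * mem_bytes + GiB

# 2 GiB guest with one vfio hostdev -> 3 GiB (comment 0, step 1)
assert memlock_limit(2 * GiB, nvfio=1, nvdpa=0) == 3221225472
# ... plus one vdpa interface -> 5 GiB (comment 0, expected result)
assert memlock_limit(2 * GiB, nvfio=1, nvdpa=1) == 5368709120
```

The buggy pre-8.10.0 behavior corresponds to leaving the vfio contribution out of `factor` once any vdpa device is present, which is why step 2 of the reproducer stayed at 3 GiB.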

Comment 3 yalzhang@redhat.com 2022-12-02 07:38:28 UTC
Tested on libvirt-8.10.0-1.el9.x86_64 with the scenario from comment 0; the result is as expected.

1. Start the vm with a hostdev interface (without a vIOMMU device):
# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           hostdev   -                   -        52:54:00:b7:2c:01

2. Check the memlock limit; it is as expected (2 GiB + 1 GiB = 3 GiB):
# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

3. Hotplug a vdpa interface and check that the memlock limit is raised to 5 GiB:
# virsh attach-device rhel vdpa0.xml  
Device attached successfully

# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           hostdev   -                   -        52:54:00:b7:2c:01
 -           vdpa      /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

Comment 6 yalzhang@redhat.com 2022-12-08 09:46:39 UTC
Tested on libvirt-8.10.0-2.el9.x86_64:

1. Start the vm with a vdpa interface and 2 GiB of memory:
# virsh start rhel 
Domain 'rhel' started

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

2. Hotplug a hostdev interface and check the memlock limit; as expected, it is raised to 5 GiB:
# virsh attach-interface rhel hostdev --managed 0000:3b:02.0
Interface attached successfully

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

3. Hotplug one more hostdev interface and check the memlock limit; as expected, it stays at 5 GiB, since vfio devices share the locked memory:
# virsh attach-interface rhel hostdev --managed 0000:3b:02.1
Interface attached successfully

# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           vdpa      /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11
 -           hostdev   -                   -        52:54:00:7f:05:3c
 -           hostdev   -                   -        52:54:00:f2:d6:10

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

4. Hotplug one more vdpa interface; as expected, the limit is raised to 7 GiB:
# virsh attach-device rhel vdpa2.xml 
Device attached successfully

# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           vdpa      /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11
 -           hostdev   -                   -        52:54:00:7f:05:3c
 -           hostdev   -                   -        52:54:00:f2:d6:10
 -           vdpa      /dev/vhost-vdpa-2   virtio   00:11:22:33:44:22

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 7516192768 7516192768 bytes
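The progression above matches the shared-vfio accounting. Using the same formula inferred from the fix commit (a sketch under the no-vIOMMU assumption, not libvirt's actual code), the limit for a 2 GiB guest moves 3 → 5 → 5 → 7 GiB as devices are added:

```python
GiB = 1024 ** 3

def expected_limit(mem, nvfio, nvdpa):
    # Inferred accounting (no vIOMMU): each vdpa device locks the guest
    # memory separately; all vfio devices share a single lock; +1 GiB.
    shared_vfio = 1 if nvfio else 0
    return (nvdpa + shared_vfio) * mem + GiB

steps = [
    (0, 1),  # step 1: one vdpa interface      -> 3 GiB
    (1, 1),  # step 2: + one hostdev           -> 5 GiB
    (2, 1),  # step 3: + a second hostdev      -> 5 GiB (shared)
    (2, 2),  # step 4: + a second vdpa device  -> 7 GiB
]
limits = [expected_limit(2 * GiB, nvfio, nvdpa) for nvfio, nvdpa in steps]
assert limits == [3 * GiB, 5 * GiB, 5 * GiB, 7 * GiB]
```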

Comment 7 yalzhang@redhat.com 2022-12-08 09:53:13 UTC
Start the vm with both vdpa and hostdev interfaces and check that the result is as expected:

1. Configure the vm with 2 GiB of memory and the following interfaces:
# virsh dumpxml rhel 
......
<interface type="vdpa">
  <mac address="00:11:22:33:44:11"/>
  <source dev="/dev/vhost-vdpa-1"/>
  <model type="virtio"/>
  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
</interface>
......
<hostdev mode="subsystem" type="pci" managed="yes">
  <source>
    <address domain="0x0000" bus="0x3b" slot="0x02" function="0x2"/>
  </source>
  <address type="pci" domain="0x0000" bus="0x08" slot="0x00" function="0x0"/>
</hostdev>
......

2. Start the vm and check the memlock limit; the result is as expected (5 GiB):
# virsh start rhel 
Domain 'rhel' started

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

Comment 8 Laurent Vivier 2023-01-23 15:33:01 UTC
What is the status of this BZ?
Should it be closed?

Comment 9 Jonathon Jongsma 2023-01-23 16:45:09 UTC
The issue is already fixed in the noted package version and has been verified by QE. As far as I know, the errata system will automatically close the bug when the package is shipped.

Comment 11 errata-xmlrpc 2023-05-09 07:27:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (libvirt bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2171