Bug 2143838 - vdpa device and vfio device should not share the locked memory
Summary: vdpa device and vfio device should not share the locked memory
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jonathon Jongsma
QA Contact: yalzhang@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-11-18 03:34 UTC by yalzhang@redhat.com
Modified: 2023-05-09 08:09 UTC (History)
CC List: 8 users

Fixed In Version: libvirt-8.10.0-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-05-09 07:27:15 UTC
Type: Bug
Target Upstream Version: 8.10.0
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker LIBVIRTAT-13872 0 None None None 2023-01-17 08:37:16 UTC
Red Hat Issue Tracker RHELPLAN-139878 0 None None None 2022-11-18 03:43:02 UTC
Red Hat Product Errata RHBA-2023:2171 0 None None None 2023-05-09 07:27:30 UTC

Description yalzhang@redhat.com 2022-11-18 03:34:05 UTC
Description of problem:
vdpa device and vfio device should not share the locked memory

Version-Release number of selected component (if applicable):
libvirt-8.9.0-2.el9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Start the VM with a vfio device (the VM has no vIOMMU configured), and check the memlock limit:
# virsh dumpxml rhel | grep -i currentmemory
  <currentMemory unit='KiB'>2097152</currentMemory>

# virsh domiflist rhel
 Interface   Type      Source   Model   MAC
-----------------------------------------------------------
 -           hostdev   -        -       52:54:00:fb:2c:b4

# virsh start rhel
Domain 'rhel' started

Check the memlock limit; as expected, it is "current memory + 1 GiB" (2 GiB + 1 GiB = 3 GiB):
# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes
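The value shown matches the expected formula. As a quick sanity check (a sketch; the variable names are illustrative):

```shell
# Without vIOMMU, all vfio devices share one locked-memory account:
# expected RLIMIT_MEMLOCK = guest memory + 1 GiB, regardless of how
# many vfio devices are attached.
mem_kib=2097152                                     # <currentMemory unit='KiB'>
expected=$(( mem_kib * 1024 + 1024 * 1024 * 1024 ))
echo "$expected"    # 3221225472, matching the prlimit output above
```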

2. Hotplug a vdpa interface, and check the memlock limit:
# cat vdpa1.xml
<interface type='vdpa'>
      <mac address='00:11:22:33:44:11'/>
      <source dev='/dev/vhost-vdpa-1'/>
      <model type='virtio'/>
      <driver queues='8'/>
    </interface>

# virsh attach-device rhel vdpa1.xml
Device attached successfully

Check the memlock limit: it is not updated and remains 3 GiB:
# virsh domiflist rhel
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           hostdev   -                   -        52:54:00:fb:2c:b4
 -           vdpa      /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

Actual results:
In step 2, after hotplugging a vdpa interface, the memlock limit is not updated.

Expected results:
In step 2, the memlock limit should be updated to 5 GiB.

Additional info:
This was confirmed in bug 2111317, comment 31.

Comment 1 Jonathon Jongsma 2022-11-18 16:47:07 UTC
fix posted upstream: https://listman.redhat.com/archives/libvir-list/2022-November/235825.html

Comment 2 Jaroslav Suchanek 2022-11-28 12:27:44 UTC
Fixed by:

commit 2a2d5860435909f5619725a6c29583db90aa789b
Author:     Jonathon Jongsma <jjongsma>
AuthorDate: Thu Nov 17 12:15:23 2022 -0600
Commit:     Jonathon Jongsma <jjongsma>
CommitDate: Mon Nov 21 15:37:41 2022 -0600

    qemu: fix memlock without vIOMMU
    
    When there is no vIOMMU, vfio devices don't need to lock the entire guest
    memory per-device, but they still need to lock the entire guest memory to
    share between all vfio devices. This memory accounting is not shared
    with vDPA devices, so it should be added to the memlock limit separately.
    
    Commit 8d5704e2 added support for multiple vfio/vdpa devices but
    calculated the limits incorrectly when there were both vdpa and vfio
    devices and no vIOMMU. In this case, the memory lock limit was not
    increased separately for the vfio devices.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2143838
    
    Signed-off-by: Jonathon Jongsma <jjongsma>
    Reviewed-by: Laine Stump <laine>

v8.9.0-277-g2a2d586043
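The limits observed throughout this report are consistent with the following accounting, sketched here as a helper function (inferred from the reported values, not libvirt's actual code): without vIOMMU, all vfio devices together contribute one share of guest memory, each vdpa device contributes its own share, and a fixed 1 GiB is added on top.

```shell
# Sketch of the corrected memlock accounting without vIOMMU (inferred
# from the limits in this report; the helper name is illustrative):
#   limit = 1 GiB + guest_mem * (n_vdpa + (any vfio present ? 1 : 0))
memlock_limit() {
    local mem_bytes=$1 n_vfio=$2 n_vdpa=$3
    local units=$n_vdpa
    [ "$n_vfio" -gt 0 ] && units=$(( units + 1 ))
    echo $(( 1024 * 1024 * 1024 + mem_bytes * units ))
}

gib=$(( 1024 * 1024 * 1024 ))
memlock_limit $(( 2 * gib )) 1 0   # 1 vfio, no vdpa:  3221225472 (3 GiB)
memlock_limit $(( 2 * gib )) 1 1   # 1 vfio, 1 vdpa:   5368709120 (5 GiB)
memlock_limit $(( 2 * gib )) 2 2   # 2 vfio, 2 vdpa:   7516192768 (7 GiB)
```

These three cases match the prlimit values in the description and in comments 3, 6, and 7 for a 2 GiB guest.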

Comment 3 yalzhang@redhat.com 2022-12-02 07:38:28 UTC
Tested on libvirt-8.10.0-1.el9.x86_64 with the scenario from the description; the result is as expected.

1. Start the VM with a hostdev interface (without an iommu device):
# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           hostdev   -                   -        52:54:00:b7:2c:01

2. Check the memlock limit; it is as expected:
# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

3. Hotplug a vdpa interface:
# virsh attach-device rhel vdpa0.xml  
Device attached successfully

# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           hostdev   -                   -        52:54:00:b7:2c:01
 -           vdpa      /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

Comment 6 yalzhang@redhat.com 2022-12-08 09:46:39 UTC
Tested on libvirt-8.10.0-2.el9.x86_64:

1. Start the VM with a vdpa interface and 2 GiB of memory:
# virsh start rhel 
Domain 'rhel' started

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

2. Hotplug a hostdev interface and check the memlock limit; it is as expected:
# virsh attach-interface rhel hostdev --managed 0000:3b:02.0
Interface attached successfully

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

3. Hotplug one more hostdev interface and check the memlock limit; it is as expected (unchanged, since vfio devices share the locked memory):
# virsh attach-interface rhel hostdev --managed 0000:3b:02.1
Interface attached successfully

# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           vdpa      /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11
 -           hostdev   -                   -        52:54:00:7f:05:3c
 -           hostdev   -                   -        52:54:00:f2:d6:10

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

4. Hotplug one more vdpa interface; the result is as expected:
# virsh attach-device rhel vdpa2.xml 
Device attached successfully

# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           vdpa      /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11
 -           hostdev   -                   -        52:54:00:7f:05:3c
 -           hostdev   -                   -        52:54:00:f2:d6:10
 -           vdpa      /dev/vhost-vdpa-2   virtio   00:11:22:33:44:22

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 7516192768 7516192768 bytes

Comment 7 yalzhang@redhat.com 2022-12-08 09:53:13 UTC
Start the VM with both vdpa and hostdev interfaces and check that the result is as expected:

1. Configure the VM with 2 GiB of memory and the following interfaces:
# virsh dumpxml rhel 
......
<interface type="vdpa">
  <mac address="00:11:22:33:44:11"/>
  <source dev="/dev/vhost-vdpa-1"/>
  <model type="virtio"/>
  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
</interface>
......
<hostdev mode="subsystem" type="pci" managed="yes">
  <source>
    <address domain="0x0000" bus="0x3b" slot="0x02" function="0x2"/>
  </source>
  <address type="pci" domain="0x0000" bus="0x08" slot="0x00" function="0x0"/>
</hostdev>
.....

2. Start the VM and check the memlock limit; the result is as expected:
# virsh start rhel 
Domain 'rhel' started

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

Comment 8 Laurent Vivier 2023-01-23 15:33:01 UTC
What is the status of this BZ?
Should it be closed?

Comment 9 Jonathon Jongsma 2023-01-23 16:45:09 UTC
The issue is already fixed in the noted package version and has been verified by QE. As far as I know, the errata system will automatically close the bug when the package is shipped.

Comment 11 errata-xmlrpc 2023-05-09 07:27:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (libvirt bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2171

