(In reply to Rick Barry from comment #2)
> Jean-Tsung, can you provide a core dump or stack trace? Is the problem
> reproducible?
It is 100% reproducible as long as you have two hostdev interfaces. Please check my guest's XML file.
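For reference, the hostdev-type interfaces in the domain XML look roughly like the sketch below (two of them in my case; the PCI source addresses are illustrative placeholders, not the actual VF addresses on my host):
<interface type='hostdev' managed='yes'>
  <source>
    <!-- illustrative VF address; replace with the host's actual VF -->
    <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x0'/>
  </source>
</interface>
<interface type='hostdev' managed='yes'>
  <source>
    <!-- illustrative VF address; replace with the host's actual VF -->
    <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x1'/>
  </source>
</interface>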
The errors are vfio DMA mapping failures, but you've already got:
<memtune>
  <hard_limit unit='KiB'>16777216</hard_limit>
</memtune>
Where VM memory is:
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
If you increase the hard_limit further, does the issue go away? This might just be the known issue in bz1619734. libvirt typically sets the locked memory limit of a non-viommu VM to RAM + 1G, whereas it's set to exactly 2x RAM here. We expect to need 1x RAM per assigned device with viommu, but we're missing that "fudge factor". Suggest a hard_limit of at least 17825792 KiB.
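That value works out to 1x RAM per assigned device plus the usual 1G fudge factor: 2 x 8388608 KiB + 1048576 KiB = 17825792 KiB. Assuming that formula holds, the memtune section would look something like:
<memtune>
  <!-- 2 assigned devices x 8388608 KiB RAM + 1048576 KiB fudge factor -->
  <hard_limit unit='KiB'>17825792</hard_limit>
</memtune>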
(In reply to Alex Williamson from comment #7)
> The errors are vfio DMA mapping failures, but you've already got:
>
> <memtune>
>   <hard_limit unit='KiB'>16777216</hard_limit>
> </memtune>
>
> Where VM memory is:
>
> <memory unit='KiB'>8388608</memory>
> <currentMemory unit='KiB'>8388608</currentMemory>
>
> If you increase the hard_limit further, does the issue go away? This
> might just be the known issue in bz1619734. libvirt typically sets the
> locked memory limit of a non-viommu VM to RAM + 1G, whereas it's set to
> exactly 2x RAM here. We expect to need 1x RAM per assigned device with
> viommu, but we're missing that "fudge factor". Suggest a hard_limit of
> at least 17825792 KiB.
Yes, with 17825792 the guest is now running with two vIOMMU SR-IOV interfaces. I'll run the testpmd traffic test next.
I believe the immediate issue is resolved by manually increasing the hard limit for the VM to account for the duplicate locked memory per assigned device. Marking as a duplicate of bug 1619734, which aims to provide a more optimal solution for this issue.
*** This bug has been marked as a duplicate of bug 1619734 ***