Bug 1718124
Summary: [qemu] qemu-kvm core dumped while starting up an iommu guest with dual SRIOV interfaces

| Field | Value | Field | Value |
|---|---|---|---|
| Product | Red Hat Enterprise Linux 8 | Reporter | Jean-Tsung Hsiao <jhsiao> |
| Component | qemu-kvm | Assignee | Alex Williamson <alex.williamson> |
| Status | CLOSED DUPLICATE | QA Contact | Jean-Tsung Hsiao <jhsiao> |
| Severity | unspecified | Docs Contact | |
| Priority | unspecified | | |
| Version | --- | CC | chayang, ctrautma, jhsiao, juzhang, kzhang, peterx, pezhang, rbalakri, ribarry, tli, virt-maint |
| Target Milestone | rc | | |
| Target Release | 8.0 | | |
| Hardware | Unspecified | | |
| OS | Unspecified | | |
| Whiteboard | | | |
| Fixed In Version | | Doc Type | If docs needed, set a value |
| Doc Text | | Story Points | --- |
| Clone Of | | Environment | |
| Last Closed | 2019-07-09 19:10:52 UTC | Type | Bug |
| Regression | --- | Mount Type | --- |
| Documentation | --- | CRM | |
| Verified Versions | | Category | --- |
| oVirt Team | --- | RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- | Target Upstream Version | |
| Embargoed | | | |
Description

Jean-Tsung Hsiao, 2019-06-07 01:58:24 UTC

guest's xml file: http://pastebin.test.redhat.com/769741

Rick Barry (comment #2)

Jean-Tsung, can you provide a core dump or stack trace? Is the problem reproducible?

Jean-Tsung Hsiao

(In reply to Rick Barry from comment #2)
> Jean-Tsung, can you provide a core dump or stack trace? Is the problem
> reproducible?

Where do I find that?

Jean-Tsung Hsiao

(In reply to Rick Barry from comment #2)
> Jean-Tsung, can you provide a core dump or stack trace? Is the problem
> reproducible?

It is 100% reproducible as long as you have two hostdev interfaces. Please check my guest's xml file.

Alex Williamson (comment #5)

Anything in the libvirt log? /var/log/libvirt/qemu/master.log

Jean-Tsung Hsiao

(In reply to Alex Williamson from comment #5)
> Anything in the libvirt log? /var/log/libvirt/qemu/master.log

Yes, good stuff here: http://pastebin.test.redhat.com/769947

Alex Williamson (comment #7)

vfio DMA mapping failures, but you've already got:

    <memtune>
      <hard_limit unit='KiB'>16777216</hard_limit>
    </memtune>

where VM memory is:

    <memory unit='KiB'>8388608</memory>
    <currentMemory unit='KiB'>8388608</currentMemory>

If you increase the hard_limit further, does the issue go away? This might just be the known issue in bz1619734. libvirt typically sets the locked memory limit of a non-viommu VM to RAM + 1G, whereas it's set to exactly 2x RAM here. We expect to need 1x per assigned device with viommu, but we're missing that "fudge factor". Suggest a hard_limit of at least 17825792.

Jean-Tsung Hsiao

(In reply to Alex Williamson from comment #7)
> Suggest hard_limit of at least 17825792.

Yes, with 17825792 the guest is now running with two iommu SRIOV interfaces. I'll run the testpmd traffic test next.

Alex Williamson

I believe the immediate issue is resolved by manually increasing the hard limit for the VM to account for the duplicate locked memory per assigned device. Marking as a duplicate of bug 1619734, which aims to provide a more optimal solution for this issue.

*** This bug has been marked as a duplicate of bug 1619734 ***
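For readers hitting the same failure, here is a minimal sketch of the relevant domain XML, assuming the configuration described above (an 8 GiB guest with a vIOMMU and two SR-IOV hostdev interfaces). The PCI addresses and the vIOMMU driver settings are illustrative placeholders, not values taken from the guest XML linked in the description; the guest name is taken from the libvirt log path mentioned in comment #5. The limit follows the arithmetic suggested in comment #7: 2 x 8388608 KiB (1x guest RAM per assigned device) + 1048576 KiB (libvirt's usual 1 GiB of headroom) = 17825792 KiB.

    <domain type='kvm'>
      <name>master</name>
      <memory unit='KiB'>8388608</memory>
      <currentMemory unit='KiB'>8388608</currentMemory>
      <!-- Raised locked-memory limit for two assigned devices behind a vIOMMU:
           2 x 8388608 KiB + 1048576 KiB = 17825792 KiB -->
      <memtune>
        <hard_limit unit='KiB'>17825792</hard_limit>
      </memtune>
      <devices>
        <!-- Two SR-IOV VF hostdev interfaces; PCI addresses are placeholders -->
        <interface type='hostdev' managed='yes'>
          <source>
            <address type='pci' domain='0x0000' bus='0x3b' slot='0x02' function='0x0'/>
          </source>
        </interface>
        <interface type='hostdev' managed='yes'>
          <source>
            <address type='pci' domain='0x0000' bus='0x3b' slot='0x02' function='0x1'/>
          </source>
        </interface>
        <!-- vIOMMU; related <features> settings (e.g. ioapic) omitted for brevity -->
        <iommu model='intel'>
          <driver intremap='on' caching_mode='on'/>
        </iommu>
      </devices>
    </domain>

The effective limit on a running guest can be checked with `virsh memtune master` or by inspecting the qemu process's locked-memory limit in /proc. Without an explicit memtune element, libvirt computes the limit itself; bug 1619734 tracks making that computation account for the extra pinning that each assigned device requires when a vIOMMU is present.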