Bug 1718124 - [qemu] qemu-kvm core dumped while starting up an iommu guest with dual SRIOV interfaces
Summary: [qemu] qemu-kvm core dumped while starting up an iommu guest with dual SRIOV interfaces
Keywords:
Status: CLOSED DUPLICATE of bug 1619734
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: qemu-kvm
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Alex Williamson
QA Contact: Jean-Tsung Hsiao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-07 01:58 UTC by Jean-Tsung Hsiao
Modified: 2019-07-09 19:10 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-07-09 19:10:52 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments

Description Jean-Tsung Hsiao 2019-06-07 01:58:24 UTC
Description of problem: [qemu] qemu-kvm core dumped while starting up an iommu guest with dual SRIOV interfaces

[root@netqe7 ~]# tail -f /var/log/messages | grep qemu 

Jun  6 21:31:48 netqe7 systemd-machined[1187]: New machine qemu-7-master.
Jun  6 21:31:48 netqe7 systemd[1]: Started Virtual Machine qemu-7-master.
Jun  6 21:31:51 netqe7 systemd-coredump[5948]: Resource limits disable core dumping for process 5896 (qemu-kvm).
Jun  6 21:31:51 netqe7 systemd-coredump[5948]: Process 5896 (qemu-kvm) of user 107 dumped core.
Jun  6 21:31:51 netqe7 libvirtd[1330]: 2019-06-07 01:31:51.564+0000: 1330: error : qemuMonitorIO:718 : internal error: End of file from qemu monitor
Jun  6 21:31:51 netqe7 systemd-machined[1187]: Machine qemu-7-master terminated.
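
For reference: since systemd-coredump handled the crash, the core, if it was actually retained despite the resource-limit message above, can usually be inspected with coredumpctl. A minimal sketch, assuming PID 5896 from the log:

  coredumpctl list qemu-kvm    # cores captured by systemd-coredump for qemu-kvm
  coredumpctl info 5896        # metadata and, if available, a backtrace summary
  coredumpctl gdb 5896         # open the core directly in gdb (gdb must be installed)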


Version-Release number of selected component (if applicable):
[root@netqe7 ~]# rpm -qa | grep -i qemu
qemu-kvm-2.12.0-63.module+el8+2833+c7d6d092.x86_64
qemu-img-2.12.0-63.module+el8+2833+c7d6d092.x86_64
qemu-kvm-block-curl-2.12.0-63.module+el8+2833+c7d6d092.x86_64
qemu-kvm-block-iscsi-2.12.0-63.module+el8+2833+c7d6d092.x86_64
libvirt-daemon-driver-qemu-4.5.0-23.module+el8+2800+2d311f65.x86_64
qemu-kvm-common-2.12.0-63.module+el8+2833+c7d6d092.x86_64
ipxe-roms-qemu-20181214-1.git133f4c47.el8.noarch
qemu-kvm-block-rbd-2.12.0-63.module+el8+2833+c7d6d092.x86_64
qemu-kvm-block-gluster-2.12.0-63.module+el8+2833+c7d6d092.x86_64
qemu-kvm-block-ssh-2.12.0-63.module+el8+2833+c7d6d092.x86_64
qemu-kvm-core-2.12.0-63.module+el8+2833+c7d6d092.x86_64
[root@netqe7 ~]# 

[root@netqe7 ~]# uname -a
Linux netqe7.knqe.lab.eng.bos.redhat.com 4.18.0-80.el8.x86_64 #1 SMP Wed Mar 13 12:02:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@netqe7 ~]#

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Jean-Tsung Hsiao 2019-06-07 02:00:20 UTC
guest's xml file: http://pastebin.test.redhat.com/769741

Comment 2 Rick Barry 2019-06-07 14:05:02 UTC
Jean-Tsung, can you provide a core dump or stack trace? Is the problem reproducible?

Comment 3 Jean-Tsung Hsiao 2019-06-07 14:13:32 UTC
(In reply to Rick Barry from comment #2)
> Jean-Tsung, can you provide a core dump or stack trace? Is the problem
> reproducible?

Where can I find that?

Comment 4 Jean-Tsung Hsiao 2019-06-07 14:18:52 UTC
(In reply to Rick Barry from comment #2)
> Jean-Tsung, can you provide a core dump or stack trace? Is the problem
> reproducible?

It is 100% reproducible as long as you have two hostdev interfaces. Please check my guest's xml file.

Comment 5 Alex Williamson 2019-06-07 14:32:45 UTC
Anything in the libvirt log?  /var/log/libvirt/qemu/master.log

Comment 6 Jean-Tsung Hsiao 2019-06-07 15:07:26 UTC
(In reply to Alex Williamson from comment #5)
> Anything in the libvirt log?  /var/log/libvirt/qemu/master.log

Yes, good stuff here: http://pastebin.test.redhat.com/769947
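
For reference, a quick way to pull the relevant errors straight out of that log on the host (a sketch, assuming the failures mention vfio, as noted in the next comment):

  grep -i vfio /var/log/libvirt/qemu/master.log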

Comment 7 Alex Williamson 2019-06-07 16:00:57 UTC
vfio dma mapping failures, but you've already got:

  <memtune>
    <hard_limit unit='KiB'>16777216</hard_limit>
  </memtune>

Where VM memory is:

  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>

If you increase the hard_limit further, does the issue go away?  This might just be the known issue in bz1619734.  libvirt typically sets the locked memory limit of a non-viommu VM to RAM + 1G, whereas here it's set to exactly 2x RAM. With a viommu we expect to need 1x RAM per assigned device, but we're missing that "fudge factor".  Suggest a hard_limit of at least 17825792 KiB.
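
For reference, a sketch of the arithmetic behind that number and one way to apply it, assuming the domain is named "master" as the log path in comment 5 suggests:

  echo $(( 2 * 8388608 + 1048576 ))   # 1x RAM per assigned device (2 devices) + 1 GiB fudge factor = 17825792 KiB
  virsh edit master                   # then raise <memtune>/<hard_limit unit='KiB'> to at least 17825792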

Comment 8 Jean-Tsung Hsiao 2019-06-07 16:26:27 UTC
(In reply to Alex Williamson from comment #7)
> vfio dma mapping failures, but you've already got:
> 
>   <memtune>
>     <hard_limit unit='KiB'>16777216</hard_limit>
>   </memtune>
> 
> Where VM memory is:
> 
>   <memory unit='KiB'>8388608</memory>
>   <currentMemory unit='KiB'>8388608</currentMemory>
> 
> If you increase the hard_limit further, does the issue go away?  This
> might just be the known issue in bz1619734.  libvirt typically sets the
> locked memory limit of a non-viommu VM to RAM + 1G, whereas here it's set
> to exactly 2x RAM. With a viommu we expect to need 1x RAM per assigned
> device, but we're missing that "fudge factor".  Suggest a hard_limit of
> at least 17825792 KiB.

Yes, with a hard_limit of 17825792 the guest is now running with two iommu SRIOV interfaces. I'll run the testpmd traffic test next.

Comment 9 Alex Williamson 2019-07-09 19:10:52 UTC
I believe the immediate issue is resolved by manually increasing the hard limit for the VM to account for the duplicated locked memory per assigned device.  Marking as a duplicate of bug 1619734, which aims to provide a more optimal solution for this issue.

*** This bug has been marked as a duplicate of bug 1619734 ***

