Bug 1718124

Summary: [qemu] qemu-kvm core dumped while starting up an IOMMU guest with dual SR-IOV interfaces
Product: Red Hat Enterprise Linux 8
Reporter: Jean-Tsung Hsiao <jhsiao>
Component: qemu-kvm
Assignee: Alex Williamson <alex.williamson>
Status: CLOSED DUPLICATE
QA Contact: Jean-Tsung Hsiao <jhsiao>
Severity: unspecified
Priority: unspecified
Version: ---
CC: chayang, ctrautma, jhsiao, juzhang, kzhang, peterx, pezhang, rbalakri, ribarry, tli, virt-maint
Target Milestone: rc
Target Release: 8.0
Hardware: Unspecified
OS: Unspecified
Last Closed: 2019-07-09 19:10:52 UTC
Type: Bug

Description Jean-Tsung Hsiao 2019-06-07 01:58:24 UTC
Description of problem: [qemu] qemu-kvm core dumped while starting up an IOMMU guest with dual SR-IOV interfaces

[root@netqe7 ~]# tail -f /var/log/messages | grep qemu 

Jun  6 21:31:48 netqe7 systemd-machined[1187]: New machine qemu-7-master.
Jun  6 21:31:48 netqe7 systemd[1]: Started Virtual Machine qemu-7-master.
Jun  6 21:31:51 netqe7 systemd-coredump[5948]: Resource limits disable core dumping for process 5896 (qemu-kvm).
Jun  6 21:31:51 netqe7 systemd-coredump[5948]: Process 5896 (qemu-kvm) of user 107 dumped core.
Jun  6 21:31:51 netqe7 libvirtd[1330]: 2019-06-07 01:31:51.564+0000: 1330: error : qemuMonitorIO:718 : internal error: End of file from qemu monitor
Jun  6 21:31:51 netqe7 systemd-machined[1187]: Machine qemu-7-master terminated.


Version-Release number of selected component (if applicable):
[root@netqe7 ~]# rpm -qa | grep -i qemu
qemu-kvm-2.12.0-63.module+el8+2833+c7d6d092.x86_64
qemu-img-2.12.0-63.module+el8+2833+c7d6d092.x86_64
qemu-kvm-block-curl-2.12.0-63.module+el8+2833+c7d6d092.x86_64
qemu-kvm-block-iscsi-2.12.0-63.module+el8+2833+c7d6d092.x86_64
libvirt-daemon-driver-qemu-4.5.0-23.module+el8+2800+2d311f65.x86_64
qemu-kvm-common-2.12.0-63.module+el8+2833+c7d6d092.x86_64
ipxe-roms-qemu-20181214-1.git133f4c47.el8.noarch
qemu-kvm-block-rbd-2.12.0-63.module+el8+2833+c7d6d092.x86_64
qemu-kvm-block-gluster-2.12.0-63.module+el8+2833+c7d6d092.x86_64
qemu-kvm-block-ssh-2.12.0-63.module+el8+2833+c7d6d092.x86_64
qemu-kvm-core-2.12.0-63.module+el8+2833+c7d6d092.x86_64
[root@netqe7 ~]# 

[root@netqe7 ~]# uname -a
Linux netqe7.knqe.lab.eng.bos.redhat.com 4.18.0-80.el8.x86_64 #1 SMP Wed Mar 13 12:02:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@netqe7 ~]#

How reproducible:
100% reproducible when the guest has two hostdev (SR-IOV VF) interfaces.

Steps to Reproduce:
1. Define a guest with a vIOMMU and two SR-IOV VFs attached as hostdev interfaces.
2. Start the guest.

Actual results:
qemu-kvm dumps core during guest startup and the machine is terminated.

Expected results:
The guest starts and runs normally with both assigned interfaces.

Additional info:

Comment 1 Jean-Tsung Hsiao 2019-06-07 02:00:20 UTC
guest's xml file: http://pastebin.test.redhat.com/769741

Comment 2 Rick Barry 2019-06-07 14:05:02 UTC
Jean-Tsung, can you provide a core dump or stack trace? Is the problem reproducible?

Comment 3 Jean-Tsung Hsiao 2019-06-07 14:13:32 UTC
(In reply to Rick Barry from comment #2)
> Jean-Tsung, can you provide a core dump or stack trace? Is the problem
> reproducible?

Where do I find that?

Comment 4 Jean-Tsung Hsiao 2019-06-07 14:18:52 UTC
(In reply to Rick Barry from comment #2)
> Jean-Tsung, can you provide a core dump or stack trace? Is the problem
> reproducible?

It is 100% reproducible as long as the guest has two hostdev interfaces. Please check my guest's XML file.
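
[Editor's note] For readers without access to the pastebin, a minimal sketch of the kind of configuration that triggers this, assuming an Intel vIOMMU on a q35 machine type. The IOMMU model and PCI addresses below are illustrative placeholders, not values taken from the actual guest XML:

  <features>
    <ioapic driver='qemu'/>
  </features>
  <devices>
    <!-- vIOMMU; caching_mode is needed for assigned devices (placeholder model) -->
    <iommu model='intel'>
      <driver intremap='on' caching_mode='on'/>
    </iommu>
    <!-- two SR-IOV VFs assigned as hostdev interfaces (placeholder addresses) -->
    <interface type='hostdev' managed='yes'>
      <source>
        <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x0'/>
      </source>
    </interface>
    <interface type='hostdev' managed='yes'>
      <source>
        <address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x1'/>
      </source>
    </interface>
  </devices>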

Comment 5 Alex Williamson 2019-06-07 14:32:45 UTC
Anything in the libvirt log?  /var/log/libvirt/qemu/master.log

Comment 6 Jean-Tsung Hsiao 2019-06-07 15:07:26 UTC
(In reply to Alex Williamson from comment #5)
> Anything in the libvirt log?  /var/log/libvirt/qemu/master.log

Yes, good stuff here: http://pastebin.test.redhat.com/769947

Comment 7 Alex Williamson 2019-06-07 16:00:57 UTC
VFIO DMA mapping failures, but you've already got:

  <memtune>
    <hard_limit unit='KiB'>16777216</hard_limit>
  </memtune>

Where VM memory is:

  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>

If you increase the hard_limit further, does the issue go away? This might just be the known issue in bz1619734. libvirt typically sets the locked memory limit of a non-viommu VM to RAM + 1 GiB, whereas it's set to exactly 2x RAM here. With a viommu we expect to need 1x RAM per assigned device, but that calculation is missing the 1 GiB "fudge factor". Suggest a hard_limit of at least 17825792 KiB.
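
[Editor's note] Working that sizing rule through with the numbers from this XML (all values in KiB): two assigned devices at 1x guest RAM each is 2 * 8388608 = 16777216, plus the 1 GiB (1048576 KiB) fudge factor gives 17825792. In the guest XML the corrected setting would look like:

  <memtune>
    <hard_limit unit='KiB'>17825792</hard_limit>
  </memtune>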

Comment 8 Jean-Tsung Hsiao 2019-06-07 16:26:27 UTC
(In reply to Alex Williamson from comment #7)
> VFIO DMA mapping failures, but you've already got:
> 
>   <memtune>
>     <hard_limit unit='KiB'>16777216</hard_limit>
>   </memtune>
> 
> Where VM memory is:
> 
>   <memory unit='KiB'>8388608</memory>
>   <currentMemory unit='KiB'>8388608</currentMemory>
> 
> If you increase the hard_limit further, does the issue go away? This might
> just be the known issue in bz1619734. libvirt typically sets the locked
> memory limit of a non-viommu VM to RAM + 1 GiB, whereas it's set to exactly
> 2x RAM here. With a viommu we expect to need 1x RAM per assigned device,
> but that calculation is missing the 1 GiB "fudge factor". Suggest a
> hard_limit of at least 17825792 KiB.

Yes, with 17825792 the guest is now running with two IOMMU SR-IOV interfaces. I'll run a testpmd traffic test next.
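
[Editor's note] For reference, the same limit can also be applied without hand-editing the domain XML, assuming the domain is named "master" as in the logs above (a sketch; the value is in KiB):

  # persist the new locked-memory hard limit in the domain config
  virsh memtune master --hard-limit 17825792 --config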

Comment 9 Alex Williamson 2019-07-09 19:10:52 UTC
I believe the immediate issue is resolved by manually increasing the hard limit for the VM to account for the duplicated locked memory per assigned device. Marking as a duplicate of bug 1619734, which aims to provide a proper solution for this issue.

*** This bug has been marked as a duplicate of bug 1619734 ***