Bug 1552978 - qemu crash during throughput testing over ovs+dpdk+vhost-user
Summary: qemu crash during throughput testing over ovs+dpdk+vhost-user
Keywords:
Status: CLOSED DUPLICATE of bug 1547940
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Maxime Coquelin
QA Contact: Pei Zhang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-03-08 01:14 UTC by Pei Zhang
Modified: 2018-08-10 09:59 UTC (History)
9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-08-10 09:59:11 UTC
Target Upstream Version:
Embargoed:



Description Pei Zhang 2018-03-08 01:14:52 UTC
Description of problem:
qemu crashed during NFV throughput testing in the basic "Guest with ovs+dpdk+vhost-user" scenario.

The crash occurred while DPDK's testpmd was running in the guest, forwarding packets sent by MoonGen.

Version-Release number of selected component (if applicable):
tuned-2.9.0-1.el7.noarch
libvirt-3.9.0-14.el7.x86_64
qemu-kvm-rhev-2.10.0-21.el7.x86_64
kernel-3.10.0-858.el7.x86_64
openvswitch-2.9.0-3.el7fdp.x86_64
dpdk-17.11-7.el7.x86_64


How reproducible:
Roughly 1 out of 3 runs with automation; hard to reproduce manually.


Steps to Reproduce:
1. Install RHEL 7.5

2. Install packages: kernel/qemu-kvm-rhev/libvirt/dpdk/openvswitch/tuned

3. Set up hugepages and add "iommu=pt intel_iommu=on" to the kernel command line.

4. Start Open vSwitch.

5. Boot the VM.

6. Start DPDK's testpmd in the VM.

7. Start MoonGen on another host.

8. After several minutes, qemu crashes.
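A minimal sketch of the host/guest setup in steps 3 through 6, using the package versions listed above. The hugepage count, guest PCI addresses, and core mask are illustrative assumptions, not values from the original report; only the domain name (rhel7.5_nonrt) comes from the libvirt log below.

```shell
# Step 3 (host): reserve 1G hugepages and enable the IOMMU in passthrough
# mode (assumed hugepage count; adjust to available memory), then reboot
grubby --update-kernel=ALL \
       --args="default_hugepagesz=1G hugepagesz=1G hugepages=16 iommu=pt intel_iommu=on"
reboot

# Step 4 (host): start Open vSwitch and enable its DPDK datapath
systemctl start openvswitch
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

# Step 5 (host): boot the guest (domain name from the libvirt log below)
virsh start rhel7.5_nonrt

# Step 6 (guest): bind the vhost-user-backed virtio NICs to vfio-pci and
# run testpmd in I/O forwarding mode (PCI addresses and cores assumed)
dpdk-devbind --bind=vfio-pci 0000:03:00.0 0000:04:00.0
testpmd -l 1-5 -n 4 -- -i --nb-cores=4 --rxq=2 --txq=2 --forward-mode=io
```

The two rx/tx queues per port match the queues=2 vhost-user netdevs in the qemu command line below.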

# cat /var/log/libvirt/qemu/rhel7.5_nonrt.log 
..
Bad ram offset 23ffae002
2018-03-07 15:29:38.066+0000: shutting down, reason=crashed

# abrt-cli list
id affeb990418ee3d3a415267a118806d6b8d7a30a
reason:         qemu-kvm killed by SIGABRT
time:           Wed 07 Mar 2018 10:29:36 AM EST
cmdline:        /usr/libexec/qemu-kvm -name guest=rhel7.5_nonrt,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-rhel7.5_nonrt/master-key.aes -machine pc-q35-rhel7.5.0,accel=kvm,usb=off,vmport=off,dump-guest-core=off,kernel_irqchip=split -cpu host,tsc-deadline=on,pmu=off -m 8192 -realtime mlock=on -smp 6,sockets=6,cores=1,threads=1 -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu/2-rhel7.5_nonrt,share=yes,size=8589934592,host-nodes=1,policy=bind -numa node,nodeid=0,cpus=0-5,memdev=ram-node0 -uuid dc7b1b36-2218-11e8-99bb-1866dae6e104 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-2-rhel7.5_nonrt/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global ICH9-LPC.disable_s3=1 -global ICH9-LPC.disable_s4=1 -boot strict=on -device intel-iommu,intremap=on,caching-mode=on,device-iotlb=on -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,addr=0x2 -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x3 -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x4 -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x5 -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x6 -device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x7 -drive file=/home/images_nfv-virt-rt-kvm/rhel7.5_nonrt.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,cache=none,aio=threads -device virtio-blk-pci,scsi=off,iommu_platform=on,ats=on,bus=pci.1,addr=0x0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=88:66:da:5f:dd:01,bus=pci.2,addr=0x0 -chardev socket,id=charnet1,path=/tmp/vhostuser0.sock,server -netdev vhost-user,chardev=charnet1,queues=2,id=hostnet1 -device 
virtio-net-pci,mq=on,vectors=6,rx_queue_size=1024,netdev=hostnet1,id=net1,mac=88:66:da:5f:dd:02,bus=pci.3,addr=0x0,iommu_platform=on,ats=on -chardev socket,id=charnet2,path=/tmp/vhostuser1.sock,server -netdev vhost-user,chardev=charnet2,queues=2,id=hostnet2 -device virtio-net-pci,mq=on,vectors=6,rx_queue_size=1024,netdev=hostnet2,id=net2,mac=88:66:da:5f:dd:03,bus=pci.4,addr=0x0,iommu_platform=on,ats=on -spice port=5900,addr=0.0.0.0,disable-ticketing,image-compression=off,seamless-migration=on -device cirrus-vga,id=video0,bus=pcie.0,addr=0x1 -device virtio-balloon-pci,id=balloon0,bus=pci.5,addr=0x0 -msg timestamp=on
package:        qemu-kvm-rhev-2.10.0-21.el7
uid:            0 (root)
count:          1
Directory:      /var/spool/abrt/ccpp-2018-03-07-10:29:36-3429
Run 'abrt-cli report /var/spool/abrt/ccpp-2018-03-07-10:29:36-3429' for creating a case in Red Hat Customer Portal

The crash file will be provided in the next comment.

Actual results:
qemu crashes.


Expected results:
qemu should not crash.


Additional info:
1. This issue was found by automation; however, it is very hard to reproduce manually.

2. I'm not sure whether this is the same issue as the bug below:
Bug 1547940 - Sometimes qemu crash with "Bad ram offset 23ffae002" in pvp live migration testing

Reference:
[1]
# ovs-vsctl show
56eff5be-de64-48ce-abb4-db28aa76b154
    Bridge "ovsbr1"
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
                options: {dpdk-devargs="0000:81:00.1", n_rxq="2", n_txq="2"}
        Port "vhost-user1"
            Interface "vhost-user1"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser1.sock"}
        Port "ovsbr1"
            Interface "ovsbr1"
                type: internal
    Bridge "ovsbr0"
        Port "vhost-user0"
            Interface "vhost-user0"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser0.sock"}
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs="0000:81:00.0", n_rxq="2", n_txq="2"}
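A sketch of the ovs-vsctl commands that would produce the topology shown above. The bridge names, DPDK device arguments, queue counts, and socket paths are taken from the output; the command sequence itself is an assumption about how the bridges were created.

```shell
# Bridge ovsbr0: userspace (netdev) datapath with one DPDK physical port
# and one vhost-user client port served at /tmp/vhostuser0.sock
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:81:00.0 options:n_rxq=2 options:n_txq=2
ovs-vsctl add-port ovsbr0 vhost-user0 -- set Interface vhost-user0 \
    type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhostuser0.sock

# Bridge ovsbr1: same layout for the second DPDK port and socket
ovs-vsctl add-br ovsbr1 -- set bridge ovsbr1 datapath_type=netdev
ovs-vsctl add-port ovsbr1 dpdk1 -- set Interface dpdk1 type=dpdk \
    options:dpdk-devargs=0000:81:00.1 options:n_rxq=2 options:n_txq=2
ovs-vsctl add-port ovsbr1 vhost-user1 -- set Interface vhost-user1 \
    type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhostuser1.sock
```

With type=dpdkvhostuserclient, OvS connects as a client to the sockets that qemu creates (note the ,server suffix on the -chardev socket options in the qemu command line above).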

Comment 3 Pei Zhang 2018-08-03 14:45:07 UTC
This issue can still be reproduced with qemu-kvm-rhev-2.12.0-9.el7.x86_64:

For the coredump info, please refer to:

http://pastebin.test.redhat.com/626998

Comment 4 Maxime Coquelin 2018-08-10 09:59:11 UTC
Marking as a duplicate of bug 1547940, as the backtrace is identical and so
is the virtio device configuration (vIOMMU enabled).

*** This bug has been marked as a duplicate of bug 1547940 ***

