Bug 1421620

Summary: [Q35] qemu core dump when there are 25 switch layers
Product: Red Hat Enterprise Linux 7
Reporter: jinchen
Component: qemu-kvm-rhev
Assignee: Marcel Apfelbaum <marcel>
Status: CLOSED DUPLICATE
QA Contact: Virtualization Bugs <virt-bugs>
Severity: high
Priority: unspecified
Version: 7.4
CC: chayang, jinzhao, juzhang, knoel, marcel, virt-maint
Target Milestone: rc   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Last Closed: 2017-02-14 10:01:19 UTC
Type: Bug

Description jinchen 2017-02-13 09:38:25 UTC
Description of problem:
  QEMU dumps core when there are 25 nested switch layers

Version-Release number of selected component (if applicable):
  kernel-3.10.0-563.el7.x86_64
  qemu-kvm-rhev-2.8.0-4.el7.x86_64

How reproducible:
  3/3

Steps to Reproduce:
1. Boot the guest with the QEMU command line [1] below.

Actual results:
  qemu core dump

Backtrace from the core dump:
  #0  0x00007ff850a2b1d7 in raise () at /lib64/libc.so.6
  #1  0x00007ff850a2c8c8 in abort () at /lib64/libc.so.6
  #2  0x00007ff850a24146 in __assert_fail_base () at /lib64/libc.so.6
  #3  0x00007ff850a241f2 in  () at /lib64/libc.so.6
  #4  0x00007ff85b33cb1d in vmstate_register_with_alias_id (dev=dev@entry=0x7ff863785800, instance_id=<optimized out>, 
    instance_id@entry=-1, vmsd=0x7ff85bb27900 <vmstate_xio3130_downstream>, opaque=opaque@entry=0x7ff863785800, alias_id=alias_id@entry=-1, required_for_version=required_for_version@entry=0) at /usr/src/debug/qemu-2.8.0/migration/savevm.c:667
  #5  0x00007ff85b44655f in device_set_realized (obj=<optimized out>, value=<optimized out>, errp=0x7fff71894248) at hw/core/qdev.c:936
  #6  0x00007ff85b520a5e in property_set_bool (obj=0x7ff863785800, v=<optimized out>, name=<optimized out>, opaque=0x7ff8632a58c0, errp=0x7fff71894248)
    at qom/object.c:1854
  #7  0x00007ff85b524721 in object_property_set_qobject (obj=0x7ff863785800, value=<optimized out>, name=0x7ff85b635f4b "realized", errp=0x7fff71894248) at qom/qom-qobject.c:27
  #8  0x00007ff85b522590 in object_property_set_bool (obj=0x7ff863785800, value=<optimized out>, name=0x7ff85b635f4b "realized", errp=0x7fff71894248)
    at qom/object.c:1157
  #9  0x00007ff85b3ed733 in qdev_device_add (opts=0x7ff85dcb4120, errp=errp@entry=0x7fff71894320) at qdev-monitor.c:622
  #10 0x00007ff85b3f7a07 in device_init_func (opaque=<optimized out>, opts=<optimized out>, errp=<optimized out>) at vl.c:2382
  #11 0x00007ff85b5dc3ca in qemu_opts_foreach (list=<optimized out>, func=func@entry=
    0x7ff85b3f79e0 <device_init_func>, opaque=opaque@entry=0x0, errp=errp@entry=0x0) at util/qemu-option.c:1116
  #12 0x00007ff85b2da8aa in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4585
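
Likely mechanism (inferred from the backtrace and the duplicate bug, not
confirmed here): in QEMU 2.8, vmstate_register_with_alias_id() copies the
device's canonical qdev path into a fixed 256-byte idstr buffer in
SaveStateEntry. Every nested switch layer lengthens that path, so a deep
enough chain gets silently truncated; two devices then end up with the same
idstr, the second registration computes a non-zero instance_id, and the
assert at savevm.c:667 fires. A minimal standalone C sketch of that
truncation/collision follows (the 256-byte limit mirrors QEMU's idstr; the
path format and helper names are illustrative, not QEMU's actual
qdev_get_dev_path output):

  #include <stdio.h>
  #include <string.h>

  #define IDSTR_MAX 256  /* mirrors sizeof(SaveStateEntry.idstr) in QEMU */

  /* Truncating copy in the spirit of QEMU's pstrcpy(): always
   * NUL-terminates, silently drops anything past dst_size - 1 bytes. */
  static void truncating_copy(char *dst, size_t dst_size, const char *src)
  {
      snprintf(dst, dst_size, "%s", src);
  }

  int main(void)
  {
      char path[4096] = "/pcie.0/ioh3420";   /* illustrative root */
      char prev_id[IDSTR_MAX] = "";

      /* Grow a qdev-style path one switch layer at a time and report the
       * point where two consecutive layers truncate to the same idstr --
       * the collision that, by this reading, trips the savevm.c assert. */
      for (int layer = 1; layer <= 25; layer++) {
          char idstr[IDSTR_MAX];
          size_t len = strlen(path);

          snprintf(path + len, sizeof(path) - len,
                   "/x3130-upstream/xio3130-downstream[%d]", layer);
          truncating_copy(idstr, sizeof(idstr), path);

          if (prev_id[0] != '\0' && strcmp(idstr, prev_id) == 0) {
              printf("layer %d: idstr collides after truncation -> "
                     "second registration would get instance_id != 0\n",
                     layer);
          }
          truncating_copy(prev_id, sizeof(prev_id), idstr);
      }
      return 0;
  }

If this reading is right, it is also consistent with the 8-24 layer failures
noted in the additional info: the threshold depends on the accumulated path
length, not on one specific layer count.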


Expected results:
  QEMU boots successfully


Additional info:
1) For failure results with 8-24 switch layers, refer to bz1309227.
2) QEMU also dumps core with qemu-kvm-rhev-2.6.0-28.el7_3.3.x86_64.

[1] QEMU command line:
/usr/libexec/qemu-kvm \
-M q35 \
-cpu Penryn \
-nodefaults -rtc base=utc \
-m 4G \
-smp 4,sockets=1,cores=4,threads=1 \
-enable-kvm \
-name rhel7.4 \
-uuid 990ea161-6b67-47b2-b803-19fb01d30d12 \
-smbios type=1,manufacturer='Red Hat',product='RHEV Hypervisor',version=el6,serial=koTUXQrb,uuid=feebc8fd-f8b0-4e75-abc3-e63fcdb67170 \
-k en-us \
-serial unix:/tmp/console,server,nowait \
-boot menu=on \
-bios /usr/share/seabios/bios.bin \
-chardev file,path=/home/seabios.log,id=seabios \
-device isa-debugcon,chardev=seabios,iobase=0x402 \
-qmp tcp::8892,server,nowait \
-vga qxl \
-vnc :2 \
-device ioh3420,id=root.0,slot=1 \
-drive file=/home/demo/rhel74.img,id=drive0,format=qcow2,if=none,cache=none,werror=stop,rerror=stop \
-device virtio-scsi-pci,id=scsi,bus=root.0 \
-device scsi-disk,id=virtio-disk0,bus=scsi.0,drive=drive0 \
-device ioh3420,id=root.1,slot=2 \
-device virtio-net-pci,netdev=dev1,bus=root.1,mac=9a:6a:6b:6c:6d:6e,id=net1 \
-netdev tap,id=dev1,vhost=on \
-device ioh3420,id=root.2,slot=3 \
-device x3130-upstream,id=upstream1,bus=root.2 \
-device xio3130-downstream,id=downstream1,bus=upstream1,chassis=1 \
-device x3130-upstream,id=upstream2,bus=downstream1 \
-device xio3130-downstream,id=downstream2,bus=upstream2,chassis=2 \
-device x3130-upstream,id=upstream3,bus=downstream2 \
-device xio3130-downstream,id=downstream3,bus=upstream3,chassis=3 \
-device x3130-upstream,id=upstream4,bus=downstream3 \
-device xio3130-downstream,id=downstream4,bus=upstream4,chassis=4 \
-device x3130-upstream,id=upstream5,bus=downstream4 \
-device xio3130-downstream,id=downstream5,bus=upstream5,chassis=5 \
-device x3130-upstream,id=upstream6,bus=downstream5 \
-device xio3130-downstream,id=downstream6,bus=upstream6,chassis=6 \
-device x3130-upstream,id=upstream7,bus=downstream6 \
-device xio3130-downstream,id=downstream7,bus=upstream7,chassis=7 \
-device x3130-upstream,id=upstream8,bus=downstream7 \
-device xio3130-downstream,id=downstream8,bus=upstream8,chassis=8 \
-device x3130-upstream,id=upstream9,bus=downstream8 \
-device xio3130-downstream,id=downstream9,bus=upstream9,chassis=9 \
-device x3130-upstream,id=upstream10,bus=downstream9 \
-device xio3130-downstream,id=downstream10,bus=upstream10,chassis=10 \
-device x3130-upstream,id=upstream11,bus=downstream10 \
-device xio3130-downstream,id=downstream11,bus=upstream11,chassis=11 \
-device x3130-upstream,id=upstream12,bus=downstream11 \
-device xio3130-downstream,id=downstream12,bus=upstream12,chassis=12 \
-device x3130-upstream,id=upstream13,bus=downstream12 \
-device xio3130-downstream,id=downstream13,bus=upstream13,chassis=13 \
-device x3130-upstream,id=upstream14,bus=downstream13 \
-device xio3130-downstream,id=downstream14,bus=upstream14,chassis=14 \
-device x3130-upstream,id=upstream15,bus=downstream14 \
-device xio3130-downstream,id=downstream15,bus=upstream15,chassis=15 \
-device x3130-upstream,id=upstream16,bus=downstream15 \
-device xio3130-downstream,id=downstream16,bus=upstream16,chassis=16 \
-device x3130-upstream,id=upstream17,bus=downstream16 \
-device xio3130-downstream,id=downstream17,bus=upstream17,chassis=17 \
-device x3130-upstream,id=upstream18,bus=downstream17 \
-device xio3130-downstream,id=downstream18,bus=upstream18,chassis=18 \
-device x3130-upstream,id=upstream19,bus=downstream18 \
-device xio3130-downstream,id=downstream19,bus=upstream19,chassis=19 \
-device x3130-upstream,id=upstream20,bus=downstream19 \
-device xio3130-downstream,id=downstream20,bus=upstream20,chassis=20 \
-device x3130-upstream,id=upstream21,bus=downstream20 \
-device xio3130-downstream,id=downstream21,bus=upstream21,chassis=21 \
-device x3130-upstream,id=upstream22,bus=downstream21 \
-device xio3130-downstream,id=downstream22,bus=upstream22,chassis=22 \
-device x3130-upstream,id=upstream23,bus=downstream22 \
-device xio3130-downstream,id=downstream23,bus=upstream23,chassis=23 \
-device x3130-upstream,id=upstream24,bus=downstream23 \
-device xio3130-downstream,id=downstream24,bus=upstream24,chassis=24 \
-device x3130-upstream,id=upstream25,bus=downstream24 \
-device xio3130-downstream,id=downstream25,bus=upstream25,chassis=25 \
-drive file=/home/demo/test1.qcow2,id=drive1,format=qcow2,if=none,cache=none,werror=stop,rerror=stop \
-device virtio-blk-pci,id=virtio-disk1,drive=drive1,bus=downstream25 \
-monitor stdio
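
The 25 upstream/downstream pairs above follow a fixed pattern, so the chain
can be generated instead of hand-written. A small helper for that (a sketch,
not part of the original report; device ids and the root.2 attachment point
match the command line in [1]):

  #include <stdio.h>

  /* Print the nested x3130-upstream/xio3130-downstream -device pairs from
   * [1]: the first upstream port attaches to root port "root.2", each
   * later one to the previous downstream port. */
  int main(void)
  {
      const int layers = 25;   /* depth that reproduces the crash */

      for (int i = 1; i <= layers; i++) {
          if (i == 1) {
              printf("-device x3130-upstream,id=upstream1,bus=root.2 \\\n");
          } else {
              printf("-device x3130-upstream,id=upstream%d,bus=downstream%d \\\n",
                     i, i - 1);
          }
          printf("-device xio3130-downstream,id=downstream%d,"
                 "bus=upstream%d,chassis=%d \\\n", i, i, i);
      }
      return 0;
  }

Splicing its output into the command line makes it easy to vary the depth
and find the exact layer count at which the assertion first fires.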

Comment 2 Marcel Apfelbaum 2017-02-14 10:01:19 UTC

*** This bug has been marked as a duplicate of bug 1058597 ***