Bug 822386 - qemu-kvm core dumps after virtio-blk hotplug-in/removed then stop/cont
Summary: qemu-kvm core dumps after virtio-blk hotplug-in/removed then stop/cont
Keywords:
Status: CLOSED DUPLICATE of bug 869586
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Asias He
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 884420 (view as bug list)
Depends On:
Blocks:
 
Reported: 2012-05-17 08:38 UTC by Xiaoqing Wei
Modified: 2013-05-09 08:33 UTC (History)
21 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-05-09 08:33:46 UTC
Target Upstream Version:
Embargoed:


Attachments
gdb thread apply all bt full (282.55 KB, text/plain)
2012-05-17 08:38 UTC, Xiaoqing Wei
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 909059 0 medium CLOSED Switch to upstream solution for chardev flow control 2021-02-22 00:41:40 UTC

Internal Links: 909059

Description Xiaoqing Wei 2012-05-17 08:38:14 UTC
Created attachment 585142 [details]
gdb thread apply all bt full

Description of problem:

qemu-kvm core dumps after virtio-blk hotplug-in/removed then stop/cont
Version-Release number of selected component (if applicable):
qemu-kvm-rhev-0.12.1.2-2.293.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. boot a guest
qemu-kvm -monitor stdio -S -chardev socket,id=serial_id_20120515-041452-KkUY,path=/tmp/serial-20120515-041452-KkUY,server,nowait -device isa-serial,chardev=serial_id_20120515-041452-KkUY -device ich9-usb-uhci1,id=usb1,bus=pci.0,addr=0x4 -drive file='/home/staf-kvm-devel/autotest-devel/client/tests/kvm/images/RHEL-Server-5.8-64-virtio.raw',index=0,if=none,id=drive-virtio-disk1,media=disk,cache=none,boot=off,snapshot=off,readonly=off,format=raw,aio=native -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1 -device virtio-net-pci,netdev=idLYjg29,mac=9a:6e:47:a6:d8:f9,id=ndev00idLYjg29,bus=pci.0,addr=0x3 -netdev tap,id=idLYjg29,vhost=on -m 2048 -smp 4,cores=2,threads=1,sockets=2 -cpu 'Opteron_G4' -drive file='/home/RHEL5.8-Server-20120202.0-x86_64-DVD.iso',index=1,if=none,id=drive-ide0-0-0,media=cdrom,boot=off,snapshot=off,readonly=on,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -device usb-tablet,id=usb-tablet1,bus=usb1.0 -spice port=8000,disable-ticketing -vga qxl -rtc base=utc,clock=host,driftfix=slew -M rhel6.3.0 -boot order=cdn,once=c,menu=off    -no-kvm-pit-reinjection -enable-kvm
2. in guest: modprobe acpiphp

3. hotplug-in a virtio-blk 
qemu# __com.redhat_drive_add id=hot1,file=/root/hot1.raw,format=raw,media=disk 
qemu# device_add virtio-blk-pci,id=hot_virtio,drive=hot1

and format it in the guest with "mkfs.ext3 /dev/vdb"

then remove it by
qemu# device_del hot_virtio

4. send stop/cont to qemu
qemu# stop
qemu# cont
  
Actual results:
qemu core dumps

Expected results:
the guest works without any issue.

Additional info:

guest : RHEL.5.8.64

Detailed gdb output will be attached, but for those who'd like a quick glance:

Program terminated with signal 11, Segmentation fault.
#0  0x00007f70083ea32c in qdict_destroy_obj (obj=<value optimized out>) at qdict.c:470
470	            QLIST_REMOVE(entry, next);
(gdb) bt
#0  0x00007f70083ea32c in qdict_destroy_obj (obj=<value optimized out>) at qdict.c:470
#1  0x00007f70083ea4ff in qobject_decref (obj=<value optimized out>) at qobject.h:99
#2  qlist_destroy_obj (obj=<value optimized out>) at qlist.c:151
#3  0x00007f70083eb569 in qobject_decref (lexer=0x7f700975df00, token=0x7f7008f796a0, 
    type=JSON_OPERATOR, x=37, y=5388) at qobject.h:99
#4  json_message_process_token (lexer=0x7f700975df00, token=0x7f7008f796a0, type=JSON_OPERATOR, 
    x=37, y=5388) at json-streamer.c:89
#5  0x00007f70083eb1d0 in json_lexer_feed_char (lexer=0x7f700975df00, ch=125 '}', flush=false)
    at json-lexer.c:303
#6  0x00007f70083eb319 in json_lexer_feed (lexer=0x7f700975df00, buffer=0x7fff873f23d0 "}", size=1)
    at json-lexer.c:355
#7  0x00007f700839984e in monitor_control_read (opaque=<value optimized out>, 
    buf=<value optimized out>, size=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/monitor.c:4810
#8  0x00007f700840de0a in qemu_chr_read (opaque=0x7f7008ca5470) at qemu-char.c:180
#9  tcp_chr_read (opaque=0x7f7008ca5470) at qemu-char.c:2217
#10 0x00007f700839266f in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:3990
#11 0x00007f70083b3f1a in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2244
#12 0x00007f70083951bc in main_loop (argc=20, argv=<value optimized out>, envp=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4202
#13 main (argc=20, argv=<value optimized out>, envp=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6427

Comment 2 Xiaoqing Wei 2012-05-17 10:00:58 UTC
Hit again with a rhel6.3-20120516 x64 guest:

(gdb) bt
#0  qemu_bh_delete (bh=0x0) at async.c:118
#1  0x00007f89d13969df in virtio_blk_dma_restart_bh (opaque=0x7f89d3248f00)
    at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-blk.c:444
#2  0x00007f89d13b4df1 in qemu_bh_poll () at async.c:70
#3  0x00007f89d13827e9 in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4032
#4  0x00007f89d13a3f1a in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2244
#5  0x00007f89d13851bc in main_loop (argc=20, argv=<value optimized out>, envp=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4202
#6  main (argc=20, argv=<value optimized out>, envp=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6427

Comment 3 yunpingzheng 2012-05-21 10:27:24 UTC
hit host: rhel6.3-20120516
    guest: RHEL.5.8 PAE 
(gdb) bt
#0  0x00007f90930ffe5c in qdict_destroy_obj (obj=<value optimized out>) at qdict.c:470
#1  0x00007f909310002f in qobject_decref (obj=<value optimized out>) at qobject.h:99
#2  qlist_destroy_obj (obj=<value optimized out>) at qlist.c:151
#3  0x00007f9093101099 in qobject_decref (lexer=0x7f9095ae2950, token=0x7f90957b38c0, type=JSON_OPERATOR, x=37, y=5380)
    at qobject.h:99
#4  json_message_process_token (lexer=0x7f9095ae2950, token=0x7f90957b38c0, type=JSON_OPERATOR, x=37, y=5380)
    at json-streamer.c:89
#5  0x00007f9093100d00 in json_lexer_feed_char (lexer=0x7f9095ae2950, ch=125 '}', flush=false) at json-lexer.c:303
#6  0x00007f9093100e49 in json_lexer_feed (lexer=0x7f9095ae2950, buffer=0x7ffff5057400 "}", size=1) at json-lexer.c:355
#7  0x00007f90930af37e in monitor_control_read (opaque=<value optimized out>, buf=<value optimized out>, 
    size=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/monitor.c:4810
#8  0x00007f9093122b4a in qemu_chr_read (opaque=0x7f90944a7d30) at qemu-char.c:180
#9  tcp_chr_read (opaque=0x7f90944a7d30) at qemu-char.c:2217
#10 0x00007f90930a819f in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:3990
#11 0x00007f90930c9a4a in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2244
#12 0x00007f90930aacec in main_loop (argc=20, argv=<value optimized out>, envp=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4202
#13 main (argc=20, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6427

Comment 4 Amit Shah 2012-05-21 10:56:34 UTC
Looks like a dup of bug 808295

Comment 5 Ademar Reis 2012-05-21 22:01:38 UTC
Xiaoqing, can you reproduce it with RHEL6.2? In other words, is this a 6.3 regression?

(In reply to comment #4)
> Looks like a dup of bug 808295

Postponing this to 6.4 as well. If it turns out to be a regression and we decide to target 6.3, we'll have to do this via the z-stream anyway.

Comment 6 Xiaoqing Wei 2012-05-22 02:07:51 UTC
(In reply to comment #5)
> Xiaoqing, can you reproduce it with RHEL6.2? In other words, is this a 6.3
> regression?
> 
This is not a regression; I tried qemu-kvm-209 with the same steps, and it core dumps too.

Comment 8 RHEL Program Management 2012-07-10 07:16:20 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 9 RHEL Program Management 2012-07-11 02:08:14 UTC
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development.  This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

Comment 11 Luiz Capitulino 2012-07-23 14:30:23 UTC
Xiaoqing, has this always existed then?

Asias, I saw that you recently fixed a bug on virtio-blk hotplug/remove, can you please check if this is the bug you fixed before I jump into this?

Comment 12 Asias He 2012-07-24 04:22:37 UTC
(In reply to comment #11)
> Xiaoqing, has this always existed then?
> 
> Asias, I saw that you recently fixed a bug on virtio-blk hotplug/remove, can
> you please check if this is the bug you fixed before I jump into this?

Sure.

Comment 17 Luiz Capitulino 2012-08-08 14:25:42 UTC
I've tried to reproduce this with qemu-kvm-0.12.1.2-2.302 w/o success.

Could you please try to reproduce again with:

- qemu-kvm-0.12.1.2-2.302
- RHEL6 guest
- reduce the command-line options incrementally. That is, first drop the usb stuff, then try to reproduce. Then drop the additional disks, then spice etc

Also, two questions:

1. What's the size of the disk being added?
2. Do you wait for mkfs to finish before you do device_del or is mkfs still running when you delete the device?

Comment 18 Xiaoqing Wei 2012-08-09 13:38:46 UTC
(In reply to comment #17)
> I've tried to reproduce this with qemu-kvm-0.12.1.2-2.302 w/o success.
I am trying with qemu-kvm-rhev..302; it should be the same, right?
> 
> Could you please try to reproduce again with:
> 
> - qemu-kvm-0.12.1.2-2.302
> - RHEL6 guest
> - reduce the command-line options incrementally. That is, first drop the usb
> stuff, then try to reproduce. Then drop the additional disks, then spice etc
> 
I am now testing a rhel6 guest with three scenarios:
1) boot guest w/ one virtio-blk, w/ usb device, and w/ spice
2) boot guest w/ one virtio-blk, w/ spice
3) boot guest w/ one virtio-blk

each for 300 rounds of hotplug and format; will update the bz then.
if the above scenarios are not what you want, pls correct me.

> Also, two questions:
> 
> 1. What's the size of the disk being added?
1GB
> 2. Do you wait for mkfs to finish before you do device_del or is mkfs still
> running when you delete the device?

wait till it finishes


BR,
Xiaoqing.

Comment 19 Luiz Capitulino 2012-08-09 15:23:42 UTC
(In reply to comment #18)
> (In reply to comment #17)
> > I've tried to reproduce this with qemu-kvm-0.12.1.2-2.302 w/o success.
> I am trying with qemu-kvm-rhev..302, should be the same, right ?

I'm not sure. I don't know the exact differences (I just know that some features are enabled/disabled for qemu-kvm-rhev).

Can you try with both, please?

> > Could you please try to reproduce again with:
> > 
> > - qemu-kvm-0.12.1.2-2.302
> > - RHEL6 guest
> > - reduce the command-line options incrementally. That is, first drop the usb
> > stuff, then try to reproduce. Then drop the additional disks, then spice etc
> > 
> I am now testing rhel6 guest with three scenarios:
> 1) boot guest w/ one virtio-blk, w/ usb device , and w/ spice
> 2) boot guest w/ one virtio-blk, w/ spice
> 3) boot guest w/ one virtio-blk
> 
> each for 300 rounds of hotplug and format; will update the bz then.
> if the above scenarios are not what you want, pls correct me.

Which one reproduces the problem?

Also, do you need 300 rounds to get the problem or does it happen before it? I tested just a few times.

Comment 20 Xiaoqing Wei 2012-08-10 03:16:13 UTC
(In reply to comment #19)
> (In reply to comment #18)
> > (In reply to comment #17)
> > > I've tried to reproduce this with qemu-kvm-0.12.1.2-2.302 w/o success.
> > I am trying with qemu-kvm-rhev..302, should be the same, right ?
> 
> I'm not sure. I don't know the exact differences (I just know that some
> features are enabled/disabled for qemu-kvm-rhev).
> 
> Can you try with both, please?

The diff between qemu-kvm and qemu-kvm-rhev is about mirroring/streaming (qemu-kvm lacks these features),

and both can trigger this issue.

> 
> > > Could you please try to reproduce again with:
> > > 
> > > - qemu-kvm-0.12.1.2-2.302
> > > - RHEL6 guest
> > > - reduce the command-line options incrementally. That is, first drop the usb
> > > stuff, then try to reproduce. Then drop the additional disks, then spice etc
> > > 
> > I am now testing rhel6 guest with three scenarios:
> > 1) boot guest w/ one virtio-blk, w/ usb device , and w/ spice
> > 2) boot guest w/ one virtio-blk, w/ spice
> > 3) boot guest w/ one virtio-blk
> > 
> > each for 300 rounds of hotplug and format; will update the bz then.
> > if the above scenarios are not what you want, pls correct me.
> 
> Which one reproduces the problem?

All three scenarios reproduce it. Here's the cmd for 3), w/o usb, w/o spice, and with qemu-kvm:

/home/staf-kvm-devel/autotest-devel/client/tests/kvm/qemu -name 'vm1' -nodefaults -chardev socket,id=qmp_monitor_id_qmpmonitor1,path=/tmp/monitor-qmpmonitor1-20120808-154459-TCzB,server,nowait -mon chardev=qmp_monitor_id_qmpmonitor1,mode=control -chardev socket,id=serial_id_20120808-154459-TCzB,path=/tmp/serial-20120808-154459-TCzB,server,nowait -device isa-serial,chardev=serial_id_20120808-154459-TCzB -drive file='/home/staf-kvm-devel/autotest-devel/client/tests/kvm/images/RHEL-Server-6.2-64-virtio.qcow2',if=none,id=drive-virtio-disk1,media=disk,cache=none,boot=off,snapshot=off,format=qcow2,aio=native -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk1,id=virtio-disk1 -device virtio-net-pci,netdev=idJR1wGD,mac=9a:7a:eb:5b:77:6f,id=ndev00idJR1wGD,bus=pci.0,addr=0x3 -netdev tap,id=idJR1wGD,vhost=on,fd=19 -m 2048 -smp 2,cores=2,threads=0,sockets=2 -cpu 'Penryn' -vnc :1 -vga cirrus -rtc base=utc,clock=host,driftfix=slew -M rhel6.3.0 -boot order=cdn,once=c,menu=off    -no-kvm-pit-reinjection -bios /usr/share/seabios/bios-pm.bin -enable-kvm





> 
> Also, do you need 300 rounds to get the problem or does it happen before it?
> I tested just a few times.

This issue happens on stop/cont; it does not happen during the plug/unplug itself.

Strangely, it's now not 100% reproducible for me.

Comment 21 Luiz Capitulino 2012-08-15 18:37:32 UTC
(In reply to comment #20)
 
> all the 3 scenario, here's the cmd of 3), w/o usb, w/o spice, and with
> qemu-kvm:
> 
> /home/staf-kvm-devel/autotest-devel/client/tests/kvm/qemu -name 'vm1'
> -nodefaults -chardev
> socket,id=qmp_monitor_id_qmpmonitor1,path=/tmp/monitor-qmpmonitor1-20120808-
> 154459-TCzB,server,nowait -mon
> chardev=qmp_monitor_id_qmpmonitor1,mode=control -chardev
> socket,id=serial_id_20120808-154459-TCzB,path=/tmp/serial-20120808-154459-
> TCzB,server,nowait -device isa-serial,chardev=serial_id_20120808-154459-TCzB
> -drive
> file='/home/staf-kvm-devel/autotest-devel/client/tests/kvm/images/RHEL-
> Server-6.2-64-virtio.qcow2',if=none,id=drive-virtio-disk1,media=disk,
> cache=none,boot=off,snapshot=off,format=qcow2,aio=native -device
> virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk1,id=virtio-disk1
> -device
> virtio-net-pci,netdev=idJR1wGD,mac=9a:7a:eb:5b:77:6f,id=ndev00idJR1wGD,
> bus=pci.0,addr=0x3 -netdev tap,id=idJR1wGD,vhost=on,fd=19 -m 2048 -smp
> 2,cores=2,threads=0,sockets=2 -cpu 'Penryn' -vnc :1 -vga cirrus -rtc
> base=utc,clock=host,driftfix=slew -M rhel6.3.0 -boot
> order=cdn,once=c,menu=off    -no-kvm-pit-reinjection -bios
> /usr/share/seabios/bios-pm.bin -enable-kvm

That command line doesn't have a human monitor, which makes me think that the test-case is automated by autotest? If this is the case and if autotest is sending several commands to qemu then you're probably hitting bug 808295.

Have you tried it with the human monitor only (w/o any QMP socket) by hand, as described in the original report?

Btw, I've just tried to reproduce this again with your command-line (but using HMP instead of QMP) and it just works.

Comment 22 Xiaoqing Wei 2012-08-16 06:59:17 UTC
(In reply to comment #21)
<snipped>
> That command line doesn't have a human monitor, which makes me think that
> the test-case is automated by autotest? If this is the case and if autotest
> is sending several commands to qemu then you're probably hitting bug 808295.
> 
> Have you tried it with the human monitor only (w/o any QMP socket) by hand,
> as described in the original report?
> 
> Btw, I've just tried to reproduce this again with your command-line (but
> using HMP instead of QMP) and it just works.

</snipped>

Yes, this was found by autotest, but I can also reproduce it by hand with the human monitor; the cmd in the bz description uses HMP only. To repro, just do:

1)HMP-monitor# hotplug a drive
2)guest_cmd#   format/mount/write/umount it
3)HMP-monitor# unplug the drive
4)HMP-monitor# stop
5)HMP-monitor# cont
6)if didn't repro, goto 1)

Regards,
Xiaoqing.

Comment 23 Luiz Capitulino 2012-08-16 20:36:36 UTC
I've finally managed to reproduce this! :)

The only difference from my earlier tries is that I'm mounting the hotplugged disk, writing to it and umounting it. It also took four or five tries to hit the bug.

Here's the backtrace I got:

#0  virtio_blk_handle_request (req=0x81, mrb=0x7fffb24ed7f0) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-blk.c:373
#1  0x00007f976579e84b in virtio_blk_dma_restart_bh (opaque=0x7f9766334010) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-blk.c:450
#2  0x00007f97657bcc41 in qemu_bh_poll () at async.c:70
#3  0x00007f976578a629 in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4032
#4  0x00007f97657abd5a in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2244
#5  0x00007f976578cffc in main_loop (argc=20, argv=<value optimized out>, envp=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4202
#6  main (argc=20, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6430

So, this matches the backtrace in comment #2, and it really looks like a block layer or virtio-blk issue.

The QMP backtraces that appear in the description and in comment #3 are probably kvm-autotest triggering bug 808295.

Asias, I'm re-assigning this to you because you're working on this area.

Comment 24 Luiz Capitulino 2012-08-16 20:40:39 UTC
Btw, just tried it on upstream and got it there too (qemu.git HEAD 5a4d701ac):

#0  0x00007f1119c592e4 in virtio_blk_handle_request (req=0x21, mrb=0x7fff35d2cef0)
    at /home/lcapitulino/work/src/qmp-unstable/hw/virtio-blk.c:368
368	    if (req->elem.out_num < 1 || req->elem.in_num < 1) {

Is req becoming invalid right after entering virtio_blk_handle_request()?

Comment 25 Luiz Capitulino 2012-08-16 20:46:07 UTC
Oh, wait. I hadn't noticed req's address. It certainly became invalid before virtio_blk_dma_restart_bh() ran, then.

Comment 27 Xu Tian 2012-10-23 03:32:50 UTC
Met the same issue on qemu-kvm-0.12.1.2-2.295.el6_3.4;
see backtrace below:
(gdb) bt
#0  0x00007fd17c1546dc in qdict_destroy_obj (obj=<value optimized out>) at qdict.c:470
#1  0x00007fd17c1548af in qobject_decref (obj=<value optimized out>) at qobject.h:99
#2  qlist_destroy_obj (obj=<value optimized out>) at qlist.c:151
#3  0x00007fd17c155919 in qobject_decref (lexer=0x7fd17e63fe30, token=0x7fd17dec5df0, type=JSON_OPERATOR, x=37, y=5574) at qobject.h:99
#4  json_message_process_token (lexer=0x7fd17e63fe30, token=0x7fd17dec5df0, type=JSON_OPERATOR, x=37, y=5574) at json-streamer.c:89
#5  0x00007fd17c155580 in json_lexer_feed_char (lexer=0x7fd17e63fe30, ch=125 '}', flush=false) at json-lexer.c:303
#6  0x00007fd17c1556c9 in json_lexer_feed (lexer=0x7fd17e63fe30, buffer=0x7fffa18542c0 "}", size=1) at json-lexer.c:355
#7  0x00007fd17c10339e in monitor_control_read (opaque=<value optimized out>, buf=<value optimized out>, size=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/monitor.c:4810
#8  0x00007fd17c17744a in qemu_chr_read (opaque=0x7fd17da294a0) at qemu-char.c:180
#9  tcp_chr_read (opaque=0x7fd17da294a0) at qemu-char.c:2217
#10 0x00007fd17c0fc1bf in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:3990
#11 0x00007fd17c11da6a in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2244
#12 0x00007fd17c0fed0c in main_loop (argc=20, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4202
#13 main (argc=20, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6427
(gdb) 
 
It looks like it has the same root cause; if not, let me know and I'll file a new bug to track it.

Thanks,
Xu

Comment 29 Luiz Capitulino 2012-12-07 12:19:40 UTC
*** Bug 884420 has been marked as a duplicate of this bug. ***

Comment 31 Asias He 2013-03-26 02:10:57 UTC
Xiaoqing Wei and Xu, could either of you try the latest qemu-kvm and kernel packages to see if we still have this issue? Thanks.

Comment 32 CongLi 2013-04-17 05:24:31 UTC
Met a similar issue on qemu-kvm-0.12.1.2-2.355.el6.x86_64;
see backtrace below:
(gdb) bt
#0  0x00007faca0cc113c in qdict_destroy_obj (obj=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qdict.c:470
#1  0x00007faca0c6b00e in qobject_decref (parser=<value optimized out>, tokens=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qobject.h:99
#2  handle_qmp_command (parser=<value optimized out>, tokens=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/monitor.c:4960
#3  0x00007faca0cc2344 in json_message_process_token (lexer=0x7faca1cb4080, token=0x7faca1d4f9f0, type=JSON_OPERATOR, x=37, y=4735)
    at /usr/src/debug/qemu-kvm-0.12.1.2/json-streamer.c:87
#4  0x00007faca0cc1fe0 in json_lexer_feed_char (lexer=0x7faca1cb4080, ch=125 '}', flush=false) at /usr/src/debug/qemu-kvm-0.12.1.2/json-lexer.c:303
#5  0x00007faca0cc2129 in json_lexer_feed (lexer=0x7faca1cb4080, buffer=0x7fffee93ef50 "}", size=1) at /usr/src/debug/qemu-kvm-0.12.1.2/json-lexer.c:355
#6  0x00007faca0c6a5ee in monitor_control_read (opaque=<value optimized out>, buf=<value optimized out>, size=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/monitor.c:4973
#7  0x00007faca0ce48fa in qemu_chr_read (opaque=0x7faca1abc820) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-char.c:180
#8  tcp_chr_read (opaque=0x7faca1abc820) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-char.c:2211
#9  0x00007faca0c6329f in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:3975
#10 0x00007faca0c8598a in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2244
#11 0x00007faca0c66018 in main_loop (argc=50, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4187
#12 main (argc=50, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6526
(gdb) 

It's the same crash, but frames #2 and #3 in the bt are different:
#2  handle_qmp_command (parser=<value optimized out>, tokens=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/monitor.c:4960
#3  0x00007faca0cc2344 in json_message_process_token (lexer=0x7faca1cb4080, token=0x7faca1d4f9f0, type=JSON_OPERATOR, x=37, y=4735)
    at /usr/src/debug/qemu-kvm-0.12.1.2/json-streamer.c:87

Comment 33 Qunfang Zhang 2013-04-26 08:59:45 UTC
This bug can be reproduced on both the official qemu-kvm-361 and Amit's private v9 build for bug 909059, which includes lots of chardev-related patches.

On Amit's v9 build:

(qemu) 
Program received signal SIGSEGV, Segmentation fault.
qemu_bh_delete (bh=0x0) at /usr/src/debug/qemu-kvm-0.12.1.2/async.c:118
118	    bh->scheduled = 0;
(gdb) bt
#0  qemu_bh_delete (bh=0x0) at /usr/src/debug/qemu-kvm-0.12.1.2/async.c:118
#1  0x00007ffff7df283f in virtio_blk_dma_restart_bh (opaque=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-blk.c:465
#2  0x00007ffff7e14c01 in qemu_bh_poll () at /usr/src/debug/qemu-kvm-0.12.1.2/async.c:70
#3  0x00007ffff7ddddd9 in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4057
#4  0x00007ffff7e0048a in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2244
#5  0x00007ffff7de0a18 in main_loop (argc=72, argv=<value optimized out>, envp=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4227
#6  main (argc=72, argv=<value optimized out>, envp=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6570
(gdb)

Comment 34 Luiz Capitulino 2013-04-26 12:32:22 UTC
As far as I debugged this bug, it doesn't seem to have anything to do with the chardev layer. It happens on Amit's tree because it happens on RHEL6 and on upstream. Let's not mix BZs until we have real evidence they may be related.

Comment 35 Qunfang Zhang 2013-04-27 04:42:43 UTC
That's fine; I tested it because this bug is in the 'see also' list of bug 909059, which needs to be tested with Amit's tree to see whether it passes or fails after his patches are applied. But I'm not sure whether it actually has any relation to the private tree.

Comment 36 Laszlo Ersek 2013-05-08 16:55:27 UTC
Very likely fixed by upstream commit 69b302b2. Please retest with the brew build linked in bug 869586 comment 27. If the issue disappears, this BZ should be closed as a duplicate of 869586. Thanks.

