Bug 734051

Summary: rhel6.1 guest hangs when an in-use virtio disk is unplugged from the monitor
Product: Red Hat Enterprise Linux 6 Reporter: FuXiangChun <xfu>
Component: kernel Assignee: Asias He <asias>
Status: CLOSED ERRATA QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium Docs Contact:
Priority: medium    
Version: 6.2 CC: amit.shah, areis, asias, bugproxy, jpan, juzhang, michen, mkenneth, mst, rhod, shu, sluo, tburke, virt-maint
Target Milestone: rc   
Target Release: 6.4   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: kernel-2.6.32-296.el6 Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2013-02-21 05:54:10 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
  Description                                        Flags
  catch guest kernel error message when guest hang   none
  Serial console output                              none
  var log message of guest                           none
  var log message of host                            none
  var log message of guest                           none

Description FuXiangChun 2011-08-29 10:37:46 UTC
Description of problem:
 Boot a guest and attach a virtio disk, use dd to write data to the disk, then use "device_del disk_id" to unplug the disk. The whole guest hangs, and ping stops replying after the unplug as well.

Version-Release number of selected component (if applicable):
host info:
# uname -r
2.6.32-192.el6.x86_64

# rpm -qa|grep kvm
qemu-kvm-0.12.1.2-2.184.el6.x86_64

guest info:

How reproducible:
always

Steps to Reproduce:
1./usr/libexec/qemu-kvm -enable-kvm -m 2G -smp 4 -name rhel6 -uuid ddcbfb49-3411-1701-3c36-6bdbc00bedb9 -rtc base=utc,clock=host,driftfix=slew -boot c -drive file=/home/rhel61-new.qcow2,if=none,id=drive-virtio-0-1,format=qcow2,cache=none,werror=report,rerror=report -device ide-drive,drive=drive-virtio-0-1,id=virt0-0-1 -netdev tap,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:50:a4:c2:c5  -spice disable-ticketing,port=5911 -device virtio-balloon-pci,id=ballooning -monitor stdio  -qmp tcp:0:4455,server,nowait  -serial unix:/home/error-message,server,nowait

2. attach a virtio disk from the monitor
   
3. use dd to write data to the disk
dd if=/dev/zero of=/dev/vda bs=1M count=4000 in the guest

4. device_del disk1 from the monitor
  
Actual results:
guest hangs


Expected results:
guest works well

Additional info:
device_del should not be able to remove a disk that is in use; the monitor should also print a warning message.

Comment 1 FuXiangChun 2011-08-29 10:39:02 UTC
Created attachment 520338 [details]
catch guest kernel error message when guest hang

Comment 3 Dor Laor 2011-09-12 06:53:43 UTC
Not sure this is really a bug; you pulled the disk out from under the guest.

Comment 4 Markus Armbruster 2011-09-14 11:36:13 UTC
Well, the guest shouldn't just hang.  It should cry bloody murder about the disk going AWOL.  Heck, it's not even mounted.  I think we should figure out what's going on, just in case it's something that could bite customers in the field.

That said, I haven't had the time to look into it.

Comment 7 Markus Armbruster 2012-02-10 14:22:27 UTC
Reproduced with a local build of qemu-kvm-0.12.1.2-2.223.el6 and a RHEL-6 guest with serial console set up to capture kernel messages.

Steps:

0. Boot guest into runlevel 3

1. Verify hot-plug works:

1a. Plug a scratch disk containing crap:

(qemu) __com.redhat_drive_add id=foo,file=foo.img
(qemu) device_add virtio-blk-pci,id=bar,drive=foo

Device appears in guest as /dev/vda with "unknown partition table", as expected.

Note: foo.img is a 4GiB image file.  Happens to be sparse, shouldn't matter.

1b. Unplug:

(qemu) device_del bar

Device disappears in guest, as expected.

2. Reproduce the bug:

2a. Plug:

(qemu) __com.redhat_drive_add id=foo,file=foo.img
(qemu) device_add virtio-blk-pci,id=bar,drive=foo

Device appears in guest as /dev/vdb with "unknown partition table", as expected.

2b. Write to disk

# dd if=/dev/zero of=/dev/vdb

2c. While dd runs, unplug:

(qemu) device_del bar

Actual result: "kernel BUG at drivers/block/virtio_blk.c:444!", and hang (no reaction to console keyboard input).  Serial console output attached.

Expected result: kernel either refuses to comply with the unplug request, or maybe complies, then screams bloody murder about disk going AWOL.  But it shouldn't hang.

Also reproduced with upstream qemu, same RHEL-6 guest.
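
For reference, the BUG_ON that fires sits at the top of virtblk_remove() in kernels of this vintage. A simplified sketch of the remove path follows (paraphrased from upstream 2.6.32-era drivers/block/virtio_blk.c, not the literal RHEL source, so the exact line number may differ):

/* Paraphrase of the 2.6.32-era remove path (illustrative only).
 * vblk->reqs tracks requests the guest has issued but the device
 * has not yet completed. */
static void __devexit virtblk_remove(struct virtio_device *vdev)
{
	struct virtio_blk *vblk = vdev->priv;

	/* "Nothing should be pending" -- but while dd is running the
	 * list is not empty, and this BUG_ON is what kills the guest. */
	BUG_ON(!list_empty(&vblk->reqs));

	/* Stop all the virtqueues. */
	vdev->config->reset(vdev);

	del_gendisk(vblk->disk);
	blk_cleanup_queue(vblk->disk->queue);
	put_disk(vblk->disk);
	vdev->config->del_vqs(vdev);
	kfree(vblk);
}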

Comment 8 Markus Armbruster 2012-02-10 14:23:38 UTC
Created attachment 560902 [details]
Serial console output

Comment 14 FuXiangChun 2012-04-12 11:34:35 UTC
This bug can still be reproduced with 2.6.32-259.el6.x86_64.
The guest hangs and reboots.

If the dd command is not run on the second disk, the guest works fine.

Comment 15 RHEL Program Management 2012-07-10 06:59:18 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 16 RHEL Program Management 2012-07-10 23:37:13 UTC
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development.  This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

Comment 17 Asias He 2012-07-16 08:26:31 UTC
I have fixed the hot-unplug problem in the upstream kernel and will backport it to RHEL once it hits Linus's tree.

http://lists.linuxfoundation.org/pipermail/virtualization/2012-June/020173.html
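
The gist of the series, reconstructed from the patch titles rather than the literal diff: drop the BUG_ON on in-flight requests and reorder the teardown so those requests can drain before the device is reset. Roughly:

/* Sketch of the fixed remove path (reconstructed from the upstream
 * commit messages, not the exact patch). */
static void virtblk_remove(struct virtio_device *vdev)
{
	struct virtio_blk *vblk = vdev->priv;

	/* Stop new requests from being submitted... */
	del_gendisk(vblk->disk);

	/* ...and let blk_cleanup_queue() wait for outstanding requests
	 * to finish while the device can still complete them.  This
	 * replaces the old BUG_ON(!list_empty(&vblk->reqs)). */
	blk_cleanup_queue(vblk->disk->queue);

	/* Only now stop the virtqueues and free everything. */
	vdev->config->reset(vdev);
	vdev->config->del_vqs(vdev);
	put_disk(vblk->disk);
	kfree(vblk);
}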

Comment 18 RHEL Program Management 2012-07-16 22:49:49 UTC
This request was evaluated by Red Hat Product Management for
inclusion in a Red Hat Enterprise Linux release.  Product
Management has requested further review of this request by
Red Hat Engineering, for potential inclusion in a Red Hat
Enterprise Linux release for currently deployed products.
This request is not yet committed for inclusion in a release.

Comment 20 Asias He 2012-07-31 02:24:20 UTC
*** Bug 815658 has been marked as a duplicate of this bug. ***

Comment 21 IBM Bug Proxy 2012-07-31 02:39:34 UTC
Created attachment 601390 [details]
var log message of guest

Comment 22 IBM Bug Proxy 2012-07-31 02:39:44 UTC
Created attachment 601391 [details]
var log message of host

Comment 23 IBM Bug Proxy 2012-07-31 02:39:54 UTC
Created attachment 601393 [details]
var log message of guest

Comment 25 Suqin Huang 2012-08-10 04:55:55 UTC
*** Bug 847195 has been marked as a duplicate of this bug. ***

Comment 26 Jarod Wilson 2012-08-13 13:19:45 UTC
Patch(es) available on kernel-2.6.32-296.el6

Comment 29 Sibiao Luo 2012-12-10 09:02:48 UTC
Reproduced this issue on kernel-2.6.32-220.el6.x86_64.
guest info:
kernel-2.6.32-220.el6.x86_64
host info:
# uname -r && rpm -q qemu-kvm
2.6.32-345.el6.x86_64
qemu-kvm-0.12.1.2-2.337.el6.x86_64

Steps:
the same as in comment #0

Results:
after hot-plugging the data disk, the device appears in the guest kernel as /dev/vdb with "vdb: unknown partition table", as expected.
after hot-unplugging the data disk while it is in use, the guest hangs, then call traces and reboots.
# nc -U /tmp/ttyS0
------------[ cut here ]------------
kernel BUG at drivers/block/virtio_blk.c:543!
invalid opcode: 0000 [#1] SMP 
last sysfs file: /sys/devices/virtual/block/dm-1/dm/name
CPU 0 
Modules linked in: fuse sunrpc ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 uinput ppdev parport_pc parport sg microcode virtio_net virtio_balloon snd_hda_intel snd_hda_codec snd_hwdep snd_seq snd_seq_device snd_pcm snd_timer snd soundcore snd_page_alloc virtio_console i2c_piix4 i2c_core ext4 mbcache jbd2 virtio_blk sr_mod cdrom virtio_pci virtio_ring virtio pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: speedstep_lib]

Pid: 40, comm: kacpi_hotplug Not tainted 2.6.32-220.el6.x86_64 #1 Red Hat KVM
RIP: 0010:[<ffffffffa0085a3a>]  [<ffffffffa0085a3a>] virtblk_remove+0x29/0xbb [virtio_blk]
RSP: 0018:ffff88011ce3db10  EFLAGS: 00010293
RAX: ffff880119104028 RBX: ffff880119104000 RCX: 00000000ffffffff
RDX: 0000000000000000 RSI: 0000000000000004 RDI: ffff880119104040
RBP: ffff88011ce3db20 R08: 0000000000000000 R09: 00000000000002ae
R10: 0000000000000000 R11: 0000000000000000 R12: ffff880118206c00
R13: ffffffffa00547e0 R14: ffff88011ce3ddc0 R15: ffff880118811368
FS:  0000000000000000(0000) GS:ffff880028200000(0000) knlGS:0000000000000000
CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 000000000138c29c CR3: 00000000dda07000 CR4: 00000000000406f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process kacpi_hotplug (pid: 40, threadinfo ffff88011ce3c000, task ffff88011ce3b540)
Stack:
 ffff880118206c00 ffffffffa00862c0 ffff88011ce3db40 ffffffffa0054272
<0> ffff88011ce3db40 ffff880118206c08 ffff88011ce3db60 ffffffff8134623f
<0> ffff880118206c68 ffff880118206c08 ffff88011ce3db80 ffffffff813463ad
Call Trace:
 [<ffffffffa0054272>] virtio_dev_remove+0x22/0x50 [virtio]
 [<ffffffff8134623f>] __device_release_driver+0x6f/0xe0
 [<ffffffff813463ad>] device_release_driver+0x2d/0x40
 [<ffffffff81345243>] bus_remove_device+0xa3/0x100
 [<ffffffff81342e8d>] device_del+0x12d/0x1e0
 [<ffffffff81342f62>] device_unregister+0x22/0x60
 [<ffffffffa0054422>] unregister_virtio_device+0x12/0x20 [virtio]
 [<ffffffffa0063e4e>] virtio_pci_remove+0x2f/0x68 [virtio_pci]
 [<ffffffff8128ab37>] pci_device_remove+0x37/0x70
 [<ffffffff8134623f>] __device_release_driver+0x6f/0xe0
 [<ffffffff813463ad>] device_release_driver+0x2d/0x40
 [<ffffffff81345243>] bus_remove_device+0xa3/0x100
 [<ffffffff81342e8d>] device_del+0x12d/0x1e0
 [<ffffffff81342f62>] device_unregister+0x22/0x60
 [<ffffffff812845cc>] pci_stop_bus_device+0x8c/0xa0
 [<ffffffff8129b81a>] acpiphp_disable_slot+0x9a/0x1d0
 [<ffffffff8129c07d>] _handle_hotplug_event_func+0xed/0x1d0
 [<ffffffff8129bf90>] ? _handle_hotplug_event_func+0x0/0x1d0
 [<ffffffff8108b2b0>] worker_thread+0x170/0x2a0
 [<ffffffff81090bf0>] ? autoremove_wake_function+0x0/0x40
 [<ffffffff8108b140>] ? worker_thread+0x0/0x2a0
 [<ffffffff81090886>] kthread+0x96/0xa0
 [<ffffffff8100c14a>] child_rip+0xa/0x20
 [<ffffffff810907f0>] ? kthread+0x0/0xa0
 [<ffffffff8100c140>] ? child_rip+0x0/0x20
...
Restarting system.

-----------------------------------------------------------------

Tried this issue on kernel-2.6.32-296.el6.x86_64.
guest info:
kernel-2.6.32-296.el6.x86_64
host info:
# uname -r && rpm -q qemu-kvm
2.6.32-345.el6.x86_64
qemu-kvm-0.12.1.2-2.337.el6.x86_64

Steps:
the same as in comment #0

Results:
after removing the data disk while in use, the guest call traces and reboots again (the same as bug 876601); guest kernel log follows:
general protection fault: 0000 [#1] SMP 
last sysfs file: /sys/devices/pci0000:00/0000:00:09.0/virtio4/block/vdb/removable
CPU 0 
Modules linked in: fuse sunrpc ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 uinput ppdev parport_pc parport microcode sg virtio_balloon snd_hda_intel snd_hda_codec snd_hwdep snd_seq snd_seq_device snd_pcm snd_timer snd soundcore snd_page_alloc virtio_net virtio_console i2c_piix4 i2c_core ext4 mbcache jbd2 virtio_blk sr_mod cdrom virtio_pci virtio_ring virtio pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: speedstep_lib]

Pid: 7386, comm: blkid Not tainted 2.6.32-296.el6.x86_64 #1 Red Hat KVM
RIP: 0010:[<ffffffffa00540c0>]  [<ffffffffa00540c0>] virtio_check_driver_offered_feature+0x10/0x50 [virtio]
RSP: 0018:ffff8800c932dd78  EFLAGS: 00010286
RAX: 6563697665642f30 RBX: ffff88011b13dc80 RCX: 0000000000000000
RDX: 0000000000005331 RSI: 0000000000000007 RDI: ffff88011727d800
RBP: ffff8800c932dd78 R08: ffffffffa00861a0 R09: 0000000000000000
R10: 0000000000000077 R11: 0000000000000001 R12: 000000000000101d
R13: 0000000000005331 R14: ffff88011727d800 R15: 0000000000000000
FS:  00007f0ab8a40740(0000) GS:ffff880028200000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f0ab81cb8a0 CR3: 000000010e240000 CR4: 00000000000406f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process blkid (pid: 7386, threadinfo ffff8800c932c000, task ffff88011ab70ae0)
Stack:
 ffff8800c932ddb8 ffffffffa008564b ffff8800c932de58 ffff880117b0ec00
<d> ffff880118317d98 0000000000000000 0000000000000003 0000000000000000
<d> ffff8800c932ddf8 ffffffff8125f007 0000000000000000 0000000000005331
Call Trace:
 [<ffffffffa008564b>] virtblk_ioctl+0x4b/0x90 [virtio_blk]
 [<ffffffff8125f007>] __blkdev_driver_ioctl+0x67/0x80
 [<ffffffff8125f48d>] blkdev_ioctl+0x1ed/0x6e0
 [<ffffffff811b42bc>] block_ioctl+0x3c/0x40
 [<ffffffff8118e992>] vfs_ioctl+0x22/0xa0
 [<ffffffff8118eb34>] do_vfs_ioctl+0x84/0x580
 [<ffffffff8118f0b1>] sys_ioctl+0x81/0xa0
 [<ffffffff8100b0f2>] system_call_fastpath+0x16/0x1b
Code: f0 41 ff d4 48 8b 5d e8 4c 8b 65 f0 4c 8b 6d f8 c9 c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 0f 1f 44 00 00 48 8b 87 88 00 00 00 <8b> 48 78 85 c9 74 23 48 8b 50 70 39 32 74 1f 31 c0 eb 10 0f 1f 
RIP  [<ffffffffa00540c0>] virtio_check_driver_offered_feature+0x10/0x50 [virtio]
 RSP <ffff8800c932dd78>
do_IRQ: 0.137 No irq handler for vector (irq -1)
Mounting proc filesystem
Mounting sysfs filesystem
Creating /dev
Creating initial device nodes
Free memory/Total memory (free %): 83540 / 114888 ( 72.7143 )
Loading dm-mod.ko module
Loading dm-log.ko module
Loading dm-region-hash.ko module
Loading dm-mirror.ko module
Loading dm-zero.ko module
Loading dm-snapshot.ko module
Loading ipt_REJECT.ko module
Loading nf_defrag_ipv4.ko module
Loading ip_tables.ko module
Loading nf_conntrack.ko module
Loading ip6_tables.ko module
Loading ipv6.ko module
do_IRQ: 0.97 No irq handler for vector (irq -1)
Loading uinput.ko module
Loading parport.ko module
Loading microcode.ko module
Loading sg.ko module
Loading soundcore.ko module
Loading snd-page-alloc.ko module
Loading i2c-core.ko module
Loading mbcache.ko module
Loading jbd2.ko module
Loading cdrom.ko module
Loading virtio_ring.ko module
Loading virtio.ko module
Loading pata_acpi.ko module
Loading ata_generic.ko module
Loading ata_piix.ko module
Loading nf_conntrack_ipv4.ko module
Loading iptable_filter.ko module
Loading ip6t_REJECT.ko module
Loading nf_defrag_ipv6.ko module
Loading xt_state.ko module
Loading ip6table_filter.ko module
Loading ppdev.ko module
Loading parport_pc.ko module
Loading virtio_balloon.ko module
Loading snd.ko module
Loading virtio_net.ko module
Loading virtio_console.ko module
Loading i2c-piix4.ko module
Loading ext4.ko module
Loading virtio_blk.ko module
Loading sr_mod.ko module
Loading virtio_pci.ko module
Loading nf_conntrack_ipv6.ko module
Loading snd-hwdep.ko module
Loading snd-seq-device.ko module
Loading snd-timer.ko module
Loading snd-seq.ko module
Loading snd-pcm.ko module
Loading snd-hda-codec.ko module
Loading snd-hda-intel.ko module
Waiting for required block device discovery
Waiting for 1 vda-like device(s)...cat: can't open '/sys/block/vda/device/model': No such file or directory
cat: can't open '/sys/block/vda/device/type': No such file or directory
Found
Creating Block Devices
Creating block device loop0
Creating block device loop1
Creating block device loop2
Creating block device loop3
Creating block device loop4
Creating block device loop5
Creating block device loop6
Creating block device loop7
Creating block device ram0
Creating block device ram1
Creating block device ram10
Creating block device ram11
Creating block device ram12
Creating block device ram13
Creating block device ram14
Creating block device ram15
Creating block device ram2
Creating block device ram3
Creating block device ram4
Creating block device ram5
Creating block device ram6
Creating block device ram7
Creating block device ram8
Creating block device ram9
Creating block device sr0
Creating block device vda
Making device-mapper control node
Scanning logical volumes
  Reading all physical volumes.  This may take a while...
  Found volume group "VolGroup" using metadata type lvm2
Activating logical volumes
  2 logical volume(s) in volume group "VolGroup" now active
Free memory/Total memory (free %): 73520 / 114888 ( 63.9928 )
Saving to the local filesystem /dev/mapper/VolGroup-lv_root
e2fsck 1.41.12 (17-May-2010)
/dev/mapper/VolGroup-lv_root: recovering journal
Clearing orphaned inode 142935 (uid=0, gid=0, mode=0100600, size=208)
Clearing orphaned inode 142934 (uid=0, gid=0, mode=0100600, size=540)
/dev/mapper/VolGroup-lv_root: clean, 125352/1022000 files, 813679/4081664 blocks
Free memory/Total memory (free %): 72704 / 114888 ( 63.2825 )
Loading SELINUX policy
Copying data                       : [100 %] 
Saving core complete
Restarting system.

Based on the above, re-assigning for a fix.
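
For context on this second trace: RDX/R13 hold 0x5331, the CDROM_GET_CAPABILITY ioctl that blkid issues while probing block devices, and RAX holds what looks like ASCII path text, typical of freed and reused memory. A paraphrased sketch of the 2.6.32-era call chain (illustrative, not the literal RHEL source):

/* blkid still holds /dev/vdb open after the hot-unplug; its ioctl
 * reaches the driver, which dereferences the already-torn-down
 * virtio_device behind vblk->vdev. */
static int virtblk_ioctl(struct block_device *bdev, fmode_t mode,
			 unsigned cmd, unsigned long data)
{
	struct gendisk *disk = bdev->bd_disk;
	struct virtio_blk *vblk = disk->private_data;

	/* virtio_has_feature() first calls
	 * virtio_check_driver_offered_feature(vblk->vdev, fbit), which
	 * walks vdev->dev.driver -- stale after the unplug, hence the
	 * general protection fault in the trace above. */
	if (!virtio_has_feature(vblk->vdev, VIRTIO_BLK_F_SCSI))
		return -ENOTTY;

	/* Otherwise forward SG_IO-style commands to the generic SCSI
	 * ioctl handler. */
	return scsi_cmd_ioctl(disk->queue, disk, mode, cmd,
			      (void __user *)data);
}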

Comment 30 Asias He 2012-12-10 09:18:34 UTC
(In reply to comment #29)
> Reproduced this issue on kernel-2.6.32-220.el6.x86_64.
> [...]
> Results:
> after hot-unplugging the data disk while it is in use, the guest hangs,
> then call traces and reboots.
> [...]
> kernel BUG at drivers/block/virtio_blk.c:543!
> [...]
> Tried this issue on kernel-2.6.32-296.el6.x86_64.
> [...]
> after removing the data disk while in use, the guest call traces and
> reboots again (the same as bug 876601); guest kernel log follows:

When did the call trace happen, right after the disk was removed, or after it was removed and then *plugged* again?


> general protection fault: 0000 [#1] SMP
> [...]
> RIP  [<ffffffffa00540c0>] virtio_check_driver_offered_feature+0x10/0x50
> [virtio]
> [...]
> Based on the above, re-assigning for a fix.

Comment 31 Asias He 2012-12-10 09:22:30 UTC
The issue we are currently seeing is different from the original bug report of BZ 734051. Also, we have opened bug 876601 for that issue. Why are you reopening this bug?

Comment 32 Sibiao Luo 2012-12-10 09:32:51 UTC
(In reply to comment #30)
> When did the call trace happen, right after the disk was removed, or
> after it was removed and then *plugged* again?
> 
The call trace happened right after removing the data disk via '(qemu) device_del $device'.
(In reply to comment #31)
> The issue we are currently seeing is different from the original bug
> report of BZ 734051. Also, we have opened bug 876601 for that issue.
> Why are you reopening this bug?
OK, thanks for confirming. As the status of bz 734051 is now the same as bz 876601, we can set this issue to VERIFIED; please correct me if there is any problem.

Comment 33 Asias He 2012-12-11 02:32:39 UTC
(In reply to comment #32)
> (In reply to comment #30)
> > When did the call trace happen, right after the disk was removed, or
> > after it was removed and then *plugged* again?
> > 
> The call trace happened right after removing the data disk via
> '(qemu) device_del $device'.

Can you try this:

1. boot a VM without a virtio disk

2. attach a virtio disk from the monitor as disk1
   
3. mount /dev/vda /mnt; do some read/write in /mnt (a minimal I/O generator for this step is sketched after this list);

4. device_del disk1 from the monitor

If no panic occurs, kill the guest and go back to step 1 directly. (Do not re-plug and unplug the disk to try to reproduce the panic without a fresh boot.)

Does it panic after step 4?
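
For step 3, any sustained write load works (dd on the mounted filesystem is fine). A minimal stand-alone I/O generator, with the path /mnt/scratch chosen arbitrarily, could look like:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static char buf[1 << 20];           /* 1 MiB of dummy data */
    int fd;

    memset(buf, 0xab, sizeof(buf));
    fd = open("/mnt/scratch", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Keep requests in flight until the disk is unplugged. */
    for (;;) {
        if (write(fd, buf, sizeof(buf)) < 0) {
            perror("write");            /* expected once the disk is gone */
            break;
        }
        fsync(fd);                      /* push the data out to /dev/vda */
        if (lseek(fd, 0, SEEK_SET) < 0)
            break;
    }
    close(fd);
    return 0;
}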


Comment 34 Sibiao Luo 2012-12-17 10:11:47 UTC
Tried this issue on kernel-2.6.32-348.el6.x86_64 & qemu-kvm-0.12.1.2-2.346.el6.x86_64.
host info:
# uname -r && rpm -q qemu-kvm
2.6.32-348.el6.x86_64
qemu-kvm-0.12.1.2-2.346.el6.x86_64
guest info:
# uname -r
2.6.32-348.el6.x86_64

Steps:
the same as in comment #33

Results:
after step 2, the guest kernel prints 'vda: unknown partition table', just as expected.

after step 4, no panic occurs any more and the guest kernel prints 'virtio-pci 0000:00:04.0: PCI INT A disabled', just as expected.

Based on the above and comment #29, I think this bug is fixed; please correct me if there is any problem.

Best Regards.
sluo

Comment 35 Sibiao Luo 2012-12-18 02:43:24 UTC
(In reply to comment #34)
> Tried this issue on kernel-2.6.32-348.el6.x86_64 &
> qemu-kvm-0.12.1.2-2.346.el6.x86_64.
> [...]
> Based on the above and comment #29, I think this bug is fixed; please
> correct me if there is any problem.
> 
This was only tested manually about 20 times; I will ask xwei to help verify it 2000 times and will update the results here.

Comment 36 Sibiao Luo 2013-01-04 02:44:07 UTC
Hi Asias,

   The automated job was run, and core dumps occurred over 1000 iterations of the test in comment #33, but I cannot be sure whether they were generated by unplugging an in-use virtio disk. There are many different core dump files; I will paste logs from some of them below. Please help me check them, thanks.

Program terminated with signal 11, Segmentation fault.
#0  virtio_blk_handle_request (req=0x71, mrb=0x7fff23d75a80) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-blk.c:373
373	    if (req->elem.out_num < 1 || req->elem.in_num < 1) {
(gdb) bt
#0  virtio_blk_handle_request (req=0x71, mrb=0x7fff23d75a80) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-blk.c:373
#1  0x00007f92b31efe8b in virtio_blk_dma_restart_bh (opaque=0x7f92b4f33590) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-blk.c:450
#2  0x00007f92b32104a1 in qemu_bh_poll () at /usr/src/debug/qemu-kvm-0.12.1.2/async.c:70
#3  0x00007f92b31db589 in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4017
#4  0x00007f92b31fd9ba in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2244
#5  0x00007f92b31de178 in main_loop (argc=45, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4187
#6  main (argc=45, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6525
(gdb)


Program terminated with signal 11, Segmentation fault.
#0  0x00007f693bfc64fc in qdict_destroy_obj (obj=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qdict.c:470
470	            QLIST_REMOVE(entry, next);
(gdb) bt
#0  0x00007f693bfc64fc in qdict_destroy_obj (obj=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qdict.c:470
#1  0x00007f693bfc66cf in qobject_decref (obj=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qobject.h:99
#2  qlist_destroy_obj (obj=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qlist.c:151
#3  0x00007f693bfc7739 in qobject_decref (lexer=0x7f693dc8cc30, token=0x7f693ecea930, type=JSON_OPERATOR, x=37, y=36) at /usr/src/debug/qemu-kvm-0.12.1.2/qobject.h:99
#4  json_message_process_token (lexer=0x7f693dc8cc30, token=0x7f693ecea930, type=JSON_OPERATOR, x=37, y=36) at /usr/src/debug/qemu-kvm-0.12.1.2/json-streamer.c:89
#5  0x00007f693bfc73a0 in json_lexer_feed_char (lexer=0x7f693dc8cc30, ch=125 '}', flush=false) at /usr/src/debug/qemu-kvm-0.12.1.2/json-lexer.c:303
#6  0x00007f693bfc74e9 in json_lexer_feed (lexer=0x7f693dc8cc30, buffer=0x7fffda790210 "}", size=1) at /usr/src/debug/qemu-kvm-0.12.1.2/json-lexer.c:355
#7  0x00007f693bf7174e in monitor_control_read (opaque=<value optimized out>, buf=<value optimized out>, size=<value optimized out>)
    at /usr/src/debug/qemu-kvm-0.12.1.2/monitor.c:4973
#8  0x00007f693bfea87a in qemu_chr_read (opaque=0x7f693da9e700) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-char.c:180
#9  tcp_chr_read (opaque=0x7f693da9e700) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-char.c:2211
#10 0x00007f693bf6a40f in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:3975
#11 0x00007f693bf8c9ba in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2244
#12 0x00007f693bf6d178 in main_loop (argc=45, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4187
#13 main (argc=45, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6525
(gdb) 


Program terminated with signal 11, Segmentation fault.
#0  qemu_bh_delete (bh=0x90) at /usr/src/debug/qemu-kvm-0.12.1.2/async.c:118
118	    bh->scheduled = 0;
(gdb) bt
#0  qemu_bh_delete (bh=0x90) at /usr/src/debug/qemu-kvm-0.12.1.2/async.c:118
#1  0x00007f671e182e5f in virtio_blk_dma_restart_bh (opaque=0x7f6721c5cd80) at /usr/src/debug/qemu-kvm-0.12.1.2/hw/virtio-blk.c:444
#2  0x00007f671e1a34a1 in qemu_bh_poll () at /usr/src/debug/qemu-kvm-0.12.1.2/async.c:70
#3  0x00007f671e16e589 in main_loop_wait (timeout=1000) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4017
#4  0x00007f671e1909ba in kvm_main_loop () at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2244
#5  0x00007f671e171178 in main_loop (argc=45, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:4187
#6  main (argc=45, argv=<value optimized out>, envp=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/vl.c:6525
(gdb)
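
The implausible pointer values in these backtraces (req=0x71, bh=0x90) suggest qemu-kvm is using freed memory after the unplug: the DMA-restart bottom half appears to outlive the device state it points to. Purely as an illustration of the callback-lifecycle rule involved (all names below are invented stand-ins, not qemu's API; qemu's real interface is qemu_bh_new()/qemu_bh_schedule()/qemu_bh_delete()):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A deferred callback keeps a pointer to device state, so on
 * hot-unplug the callback must be cancelled before that state is
 * freed. */
typedef struct {
    void (*cb)(void *opaque);
    void *opaque;
    int scheduled;
} BottomHalf;

static void bh_poll(BottomHalf *bh)     /* main-loop analogue */
{
    if (bh->scheduled) {
        bh->scheduled = 0;
        bh->cb(bh->opaque);
    }
}

static void dma_restart_cb(void *opaque)
{
    printf("restarting DMA for %s\n", (const char *)opaque);
}

int main(void)
{
    char *devstate = strdup("virtio-disk");  /* VirtIOBlock stand-in */
    BottomHalf bh = { dma_restart_cb, devstate, 1 };

    /* Correct unplug order: cancel the callback, then free the state.
     * Freeing first while bh.scheduled is still set would be the
     * use-after-free pattern visible above in
     * virtio_blk_dma_restart_bh and qemu_bh_delete. */
    bh.scheduled = 0;                   /* qemu_bh_delete() analogue */
    free(devstate);

    bh_poll(&bh);                       /* safe: nothing scheduled */
    return 0;
}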

Best Regards.
sluo

Comment 37 Asias He 2013-01-05 02:13:03 UTC
I checked the core dumps; some are related to virtio-blk, some are not. From the core dumps, I cannot tell whether they were generated by unplugging an in-use virtio disk. The test ran 1000 times and core dumped about 10 times. If there were an issue with the test in comment #33, manual testing should be able to catch it. The good news is that we can confirm there is no panic on the guest side.

Comment 38 juzhang 2013-01-05 02:32:19 UTC
(In reply to comment #37)
> I checked the core dumps; some are related to virtio-blk, some are not.
> From the core dumps, I cannot tell whether they were generated by
> unplugging an in-use virtio disk. The test ran 1000 times and core
> dumped about 10 times. If there were an issue with the test in comment
> #33, manual testing should be able to catch it. The good news is that
> we can confirm there is no panic on the guest side.

Hi, Asias

Can KVM QE set this issue to VERIFIED and open a new issue for comment #35, since you have confirmed that "there is no panic on the guest side"?

Comment 39 Asias He 2013-01-05 02:58:47 UTC
(In reply to comment #38)
> Can KVM QE set this issue to VERIFIED and open a new issue for comment
> #35, since you have confirmed that "there is no panic on the guest
> side"?

Yes, please do.

Comment 40 juzhang 2013-01-05 03:53:21 UTC
(In reply to comment #36)
> Hi Asias,
> 
>    The automated job was run, and core dumps occurred over 1000
> iterations of the test in comment #33, but I cannot be sure whether
> they were generated by unplugging an in-use virtio disk.
> [...gdb backtraces omitted; see comment #36...]

Opened a separate bug [1] to track this issue.

[1]Bug 892067 qemu-kvm sometimes core dump when unplug a using virtio data disk

Comment 41 juzhang 2013-01-05 03:54:38 UTC
According to comments #36, #37, and #38, setting this issue to VERIFIED.

Comment 43 errata-xmlrpc 2013-02-21 05:54:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0496.html