Bug 1374251 - qemu-kvm-rhev core dumped when enabling virtio-scsi "data plane" and executing "eject"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: rc
Assignee: Fam Zheng
QA Contact: FuXiangChun
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-09-08 10:37 UTC by FuXiangChun
Modified: 2016-11-07 21:35 UTC
CC: 10 users

Fixed In Version: qemu-kvm-rhev-2.6.0-26.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-07 21:35:50 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:2673 normal SHIPPED_LIVE qemu-kvm-rhev bug fix and enhancement update 2016-11-08 01:06:13 UTC

Description FuXiangChun 2016-09-08 10:37:24 UTC
Description of problem:
Boot a guest with virtio-scsi "data plane" (iothread) and a CD-ROM, then execute the "eject" command via QMP. The qemu-kvm process core dumps.

If data plane is removed from the qemu command line, the qemu-kvm process works.

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.6.0-23.el7
3.10.0-493.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. /usr/libexec/qemu-kvm -boot menu=on -m 2G -vnc :1 \
-object iothread,id=iothread0 \
-drive file=rbd:libvirt-pool/rhel.raw:mon_host=10.66.144.26,format=raw,if=none,id=drive-scsi-disk0,cache=none,werror=stop,rerror=stop \
-device virtio-scsi-pci,id=scsi0,iothread=iothread0 \
-device scsi-hd,drive=drive-scsi-disk0,bus=scsi0.0,scsi-id=0,lun=0,id=scsi-disk0 \
-qmp tcp:0:6666,server,nowait \
-device virtio-scsi-pci,bus=pci.0,addr=0x9,id=scsi1 \
-drive file=/home/aaa.iso,if=none,media=cdrom,readonly=on,format=raw,id=cdrom1 \
-device scsi-cd,bus=scsi0.0,drive=cdrom1,id=scsi0-0 -monitor stdio

2. Eject the CD-ROM twice via QMP (the first call only opens the tray):
{"execute":"eject","arguments":{"device":"cdrom1"}}
{"error": {"class": "GenericError", "desc": "Device 'cdrom1' is locked and force was not specified, wait for tray to open and try again"}}
{"timestamp": {"seconds": 1473329228, "microseconds": 634451}, "event": "DEVICE_TRAY_MOVED", "data": {"device": "cdrom1", "tray-open": true}}

{"execute":"eject","arguments":{"device":"cdrom1"}}
{"return": {}}
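The two-step eject above (the first call opens the locked tray, the second removes the medium) can be scripted against the QMP socket started with `-qmp tcp:0:6666,server,nowait` in step 1. Below is a minimal sketch; it assumes the QMP greeting banner must be consumed and `qmp_capabilities` negotiated before any other command, and the helper names `build_eject` and `qmp_eject` are hypothetical, not QEMU APIs:

```python
import json
import socket

def build_eject(device, force=False):
    # QMP "eject" command body, as sent in the reproducer above.
    return {"execute": "eject", "arguments": {"device": device, "force": force}}

def qmp_eject(host="127.0.0.1", port=6666, device="cdrom1"):
    # Hypothetical driver for the -qmp tcp:0:6666,server,nowait socket.
    with socket.create_connection((host, port)) as sock:
        f = sock.makefile("rw", newline="\n")
        json.loads(f.readline())                  # consume the QMP greeting banner
        f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
        f.flush()
        json.loads(f.readline())                  # expect {"return": {}}
        replies = []
        for _ in range(2):                        # eject twice, as in step 2
            f.write(json.dumps(build_eject(device)) + "\n")
            f.flush()
            replies.append(json.loads(f.readline()))
        return replies
```

Note that asynchronous events such as DEVICE_TRAY_MOVED arrive on the same stream, so a robust client would skip lines carrying an "event" key before treating a line as the command reply; the sketch above glosses over that.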

3.

Actual results:
(gdb) bt
#0  0x00007fffec5031d7 in raise () from /lib64/libc.so.6
#1  0x00007fffec5048c8 in abort () from /lib64/libc.so.6
#2  0x00007fffec4fc146 in __assert_fail_base () from /lib64/libc.so.6
#3  0x00007fffec4fc1f2 in __assert_fail () from /lib64/libc.so.6
#4  0x00005555557abc63 in virtio_scsi_handle_cmd_req_prepare (req=0x555556d9c1c0, s=0x5555570fc340)
    at /usr/src/debug/qemu-2.6.0/hw/scsi/virtio-scsi.c:543
#5  virtio_scsi_handle_cmd_vq (s=0x5555570fc340, vq=0x5555592100f0) at /usr/src/debug/qemu-2.6.0/hw/scsi/virtio-scsi.c:577
#6  0x000055555596e892 in aio_dispatch (ctx=ctx@entry=0x555556ce3c80) at aio-posix.c:330
#7  0x000055555596eaa8 in aio_poll (ctx=0x555556ce3c80, blocking=<optimized out>) at aio-posix.c:479
#8  0x0000555555839159 in iothread_run (opaque=0x555556cca640) at iothread.c:46
#9  0x00007fffede8adc5 in start_thread () from /lib64/libpthread.so.0
#10 0x00007fffec5c573d in clone () from /lib64/libc.so.6


Expected results:
The "eject" command succeeds and the qemu-kvm process continues running without a core dump.

Additional info:

Comment 1 FuXiangChun 2016-09-08 10:48:12 UTC
Re-tested build qemu-kvm-rhev-2.6.0-1.el7. qemu-kvm-rhev and guest work well. 

result:
{"execute":"eject","arguments":{"device":"cdrom1"}}
{"error": {"class": "GenericError", "desc": "Node 'cdrom1' is busy: block device is in use by data plane"}}

So it is a regression. I added "regression" to the keywords; if I am wrong, please remove it. I will confirm later which build introduced this problem.

Comment 4 Fam Zheng 2016-09-12 02:30:01 UTC
Proposed fix upstream:

https://lists.gnu.org/archive/html/qemu-devel/2016-09/msg02243.html

Comment 6 Fam Zheng 2016-09-14 04:30:18 UTC
Updated the wrong BZ; please ignore comment 5.

Comment 8 Miroslav Rezanina 2016-09-20 12:30:39 UTC
Fix included in qemu-kvm-rhev-2.6.0-26.el7

Comment 10 yduan 2016-09-23 04:25:41 UTC
Reproduced with qemu-kvm-rhev-2.6.0-23.el7.

In QMP:
{"execute":"eject","arguments":{"device":"drive_syscd"}}
{"error": {"class": "GenericError", "desc": "Device 'drive_syscd' is locked and force was not specified, wait for tray to open and try again"}}
{"timestamp": {"seconds": 1474599291, "microseconds": 332248}, "event": "DEVICE_TRAY_MOVED", "data": {"device": "drive_syscd", "tray-open": true}}

{"execute":"eject","arguments":{"device":"drive_syscd"}}
{"return": {}}

Core dump, with the following messages in HMP:
(qemu) qemu-kvm: /builddir/build/BUILD/qemu-2.6.0/hw/scsi/virtio-scsi.c:543: virtio_scsi_handle_cmd_req_prepare: Assertion `blk_get_aio_context(d->conf.blk) == s->ctx' failed.
eject.sh: line 40: 18850 Aborted                 (core dumped)

**************************************************************

Verified with qemu-kvm-rhev-2.6.0-26.el7.

In QMP:
{"execute":"eject","arguments":{"device":"drive_syscd"}}
{"error": {"class": "GenericError", "desc": "Device 'drive_syscd' is locked and force was not specified, wait for tray to open and try again"}}
{"timestamp": {"seconds": 1474600363, "microseconds": 338248}, "event": "DEVICE_TRAY_MOVED", "data": {"device": "drive_syscd", "tray-open": true}}

{"execute":"eject","arguments":{"device":"drive_syscd"}}
{"return": {}}

Eject succeeds, without any error prompt or core dump:
(qemu) info block
drive_sysdisk (#block132): sysdisk.qcow2 (qcow2)
    Cache mode:       writeback, direct
drive_syscd: [not inserted]
    Removable device: not locked, tray open

**************************************************************

So this issue has been fixed; changing status to VERIFIED.

Comment 12 errata-xmlrpc 2016-11-07 21:35:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2673.html

