Bug 1250861

Summary: data-plane hotplug should be refused when the device is already in use (drive-mirror job)
Product: Red Hat Enterprise Linux 7
Reporter: huiqingding <huding>
Component: qemu-kvm-rhev
Assignee: Stefan Hajnoczi <stefanha>
Status: CLOSED DUPLICATE
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Priority: medium
Version: 7.2
CC: areis, chayang, famz, hhuang, juzhang, knoel, michen, pbonzini, qzhang, rbalakri, sluo, stefanha, virt-bugs, virt-maint, xfu
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix
Clone Of: 1140001
Last Closed: 2016-06-21 13:51:14 UTC
Type: Bug
Regression: ---
Bug Depends On: 1140001

Comment 2 huiqingding 2015-08-06 08:01:08 UTC
Version-Release number of selected component (if applicable):
kernel-3.10.0-302.el7.x86_64
qemu-kvm-rhev-2.3.0-15.el7.x86_64

How reproducible:
100%

Steps to Reproduce:

1. Start the guest:
# /usr/libexec/qemu-kvm -name rhel7 -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -m 4096 -cpu SandyBridge -realtime mlock=on -sandbox off -smp 4,maxcpus=4,sockets=4,cores=1,threads=1 -object iothread,id=iothread0 -drive file=/home/rhel7.2.qcow2,if=none,id=data-disk1,format=qcow2,cache=none,aio=native,werror=stop,rerror=stop -net none -device sga -spice port=5910,password=redhat-vga,disable-ticketing -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -qmp tcp:0:4466,server,nowait -serial unix:/tmp/ttym,server,nowait -monitor stdio

2. The guest stays at the BIOS screen because there is no virtio-blk-pci device yet. Start a drive-mirror operation on the QMP monitor:
{ "execute": "drive-mirror", "arguments": { "device": "data-disk1", "target": "/root/sn1", "format": "qcow2", "mode": "absolute-paths", "sync": "full", "speed": 1000000000, "on-target-error": "stop" } }

3. Hotplug a virtio-blk data-plane device:
{"execute":"device_add","arguments":{"driver":"virtio-blk-pci","drive":"data-disk1","id":"system-disk0","iothread":"iothread0"}}

4. Issue system_reset to start the KVM guest.
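The QMP side of the reproducer (steps 2-4) can be expressed as a command sequence to send over the QMP socket opened on port 4466 in the command line above. This is a minimal sketch that only builds the wire-format messages; the command names and arguments are taken verbatim from the report, while the helper function names are mine:

```python
import json

def repro_commands():
    # QMP commands mirroring steps 2-4 of the reproducer above.
    # The qmp_capabilities handshake must be sent before any other command.
    return [
        {"execute": "qmp_capabilities"},
        {"execute": "drive-mirror", "arguments": {
            "device": "data-disk1", "target": "/root/sn1",
            "format": "qcow2", "mode": "absolute-paths",
            "sync": "full", "speed": 1000000000,
            "on-target-error": "stop"}},
        {"execute": "device_add", "arguments": {
            "driver": "virtio-blk-pci", "drive": "data-disk1",
            "id": "system-disk0", "iothread": "iothread0"}},
        {"execute": "system_reset"},
    ]

def to_wire(cmds):
    # QMP accepts one JSON object per message.
    return "\n".join(json.dumps(c) for c in cmds)
```

Piping `to_wire(repro_commands())` into the socket (e.g. with nc to port 4466) reproduces the same sequence as the manual steps.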

Actual results:
After step 3, the QMP command returns normally.
After step 4, qemu-kvm quits with a core dump:
(gdb) bt
#0  0x00007ffff071b5d7 in raise () from /lib64/libc.so.6
#1  0x00007ffff071ccc8 in abort () from /lib64/libc.so.6
#2  0x00007ffff0714546 in __assert_fail_base () from /lib64/libc.so.6
#3  0x00007ffff07145f2 in __assert_fail () from /lib64/libc.so.6
#4  0x00005555558545f7 in iov_memset (iov=<optimized out>, iov_cnt=<optimized out>, offset=<optimized out>, offset@entry=524288, fillc=fillc@entry=0, bytes=18446744073709092864) at util/iov.c:75
#5  0x0000555555854d53 in qemu_iovec_memset (qiov=<optimized out>, offset=offset@entry=524288, fillc=fillc@entry=0, bytes=<optimized out>) at util/iov.c:394
#6  0x00005555558163f1 in qemu_laio_process_completion (s=0x555556a2d200, laiocb=0x555558020ca0) at block/linux-aio.c:84
#7  qemu_laio_completion_bh (opaque=0x555556a2d200) at block/linux-aio.c:138
#8  0x00005555557d5144 in aio_bh_poll (ctx=ctx@entry=0x5555569ce9a0) at async.c:85
#9  0x00005555557e4340 in aio_dispatch (ctx=ctx@entry=0x5555569ce9a0) at aio-posix.c:137
#10 0x00005555557e4562 in aio_poll (ctx=0x5555569ce9a0, blocking=<optimized out>) at aio-posix.c:248
#11 0x00005555556c5ae9 in iothread_run (opaque=0x5555569fc000) at iothread.c:44
#12 0x00007ffff6bc4dc5 in start_thread () from /lib64/libpthread.so.0
#13 0x00007ffff07dc1bd in clone () from /lib64/libc.so.6

Expected results:
QEMU should print an error message and refuse to hotplug the virtio-blk data-plane device while the drive-mirror job is running; this prevents data corruption.
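The expected behaviour resembles QEMU's existing "op blocker" mechanism, where a running block job marks the drive as busy and conflicting operations are rejected up front. The following is a simplified Python model of that refusal logic, not the actual QEMU code (class and function names are mine):

```python
class BlockDriverState:
    """Toy model of a QEMU drive that tracks operation blockers."""
    def __init__(self, name):
        self.name = name
        self.op_blockers = set()

def drive_mirror_start(bs):
    # A running block job blocks conflicting operations on the drive.
    bs.op_blockers.add("block job: drive-mirror")

def device_add_dataplane(bs):
    # Expected behaviour: refuse the hotplug while the drive is blocked,
    # instead of attaching and crashing later on system_reset.
    if bs.op_blockers:
        raise RuntimeError(
            "Device '%s' is busy: %s" % (bs.name, ", ".join(bs.op_blockers)))
    return "virtio-blk dataplane attached to %s" % bs.name
```

Once the mirror completes and its blocker is removed, the same hotplug request would succeed.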

Comment 3 Ademar Reis 2015-08-07 17:11:26 UTC
I'm confused about the reason for this clone... What's different from the original report (Bug 1140001)? Is it the same problem coming back (a regression)?

Comment 4 Stefan Hajnoczi 2015-09-02 09:53:51 UTC
This is a new issue.  It's probably not critical for 7.2 because libvirt shouldn't trigger it.

I wasn't able to reproduce this exact crash, but I hit a related one.

The problem is that although block jobs and dataplane can be used at the same time, the transition into and out of dataplane doesn't synchronize with running block jobs. This can lead to crashes because the block job may still be using the old AioContext (a race condition).
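The race described above can be pictured with a simplified model (all names here are hypothetical, not QEMU code): the job caches the drive's AioContext when it starts, and a dataplane attach moves the drive to the iothread's context without telling the job, leaving the job working through a stale context:

```python
class AioContext:
    def __init__(self, name):
        self.name = name

class Drive:
    def __init__(self, ctx):
        self.ctx = ctx

class MirrorJob:
    def __init__(self, drive):
        self.drive = drive
        self.ctx = drive.ctx   # job caches the context it started in

def attach_dataplane(drive, iothread_ctx):
    # Moves the drive to the iothread without synchronizing with
    # any running block job -- the unsafe transition in question.
    drive.ctx = iothread_ctx

main_loop = AioContext("main-loop")
iothread0 = AioContext("iothread0")
drive = Drive(main_loop)
job = MirrorJob(drive)
attach_dataplane(drive, iothread0)
# The job's cached context is now stale: completions run in iothread0
# while the job still submits I/O in the main-loop context.
stale = job.ctx is not drive.ctx
```

A correct transition would pause the job, switch the context, and update the job's reference before resuming, which is the kind of synchronization the fix for the duplicate bug provides.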

Comment 6 Stefan Hajnoczi 2016-06-21 13:51:14 UTC
This BZ has the same root cause as bz#1265179. I have already posted a backport for that bug.

*** This bug has been marked as a duplicate of bug 1265179 ***