Bug 1262277 - QEMU quits when block-mirroring two disks with data-plane enabled
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev (Show other bugs)
7.2
x86_64 Linux
high Severity high
: rc
: ---
Assigned To: Jeff Cody
QA Contact: Qianqian Zhu
Depends On:
Blocks:
Reported: 2015-09-11 06:37 EDT by weliao
Modified: 2017-08-01 23:24 EDT (History)
9 users

See Also:
Fixed In Version: qemu-kvm-rhev-2.8.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-08-01 19:29:42 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description weliao 2015-09-11 06:37:32 EDT
Description of problem:
Boot a Windows 7 guest with data-plane enabled, start a block mirror on disk2, hot-plug disk4, start a block mirror on disk4, then reboot the guest; QEMU core dumps.

Version-Release number of selected component (if applicable):
host:
AMD Opteron(tm) Processor 6376
3.10.0-314.el7.x86_64
qemu-kvm-rhev-2.3.0-22.el7.x86_64


How reproducible:
60%

Steps to Reproduce:
1. Launch a KVM guest with data-plane.
2. Connect to QMP and start a block mirror:
{ "execute": "drive-mirror", "arguments": { "device": "drive-data-disk0", "target": "/root/sn1", "format": "qcow2", "mode": "absolute-paths", "sync": "full", "speed": 1000000000, "on-source-error": "stop", "on-target-error": "stop" } }
3. Hot-plug a disk with data-plane:
(qemu) __com.redhat_drive_add file=/abc/data-disk4,id=drive-data-disk4,format=raw,cache=none,aio=native,werror=stop,rerror=stop
(qemu) device_add virtio-blk-pci,scsi=off,bus=pci.0,addr=0x11,drive=drive-data-disk4,id=data-disk4,iothread=iothread0
4. Start a block mirror for drive-data-disk4:
{ "execute": "drive-mirror", "arguments": { "device": "drive-data-disk4", "target": "/root/sn4", "format": "raw", "mode": "absolute-paths", "sync": "full", "speed": 1000000000, "on-source-error": "stop", "on-target-error": "stop" } }
5. Reboot the guest.
(Sometimes QEMU core dumps even without a reboot.)
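For reference, the QMP steps above can be scripted rather than typed by hand. This is a minimal sketch, not part of the original report: the helper names (`qmp_cmd`, `send_qmp`) are hypothetical, the tcp:0:4444 endpoint comes from the -qmp option in the command line below, and the drive-mirror arguments match step 2:

```python
import json
import socket

def qmp_cmd(execute, **arguments):
    """Build a QMP command dict like the ones pasted in the steps above."""
    return {"execute": execute, "arguments": arguments}

# drive-mirror command from step 2 of the reproducer.
mirror = qmp_cmd(
    "drive-mirror",
    device="drive-data-disk0",
    target="/root/sn1",
    format="qcow2",
    mode="absolute-paths",
    sync="full",
    speed=1000000000,
    **{"on-source-error": "stop", "on-target-error": "stop"},
)

def send_qmp(host, port, cmd):
    """Open the QMP socket, negotiate capabilities, send one command."""
    with socket.create_connection((host, port)) as s:
        f = s.makefile("rw")
        f.readline()                                    # greeting banner
        f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
        f.flush()
        f.readline()                                    # {"return": {}}
        f.write(json.dumps(cmd) + "\n")
        f.flush()
        return json.loads(f.readline())

if __name__ == "__main__":
    # Matches the -qmp tcp:0:4444,server,nowait option in this report.
    print(send_qmp("localhost", 4444, mirror))
```
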
Actual results:
QEMU core dumps.

Expected results:
QEMU and the guest keep working normally.

Additional info:
/usr/libexec/qemu-kvm -name Win7.0 -cpu Opteron_G5,enforce -m 4096 -smp 4 -object iothread,id=iothread0 \
-drive file=/abc/sn1,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native,werror=stop,rerror=stop \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x2,drive=drive-virtio-disk0,id=virtio-disk0,iothread=iothread0 -boot menu=on \
-netdev tap,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:55:00:77:89:8d,bus=pci.0,addr=0x3 \
-device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vgamem_mb=16,bus=pci.0,addr=0x4 \
-drive file=/abc/data-disk2,if=none,id=drive-data-disk0,format=qcow2,cache=none,aio=native,werror=stop,rerror=stop,bps=1024000,bps_rd=0,bps_wr=0,iops=1024000,iops_rd=0,iops_wr=0 \
-device virtio-blk-pci,drive=drive-data-disk0,id=data-disk0,iothread=iothread0,bus=pci.0,addr=0x7 \
-spice port=6600,disable-ticketing, -monitor stdio -qmp tcp:0:4444,server,nowait

gdb debug info:
(gdb) bt full
#0  bdrv_co_do_rw (opaque=0x0) at block.c:4993
        acb = 0x0
        bs = <optimized out>
#1  0x00005555557ed82a in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at coroutine-ucontext.c:80
        self = 0x555559858240
        co = 0x555559858240
#2  0x00007ffff0700110 in ?? () from /lib64/libc.so.6
No symbol table info available.
#3  0x00007fffe7963f40 in ?? ()
No symbol table info available.
#4  0x0000000000000000 in ?? ()
No symbol table info available.
Comment 3 Jeff Cody 2017-04-18 12:43:40 EDT
This was fixed in QEMU version 2.8.  Oddly enough, while testing this I found a regression for 2.9-rc4, with a different root cause but similar manifestation (although this regression was a deadlock, not a segfault). This regression has now been fixed for -rc5, so leaving the fixed-in-version as 2.8.
Comment 5 Qianqian Zhu 2017-05-04 01:53:41 EDT
Reproduced on:
qemu-kvm-rhev-2.3.0-22.el7.x86_64
kernel-3.10.0-643.el7.x86_64

Steps:
1. Launch a guest with data-plane enabled on two disks:
/usr/libexec/qemu-kvm -name rhel7_4-9343 -m 1G -smp 2 -object iothread,id=iothread0 -drive file=/home/kvm_autotest_root/images/rhel74-64-virtio.qcow2,format=qcow2,if=none,cache=none,media=disk,werror=stop,rerror=stop,id=drive-0 -device virtio-blk-pci,drive=drive-0,id=virtio-blk-0,iothread=iothread0,bootindex=0 -drive file=/home/test,format=qcow2,if=none,cache=none,aio=native,id=drive-virtio-blk1,werror=stop,rerror=stop -device virtio-blk-pci,drive=drive-virtio-blk1,id=virtio-blk1,iothread=iothread0,bus=pci.0,addr=0x14,serial="QEMU-DISK2" -monitor stdio -qmp tcp:0:5555,server,nowait -vnc :3 -netdev tap,id=hostnet0,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,mac=70:e2:84:14:0e:15

2. Start a block mirror:
{ "execute": "drive-mirror", "arguments": { "device": "drive-virtio-blk1", "target": "/home/mirror", "format": "raw", "mode": "absolute-paths", "sync": "full", "speed": 1000000000, "on-source-error": "stop", "on-target-error": "stop" } }

3. (qemu) system_reset 

Result:
Both QEMU and the guest hang. After the mirror job becomes ready, issuing '(qemu) quit' makes QEMU core dump.
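The "mirror job ready" state mentioned above can be detected via the BLOCK_JOB_READY QMP event or by polling query-block-jobs, whose reply entries carry a `ready` field. A small sketch of the polling check (the `mirror_ready` helper and the sample reply are illustrative, not from this report):

```python
def mirror_ready(jobs, device):
    """True once the job for `device` reports ready (source and target in sync)."""
    return any(j.get("device") == device and j.get("ready") for j in jobs)

# Shape of a query-block-jobs reply once the mirror has converged:
reply = {"return": [{"type": "mirror", "device": "drive-virtio-blk1",
                     "ready": True, "len": 1073741824, "offset": 1073741824}]}
print(mirror_ready(reply["return"], "drive-virtio-blk1"))  # True
```
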

Verified on:
qemu-kvm-rhev-2.9.0-1.el7.x86_64
kernel-3.10.0-640.el7.x86_64

Steps same as above.
Result:
QEMU works correctly and the guest reboots successfully.

Moving to VERIFIED per above results.
Comment 7 errata-xmlrpc 2017-08-01 19:29:42 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:2392
