Bug 1329543 - live merge - qemu-kvm hangs in aio_bh_poll
Summary: live merge - qemu-kvm hangs in aio_bh_poll
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Assignee: Jeff Cody
QA Contact: Virtualization Bugs
Depends On: 1319400
Blocks: 1346429 1349525
Reported: 2016-04-22 08:11 UTC by Marcel Kolaja
Modified: 2019-11-14 07:50 UTC (History)
29 users

Fixed In Version: qemu-kvm-rhev-2.3.0-31.el7_2.14
Doc Type: Bug Fix
Doc Text:
During the block-stream job of a live merge operation, the QEMU process in some cases became unresponsive. This update improves the handling of block jobs, which ensures that QEMU stays responsive during block-stream jobs as expected.
Clone Of: 1319400
: 1346429 (view as bug list)
Last Closed: 2016-06-29 16:22:01 UTC
Target Upstream Version:

Attachments

System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1371 normal SHIPPED_LIVE qemu-kvm-rhev bug fix update 2016-06-29 20:19:11 UTC

Description Marcel Kolaja 2016-04-22 08:11:14 UTC
This bug has been copied from bug #1319400 and has been proposed
to be backported to 7.2 z-stream (EUS).

Comment 3 Miroslav Rezanina 2016-05-16 10:17:19 UTC
Fix included in qemu-kvm-rhev-2.3.0-31.el7_2.14

Comment 5 Qianqian Zhu 2016-05-23 06:03:34 UTC
Reproduced with:

1. Launch guest:
/usr/libexec/qemu-kvm -cpu host -m 1024 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=/home/RHEL-Server-7.3-64-virtio.raw,format=raw,if=none,id=drive-virtio-disk0,werror=stop,rerror=stop -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0 -vnc :1 -monitor stdio -netdev tap,id=hostnet0,vhost=on -device virtio-net-pci,netdev=hostnet0,id=net0,mac=3C:D9:2B:09:AB:44,bus=pci.0,addr=0x4 -qmp tcp:0:5555,server,nowait -drive file=/home/disk1,if=none,id=drive-virtio-disk1,format=qcow2,serial=531fa1b1-fbd0-42ca-9f6b-cf764c91f8a9,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1 -drive file=/home/disk2,if=none,id=drive-virtio-disk2,format=qcow2,serial=19644ad4-fe32-4b12-8bfb-d75a16fec85a,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0xa,drive=drive-virtio-disk2,id=virtio-disk2 -drive file=/home/disk3,if=none,id=drive-virtio-disk3,format=qcow2,serial=f105f1cc-ab59-4c62-a465-e3ef33886c17,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0xb,drive=drive-virtio-disk3,id=virtio-disk3 -drive file=/home/disk0,if=none,id=drive-virtio-disk4,format=qcow2,serial=f8ebfb39-2ac6-4b87-b193-4204d1854edc,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0xc,drive=drive-virtio-disk4,id=virtio-disk4

2. Create snapshots of all disks in a single transaction:
{ "execute": "transaction", "arguments": { "actions": [
  { "type": "blockdev-snapshot-sync", "data": { "device": "drive-virtio-disk0", "snapshot-file": "/home/disk0-sn1", "format": "qcow2" } },
  { "type": "blockdev-snapshot-sync", "data": { "device": "drive-virtio-disk1", "snapshot-file": "/home/disk1-sn1", "format": "qcow2" } },
  { "type": "blockdev-snapshot-sync", "data": { "device": "drive-virtio-disk2", "snapshot-file": "/home/disk2-sn1", "format": "qcow2" } },
  { "type": "blockdev-snapshot-sync", "data": { "device": "drive-virtio-disk3", "snapshot-file": "/home/disk3-sn1", "format": "qcow2" } },
  { "type": "blockdev-snapshot-sync", "data": { "device": "drive-virtio-disk4", "snapshot-file": "/home/disk4-sn1", "format": "qcow2" } } ] } }
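The transaction above can also be generated programmatically; a minimal Python sketch (the build_snapshot_transaction helper is hypothetical, not part of QEMU, and only illustrates the shape of the command):

```python
def build_snapshot_transaction(devices):
    """devices: iterable of (device_id, snapshot_file) pairs.

    Builds the QMP "transaction" command that creates an external
    qcow2 snapshot on every listed drive atomically.
    """
    actions = [
        {"type": "blockdev-snapshot-sync",
         "data": {"device": dev, "snapshot-file": path, "format": "qcow2"}}
        for dev, path in devices
    ]
    return {"execute": "transaction", "arguments": {"actions": actions}}


# The five drives from step 1, mapped to the snapshot files from step 2.
cmd = build_snapshot_transaction(
    [("drive-virtio-disk%d" % i, "/home/disk%d-sn1" % i) for i in range(5)]
)
```

The resulting dict can be serialized with json.dumps() and written to the -qmp tcp:0:5555 socket opened in step 1.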

3. Start block-stream jobs on multiple devices simultaneously:
{ "execute": "block-stream", "arguments": { "device": "drive-virtio-disk0", "on-error": "report" } }
{"return": {}}
{ "execute": "block-stream", "arguments": { "device": "drive-virtio-disk1", "on-error": "report" } }
{"return": {}}

QEMU hangs until all block jobs complete.
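The block-stream commands and the completion events they produce have a fixed shape; a minimal sketch of building the command and recognizing a finished job (the helper names are hypothetical, and the sample event mirrors the format QEMU emits):

```python
import json


def block_stream_cmd(device):
    # Shape of the QMP block-stream command used in step 3;
    # "on-error": "report" makes the job fail and report on I/O errors.
    return {"execute": "block-stream",
            "arguments": {"device": device, "on-error": "report"}}


def job_finished(event):
    # A stream job is done when BLOCK_JOB_COMPLETED reports that the
    # job's offset has reached its total length.
    return (event.get("event") == "BLOCK_JOB_COMPLETED"
            and event["data"]["offset"] == event["data"]["len"])


# Event of the form QEMU emits on the QMP socket when a stream job finishes.
sample = json.loads(
    '{"event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk1",'
    ' "len": 1073741824, "offset": 1073741824, "speed": 0, "type": "stream"}}'
)
```

With the bug present, a responsive QMP monitor would still accept these commands, but the process itself stalls in aio_bh_poll until every job completes.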

Verified with:

Same as above.

QEMU does not hang, a later job can finish before an earlier one, and "info block" reports correct information before all block jobs complete.

{"timestamp": {"seconds": 1463981812, "microseconds": 379183}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk1", "len": 1073741824, "offset": 1073741824, "speed": 0, "type": "stream"}}
{"timestamp": {"seconds": 1463981996, "microseconds": 325298}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk0", "len": 21474836480, "offset": 21474836480, "speed": 0, "type": "stream"}}

(qemu) info block
drive-virtio-disk0: /home/disk0-sn1 (qcow2)
    Cache mode:       writeback
    Backing file:     /home/RHEL-Server-7.3-64-virtio.raw (chain depth: 1)

drive-virtio-disk1: /home/disk1-sn1 (qcow2)
    Cache mode:       writeback, direct

Comment 17 errata-xmlrpc 2016-06-29 16:22:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

