Bug 1329543
| Summary: | live merge - qemu-kvm hangs in aio_bh_poll | | |
| --- | --- | --- | --- |
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Marcel Kolaja <mkolaja> |
| Component: | qemu-kvm-rhev | Assignee: | Jeff Cody <jcody> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | urgent | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.2 | CC: | acanan, ahino, alitke, amureini, bcholler, bmcclain, chayang, cshao, cww, dougsland, fdeutsch, gveitmic, gwatson, huding, jcody, jherrman, juzhang, knoel, michal.skrivanek, mkalinin, mst, pax, pezhang, qizhu, sbonazzo, troels, virt-maint, xfu, ykaul |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | qemu-kvm-rhev-2.3.0-31.el7_2.14 | Doc Type: | Bug Fix |
| Doc Text: | During the block-stream job of a live merge operation, the QEMU process in some cases became unresponsive. This update improves the handling of block jobs, which ensures that QEMU stays responsive during block-stream jobs as expected. | Story Points: | --- |
| Clone Of: | 1319400 | | |
| | 1346429 (view as bug list) | Environment: | |
| Last Closed: | 2016-06-29 16:22:01 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1319400 | | |
| Bug Blocks: | 1346429, 1349525 | | |
Description
Marcel Kolaja 2016-04-22 08:11:14 UTC
Fix included in qemu-kvm-rhev-2.3.0-31.el7_2.14

Reproduced with:
qemu-kvm-rhev-2.3.0-31.el7_2.11.x86_64
kernel-3.10.0-401.el7.x86_64

Steps:

1. Launch the guest:

```
/usr/libexec/qemu-kvm -cpu host -m 1024 \
    -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 \
    -drive file=/home/RHEL-Server-7.3-64-virtio.raw,format=raw,if=none,id=drive-virtio-disk0,werror=stop,rerror=stop \
    -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0 \
    -vnc :1 -monitor stdio \
    -netdev tap,id=hostnet0,vhost=on \
    -device virtio-net-pci,netdev=hostnet0,id=net0,mac=3C:D9:2B:09:AB:44,bus=pci.0,addr=0x4 \
    -qmp tcp:0:5555,server,nowait \
    -drive file=/home/disk1,if=none,id=drive-virtio-disk1,format=qcow2,serial=531fa1b1-fbd0-42ca-9f6b-cf764c91f8a9,cache=none,werror=stop,rerror=stop,aio=threads \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1 \
    -drive file=/home/disk2,if=none,id=drive-virtio-disk2,format=qcow2,serial=19644ad4-fe32-4b12-8bfb-d75a16fec85a,cache=none,werror=stop,rerror=stop,aio=threads \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0xa,drive=drive-virtio-disk2,id=virtio-disk2 \
    -drive file=/home/disk3,if=none,id=drive-virtio-disk3,format=qcow2,serial=f105f1cc-ab59-4c62-a465-e3ef33886c17,cache=none,werror=stop,rerror=stop,aio=threads \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0xb,drive=drive-virtio-disk3,id=virtio-disk3 \
    -drive file=/home/disk0,if=none,id=drive-virtio-disk4,format=qcow2,serial=f8ebfb39-2ac6-4b87-b193-4204d1854edc,cache=none,werror=stop,rerror=stop,aio=threads \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0xc,drive=drive-virtio-disk4,id=virtio-disk4
```

2. Create snapshots of all five drives in a single QMP transaction:

```
{ "execute": "transaction", "arguments": { "actions": [
  { "type": "blockdev-snapshot-sync", "data": { "device": "drive-virtio-disk0", "snapshot-file": "/home/disk0-sn1", "format": "qcow2" } },
  { "type": "blockdev-snapshot-sync", "data": { "device": "drive-virtio-disk1", "snapshot-file": "/home/disk1-sn1", "format": "qcow2" } },
  { "type": "blockdev-snapshot-sync", "data": { "device": "drive-virtio-disk2", "snapshot-file": "/home/disk2-sn1", "format": "qcow2" } },
  { "type": "blockdev-snapshot-sync", "data": { "device": "drive-virtio-disk3", "snapshot-file": "/home/disk3-sn1", "format": "qcow2" } },
  { "type": "blockdev-snapshot-sync", "data": { "device": "drive-virtio-disk4", "snapshot-file": "/home/disk4-sn1", "format": "qcow2" } }
] } }
```

3. Start block-stream jobs on several drives simultaneously:

```
{ "execute": "block-stream", "arguments": { "device": "drive-virtio-disk0", "on-error": "report" } }
{"return": {}}
{ "execute": "block-stream", "arguments": { "device": "drive-virtio-disk1", "on-error": "report" } }
{"return": {}}
```

Result: QEMU hangs for a while, until all block jobs complete.

Verified with:
qemu-kvm-rhev-2.3.0-31.el7_2.14.x86_64
kernel-3.10.0-401.el7.x86_64

Steps: same as above.

Results: QEMU does not hang; a job started later can finish before an earlier one, and info block reports correct information before all block jobs have completed (a QMP polling sketch and the observed output follow below).
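The QMP socket opened in step 1 (-qmp tcp:0:5555,server,nowait) can also be watched from a small client while the stream jobs run. The following is a minimal sketch, not part of the original report: it assumes the socket is reachable on localhost port 5555 and that the block-stream jobs from step 3 have already been started. It performs the QMP capabilities handshake, then polls query-block-jobs and prints any asynchronous events (such as BLOCK_JOB_COMPLETED) that arrive in between. On the affected build the monitor could stay unresponsive until the jobs finished; on the fixed build the loop should keep reporting progress.

```python
#!/usr/bin/env python3
# Minimal QMP polling sketch (illustrative, not from the original report).
# Assumes the QMP server from step 1 (-qmp tcp:0:5555,server,nowait) is
# reachable on localhost and that block-stream jobs are already running.
import json
import socket
import time

HOST, PORT = "127.0.0.1", 5555  # assumed location of the QMP socket


def qmp(sock, reader, command, **arguments):
    """Send one QMP command and return its reply, printing any events seen."""
    sock.sendall(json.dumps({"execute": command,
                             "arguments": arguments}).encode() + b"\n")
    while True:
        msg = json.loads(reader.readline())
        if "event" in msg:  # e.g. BLOCK_JOB_COMPLETED arrives asynchronously
            print("event:", msg["event"], msg.get("data", {}))
            continue
        return msg


with socket.create_connection((HOST, PORT)) as sock:
    reader = sock.makefile("r")
    json.loads(reader.readline())          # consume the QMP greeting
    qmp(sock, reader, "qmp_capabilities")  # enter command mode

    # Poll job progress until no block jobs are left; a responsive monitor
    # should answer each query promptly even while streaming is in progress.
    while True:
        jobs = qmp(sock, reader, "query-block-jobs").get("return", [])
        if not jobs:
            break
        for job in jobs:
            print("%s: %d/%d bytes" % (job["device"], job["offset"], job["len"]))
        time.sleep(2)
```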
{"timestamp": {"seconds": 1463981812, "microseconds": 379183}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk1", "len": 1073741824, "offset": 1073741824, "speed": 0, "type": "stream"}} {"timestamp": {"seconds": 1463981996, "microseconds": 325298}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk0", "len": 21474836480, "offset": 21474836480, "speed": 0, "type": "stream"}} (qemu) info block drive-virtio-disk0: /home/disk0-sn1 (qcow2) Cache mode: writeback Backing file: /home/RHEL-Server-7.3-64-virtio.raw (chain depth: 1) drive-virtio-disk1: /home/disk1-sn1 (qcow2) Cache mode: writeback, direct Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1371 |