Per the upstream patch, libvirt expects that all three commands are changed as a group. Either we have the old synchronous cancel interface with no partial streaming:

  block_job_set_speed
  block_job_cancel
  block_stream

or we have the new asynchronous cancel interface and partial streaming:

  block-job-set-speed
  block-job-cancel
  block-stream
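For what it's worth, a client that cannot rely on version numbers can ask the monitor directly which spelling it offers via query-commands. Here is a minimal sketch in Python, assuming a QMP socket at the hypothetical path /tmp/qmp-sock (created with e.g. -qmp unix:/tmp/qmp-sock,server,nowait); it uses only the standard qmp_capabilities and query-commands commands.

#!/usr/bin/env python
# Minimal sketch: ask the monitor which spelling it offers instead of
# guessing from version numbers.  The socket path is a placeholder.
import json
import socket

SOCK_PATH = "/tmp/qmp-sock"   # assumption: adjust to your -qmp socket

def qmp_cmd(f, sock, name, args=None):
    cmd = {"execute": name}
    if args:
        cmd["arguments"] = args
    sock.sendall(json.dumps(cmd).encode() + b"\n")
    while True:                        # skip async events until a reply
        msg = json.loads(f.readline())
        if "return" in msg or "error" in msg:
            return msg

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(SOCK_PATH)
f = sock.makefile()

json.loads(f.readline())               # greeting banner
qmp_cmd(f, sock, "qmp_capabilities")   # leave capabilities negotiation

names = set(c["name"] for c in qmp_cmd(f, sock, "query-commands")["return"])
if "block-job-cancel" in names:
    print("new interface: asynchronous cancel, partial streaming")
elif "block_job_cancel" in names:
    print("old interface: synchronous cancel, no partial streaming")
sock.close()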
Upstream raised another issue where a semantic difference would be desirable: https://lists.gnu.org/archive/html/qemu-devel/2012-04/msg02273.html

If upstream indeed goes with block-job-set-speed being callable at any time, and not just while a block job is active, then this semantic change from block_job_set_speed would be another thing that libvirt would like to differentiate on based on the spelling of the monitor command. I'm not sure whether to clone this into another BZ, but it should be fixed before both qemu 1.1 and RHEL 6.3 are released if we decide to go with this semantic change, to make libvirt's life easier. (I suppose that, as a mitigation, libvirt could blindly try to set the speed in advance and fall back to setting it after the job starts, if we cannot rely on the spelling of the command to tell the difference.)
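To spell out that mitigation, here is a rough sketch (not libvirt code) of the "set speed in advance, fall back to setting it after the job starts" idea over QMP; the socket path, device name, and speed value are placeholders.

#!/usr/bin/env python
# Rough sketch of the fallback described above: optimistically set the
# speed before the job exists; if that is rejected, start the job and set
# the speed afterwards.  Socket path, device name and speed are placeholders.
import json
import socket

SOCK_PATH = "/tmp/qmp-sock"           # assumption
DEVICE = "drive-virtio-disk0"         # example device
SPEED = 200 * 1024 * 1024             # 200M in bytes/s

def qmp_cmd(f, sock, name, args=None):
    cmd = {"execute": name}
    if args:
        cmd["arguments"] = args
    sock.sendall(json.dumps(cmd).encode() + b"\n")
    while True:                        # skip async events until a reply
        msg = json.loads(f.readline())
        if "return" in msg or "error" in msg:
            return msg

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(SOCK_PATH)
f = sock.makefile()
json.loads(f.readline())               # greeting banner
qmp_cmd(f, sock, "qmp_capabilities")

# Blindly try to set the speed before any job is active.
pre = qmp_cmd(f, sock, "block-job-set-speed",
              {"device": DEVICE, "value": SPEED})

qmp_cmd(f, sock, "block-stream", {"device": DEVICE})

if "error" in pre:
    # Old semantics: speed can only be set on an already-active job.
    qmp_cmd(f, sock, "block-job-set-speed",
            {"device": DEVICE, "value": SPEED})
sock.close()

That said, distinguishing the interfaces by the spelling of the command, as requested in this bug, is still preferable, since the fallback has to interpret an error reply.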
Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
No documentation needed
Verified this issue with steps and environment as follows:

# uname -r;rpm -q qemu-kvm-rhev
2.6.32-262.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.282.el6.x86_64

Tested both HMP and QMP; the commands work fine.

HMP:
(qemu) snapshot_blkdev drive-virtio-disk0 /root/sn1 qcow2
(qemu) block-stream drive-virtio-disk0
(qemu) block-job-set-speed drive-virtio-disk0 200M
(qemu) info block-jobs
Streaming device drive-virtio-disk0: Completed 679477248 of 21474836480 bytes, speed limit 209715200 bytes/s
(qemu) block-job-cancel drive-virtio-disk0
(qemu) info block-jobs
No active jobs

QMP:
{ 'execute' : 'qmp_capabilities' }
{"return": {}}
{ "execute": "blockdev-snapshot-sync", "arguments": { "device": "drive-virtio-disk0", "snapshot-file": "/root/sn1", "format": "qcow2" } }
{"return": {}}
{ "execute": "block-stream", "arguments": { "device": "drive-virtio-disk0" } }
{"return": {}}
{ "execute": "block-job-set-speed", "arguments": { "device": "drive-virtio-disk0", "value": 1024 } }
{"return": {}}
{ "execute" : "query-block-jobs", "arguments" : {} }
{"return": [{"device": "drive-virtio-disk0", "len": 21474836480, "offset": 605028352, "speed": 1024, "type": "stream"}]}
{ "execute": "block-job-cancel", "arguments": { "device": "drive-virtio-disk0" } }
{"timestamp": {"seconds": 1335426669, "microseconds": 715579}, "event": "BLOCK_JOB_CANCELLED", "data": {"device": "drive-virtio-disk0", "len": 21474836480, "offset": 656932864, "speed": 1024, "type": "stream"}}
{"return": {}}
{ "execute" : "query-block-jobs", "arguments" : {} }
{"return": []}

So, this bug has been fixed.
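For anyone scripting the same verification, here is a small sketch of a client that drives the asynchronous cancel and waits for the BLOCK_JOB_CANCELLED event shown in the transcript above; the socket path and device name are placeholders.

#!/usr/bin/env python
# Small sketch of a client consuming the asynchronous interface verified
# above: block-job-cancel returns immediately, and completion is signalled
# by a BLOCK_JOB_CANCELLED event.  Socket path and device are placeholders.
import json
import socket

SOCK_PATH = "/tmp/qmp-sock"           # assumption
DEVICE = "drive-virtio-disk0"

def send(sock, name, args=None):
    cmd = {"execute": name}
    if args:
        cmd["arguments"] = args
    sock.sendall(json.dumps(cmd).encode() + b"\n")

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(SOCK_PATH)
f = sock.makefile()

json.loads(f.readline())               # greeting banner
send(sock, "qmp_capabilities")
json.loads(f.readline())               # {"return": {}}

send(sock, "block-job-cancel", {"device": DEVICE})

# Read messages until the cancellation event for our device arrives; the
# command's own {"return": {}} and unrelated events are simply skipped.
while True:
    msg = json.loads(f.readline())
    if "error" in msg:
        raise RuntimeError(msg["error"])
    if msg.get("event") == "BLOCK_JOB_CANCELLED" and \
       msg["data"]["device"] == DEVICE:
        print("cancelled at offset %d of %d bytes" %
              (msg["data"]["offset"], msg["data"]["len"]))
        break
sock.close()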
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2012-0746.html