Bug 1524254 - Can't resume block stream job after remote glusterfs server reboot
Summary: Can't resume block stream job after remote glusterfs server reboot
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: Virtualization Maintenance
QA Contact: aihua liang
URL:
Whiteboard:
Depends On: 1333358
Blocks:
 
Reported: 2017-12-11 04:44 UTC by Qianqian Zhu
Modified: 2020-09-16 04:09 UTC
CC List: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-16 04:09:18 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Qianqian Zhu 2017-12-11 04:44:28 UTC
Description of problem:
Rebooting the remote glusterfs server while a block stream job is in progress causes the job to finish with a BLOCK_IO_ERROR ("No medium found"). After the glusterfs server boots up and the volume is connected again, retrying the block stream still reports "No medium found", and in rare cases the guest attempts to reboot itself but fails.

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.10.0-11.el7.x86_64
kernel-3.10.0-808.el7.x86_64

How reproducible:
The reconnect failure occurs more than 60% of the time; the failed guest reboot was hit only once.

Steps to Reproduce:
1. Launch a guest with two disks:
/usr/libexec/qemu-kvm \
-M q35,accel=kvm,kernel-irqchip=split \
-m 4G \
-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0 \
-drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=writethrough,format=qcow2,file=/home/kvm_autotest_root/images/win2012-64r2-virtio-scsi.qcow2 \
-device scsi-hd,id=image1,drive=drive_image1 \
-device virtio-net-pci,mac=9a:89:8a:8b:8c:8d,id=idTqeiKU,netdev=idU17IIx,bus=pcie.0  \
-netdev tap,id=idU17IIx  \
-cpu 'SandyBridge',+kvm_pv_unhalt  \
-vnc :0   \
-enable-kvm \
-qmp tcp:localhost:5555,server,nowait \
-device virtio-scsi-pci,id=virtio_scsi_pci1,bus=pcie.0 \
-drive id=drive_image2,if=none,snapshot=off,aio=threads,cache=writethrough,format=qcow2,file=/home/data1.qcow2 \
-device scsi-hd,id=image2,drive=drive_image2 \
-monitor stdio

2. Live snapshot the non-root disk to a remote glusterfs volume:
{ "execute": "blockdev-snapshot-sync", "arguments": { "device": "drive_image2","snapshot-file": "gluster://10.66.8.119/gv0/sn1","mode": "absolute-paths", "format": "qcow2" } }

3. Block stream the disk:
{ "execute": "block-stream", "arguments": { "device": "drive_image2"}}

4. Reboot glusterfs server

5. Wait for the glusterfs server to boot up, then check the glusterfs volume status:

# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.66.8.119:/home/brick1/gv0          49154     0          Y       2024 
 
Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks


#  qemu-img info gluster://10.66.8.119/gv0/sn1
image: gluster://10.66.8.119/gv0/sn1
file format: qcow2
virtual size: 2.0G (2147483648 bytes)
disk size: 502M
cluster_size: 65536
backing file: /home/data1.qcow2
backing file format: qcow2
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false


6. Block stream again:
{ "execute": "block-stream", "arguments": { "device": "drive_image2"}}

7. Quit qemu

Actual results:
After step 4:
{"timestamp": {"seconds": 1512963224, "microseconds": 895990}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_image2", "operation": "read", "action": "report"}}
{"timestamp": {"seconds": 1512963224, "microseconds": 896312}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive_image2", "len": 2147483648, "offset": 536870912, "speed": 0, "type": "stream", "error": "Transport endpoint is not connected"}}
{"timestamp": {"seconds": 1512963224, "microseconds": 907807}, "event": "BLOCK_IO_ERROR", "data": {"device": "drive_image2", "nospace": false, "__com.redhat_reason": "eother", "node-name": "#block1046", "reason": "No medium found", "operation": "read", "action": "report"}}


HMP monitor (gluster client log output):
[2017-12-11 03:33:43.895159] C [rpc-clnt-ping.c:166:rpc_clnt_ping_timer_expired] 0-gv0-client-0: server 10.66.8.119:49154 has not responded in the last 42 seconds, disconnecting.
[2017-12-11 03:33:43.895511] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7ea8cc6d8b] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f7ea8a8bd9e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f7ea8a8bebe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7f7ea8a8d640] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7f7ea8a8e130] ))))) 0-gv0-client-0: forced unwinding frame type(GlusterFS 3.3) op(FSYNC(16)) called at 2017-12-11 03:32:31.121093 (xid=0x406)
[2017-12-11 03:33:43.895696] E [rpc-clnt.c:350:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f7ea8cc6d8b] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f7ea8a8bd9e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f7ea8a8bebe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7f7ea8a8d640] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7f7ea8a8e130] ))))) 0-gv0-client-0: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2017-12-11 03:33:01.894000 (xid=0x407)
main-loop: WARNING: I/O thread spun for 1000 iterations


After step 6:
{"timestamp": {"seconds": 1512963247, "microseconds": 525635}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_image2", "operation": "read", "action": "report"}}
{"timestamp": {"seconds": 1512963247, "microseconds": 525684}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive_image2", "len": 2147483648, "offset": 536870912, "speed": 0, "type": "stream", "error": "No medium found"}}

After step 7:
(qemu) quit
qemu-kvm: Failed to flush the L2 table cache: No medium found
qemu-kvm: Failed to flush the refcount block cache: No medium found

Expected results:
QEMU should reconnect to the glusterfs server and be able to perform the block stream.

Additional info:

Comment 2 Ademar Reis 2017-12-29 20:07:10 UTC
See also bug 1524273

Comment 3 Gu Nini 2018-07-19 08:45:58 UTC
For user-space glusterfs, the bug was reproduced with block-commit; with block-stream, the second job could finish:
Host kernel: 3.10.0-918.el7.x86_64
Qemu-kvm-rhev: qemu-kvm-rhev-2.12.0-7.el7.x86_64


{"execute":"qmp_capabilities"}
{"return": {}}
{"timestamp": {"seconds": 1531987669, "microseconds": 380763}, "event": "VSERPORT_CHANGE", "data": {"open": true, "id": "qemu-ga0"}}

###################1. For block-stream test###################

{ "execute": "blockdev-snapshot-sync", "arguments": { "device": "drive_image2","snapshot-file": "gluster://10.73.196.179/gv1/sn1", "format": "qcow2" } }
{"return": {}}
{"timestamp": {"seconds": 1531987677, "microseconds": 248082}, "event": "NIC_RX_FILTER_CHANGED", "data": {"name": "id8e5D72", "path": "/machine/peripheral/id8e5D72/virtio-backend"}}
{ "execute": "blockdev-snapshot-sync", "arguments": { "device": "drive_image2","snapshot-file": "gluster://10.73.196.179/gv1/sn2", "format": "qcow2" } }
{"return": {}}
{"timestamp": {"seconds": 1531987682, "microseconds": 206774}, "event": "VNC_CONNECTED", "data": {"server": {"auth": "none", "family": "ipv4", "service": "5910", "host": "0.0.0.0", "websocket": false}, "client": {"family": "ipv4", "service": "36580", "host": "10.66.8.198", "websocket": false}}}
{"timestamp": {"seconds": 1531987682, "microseconds": 219903}, "event": "VNC_INITIALIZED", "data": {"server": {"auth": "none", "family": "ipv4", "service": "5910", "host": "0.0.0.0", "websocket": false}, "client": {"family": "ipv4", "service": "36580", "host": "10.66.8.198", "websocket": false}}}
{"execute":"block-stream","arguments":{"device":"drive_image2","speed":100}}
{"timestamp": {"seconds": 1531987715, "microseconds": 551637}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive_image2"}}
{"timestamp": {"seconds": 1531987715, "microseconds": 551774}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive_image2"}}
{"return": {}}

####Stop the glusterfs volume at this point

{ "execute": "block-job-set-speed", "arguments": { "device": "drive_image2","speed":1000000000}}
{"return": {}}
{"timestamp": {"seconds": 1531987785, "microseconds": 486801}, "event": "BLOCK_JOB_ERROR", "data": {"device": "drive_image2", "operation": "read", "action": "report"}}
{"timestamp": {"seconds": 1531987785, "microseconds": 486896}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "drive_image2"}}
{"timestamp": {"seconds": 1531987785, "microseconds": 486923}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive_image2", "len": 21474836480, "offset": 1245184, "speed": 1000000000, "type": "stream", "error": "Transport endpoint is not connected"}}
{"timestamp": {"seconds": 1531987785, "microseconds": 486956}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "drive_image2"}}
{"timestamp": {"seconds": 1531987785, "microseconds": 486976}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "drive_image2"}}

####Redo block-stream

{"execute":"block-stream","arguments":{"device":"drive_image2","speed":100}}
{"timestamp": {"seconds": 1531987802, "microseconds": 169487}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive_image2"}}
{"timestamp": {"seconds": 1531987802, "microseconds": 169577}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive_image2"}}
{"return": {}}
{ "execute": "block-job-set-speed", "arguments": { "device": "drive_image2","speed":1000000000}}
{"return": {}}
{"timestamp": {"seconds": 1531987829, "microseconds": 713501}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "drive_image2"}}
{"timestamp": {"seconds": 1531987829, "microseconds": 713543}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "drive_image2"}}
{"timestamp": {"seconds": 1531987829, "microseconds": 713573}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive_image2", "len": 21474836480, "offset": 21474836480, "speed": 1000000000, "type": "stream"}}
{"timestamp": {"seconds": 1531987829, "microseconds": 713595}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "drive_image2"}}
{"timestamp": {"seconds": 1531987829, "microseconds": 713613}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "drive_image2"}}

####The block-stream job finishes

###################2. For block-commit test###################

{ "execute": "blockdev-snapshot-sync", "arguments": { "device": "drive_image2","snapshot-file": "gluster://10.73.196.179/gv1/sn3","format":"qcow2"}}
{"return": {}}
{"execute":"block-commit","arguments":{"device":"drive_image2","speed":100}}
{"timestamp": {"seconds": 1531987971, "microseconds": 536140}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive_image2"}}
{"timestamp": {"seconds": 1531987971, "microseconds": 536226}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive_image2"}}
{"return": {}}

####Stop the glusterfs volume at this point

{"timestamp": {"seconds": 1531987978, "microseconds": 323495}, "event": "BLOCK_IO_ERROR", "data": {"device": "drive_image2", "nospace": false, "__com.redhat_reason": "eother", "node-name": "#block1976", "reason": "Transport endpoint is not connected", "operation": "write", "action": "report"}}

{"execute":"query-block-jobs"}
{"return": [{"auto-finalize": true, "io-status": "ok", "device": "drive_image2", "auto-dismiss": true, "busy": false, "len": 390332416, "offset": 65536, "status": "running", "paused": false, "speed": 100, "ready": false, "type": "commit"}]}
{"execute":"block-job-cancel","arguments":{"device":"drive_image2"}}
{"return": {}}
{"timestamp": {"seconds": 1531988032, "microseconds": 495826}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "drive_image2"}}
{"timestamp": {"seconds": 1531988032, "microseconds": 495873}, "event": "BLOCK_JOB_CANCELLED", "data": {"device": "drive_image2", "len": 390332416, "offset": 65536, "speed": 100, "type": "commit"}}
{"timestamp": {"seconds": 1531988032, "microseconds": 495898}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "drive_image2"}}
{"timestamp": {"seconds": 1531988032, "microseconds": 495916}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "drive_image2"}}

####Redo block-commit

{"execute":"block-commit","arguments":{"device":"drive_image2","speed":100}}
{"error": {"class": "GenericError", "desc": "Device has no medium"}}

Comment 7 Ademar Reis 2019-06-12 07:54:59 UTC
This was added to 8.0.1 but it should be in the backlog instead. Fixing.

Comment 10 Stefano Garzarella 2019-07-08 14:25:34 UTC
I'm able to reproduce this on upstream QEMU, and I think BZ#1524273 is related to this bug.
The difference is that in this case QEMU does not crash; I think that depends on the use of an iothread in BZ#1524273.
This bug may be what triggers BZ#1524273.

I will investigate the cause of this behavior, which could also be inside the gluster library, since there is this error:
[2019-07-08 13:45:48.799735] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x131)[0x7f61fba57131] (--> /lib64/libgfrpc.so.0(+0xda01)[0x7f61fba1ca01] (--> /lib64/libgfrpc.so.0(+0xdb22)[0x7f61fba1cb22] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x94)[0x7f61fba1e0b4] (--> /lib64/libgfrpc.so.0(+0xfc50)[0x7f61fba1ec50] ))))) 0-gv0-client-0: forced unwinding frame type(GlusterFS 4.x v1) op(FSYNC(16)) called at 2019-07-08 13:45:48.777769 (xid=0x44a2)

Comment 11 Stefano Garzarella 2019-08-07 09:48:55 UTC
The issue is in qemu_gluster_co_flush_to_disk() [1] [qemu/block/gluster.c].
If glfs_fsync_async() fails, we close the connection and set drv to NULL, because some versions of Gluster do not retain their cache after an fsync failure.
There is a good comment in the code that explains why we do this.
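
For reference, here is a simplified sketch of that flush path (abridged from the upstream function referenced at [1]; not the verbatim code, details omitted):

    /* block/gluster.c (abridged): flush via glfs_fsync_async(); on failure,
     * drop the connection and mark the BDS unusable, because some Gluster
     * versions do not retain their write-behind cache after a failed fsync. */
    static coroutine_fn int qemu_gluster_co_flush_to_disk(BlockDriverState *bs)
    {
        int ret;
        GlusterAIOCB acb;
        BDRVGlusterState *s = bs->opaque;

        acb.size = 0;
        acb.ret = 0;
        acb.coroutine = qemu_coroutine_self();
        acb.aio_context = bdrv_get_aio_context(bs);

        ret = glfs_fsync_async(s->fd, gluster_finish_aiocb, &acb);
        if (ret < 0) {
            ret = -errno;
            goto error;
        }

        qemu_coroutine_yield();
        if (acb.ret < 0) {
            ret = acb.ret;
            goto error;
        }
        return acb.ret;

    error:
        /* This is the closure and drv = NULL being discussed here. */
        qemu_gluster_close(bs);
        bs->drv = NULL;
        return ret;
    }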

Gluster provides the 'resync-failed-syncs-after-fsync' option to solve this, but there is no way to know whether it is supported, because glfs_set_xlator_option() returns success even for unknown key/value pairs [2].
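
To illustrate the detection problem, a small standalone sketch against libgfapi (not QEMU code; the volume name and server address are taken from this report as placeholders, build with -lgfapi):

    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("gv0");          /* example volume name */
        if (!fs) {
            return 1;
        }
        glfs_set_volfile_server(fs, "tcp", "10.66.8.119", 24007);

        /* The option QEMU would like to rely on... */
        int real = glfs_set_xlator_option(fs, "*-write-behind",
                                          "resync-failed-syncs-after-fsync", "on");
        /* ...and a deliberately bogus key for comparison. */
        int bogus = glfs_set_xlator_option(fs, "*-write-behind",
                                           "this-option-does-not-exist", "on");

        /* Per the comment at [2], both calls report success (0), so QEMU
         * cannot tell whether the real option is actually honored. */
        printf("real=%d bogus=%d\n", real, bogus);

        glfs_fini(fs);
        return 0;
    }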

I found BZ#1333358, moved to GitHub [3], where Niels de Vos asked for a way to know whether the option is supported, but I don't think it has been fixed yet.

In the cases where we close the connection, I found that glfs_fsync_async() fails with ENOTCONN (107) or EBADFD (77).
In both cases, if I skip the closure and the setting of drv to NULL, everything works correctly when the server restarts.

I'm not sure whether we can skip the closure and the setting of drv to NULL for these errors (I'm worried about EBADFD):
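
As a purely hypothetical illustration of skipping the closure (an experiment, not a proposed patch), the error label in the sketch above would become something like:

    error:
        /* Hypothetical experiment only: for transport-style failures, keep
         * the fd and drv intact so that a retry can succeed once the server
         * is back. Whether this is actually safe, especially for EBADFD, is
         * the open question. */
        if (ret == -ENOTCONN || ret == -EBADFD) {
            return ret;
        }
        qemu_gluster_close(bs);
        bs->drv = NULL;
        return ret;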

I'll do more tests and check upstream.

[1] https://github.com/qemu/qemu/blob/864ab314f1d924129d06ac7b571f105a2b76a4b2/block/gluster.c#L1291
[2] https://github.com/qemu/qemu/blob/864ab314f1d924129d06ac7b571f105a2b76a4b2/block/gluster.c#L856
[3] https://github.com/gluster/glusterfs/issues/600

Comment 14 Ademar Reis 2020-02-05 22:45:58 UTC
QEMU has been recently split into sub-components and as a one-time operation to avoid breakage of tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks

Comment 15 aihua liang 2020-09-16 04:09:18 UTC
Tested on qemu-kvm-5.1.0-6.module+el8.3.0+8041+42ff16b8; this issue is no longer hit, so closing as CURRENTRELEASE.

Test Steps:
  1. Mount the gluster server:
    #mount.glusterfs dell-per715-03.lab.eng.pek2.redhat.com:/aliang /mnt

  2. Start the guest with the following qemu command line:
     ...
     -blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/rhel830-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,write-cache=on,bus=pcie-root-port-2,iothread=iothread0 \
    ...

  3. Create the target node:
    {'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/mnt/sn1','size':21474836480},'job-id':'job1'}}
    {'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn1','filename':'/mnt/sn1'}}
    {'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn1','size':21474836480},'job-id':'job2'}}
    {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn1','file':'drive_sn1'}}
    {'execute':'job-dismiss','arguments':{'id':'job1'}}
    {'execute':'job-dismiss','arguments':{'id':'job2'}}

  4. Take the snapshot:
    {"execute":"blockdev-snapshot","arguments":{"node":"drive_image1","overlay":"sn1"}}

  5. Start the block stream:
     {'execute': 'block-stream', 'arguments': { 'device': 'sn1','job-id':'j1'}}
{"timestamp": {"seconds": 1600228111, "microseconds": 354269}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1600228111, "microseconds": 354476}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}

  6. Reboot the glusterfs server:
     (gluster server)#reboot
   After step 6:
     {"timestamp": {"seconds": 1600228231, "microseconds": 160608}, "event": "BLOCK_IO_ERROR", "data": {"device": "", "nospace": false, "node-name": "sn1", "reason": "Transport endpoint is not connected", "operation": "read", "action": "report"}}
{"timestamp": {"seconds": 1600228231, "microseconds": 167272}, "event": "BLOCK_IO_ERROR", "data": {"device": "", "nospace": false, "node-name": "sn1", "reason": "Transport endpoint is not connected", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1600228231, "microseconds": 169878}, "event": "BLOCK_JOB_ERROR", "data": {"device": "j1", "operation": "read", "action": "report"}}
{"timestamp": {"seconds": 1600228231, "microseconds": 170161}, "event": "JOB_STATUS_CHANGE", "data": {"status": "aborting", "id": "j1"}}
{"timestamp": {"seconds": 1600228231, "microseconds": 170606}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j1", "len": 21474836480, "offset": 8494055424, "speed": 0, "type": "stream", "error": "Transport endpoint is not connected"}}
{"timestamp": {"seconds": 1600228231, "microseconds": 170891}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "j1"}}
{"timestamp": {"seconds": 1600228231, "microseconds": 171024}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "j1"}}
{"timestamp": {"seconds": 1600228231, "microseconds": 171387}, "event": "BLOCK_IO_ERROR", "data": {"device": "", "nospace": false, "node-name": "sn1", "reason": "Transport endpoint is not connected", "operation": "read", "action": "report"}}
{"timestamp": {"seconds": 1600228231, "microseconds": 175339}, "event": "BLOCK_IO_ERROR", "data": {"device": "", "nospace": false, "node-name": "sn1", "reason": "Transport endpoint is not connected", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1600228231, "microseconds": 185628}, "event": "BLOCK_IO_ERROR", "data": {"device": "", "nospace": false, "node-name": "sn1", "reason": "Transport endpoint is not connected", "operation": "write", "action": "report"}}
{"timestamp": {"seconds": 1600228231, "microseconds": 186308}, "event": "BLOCK_IO_ERROR", "data": {"device": "", "nospace": false, "node-name": "sn1", "reason": "Transport endpoint is not connected", "operation": "read", "action": "report"}}

  7. After the gluster server boots up, check the volume status:
    (gluster server)#gluster volume status
Status of volume: aliang
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick dell-per715-03.lab.eng.pek2.redhat.co
m:/home/aliang                              49152     0          Y       1610 
 
Task Status of Volume aliang
------------------------------------------------------------------------------
There are no active volume tasks

  8. Continue the block stream:
     {'execute': 'block-stream', 'arguments': { 'device': 'sn1','job-id':'j1'}}
{"timestamp": {"seconds": 1600228433, "microseconds": 836711}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1600228433, "microseconds": 836923}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}
{"timestamp": {"seconds": 1600228480, "microseconds": 639944}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "j1"}}
{"timestamp": {"seconds": 1600228480, "microseconds": 640344}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "j1"}}
{"timestamp": {"seconds": 1600228480, "microseconds": 641545}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j1", "len": 21474836480, "offset": 21474836480, "speed": 0, "type": "stream"}}
{"timestamp": {"seconds": 1600228480, "microseconds": 641696}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "j1"}}
{"timestamp": {"seconds": 1600228480, "microseconds": 641810}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "j1"}}

  9. Quit the VM:
    (qemu)quit
    {"timestamp": {"seconds": 1600228569, "microseconds": 756251}, "event": "SHUTDOWN", "data": {"guest": false, "reason": "host-qmp-quit"}}

