Red Hat Bugzilla – Attachment 1979993 for Bug 2226576: Libvirt should popup operation unsupported err for blockjob --abort during a copy-storage migration
Description: The daemon log for the steps above
Filename: virtqemud.log
MIME Type: text/plain
Creator: Han Han
Created: 2023-07-26 02:18:36 UTC
Size: 193.17 KB
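A daemon log at this level of detail comes from enabling libvirt debug logging before reproducing the issue. The exact filters used for this attachment are not recorded; the following is a typical configuration sketch for /etc/libvirt/virtqemud.conf (the filter values shown are illustrative examples):

```
# /etc/libvirt/virtqemud.conf
# Example debug-logging settings; adjust filters to taste.
# 1 = debug, 2 = info, 3 = warning, 4 = error
log_filters="3:remote 4:event 3:util.json 3:rpc 1:*"
log_outputs="1:file:/var/log/libvirt/virtqemud.log"
```

After editing the file, restarting the daemon (e.g. `systemctl restart virtqemud`) makes the settings take effect, and subsequent operations are written to the configured log file.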
>2023-07-26 02:11:46.217+0000: 386821: debug : virProcessAbort:146 : trying SIGTERM to child process 389384 >2023-07-26 02:11:46.217+0000: 386892: debug : virThreadJobClear:118 : Thread 386892 (rpc-virtqemud) finished job remoteDispatchDomainMigratePerform3Params with ret=-1 >2023-07-26 02:11:46.227+0000: 386821: debug : virProcessAbort:153 : process has ended: exit status 255 >2023-07-26 02:11:46.229+0000: 386925: debug : virThreadJobSet:93 : Thread 386925 (prio-rpc-virtqemud) is now running job remoteDispatchConnectUnregisterCloseCallback >2023-07-26 02:11:46.229+0000: 386925: debug : virConnectUnregisterCloseCallback:1538 : conn=0x7fb744004050 >2023-07-26 02:11:46.229+0000: 386925: debug : virThreadJobClear:118 : Thread 386925 (prio-rpc-virtqemud) finished job remoteDispatchConnectUnregisterCloseCallback with ret=0 >2023-07-26 02:11:46.229+0000: 386926: debug : virThreadJobSet:93 : Thread 386926 (prio-rpc-virtqemud) is now running job remoteDispatchConnectClose >2023-07-26 02:11:46.229+0000: 386926: debug : virThreadJobClear:118 : Thread 386926 (prio-rpc-virtqemud) finished job remoteDispatchConnectClose with ret=0 >2023-07-26 02:11:46.230+0000: 386821: debug : virConnectClose:1320 : conn=0x7fb744004050 >2023-07-26 02:11:46.230+0000: 386821: debug : virCloseCallbacksDomainRunForConn:346 : conn=0x7fb744004050 >2023-07-26 02:12:19.589+0000: 386922: debug : virThreadJobSet:93 : Thread 386922 (prio-rpc-virtqemud) is now running job remoteDispatchAuthList >2023-07-26 02:12:19.589+0000: 386922: debug : virThreadJobClear:118 : Thread 386922 (prio-rpc-virtqemud) finished job remoteDispatchAuthList with ret=0 >2023-07-26 02:12:19.589+0000: 386898: debug : virThreadJobSet:93 : Thread 386898 (rpc-virtqemud) is now running job remoteDispatchConnectSupportsFeature >2023-07-26 02:12:19.589+0000: 386898: debug : virThreadJobClear:118 : Thread 386898 (rpc-virtqemud) finished job remoteDispatchConnectSupportsFeature with ret=0 >2023-07-26 02:12:19.589+0000: 386899: debug : 
virThreadJobSet:93 : Thread 386899 (rpc-virtqemud) is now running job remoteDispatchConnectOpen >2023-07-26 02:12:19.589+0000: 386899: debug : virConnectOpenAuth:1277 : name=, auth=(nil), flags=0x0 >2023-07-26 02:12:19.589+0000: 386899: debug : virConfLoadConfig:1515 : Loading config file '/etc/libvirt/libvirt.conf' >2023-07-26 02:12:19.589+0000: 386899: debug : virConfReadFile:723 : filename=/etc/libvirt/libvirt.conf >2023-07-26 02:12:19.589+0000: 386899: debug : virConfGetValueString:865 : Get value string (nil) 0 >2023-07-26 02:12:19.589+0000: 386899: debug : virConnectOpenInternal:933 : Trying to probe for default URI >2023-07-26 02:12:19.589+0000: 386899: debug : virConnectOpenInternal:938 : QEMU driver URI probe returned 'qemu:///system' >2023-07-26 02:12:19.589+0000: 386899: debug : virConfGetValueStringList:913 : Get value string list (nil) 0 >2023-07-26 02:12:19.590+0000: 386899: debug : virConnectOpenInternal:966 : Split "qemu:///system" to URI components: > scheme qemu > server <null> > user <null> > port 0 > path /system >2023-07-26 02:12:19.590+0000: 386899: debug : virConnectOpenInternal:1072 : trying driver 0 (Test) ... >2023-07-26 02:12:19.590+0000: 386899: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes >2023-07-26 02:12:19.590+0000: 386899: debug : virConnectOpenInternal:1103 : No matching URI scheme >2023-07-26 02:12:19.590+0000: 386899: debug : virConnectOpenInternal:1072 : trying driver 1 (ESX) ... >2023-07-26 02:12:19.590+0000: 386899: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes >2023-07-26 02:12:19.590+0000: 386899: debug : virConnectOpenInternal:1103 : No matching URI scheme >2023-07-26 02:12:19.590+0000: 386899: debug : virConnectOpenInternal:1072 : trying driver 2 (remote) ... 
>2023-07-26 02:12:19.590+0000: 386899: debug : virConnectOpenInternal:1111 : Matching any URI scheme for 'qemu' >2023-07-26 02:12:19.590+0000: 386899: debug : virConnectOpenInternal:1137 : driver 2 remote returned DECLINED >2023-07-26 02:12:19.590+0000: 386899: debug : virConnectOpenInternal:1072 : trying driver 3 (QEMU) ... >2023-07-26 02:12:19.590+0000: 386899: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes >2023-07-26 02:12:19.590+0000: 386899: debug : virConnectOpenInternal:1097 : Matched URI scheme 'qemu' >2023-07-26 02:12:19.590+0000: 386899: debug : virConnectOpenInternal:1137 : driver 3 QEMU returned SUCCESS >2023-07-26 02:12:19.590+0000: 386899: debug : virConnectGetType:163 : conn=0x7fb73c0152d0 >2023-07-26 02:12:19.590+0000: 386899: debug : virThreadJobClear:118 : Thread 386899 (rpc-virtqemud) finished job remoteDispatchConnectOpen with ret=0 >2023-07-26 02:12:19.590+0000: 386925: debug : virThreadJobSet:93 : Thread 386925 (prio-rpc-virtqemud) is now running job remoteDispatchConnectGetURI >2023-07-26 02:12:19.590+0000: 386925: debug : virConnectGetURI:316 : conn=0x7fb73c0152d0 >2023-07-26 02:12:19.590+0000: 386925: debug : virThreadJobClear:118 : Thread 386925 (prio-rpc-virtqemud) finished job remoteDispatchConnectGetURI with ret=0 >2023-07-26 02:12:19.590+0000: 386901: debug : virThreadJobSet:93 : Thread 386901 (rpc-virtqemud) is now running job remoteDispatchConnectSupportsFeature >2023-07-26 02:12:19.590+0000: 386901: debug : virThreadJobClear:118 : Thread 386901 (rpc-virtqemud) finished job remoteDispatchConnectSupportsFeature with ret=0 >2023-07-26 02:12:19.590+0000: 386902: debug : virThreadJobSet:93 : Thread 386902 (rpc-virtqemud) is now running job remoteDispatchConnectSupportsFeature >2023-07-26 02:12:19.590+0000: 386902: debug : virThreadJobClear:118 : Thread 386902 (rpc-virtqemud) finished job remoteDispatchConnectSupportsFeature with ret=0 >2023-07-26 02:12:19.601+0000: 386923: debug : virThreadJobSet:93 : Thread 
386923 (prio-rpc-virtqemud) is now running job remoteDispatchConnectRegisterCloseCallback >2023-07-26 02:12:19.601+0000: 386923: debug : virConnectRegisterCloseCallback:1501 : conn=0x7fb73c0152d0 >2023-07-26 02:12:19.601+0000: 386923: debug : virThreadJobClear:118 : Thread 386923 (prio-rpc-virtqemud) finished job remoteDispatchConnectRegisterCloseCallback with ret=0 >2023-07-26 02:12:19.601+0000: 386924: debug : virThreadJobSet:93 : Thread 386924 (prio-rpc-virtqemud) is now running job remoteDispatchDomainLookupByName >2023-07-26 02:12:19.601+0000: 386924: debug : virDomainLookupByName:449 : conn=0x7fb73c0152d0, name=rhel-9.2 >2023-07-26 02:12:19.601+0000: 386924: debug : virThreadJobClear:118 : Thread 386924 (prio-rpc-virtqemud) finished job remoteDispatchDomainLookupByName with ret=0 >2023-07-26 02:12:19.894+0000: 386925: debug : virThreadJobSet:93 : Thread 386925 (prio-rpc-virtqemud) is now running job remoteDispatchDomainLookupByName >2023-07-26 02:12:19.894+0000: 386925: debug : virDomainLookupByName:449 : conn=0x7fb73c0152d0, name=rhel-9.2 >2023-07-26 02:12:19.894+0000: 386925: debug : virThreadJobClear:118 : Thread 386925 (prio-rpc-virtqemud) finished job remoteDispatchDomainLookupByName with ret=0 >2023-07-26 02:12:19.895+0000: 386926: debug : virThreadJobSet:93 : Thread 386926 (prio-rpc-virtqemud) is now running job remoteDispatchConnectSupportsFeature >2023-07-26 02:12:19.895+0000: 386926: debug : virConnectSupportsFeature:127 : conn=0x7fb73c0152d0, feature=13 >2023-07-26 02:12:19.895+0000: 386926: debug : virThreadJobClear:118 : Thread 386926 (prio-rpc-virtqemud) finished job remoteDispatchConnectSupportsFeature with ret=0 >2023-07-26 02:12:19.896+0000: 386922: debug : virThreadJobSet:93 : Thread 386922 (prio-rpc-virtqemud) is now running job remoteDispatchConnectSupportsFeature >2023-07-26 02:12:19.896+0000: 386922: debug : virConnectSupportsFeature:127 : conn=0x7fb73c0152d0, feature=7 >2023-07-26 02:12:19.896+0000: 386922: debug : virThreadJobClear:118 
: Thread 386922 (prio-rpc-virtqemud) finished job remoteDispatchConnectSupportsFeature with ret=0 >2023-07-26 02:12:19.896+0000: 386908: debug : virThreadJobSet:93 : Thread 386908 (rpc-virtqemud) is now running job remoteDispatchDomainMigrateBegin3Params >2023-07-26 02:12:19.896+0000: 386908: debug : virDomainMigrateBegin3Params:5235 : dom=0x7fb780009c10, (VM: name=rhel-9.2, uuid=e46b3d21-99dd-4be9-92ff-78556c7234c4), params=(nil), nparams=0, cookieout=0x7fb701fd28b8, cookieoutlen=0x7fb701fd28ac, flags=0x141 >2023-07-26 02:12:19.897+0000: 386908: debug : qemuMigrationSrcStoreDomainState:217 : Storing pre-migration state=1 domain=0x7fb76008e840 >2023-07-26 02:12:19.897+0000: 386908: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:19.897+0000: 386908: debug : qemuMonitorGetBlockInfo:1938 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:19.897+0000: 386908: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"query-block","id":"libvirt-451"} > fd=-1 >2023-07-26 02:12:19.897+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"query-block","id":"libvirt-451"} > len=46 ret=46 errno=0 >2023-07-26 02:12:19.898+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [{"io-status": "ok", "device": "", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"backing-image": {"virtual-size": 10737418240, "filename": "/var/lib/libvirt/images/rhel-9.2.qcow2", "cluster-size": 65536, "format": "qcow2", "actual-size": 2300813312, "format-specific": {"type": "qcow2", "data": {"compat": "1.1", "compression-type": "zlib", "lazy-refcounts": false, "refcount-bits": 16, "corrupt": false, "extended-l2": false}}, "dirty-flag": false}, "virtual-size": 10737418240, "filename": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", 
\"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}", "cluster-size": 65536, "format": "copy-on-read", "actual-size": 2300813312, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-CoR-vda", "backing_file_depth": 1, "drv": "copy-on-read", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}"}, "qdev": "/machine/peripheral/virtio-disk0/virtio-backend", "type": "unknown"}, {"io-status": "ok", "device": "", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10485760, "filename": "nbd://10.0.79.60:10809/foo", "format": "raw"}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "nbd://10.0.79.60:10809/foo"}, "qdev": "/machine/peripheral/virtio-disk1/virtio-backend", "type": "unknown"}], "id": "libvirt-451"}] >2023-07-26 02:12:19.898+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": [{"io-status": "ok", "device": "", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"backing-image": {"virtual-size": 10737418240, "filename": "/var/lib/libvirt/images/rhel-9.2.qcow2", "cluster-size": 65536, "format": "qcow2", "actual-size": 2300813312, "format-specific": {"type": "qcow2", "data": {"compat": "1.1", "compression-type": "zlib", "lazy-refcounts": false, "refcount-bits": 16, "corrupt": false, "extended-l2": false}}, "dirty-flag": false}, "virtual-size": 10737418240, 
"filename": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}", "cluster-size": 65536, "format": "copy-on-read", "actual-size": 2300813312, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-CoR-vda", "backing_file_depth": 1, "drv": "copy-on-read", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}"}, "qdev": "/machine/peripheral/virtio-disk0/virtio-backend", "type": "unknown"}, {"io-status": "ok", "device": "", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10485760, "filename": "nbd://10.0.79.60:10809/foo", "format": "raw"}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "nbd://10.0.79.60:10809/foo"}, "qdev": "/machine/peripheral/virtio-disk1/virtio-backend", "type": "unknown"}], "id": "libvirt-451"} >2023-07-26 02:12:19.898+0000: 386908: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:19.898+0000: 386908: debug : qemuMigrationSrcBeginPhase:2553 : driver=0x7fb7600223f0, vm=0x7fb76008e840, xmlin=<null>, dname=<null>, cookieout=0x7fb701fd28b8, cookieoutlen=0x7fb701fd28ac, nmigrate_disks=0, migrate_disks=(nil), flags=0x141 >2023-07-26 02:12:19.898+0000: 386908: debug : qemuDomainObjStartJobPhase:588 : Starting phase 'begin3' of 'migration out' job >2023-07-26 02:12:19.898+0000: 386908: debug : 
qemuDomainObjSetJobPhase:558 : Setting 'migration out' phase to 'begin3' >2023-07-26 02:12:19.898+0000: 386908: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:19.898+0000: 386908: debug : qemuMonitorGetMigrationBlockers:4329 : blockers=0x7fb701fd25e0 >2023-07-26 02:12:19.898+0000: 386908: debug : qemuMonitorGetMigrationBlockers:4331 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:19.898+0000: 386908: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"query-migrate","id":"libvirt-452"} > fd=-1 >2023-07-26 02:12:19.899+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"query-migrate","id":"libvirt-452"} > len=48 ret=48 errno=0 >2023-07-26 02:12:19.899+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-452"}] >2023-07-26 02:12:19.899+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-452"} >2023-07-26 02:12:19.899+0000: 386908: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:19.899+0000: 386908: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:19.899+0000: 386908: debug : qemuMonitorBlockGetNamedNodeData:2015 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:19.899+0000: 386908: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"query-named-block-nodes","arguments":{"flat":true},"id":"libvirt-453"} > fd=-1 >2023-07-26 02:12:19.899+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"query-named-block-nodes","arguments":{"flat":true},"id":"libvirt-453"} > len=84 ret=84 errno=0 >2023-07-26 
02:12:19.901+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [{"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10485760, "filename": "nbd://10.0.79.60:10809/foo", "format": "raw"}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "nbd://10.0.79.60:10809/foo"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10485760, "filename": "nbd://10.0.79.60:10809/foo", "format": "nbd"}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-storage", "backing_file_depth": 0, "drv": "nbd", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "nbd://10.0.79.60:10809/foo"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10737418240, "filename": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}", "cluster-size": 65536, "format": "copy-on-read", "actual-size": 2300813312, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-CoR-vda", "backing_file_depth": 0, "drv": "copy-on-read", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10737418240, "filename": "/var/lib/libvirt/images/rhel-9.2.qcow2", "cluster-size": 65536, "format": "qcow2", "actual-size": 2300813312, "format-specific": {"type": "qcow2", "data": {"compat": "1.1", 
"compression-type": "zlib", "lazy-refcounts": false, "refcount-bits": 16, "corrupt": false, "extended-l2": false}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-2-format", "backing_file_depth": 0, "drv": "qcow2", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": true, "writeback": true}, "file": "/var/lib/libvirt/images/rhel-9.2.qcow2"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 2300837888, "filename": "/var/lib/libvirt/images/rhel-9.2.qcow2", "format": "file", "actual-size": 2300813312, "format-specific": {"type": "file", "data": {}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-2-storage", "backing_file_depth": 0, "drv": "file", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": true, "writeback": true}, "file": "/var/lib/libvirt/images/rhel-9.2.qcow2"}], "id": "libvirt-453"}] >2023-07-26 02:12:19.901+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": [{"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10485760, "filename": "nbd://10.0.79.60:10809/foo", "format": "raw"}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "nbd://10.0.79.60:10809/foo"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10485760, "filename": "nbd://10.0.79.60:10809/foo", "format": "nbd"}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-storage", "backing_file_depth": 0, "drv": "nbd", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": 
"nbd://10.0.79.60:10809/foo"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10737418240, "filename": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}", "cluster-size": 65536, "format": "copy-on-read", "actual-size": 2300813312, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-CoR-vda", "backing_file_depth": 0, "drv": "copy-on-read", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10737418240, "filename": "/var/lib/libvirt/images/rhel-9.2.qcow2", "cluster-size": 65536, "format": "qcow2", "actual-size": 2300813312, "format-specific": {"type": "qcow2", "data": {"compat": "1.1", "compression-type": "zlib", "lazy-refcounts": false, "refcount-bits": 16, "corrupt": false, "extended-l2": false}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-2-format", "backing_file_depth": 0, "drv": "qcow2", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": true, "writeback": true}, "file": "/var/lib/libvirt/images/rhel-9.2.qcow2"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 2300837888, "filename": "/var/lib/libvirt/images/rhel-9.2.qcow2", "format": "file", "actual-size": 2300813312, "format-specific": {"type": "file", "data": {}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-2-storage", "backing_file_depth": 0, "drv": "file", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, 
"direct": true, "writeback": true}, "file": "/var/lib/libvirt/images/rhel-9.2.qcow2"}], "id": "libvirt-453"} >2023-07-26 02:12:19.901+0000: 386908: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:19.901+0000: 386908: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:19.901+0000: 386908: debug : qemuMonitorBlockStatsUpdateCapacityBlockdev:1996 : stats=0x7fb78000ccc0 >2023-07-26 02:12:19.901+0000: 386908: debug : qemuMonitorBlockStatsUpdateCapacityBlockdev:1998 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:19.901+0000: 386908: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"query-named-block-nodes","arguments":{"flat":true},"id":"libvirt-454"} > fd=-1 >2023-07-26 02:12:19.901+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"query-named-block-nodes","arguments":{"flat":true},"id":"libvirt-454"} > len=84 ret=84 errno=0 >2023-07-26 02:12:19.902+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [{"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10485760, "filename": "nbd://10.0.79.60:10809/foo", "format": "raw"}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "nbd://10.0.79.60:10809/foo"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10485760, "filename": "nbd://10.0.79.60:10809/foo", "format": "nbd"}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-storage", "backing_file_depth": 0, "drv": "nbd", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": 
false, "writeback": true}, "file": "nbd://10.0.79.60:10809/foo"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10737418240, "filename": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}", "cluster-size": 65536, "format": "copy-on-read", "actual-size": 2300813312, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-CoR-vda", "backing_file_depth": 0, "drv": "copy-on-read", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10737418240, "filename": "/var/lib/libvirt/images/rhel-9.2.qcow2", "cluster-size": 65536, "format": "qcow2", "actual-size": 2300813312, "format-specific": {"type": "qcow2", "data": {"compat": "1.1", "compression-type": "zlib", "lazy-refcounts": false, "refcount-bits": 16, "corrupt": false, "extended-l2": false}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-2-format", "backing_file_depth": 0, "drv": "qcow2", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": true, "writeback": true}, "file": "/var/lib/libvirt/images/rhel-9.2.qcow2"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 2300837888, "filename": "/var/lib/libvirt/images/rhel-9.2.qcow2", "format": "file", "actual-size": 2300813312, "format-specific": {"type": "file", "data": {}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-2-storage", "backing_file_depth": 0, "drv": "file", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 
0, "cache": {"no-flush": false, "direct": true, "writeback": true}, "file": "/var/lib/libvirt/images/rhel-9.2.qcow2"}], "id": "libvirt-454"}] >2023-07-26 02:12:19.902+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": [{"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10485760, "filename": "nbd://10.0.79.60:10809/foo", "format": "raw"}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "nbd://10.0.79.60:10809/foo"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10485760, "filename": "nbd://10.0.79.60:10809/foo", "format": "nbd"}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-storage", "backing_file_depth": 0, "drv": "nbd", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "nbd://10.0.79.60:10809/foo"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10737418240, "filename": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}", "cluster-size": 65536, "format": "copy-on-read", "actual-size": 2300813312, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-CoR-vda", "backing_file_depth": 0, "drv": "copy-on-read", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 
10737418240, "filename": "/var/lib/libvirt/images/rhel-9.2.qcow2", "cluster-size": 65536, "format": "qcow2", "actual-size": 2300813312, "format-specific": {"type": "qcow2", "data": {"compat": "1.1", "compression-type": "zlib", "lazy-refcounts": false, "refcount-bits": 16, "corrupt": false, "extended-l2": false}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-2-format", "backing_file_depth": 0, "drv": "qcow2", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": true, "writeback": true}, "file": "/var/lib/libvirt/images/rhel-9.2.qcow2"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 2300837888, "filename": "/var/lib/libvirt/images/rhel-9.2.qcow2", "format": "file", "actual-size": 2300813312, "format-specific": {"type": "file", "data": {}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-2-storage", "backing_file_depth": 0, "drv": "file", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": true, "writeback": true}, "file": "/var/lib/libvirt/images/rhel-9.2.qcow2"}], "id": "libvirt-454"} >2023-07-26 02:12:19.902+0000: 386908: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:19.902+0000: 386908: debug : qemuMigrationCookieFormat:1484 : cookielen=873 cookie=<qemu-migration> > <name>rhel-9.2</name> > <uuid>e46b3d21-99dd-4be9-92ff-78556c7234c4</uuid> > <hostname>vm-10-0-79-60.hosted.upshift.rdu2.redhat.com</hostname> > <hostuuid>1e240ff7-4a95-4c66-b0a8-828a77d2394d</hostuuid> > <feature name='lockstate'/> > <nbd> > <disk target='vda' capacity='10737418240'/> > <disk target='vdb' capacity='10485760'/> > </nbd> > <capabilities> > <cap name='xbzrle' auto='no'/> > <cap name='auto-converge' auto='no'/> > <cap name='rdma-pin-all' auto='no'/> > <cap name='postcopy-ram' auto='no'/> > <cap 
name='compress' auto='no'/> > <cap name='pause-before-switchover' auto='yes'/> > <cap name='late-block-activate' auto='no'/> > <cap name='multifd' auto='no'/> > <cap name='dirty-bitmaps' auto='no'/> > <cap name='return-path' auto='no'/> > <cap name='zero-copy-send' auto='no'/> > </capabilities> ></qemu-migration> > >2023-07-26 02:12:19.904+0000: 386908: debug : virStringMatch:656 : match '/var/lib/libvirt/qemu/channel/target/domain-2-rhel-9.2/org.qemu.guest_agent.0' for '^/var/lib/libvirt/qemu/channel/target/([^/]+\.)|(domain-[^/]+/)org\.qemu\.guest_agent\.0$' >2023-07-26 02:12:19.904+0000: 386908: debug : qemuDomainAssignVirtioSerialAddresses:138 : Finished reserving existing ports >2023-07-26 02:12:19.904+0000: 386908: debug : qemuDomainUSBAddressAddHubs:3065 : Found 2 USB devices and 12 provided USB ports; adding 0 hubs >2023-07-26 02:12:19.904+0000: 386908: debug : qemuDomainAssignUSBAddresses:3246 : Existing USB addresses have been reserved >2023-07-26 02:12:19.904+0000: 386908: debug : qemuDomainAssignUSBAddresses:3254 : Finished assigning USB ports >2023-07-26 02:12:19.904+0000: 386908: debug : qemuDomainCleanupAdd:7612 : vm=rhel-9.2, cb=0x7fb7a8748560 >2023-07-26 02:12:19.904+0000: 386908: debug : qemuDomainObjReleaseAsyncJob:628 : Releasing ownership of 'migration out' async job >2023-07-26 02:12:19.904+0000: 386908: debug : virDomainMigrateBegin3Params:5252 : xml <domain type='kvm'> > <name>rhel-9.2</name> > <uuid>e46b3d21-99dd-4be9-92ff-78556c7234c4</uuid> > <metadata> > <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0"> > <libosinfo:os id="http://libosinfo.org/unknown"/> > </libosinfo:libosinfo> > </metadata> > <memory unit='KiB'>2097152</memory> > <currentMemory unit='KiB'>2097152</currentMemory> > <vcpu placement='static' current='1'>8</vcpu> > <resource> > <partition>/machine/test</partition> > </resource> > <os> > <type arch='x86_64' machine='pc-q35-rhel9.2.0'>hvm</type> > <boot dev='hd'/> > </os> > <features> > 
<acpi/> > <apic/> > </features> > <cpu mode='host-passthrough' check='none' migratable='on'/> > <clock offset='utc'> > <timer name='rtc' tickpolicy='catchup'/> > <timer name='pit' tickpolicy='delay'/> > <timer name='hpet' present='no'/> > </clock> > <on_poweroff>destroy</on_poweroff> > <on_reboot>restart</on_reboot> > <on_crash>destroy</on_crash> > <pm> > <suspend-to-mem enabled='no'/> > <suspend-to-disk enabled='no'/> > </pm> > <devices> > <emulator>/usr/libexec/qemu-kvm</emulator> > <disk type='file' device='disk'> > <driver name='qemu' type='qcow2' cache='none' io='io_uring' copy_on_read='on' ats='on' packed='on'/> > <source file='/var/lib/libvirt/images/rhel-9.2.qcow2'/> > <backingStore/> > <target dev='vda' bus='virtio'/> > <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> > </disk> > <disk type='network' device='disk'> > <driver name='qemu' type='raw'/> > <source protocol='nbd' name='foo'> > <host name='10.0.79.60' port='10809'/> > <reconnect delay='10'/> > </source> > <target dev='vdb' bus='virtio'/> > <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> > </disk> > <controller type='scsi' index='0' model='virtio-scsi'> > <driver queues='3' cmd_per_lun='10' max_sectors='512'/> > <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> > </controller> > <controller type='usb' index='0' model='ich9-ehci1'> > <address type='pci' domain='0x0000' bus='0x10' slot='0x01' function='0x7'/> > </controller> > <controller type='usb' index='0' model='ich9-uhci1'> > <master startport='0'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x0' multifunction='on'/> > </controller> > <controller type='usb' index='1' model='ich9-ehci1'> > <address type='pci' domain='0x0000' bus='0x10' slot='0x02' function='0x7'/> > </controller> > <controller type='usb' index='1' model='ich9-uhci2'> > <master startport='1'/> > <address type='pci' domain='0x0000' bus='0x10' slot='0x02' function='0x1'/> > 
</controller> > <controller type='pci' index='0' model='pcie-root'/> > <controller type='pci' index='1' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='1' port='0x10'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/> > </controller> > <controller type='pci' index='2' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='2' port='0x11'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> > </controller> > <controller type='pci' index='3' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='3' port='0x12'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> > </controller> > <controller type='pci' index='4' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='4' port='0x13'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> > </controller> > <controller type='pci' index='5' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='5' port='0x14'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/> > </controller> > <controller type='pci' index='6' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='6' port='0x15'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/> > </controller> > <controller type='pci' index='7' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='7' port='0x16'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/> > </controller> > <controller type='pci' index='8' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='8' port='0x17'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/> > </controller> > <controller type='pci' index='9' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='9' port='0x18'/> > 
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/> > </controller> > <controller type='pci' index='10' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='10' port='0x19'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/> > </controller> > <controller type='pci' index='11' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='11' port='0x1a'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/> > </controller> > <controller type='pci' index='12' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='12' port='0x1b'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/> > </controller> > <controller type='pci' index='13' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='13' port='0x1c'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/> > </controller> > <controller type='pci' index='14' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='14' port='0x1d'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/> > </controller> > <controller type='pci' index='15' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='15' port='0x1e'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/> > </controller> > <controller type='pci' index='16' model='pcie-to-pci-bridge'> > <model name='pcie-pci-bridge'/> > <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> > </controller> > <controller type='sata' index='0'> > <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> > </controller> > <controller type='virtio-serial' index='0'> > <driver iommu='on' ats='on'/> > <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/> > </controller> > <controller type='virtio-serial' index='1'> > 
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> > </controller> > <interface type='network'> > <mac address='52:54:00:aa:2b:86'/> > <source network='default'/> > <model type='e1000e'/> > <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> > </interface> > <serial type='pty'> > <log file='/var/log/libvirt/qemu/guestname-serial0.log' append='off'/> > <target type='isa-serial' port='0'> > <model name='isa-serial'/> > </target> > </serial> > <serial type='dev'> > <source path='/dev/ttyS0'/> > <target type='isa-serial' port='2'> > <model name='isa-serial'/> > </target> > </serial> > <console type='pty'> > <log file='/var/log/libvirt/qemu/guestname-serial0.log' append='off'/> > <target type='serial' port='0'/> > </console> > <channel type='unix'> > <target type='virtio' name='org.qemu.guest_agent.0'/> > <address type='virtio-serial' controller='0' bus='0' port='1'/> > </channel> > <input type='mouse' bus='ps2'/> > <input type='keyboard' bus='usb'> > <address type='usb' bus='0' port='1'/> > </input> > <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'> > <listen type='address' address='0.0.0.0'/> > </graphics> > <video> > <model type='vga' vram='16384' heads='1' primary='yes'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> > </video> > <hostdev mode='subsystem' type='usb' managed='no'> > <source autoAddress='yes'> > <vendor id='0x0627'/> > <product id='0x0001'/> > <address bus='1' device='2'/> > </source> > <alias name='ua-hostdev046b7e883-8517-4e49-b2db-3b3a8d96ccab'/> > <address type='usb' bus='0' port='2'/> > </hostdev> > <memballoon model='virtio'> > <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> > </memballoon> > </devices> > <seclabel type='dynamic' model='selinux' relabel='yes'/> ></domain> > >2023-07-26 02:12:19.904+0000: 386908: debug : virThreadJobClear:118 : Thread 386908 (rpc-virtqemud) finished job 
remoteDispatchDomainMigrateBegin3Params with ret=0 >2023-07-26 02:12:19.905+0000: 386923: debug : virThreadJobSet:93 : Thread 386923 (prio-rpc-virtqemud) is now running job remoteDispatchDomainGetState >2023-07-26 02:12:19.905+0000: 386923: debug : virDomainGetState:2714 : dom=0x7fb78000a690, (VM: name=rhel-9.2, uuid=e46b3d21-99dd-4be9-92ff-78556c7234c4), state=0x7fb730004000, reason=0x7fb730004004, flags=0x0 >2023-07-26 02:12:19.905+0000: 386923: debug : virThreadJobClear:118 : Thread 386923 (prio-rpc-virtqemud) finished job remoteDispatchDomainGetState with ret=0 >2023-07-26 02:12:20.395+0000: 386910: debug : virThreadJobSet:93 : Thread 386910 (rpc-virtqemud) is now running job remoteDispatchDomainGetJobInfo >2023-07-26 02:12:20.395+0000: 386910: debug : virDomainGetJobInfo:9330 : dom=0x7fb730006550, (VM: name=rhel-9.2, uuid=e46b3d21-99dd-4be9-92ff-78556c7234c4), info=0x7fb700fd0860 >2023-07-26 02:12:20.395+0000: 386910: debug : virThreadJobClear:118 : Thread 386910 (rpc-virtqemud) finished job remoteDispatchDomainGetJobInfo with ret=0 >2023-07-26 02:12:20.806+0000: 386911: debug : virThreadJobSet:93 : Thread 386911 (rpc-virtqemud) is now running job remoteDispatchDomainMigratePerform3Params >2023-07-26 02:12:20.806+0000: 386911: debug : virDomainMigratePerform3Params:5378 : dom=0x7fb78c006810, (VM: name=rhel-9.2, uuid=e46b3d21-99dd-4be9-92ff-78556c7234c4), dconnuri=<null>, params=0x7fb78c0075a0, nparams=2, cookiein=0x7fb78c005940, cookieinlen=857, cookieout=0x7fb7007cf8b8, cookieoutlen=0x7fb7007cf8ac, flags=0x141 >2023-07-26 02:12:20.807+0000: 386911: debug : virDomainMigratePerform3Params:5382 : params["destination_xml"]=(string)<domain type='kvm'> > <name>rhel-9.2</name> > <uuid>e46b3d21-99dd-4be9-92ff-78556c7234c4</uuid> > <metadata> > <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0"> > <libosinfo:os id="http://libosinfo.org/unknown"/> > </libosinfo:libosinfo> > </metadata> > <memory unit='KiB'>2097152</memory> > 
<currentMemory unit='KiB'>2097152</currentMemory> > <vcpu placement='static' current='1'>8</vcpu> > <resource> > <partition>/machine/test</partition> > </resource> > <os> > <type arch='x86_64' machine='pc-q35-rhel9.2.0'>hvm</type> > <boot dev='hd'/> > </os> > <features> > <acpi/> > <apic/> > </features> > <cpu mode='host-passthrough' check='none' migratable='on'/> > <clock offset='utc'> > <timer name='rtc' tickpolicy='catchup'/> > <timer name='pit' tickpolicy='delay'/> > <timer name='hpet' present='no'/> > </clock> > <on_poweroff>destroy</on_poweroff> > <on_reboot>restart</on_reboot> > <on_crash>destroy</on_crash> > <pm> > <suspend-to-mem enabled='no'/> > <suspend-to-disk enabled='no'/> > </pm> > <devices> > <emulator>/usr/libexec/qemu-kvm</emulator> > <disk type='file' device='disk'> > <driver name='qemu' type='qcow2' cache='none' io='io_uring' copy_on_read='on' ats='on' packed='on'/> > <source file='/var/lib/libvirt/images/rhel-9.2.qcow2'/> > <backingStore/> > <target dev='vda' bus='virtio'/> > <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> > </disk> > <disk type='network' device='disk'> > <driver name='qemu' type='raw'/> > <source protocol='nbd' name='foo'> > <host name='10.0.79.60' port='10809'/> > <reconnect delay='10'/> > </source> > <target dev='vdb' bus='virtio'/> > <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> > </disk> > <controller type='scsi' index='0' model='virtio-scsi'> > <driver queues='3' cmd_per_lun='10' max_sectors='512'/> > <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> > </controller> > <controller type='usb' index='0' model='ich9-ehci1'> > <address type='pci' domain='0x0000' bus='0x10' slot='0x01' function='0x7'/> > </controller> > <controller type='usb' index='0' model='ich9-uhci1'> > <master startport='0'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x0' multifunction='on'/> > </controller> > <controller type='usb' index='1' 
model='ich9-ehci1'> > <address type='pci' domain='0x0000' bus='0x10' slot='0x02' function='0x7'/> > </controller> > <controller type='usb' index='1' model='ich9-uhci2'> > <master startport='1'/> > <address type='pci' domain='0x0000' bus='0x10' slot='0x02' function='0x1'/> > </controller> > <controller type='pci' index='0' model='pcie-root'/> > <controller type='pci' index='1' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='1' port='0x10'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/> > </controller> > <controller type='pci' index='2' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='2' port='0x11'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> > </controller> > <controller type='pci' index='3' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='3' port='0x12'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> > </controller> > <controller type='pci' index='4' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='4' port='0x13'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> > </controller> > <controller type='pci' index='5' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='5' port='0x14'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/> > </controller> > <controller type='pci' index='6' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='6' port='0x15'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/> > </controller> > <controller type='pci' index='7' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='7' port='0x16'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/> > </controller> > <controller type='pci' index='8' model='pcie-root-port'> > <model 
name='pcie-root-port'/> > <target chassis='8' port='0x17'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/> > </controller> > <controller type='pci' index='9' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='9' port='0x18'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/> > </controller> > <controller type='pci' index='10' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='10' port='0x19'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/> > </controller> > <controller type='pci' index='11' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='11' port='0x1a'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/> > </controller> > <controller type='pci' index='12' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='12' port='0x1b'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/> > </controller> > <controller type='pci' index='13' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='13' port='0x1c'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/> > </controller> > <controller type='pci' index='14' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='14' port='0x1d'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/> > </controller> > <controller type='pci' index='15' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='15' port='0x1e'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/> > </controller> > <controller type='pci' index='16' model='pcie-to-pci-bridge'> > <model name='pcie-pci-bridge'/> > <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> > </controller> > <controller type='sata' index='0'> > <address type='pci' 
domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> > </controller> > <controller type='virtio-serial' index='0'> > <driver iommu='on' ats='on'/> > <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/> > </controller> > <controller type='virtio-serial' index='1'> > <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> > </controller> > <interface type='network'> > <mac address='52:54:00:aa:2b:86'/> > <source network='default'/> > <model type='e1000e'/> > <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> > </interface> > <serial type='pty'> > <log file='/var/log/libvirt/qemu/guestname-serial0.log' append='off'/> > <target type='isa-serial' port='0'> > <model name='isa-serial'/> > </target> > </serial> > <serial type='dev'> > <source path='/dev/ttyS0'/> > <target type='isa-serial' port='2'> > <model name='isa-serial'/> > </target> > </serial> > <console type='pty'> > <log file='/var/log/libvirt/qemu/guestname-serial0.log' append='off'/> > <target type='serial' port='0'/> > </console> > <channel type='unix'> > <target type='virtio' name='org.qemu.guest_agent.0'/> > <address type='virtio-serial' controller='0' bus='0' port='1'/> > </channel> > <input type='mouse' bus='ps2'/> > <input type='keyboard' bus='usb'> > <address type='usb' bus='0' port='1'/> > </input> > <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'> > <listen type='address' address='0.0.0.0'/> > </graphics> > <video> > <model type='vga' vram='16384' heads='1' primary='yes'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> > </video> > <hostdev mode='subsystem' type='usb' managed='no'> > <source autoAddress='yes'> > <vendor id='0x0627'/> > <product id='0x0001'/> > <address bus='1' device='2'/> > </source> > <alias name='ua-hostdev046b7e883-8517-4e49-b2db-3b3a8d96ccab'/> > <address type='usb' bus='0' port='2'/> > </hostdev> > <memballoon model='virtio'> > <address type='pci' domain='0x0000' 
bus='0x03' slot='0x00' function='0x0'/> > </memballoon> > </devices> > <seclabel type='dynamic' model='selinux' relabel='yes'/> ></domain> > >2023-07-26 02:12:20.807+0000: 386911: debug : virDomainMigratePerform3Params:5382 : params["migrate_uri"]=(string)tcp:vm-10-0-79-186.hosted.upshift.rdu2.redhat.com:49152 >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationParamsFromFlags:688 : Enabling migration capability 'return-path' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationParamsFromFlags:700 : Setting migration parameter 'cpu-throttle-initial' from 'auto_converge.initial' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationParamsFromFlags:700 : Setting migration parameter 'cpu-throttle-increment' from 'auto_converge.increment' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationParamsFromFlags:700 : Setting migration parameter 'compress-level' from 'compression.mt.level' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationParamsFromFlags:700 : Setting migration parameter 'compress-threads' from 'compression.mt.threads' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationParamsFromFlags:700 : Setting migration parameter 'decompress-threads' from 'compression.mt.dthreads' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationParamsFromFlags:700 : Setting migration parameter 'xbzrle-cache-size' from 'compression.xbzrle.cache' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationParamsFromFlags:700 : Setting migration parameter 'max-postcopy-bandwidth' from 'bandwidth.postcopy' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationParamsFromFlags:700 : Setting migration parameter 'multifd-channels' from 'parallel.connections' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationParamsFromFlags:700 : Setting migration parameter 'multifd-zlib-level' from 'compression.zlib.level' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationParamsFromFlags:700 : Setting migration parameter 
'multifd-zstd-level' from 'compression.zstd.level' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationParamsFromFlags:700 : Setting migration parameter 'tls-hostname' from 'tls.destination' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationSrcPerform:6234 : driver=0x7fb7600223f0, conn=0x7fb73c0152d0, vm=0x7fb76008e840, xmlin=<domain type='kvm'> > <name>rhel-9.2</name> > <uuid>e46b3d21-99dd-4be9-92ff-78556c7234c4</uuid> > <metadata> > <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0"> > <libosinfo:os id="http://libosinfo.org/unknown"/> > </libosinfo:libosinfo> > </metadata> > <memory unit='KiB'>2097152</memory> > <currentMemory unit='KiB'>2097152</currentMemory> > <vcpu placement='static' current='1'>8</vcpu> > <resource> > <partition>/machine/test</partition> > </resource> > <os> > <type arch='x86_64' machine='pc-q35-rhel9.2.0'>hvm</type> > <boot dev='hd'/> > </os> > <features> > <acpi/> > <apic/> > </features> > <cpu mode='host-passthrough' check='none' migratable='on'/> > <clock offset='utc'> > <timer name='rtc' tickpolicy='catchup'/> > <timer name='pit' tickpolicy='delay'/> > <timer name='hpet' present='no'/> > </clock> > <on_poweroff>destroy</on_poweroff> > <on_reboot>restart</on_reboot> > <on_crash>destroy</on_crash> > <pm> > <suspend-to-mem enabled='no'/> > <suspend-to-disk enabled='no'/> > </pm> > <devices> > <emulator>/usr/libexec/qemu-kvm</emulator> > <disk type='file' device='disk'> > <driver name='qemu' type='qcow2' cache='none' io='io_uring' copy_on_read='on' ats='on' packed='on'/> > <source file='/var/lib/libvirt/images/rhel-9.2.qcow2'/> > <backingStore/> > <target dev='vda' bus='virtio'/> > <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> > </disk> > <disk type='network' device='disk'> > <driver name='qemu' type='raw'/> > <source protocol='nbd' name='foo'> > <host name='10.0.79.60' port='10809'/> > <reconnect delay='10'/> > </source> > <target dev='vdb' bus='virtio'/> > 
<address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> > </disk> > <controller type='scsi' index='0' model='virtio-scsi'> > <driver queues='3' cmd_per_lun='10' max_sectors='512'/> > <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> > </controller> > <controller type='usb' index='0' model='ich9-ehci1'> > <address type='pci' domain='0x0000' bus='0x10' slot='0x01' function='0x7'/> > </controller> > <controller type='usb' index='0' model='ich9-uhci1'> > <master startport='0'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x0' multifunction='on'/> > </controller> > <controller type='usb' index='1' model='ich9-ehci1'> > <address type='pci' domain='0x0000' bus='0x10' slot='0x02' function='0x7'/> > </controller> > <controller type='usb' index='1' model='ich9-uhci2'> > <master startport='1'/> > <address type='pci' domain='0x0000' bus='0x10' slot='0x02' function='0x1'/> > </controller> > <controller type='pci' index='0' model='pcie-root'/> > <controller type='pci' index='1' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='1' port='0x10'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/> > </controller> > <controller type='pci' index='2' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='2' port='0x11'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> > </controller> > <controller type='pci' index='3' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='3' port='0x12'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> > </controller> > <controller type='pci' index='4' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='4' port='0x13'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> > </controller> > <controller type='pci' index='5' model='pcie-root-port'> > <model 
name='pcie-root-port'/> > <target chassis='5' port='0x14'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/> > </controller> > <controller type='pci' index='6' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='6' port='0x15'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/> > </controller> > <controller type='pci' index='7' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='7' port='0x16'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/> > </controller> > <controller type='pci' index='8' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='8' port='0x17'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/> > </controller> > <controller type='pci' index='9' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='9' port='0x18'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/> > </controller> > <controller type='pci' index='10' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='10' port='0x19'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/> > </controller> > <controller type='pci' index='11' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='11' port='0x1a'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/> > </controller> > <controller type='pci' index='12' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='12' port='0x1b'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/> > </controller> > <controller type='pci' index='13' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='13' port='0x1c'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/> > </controller> > <controller type='pci' index='14' 
model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='14' port='0x1d'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/> > </controller> > <controller type='pci' index='15' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='15' port='0x1e'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/> > </controller> > <controller type='pci' index='16' model='pcie-to-pci-bridge'> > <model name='pcie-pci-bridge'/> > <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> > </controller> > <controller type='sata' index='0'> > <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> > </controller> > <controller type='virtio-serial' index='0'> > <driver iommu='on' ats='on'/> > <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/> > </controller> > <controller type='virtio-serial' index='1'> > <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> > </controller> > <interface type='network'> > <mac address='52:54:00:aa:2b:86'/> > <source network='default'/> > <model type='e1000e'/> > <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> > </interface> > <serial type='pty'> > <log file='/var/log/libvirt/qemu/guestname-serial0.log' append='off'/> > <target type='isa-serial' port='0'> > <model name='isa-serial'/> > </target> > </serial> > <serial type='dev'> > <source path='/dev/ttyS0'/> > <target type='isa-serial' port='2'> > <model name='isa-serial'/> > </target> > </serial> > <console type='pty'> > <log file='/var/log/libvirt/qemu/guestname-serial0.log' append='off'/> > <target type='serial' port='0'/> > </console> > <channel type='unix'> > <target type='virtio' name='org.qemu.guest_agent.0'/> > <address type='virtio-serial' controller='0' bus='0' port='1'/> > </channel> > <input type='mouse' bus='ps2'/> > <input type='keyboard' bus='usb'> > <address type='usb' bus='0' 
port='1'/> > </input> > <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'> > <listen type='address' address='0.0.0.0'/> > </graphics> > <video> > <model type='vga' vram='16384' heads='1' primary='yes'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> > </video> > <hostdev mode='subsystem' type='usb' managed='no'> > <source autoAddress='yes'> > <vendor id='0x0627'/> > <product id='0x0001'/> > <address bus='1' device='2'/> > </source> > <alias name='ua-hostdev046b7e883-8517-4e49-b2db-3b3a8d96ccab'/> > <address type='usb' bus='0' port='2'/> > </hostdev> > <memballoon model='virtio'> > <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> > </memballoon> > </devices> > <seclabel type='dynamic' model='selinux' relabel='yes'/> ></domain> >, dconnuri=<null>, uri=tcp:vm-10-0-79-186.hosted.upshift.rdu2.redhat.com:49152, graphicsuri=<null>, listenAddress=<null>, nmigrate_disks=0, migrate_disks=(nil), nbdPort=0, nbdURI=<null>, cookiein=<qemu-migration> > <name>rhel-9.2</name> > <uuid>e46b3d21-99dd-4be9-92ff-78556c7234c4</uuid> > <hostname>vm-10-0-79-186.hosted.upshift.rdu2.redhat.com</hostname> > <hostuuid>e5563a9d-d6bf-4de6-bf56-b43b77fe1690</hostuuid> > <nbd port='49153'> > <disk target='vda' capacity='10737418240'/> > <disk target='vdb' capacity='10485760'/> > </nbd> > <capabilities> > <cap name='xbzrle' auto='no'/> > <cap name='auto-converge' auto='no'/> > <cap name='rdma-pin-all' auto='no'/> > <cap name='postcopy-ram' auto='no'/> > <cap name='compress' auto='no'/> > <cap name='pause-before-switchover' auto='no'/> > <cap name='late-block-activate' auto='yes'/> > <cap name='multifd' auto='no'/> > <cap name='dirty-bitmaps' auto='no'/> > <cap name='return-path' auto='no'/> > <cap name='zero-copy-send' auto='no'/> > </capabilities> ></qemu-migration> >, cookieinlen=857, cookieout=0x7fb7007cf8b8, cookieoutlen=0x7fb7007cf8ac, flags=0x141, dname=<null>, resource=0, v3proto=1 >2023-07-26 02:12:20.807+0000: 386911: 
debug : qemuDomainObjStartJobPhase:588 : Starting phase 'perform3' of 'migration out' job >2023-07-26 02:12:20.807+0000: 386911: debug : qemuDomainObjSetJobPhase:558 : Setting 'migration out' phase to 'perform3' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationSrcPerformNative:5130 : driver=0x7fb7600223f0, vm=0x7fb76008e840, uri=tcp:vm-10-0-79-186.hosted.upshift.rdu2.redhat.com:49152, cookiein=<qemu-migration> > <name>rhel-9.2</name> > <uuid>e46b3d21-99dd-4be9-92ff-78556c7234c4</uuid> > <hostname>vm-10-0-79-186.hosted.upshift.rdu2.redhat.com</hostname> > <hostuuid>e5563a9d-d6bf-4de6-bf56-b43b77fe1690</hostuuid> > <nbd port='49153'> > <disk target='vda' capacity='10737418240'/> > <disk target='vdb' capacity='10485760'/> > </nbd> > <capabilities> > <cap name='xbzrle' auto='no'/> > <cap name='auto-converge' auto='no'/> > <cap name='rdma-pin-all' auto='no'/> > <cap name='postcopy-ram' auto='no'/> > <cap name='compress' auto='no'/> > <cap name='pause-before-switchover' auto='no'/> > <cap name='late-block-activate' auto='yes'/> > <cap name='multifd' auto='no'/> > <cap name='dirty-bitmaps' auto='no'/> > <cap name='return-path' auto='no'/> > <cap name='zero-copy-send' auto='no'/> > </capabilities> ></qemu-migration> >, cookieinlen=857, cookieout=0x7fb7007cf8b8, cookieoutlen=0x7fb7007cf8ac, flags=0x141, resource=0, graphicsuri=<null>, nmigrate_disks=0 migrate_disks=(nil) >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationSrcRun:4733 : driver=0x7fb7600223f0, vm=0x7fb76008e840, cookiein=<qemu-migration> > <name>rhel-9.2</name> > <uuid>e46b3d21-99dd-4be9-92ff-78556c7234c4</uuid> > <hostname>vm-10-0-79-186.hosted.upshift.rdu2.redhat.com</hostname> > <hostuuid>e5563a9d-d6bf-4de6-bf56-b43b77fe1690</hostuuid> > <nbd port='49153'> > <disk target='vda' capacity='10737418240'/> > <disk target='vdb' capacity='10485760'/> > </nbd> > <capabilities> > <cap name='xbzrle' auto='no'/> > <cap name='auto-converge' auto='no'/> > <cap name='rdma-pin-all' auto='no'/> > <cap 
name='postcopy-ram' auto='no'/> > <cap name='compress' auto='no'/> > <cap name='pause-before-switchover' auto='no'/> > <cap name='late-block-activate' auto='yes'/> > <cap name='multifd' auto='no'/> > <cap name='dirty-bitmaps' auto='no'/> > <cap name='return-path' auto='no'/> > <cap name='zero-copy-send' auto='no'/> > </capabilities> ></qemu-migration> >, cookieinlen=857, cookieout=0x7fb7007cf8b8, cookieoutlen=0x7fb7007cf8ac, flags=0x141, resource=0, spec=0x7fb7007cf530 (dest=1, fwd=0), dconn=(nil), graphicsuri=<null>, nmigrate_disks=0, migrate_disks=(nil) >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationCookieParse:1510 : cookielen=857 cookie='<qemu-migration> > <name>rhel-9.2</name> > <uuid>e46b3d21-99dd-4be9-92ff-78556c7234c4</uuid> > <hostname>vm-10-0-79-186.hosted.upshift.rdu2.redhat.com</hostname> > <hostuuid>e5563a9d-d6bf-4de6-bf56-b43b77fe1690</hostuuid> > <nbd port='49153'> > <disk target='vda' capacity='10737418240'/> > <disk target='vdb' capacity='10485760'/> > </nbd> > <capabilities> > <cap name='xbzrle' auto='no'/> > <cap name='auto-converge' auto='no'/> > <cap name='rdma-pin-all' auto='no'/> > <cap name='postcopy-ram' auto='no'/> > <cap name='compress' auto='no'/> > <cap name='pause-before-switchover' auto='no'/> > <cap name='late-block-activate' auto='yes'/> > <cap name='multifd' auto='no'/> > <cap name='dirty-bitmaps' auto='no'/> > <cap name='return-path' auto='no'/> > <cap name='zero-copy-send' auto='no'/> > </capabilities> ></qemu-migration> >' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationCookieXMLParseStr:1417 : xml=<qemu-migration> > <name>rhel-9.2</name> > <uuid>e46b3d21-99dd-4be9-92ff-78556c7234c4</uuid> > <hostname>vm-10-0-79-186.hosted.upshift.rdu2.redhat.com</hostname> > <hostuuid>e5563a9d-d6bf-4de6-bf56-b43b77fe1690</hostuuid> > <nbd port='49153'> > <disk target='vda' capacity='10737418240'/> > <disk target='vdb' capacity='10485760'/> > </nbd> > <capabilities> > <cap name='xbzrle' auto='no'/> > <cap 
name='auto-converge' auto='no'/> > <cap name='rdma-pin-all' auto='no'/> > <cap name='postcopy-ram' auto='no'/> > <cap name='compress' auto='no'/> > <cap name='pause-before-switchover' auto='no'/> > <cap name='late-block-activate' auto='yes'/> > <cap name='multifd' auto='no'/> > <cap name='dirty-bitmaps' auto='no'/> > <cap name='return-path' auto='no'/> > <cap name='zero-copy-send' auto='no'/> > </capabilities> ></qemu-migration> > >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMigrationParamsCheck:1330 : Enabling migration capability 'pause-before-switchover' >2023-07-26 02:12:20.807+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:20.807+0000: 386911: debug : qemuMonitorGetMigrationParams:2190 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:20.807+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"query-migrate-parameters","id":"libvirt-455"} > fd=-1 >2023-07-26 02:12:20.807+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"query-migrate-parameters","id":"libvirt-455"} > len=59 ret=59 errno=0 >2023-07-26 02:12:20.808+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {"cpu-throttle-tailslow": false, "xbzrle-cache-size": 67108864, "cpu-throttle-initial": 20, "announce-max": 550, "decompress-threads": 2, "compress-threads": 8, "compress-level": 1, "multifd-channels": 2, "multifd-zstd-level": 1, "announce-initial": 50, "block-incremental": false, "compress-wait-thread": true, "downtime-limit": 300, "tls-authz": "", "multifd-compression": "none", "announce-rounds": 5, "announce-step": 100, "tls-creds": "", "multifd-zlib-level": 1, "max-cpu-throttle": 99, "max-postcopy-bandwidth": 0, "tls-hostname": "", "throttle-trigger-threshold": 50, "max-bandwidth": 134217728, "x-checkpoint-delay": 20000, "cpu-throttle-increment": 10}, 
"id": "libvirt-455"}] >2023-07-26 02:12:20.808+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {"cpu-throttle-tailslow": false, "xbzrle-cache-size": 67108864, "cpu-throttle-initial": 20, "announce-max": 550, "decompress-threads": 2, "compress-threads": 8, "compress-level": 1, "multifd-channels": 2, "multifd-zstd-level": 1, "announce-initial": 50, "block-incremental": false, "compress-wait-thread": true, "downtime-limit": 300, "tls-authz": "", "multifd-compression": "none", "announce-rounds": 5, "announce-step": 100, "tls-creds": "", "multifd-zlib-level": 1, "max-cpu-throttle": 99, "max-postcopy-bandwidth": 0, "tls-hostname": "", "throttle-trigger-threshold": 50, "max-bandwidth": 134217728, "x-checkpoint-delay": 20000, "cpu-throttle-increment": 10}, "id": "libvirt-455"} >2023-07-26 02:12:20.808+0000: 386911: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:20.809+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:20.809+0000: 386911: debug : qemuMonitorSetMigrationCapabilities:3414 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:20.809+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"migrate-set-capabilities","arguments":{"capabilities":[{"capability":"xbzrle","state":false},{"capability":"auto-converge","state":false},{"capability":"rdma-pin-all","state":false},{"capability":"postcopy-ram","state":false},{"capability":"compress","state":false},{"capability":"pause-before-switchover","state":true},{"capability":"late-block-activate","state":false},{"capability":"multifd","state":false},{"capability":"dirty-bitmaps","state":false},{"capability":"return-path","state":true},{"capability":"zero-copy-send","state":false}]},"id":"libvirt-456"} > fd=-1 >2023-07-26 
02:12:20.809+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"migrate-set-capabilities","arguments":{"capabilities":[{"capability":"xbzrle","state":false},{"capability":"auto-converge","state":false},{"capability":"rdma-pin-all","state":false},{"capability":"postcopy-ram","state":false},{"capability":"compress","state":false},{"capability":"pause-before-switchover","state":true},{"capability":"late-block-activate","state":false},{"capability":"multifd","state":false},{"capability":"dirty-bitmaps","state":false},{"capability":"return-path","state":true},{"capability":"zero-copy-send","state":false}]},"id":"libvirt-456"} > len=578 ret=578 errno=0 >2023-07-26 02:12:20.815+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-456"}] >2023-07-26 02:12:20.815+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-456"} >2023-07-26 02:12:20.815+0000: 386911: debug : qemuMonitorSetMigrationParams:2209 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:20.815+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"migrate-set-parameters","arguments":{"tls-creds":"","tls-hostname":"","max-bandwidth":9223372036853727232},"id":"libvirt-457"} > fd=-1 >2023-07-26 02:12:20.815+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"migrate-set-parameters","arguments":{"tls-creds":"","tls-hostname":"","max-bandwidth":9223372036853727232},"id":"libvirt-457"} > len=140 ret=140 errno=0 >2023-07-26 02:12:20.816+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-457"}] >2023-07-26 02:12:20.816+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-457"} >2023-07-26 02:12:20.816+0000: 
386911: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:20.816+0000: 386911: debug : qemuMigrationSrcNBDStorageCopy:1167 : Starting drive mirrors for domain rhel-9.2 >2023-07-26 02:12:20.817+0000: 386911: debug : qemuBlockJobSyncBegin:1646 : disk=vda >2023-07-26 02:12:20.817+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyBlockdev:1042 : starting blockdev mirror for disk=vda to host=vm-10-0-79-186.hosted.upshift.rdu2.redhat.com >2023-07-26 02:12:20.817+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:20.817+0000: 386911: debug : qemuMonitorBlockdevAdd:3968 : props=0x7fb78c014ee0 (node-name=migration-vda-storage) >2023-07-26 02:12:20.817+0000: 386911: debug : qemuMonitorBlockdevAdd:3971 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:20.817+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"blockdev-add","arguments":{"driver":"nbd","server":{"type":"inet","host":"vm-10-0-79-186.hosted.upshift.rdu2.redhat.com","port":"49153"},"export":"drive-virtio-disk0","tls-hostname":"","node-name":"migration-vda-storage","read-only":false,"discard":"unmap"},"id":"libvirt-458"} > fd=-1 >2023-07-26 02:12:20.817+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"blockdev-add","arguments":{"driver":"nbd","server":{"type":"inet","host":"vm-10-0-79-186.hosted.upshift.rdu2.redhat.com","port":"49153"},"export":"drive-virtio-disk0","tls-hostname":"","node-name":"migration-vda-storage","read-only":false,"discard":"unmap"},"id":"libvirt-458"} > len=291 ret=291 errno=0 >2023-07-26 02:12:20.823+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-458"}] >2023-07-26 02:12:20.823+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: 
mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-458"} >2023-07-26 02:12:20.823+0000: 386911: debug : qemuMonitorBlockdevAdd:3968 : props=0x7fb78c0016d0 (node-name=migration-vda-format) >2023-07-26 02:12:20.823+0000: 386911: debug : qemuMonitorBlockdevAdd:3971 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:20.823+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"blockdev-add","arguments":{"node-name":"migration-vda-format","read-only":false,"driver":"raw","file":"migration-vda-storage"},"id":"libvirt-459"} > fd=-1 >2023-07-26 02:12:20.823+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"blockdev-add","arguments":{"node-name":"migration-vda-format","read-only":false,"driver":"raw","file":"migration-vda-storage"},"id":"libvirt-459"} > len=160 ret=160 errno=0 >2023-07-26 02:12:20.825+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-459"}] >2023-07-26 02:12:20.825+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-459"} >2023-07-26 02:12:20.825+0000: 386911: debug : qemuMonitorBlockdevMirror:2779 : jobname=drive-virtio-disk0, persistjob=1, device=libvirt-CoR-vda, target=migration-vda-format, bandwidth=9223372036853727232, granularity=0, buf_size=0, shallow=0 syncWrite=0 >2023-07-26 02:12:20.825+0000: 386911: debug : qemuMonitorBlockdevMirror:2784 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:20.825+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"blockdev-mirror","arguments":{"job-id":"drive-virtio-disk0","device":"libvirt-CoR-vda","target":"migration-vda-format","speed":9223372036853727232,"sync":"full","auto-finalize":true,"auto-dismiss":false},"id":"libvirt-460"} > fd=-1 >2023-07-26 02:12:20.825+0000: 387763: info : qemuMonitorIOWrite:366 : 
QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"blockdev-mirror","arguments":{"job-id":"drive-virtio-disk0","device":"libvirt-CoR-vda","target":"migration-vda-format","speed":9223372036853727232,"sync":"full","auto-finalize":true,"auto-dismiss":false},"id":"libvirt-460"} > len=237 ret=237 errno=0 >2023-07-26 02:12:20.828+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337540, "microseconds": 828071}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive-virtio-disk0"}}] >2023-07-26 02:12:20.828+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337540, "microseconds": 828071}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive-virtio-disk0"}} >2023-07-26 02:12:20.828+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb774007620 >2023-07-26 02:12:20.828+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:20.828+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:20.828+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb77401a6f0 >2023-07-26 02:12:20.828+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:20.828+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk0'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'created'(1) >2023-07-26 02:12:20.828+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337540, "microseconds": 828315}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive-virtio-disk0"}}] >2023-07-26 02:12:20.828+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 
1690337540, "microseconds": 828315}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive-virtio-disk0"}} >2023-07-26 02:12:20.828+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb774007620 >2023-07-26 02:12:20.828+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:20.828+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:20.828+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb7740188c0 >2023-07-26 02:12:20.828+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:20.828+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk0'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'running'(2) >2023-07-26 02:12:20.833+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-460"}] >2023-07-26 02:12:20.833+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-460"} >2023-07-26 02:12:20.833+0000: 386911: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:20.834+0000: 386911: debug : qemuBlockJobSyncBegin:1646 : disk=vdb >2023-07-26 02:12:20.834+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyBlockdev:1042 : starting blockdev mirror for disk=vdb to host=vm-10-0-79-186.hosted.upshift.rdu2.redhat.com >2023-07-26 02:12:20.834+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:20.834+0000: 386911: debug : qemuMonitorBlockdevAdd:3968 : props=0x7fb78c00cf30 (node-name=migration-vdb-storage) >2023-07-26 02:12:20.834+0000: 386911: debug : qemuMonitorBlockdevAdd:3971 : mon:0x7fb72003d010 
vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:20.834+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"blockdev-add","arguments":{"driver":"nbd","server":{"type":"inet","host":"vm-10-0-79-186.hosted.upshift.rdu2.redhat.com","port":"49153"},"export":"drive-virtio-disk1","tls-hostname":"","node-name":"migration-vdb-storage","read-only":false,"discard":"unmap"},"id":"libvirt-461"} > fd=-1 >2023-07-26 02:12:20.834+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"blockdev-add","arguments":{"driver":"nbd","server":{"type":"inet","host":"vm-10-0-79-186.hosted.upshift.rdu2.redhat.com","port":"49153"},"export":"drive-virtio-disk1","tls-hostname":"","node-name":"migration-vdb-storage","read-only":false,"discard":"unmap"},"id":"libvirt-461"} > len=291 ret=291 errno=0 >2023-07-26 02:12:20.843+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-461"}] >2023-07-26 02:12:20.843+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-461"} >2023-07-26 02:12:20.843+0000: 386911: debug : qemuMonitorBlockdevAdd:3968 : props=0x7fb78c0188f0 (node-name=migration-vdb-format) >2023-07-26 02:12:20.843+0000: 386911: debug : qemuMonitorBlockdevAdd:3971 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:20.843+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"blockdev-add","arguments":{"node-name":"migration-vdb-format","read-only":false,"driver":"raw","file":"migration-vdb-storage"},"id":"libvirt-462"} > fd=-1 >2023-07-26 02:12:20.843+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"blockdev-add","arguments":{"node-name":"migration-vdb-format","read-only":false,"driver":"raw","file":"migration-vdb-storage"},"id":"libvirt-462"} > len=160 ret=160 errno=0 
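[Editorial annotation, not part of the original log] The two blockdev-add calls above show libvirt's usual two-node layering for each migration NBD target: a protocol ("-storage") node pointing at the destination's NBD export, and a format node ("raw" here) stacked on top of it, which blockdev-mirror then uses as its target. A minimal sketch of how those QMP arguments fit together; the node names, export name, host, and port are taken from the log, while the helper function itself is hypothetical:

```python
import json

def nbd_mirror_target_cmds(disk, export, host, port):
    """Build the two blockdev-add commands seen in the log: an NBD
    protocol node plus a raw format node layered on top of it."""
    storage = f"migration-{disk}-storage"
    fmt = f"migration-{disk}-format"
    return [
        {"execute": "blockdev-add",
         "arguments": {"driver": "nbd",
                       "server": {"type": "inet", "host": host, "port": str(port)},
                       "export": export,
                       "node-name": storage,
                       "read-only": False}},
        # The format node references the storage node by node-name.
        {"execute": "blockdev-add",
         "arguments": {"node-name": fmt, "read-only": False,
                       "driver": "raw", "file": storage}},
    ]

cmds = nbd_mirror_target_cmds("vdb", "drive-virtio-disk1",
                              "vm-10-0-79-186.hosted.upshift.rdu2.redhat.com", 49153)
print(json.dumps(cmds, indent=2))
```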
>2023-07-26 02:12:20.845+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-462"}] >2023-07-26 02:12:20.845+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-462"} >2023-07-26 02:12:20.845+0000: 386911: debug : qemuMonitorBlockdevMirror:2779 : jobname=drive-virtio-disk1, persistjob=1, device=libvirt-1-format, target=migration-vdb-format, bandwidth=9223372036853727232, granularity=0, buf_size=0, shallow=0 syncWrite=0 >2023-07-26 02:12:20.845+0000: 386911: debug : qemuMonitorBlockdevMirror:2784 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:20.845+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"blockdev-mirror","arguments":{"job-id":"drive-virtio-disk1","device":"libvirt-1-format","target":"migration-vdb-format","speed":9223372036853727232,"sync":"full","auto-finalize":true,"auto-dismiss":false},"id":"libvirt-463"} > fd=-1 >2023-07-26 02:12:20.845+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"blockdev-mirror","arguments":{"job-id":"drive-virtio-disk1","device":"libvirt-1-format","target":"migration-vdb-format","speed":9223372036853727232,"sync":"full","auto-finalize":true,"auto-dismiss":false},"id":"libvirt-463"} > len=238 ret=238 errno=0 >2023-07-26 02:12:20.848+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337540, "microseconds": 848467}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive-virtio-disk1"}}] >2023-07-26 02:12:20.848+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337540, "microseconds": 848467}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive-virtio-disk1"}} >2023-07-26 02:12:20.848+0000: 387763: debug : 
qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77401a370 >2023-07-26 02:12:20.848+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:20.848+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:20.848+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb7740080d0 >2023-07-26 02:12:20.848+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:20.848+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk1'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'created'(1) >2023-07-26 02:12:20.848+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337540, "microseconds": 848803}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive-virtio-disk1"}}] >2023-07-26 02:12:20.848+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337540, "microseconds": 848803}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive-virtio-disk1"}} >2023-07-26 02:12:20.848+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77401a370 >2023-07-26 02:12:20.848+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:20.848+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:20.848+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb774008700 >2023-07-26 02:12:20.848+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:20.848+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk1'(domain: 0x7fb76008e840,rhel-9.2) state changed to 
'running'(2) >2023-07-26 02:12:20.852+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-463"}] >2023-07-26 02:12:20.852+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-463"} >2023-07-26 02:12:20.852+0000: 386911: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:20.853+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:723 : Waiting for 2 disk mirrors to get ready >2023-07-26 02:12:20.859+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337540, "microseconds": 859711}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk1"}}] >2023-07-26 02:12:20.859+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337540, "microseconds": 859711}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk1"}} >2023-07-26 02:12:20.859+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb774028be0 >2023-07-26 02:12:20.859+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:20.859+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:20.859+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb77400b060 >2023-07-26 02:12:20.859+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:20.859+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk1'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'ready'(4) >2023-07-26 02:12:20.859+0000: 387763: debug : qemuProcessHandleJobStatusChange:915 : job 'drive-virtio-disk1' handled synchronously 
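[Editorial annotation, not part of the original log] Each mirror job above walks through JOB_STATUS_CHANGE states (created, running, then ready once the copy has caught up). Those events can be pulled out of the `qemuMonitorJSONIOProcessLine ... Line [...]` debug lines mechanically; this extractor is a hypothetical helper, shown only to make the progression easy to follow:

```python
import json, re

# Match the JSON payload that qemuMonitorJSONIOProcessLine logs as "Line [...]".
LINE_RE = re.compile(r"Line \[(\{.*\})\]")

def job_status_events(log_text):
    """Yield (job-id, status) pairs for JOB_STATUS_CHANGE events."""
    for m in LINE_RE.finditer(log_text):
        msg = json.loads(m.group(1))
        if msg.get("event") == "JOB_STATUS_CHANGE":
            yield msg["data"]["id"], msg["data"]["status"]

# Two sample lines copied from the log above.
sample = (
    'Line [{"timestamp": {"seconds": 1690337540, "microseconds": 848467}, '
    '"event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "drive-virtio-disk1"}}]\n'
    'Line [{"timestamp": {"seconds": 1690337540, "microseconds": 848803}, '
    '"event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "drive-virtio-disk1"}}]\n'
)
events = list(job_status_events(sample))
print(events)  # [('drive-virtio-disk1', 'created'), ('drive-virtio-disk1', 'running')]
```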
>2023-07-26 02:12:20.859+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337540, "microseconds": 859769}, "event": "BLOCK_JOB_READY", "data": {"device": "drive-virtio-disk1", "len": 10485760, "offset": 10485760, "speed": 9223372036853727232, "type": "mirror"}}] >2023-07-26 02:12:20.860+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337540, "microseconds": 859769}, "event": "BLOCK_JOB_READY", "data": {"device": "drive-virtio-disk1", "len": 10485760, "offset": 10485760, "speed": 9223372036853727232, "type": "mirror"}} >2023-07-26 02:12:20.860+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77401a340 >2023-07-26 02:12:20.860+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=BLOCK_JOB_READY >2023-07-26 02:12:20.860+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:20.860+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:723 : Waiting for 1 disk mirrors to get ready >2023-07-26 02:12:24.319+0000: 386924: debug : virThreadJobSet:93 : Thread 386924 (prio-rpc-virtqemud) is now running job remoteDispatchAuthList >2023-07-26 02:12:24.319+0000: 386924: debug : virThreadJobClear:118 : Thread 386924 (prio-rpc-virtqemud) finished job remoteDispatchAuthList with ret=0 >2023-07-26 02:12:24.319+0000: 386925: debug : virThreadJobSet:93 : Thread 386925 (prio-rpc-virtqemud) is now running job remoteDispatchConnectSupportsFeature >2023-07-26 02:12:24.319+0000: 386925: debug : virThreadJobClear:118 : Thread 386925 (prio-rpc-virtqemud) finished job remoteDispatchConnectSupportsFeature with ret=0 >2023-07-26 02:12:24.319+0000: 386914: debug : virThreadJobSet:93 : Thread 386914 (rpc-virtqemud) is now running job remoteDispatchConnectOpen >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenAuth:1277 : name=, auth=(nil), flags=0x0 >2023-07-26 
02:12:24.319+0000: 386914: debug : virConfLoadConfig:1515 : Loading config file '/etc/libvirt/libvirt.conf' >2023-07-26 02:12:24.319+0000: 386914: debug : virConfReadFile:723 : filename=/etc/libvirt/libvirt.conf >2023-07-26 02:12:24.319+0000: 386914: debug : virConfGetValueString:865 : Get value string (nil) 0 >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:933 : Trying to probe for default URI >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:938 : QEMU driver URI probe returned 'qemu:///system' >2023-07-26 02:12:24.319+0000: 386914: debug : virConfGetValueStringList:913 : Get value string list (nil) 0 >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:966 : Split "qemu:///system" to URI components: > scheme qemu > server <null> > user <null> > port 0 > path /system >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:1072 : trying driver 0 (Test) ... >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:1103 : No matching URI scheme >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:1072 : trying driver 1 (ESX) ... >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:1103 : No matching URI scheme >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:1072 : trying driver 2 (remote) ... >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:1111 : Matching any URI scheme for 'qemu' >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:1137 : driver 2 remote returned DECLINED >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:1072 : trying driver 3 (QEMU) ... 
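[Editorial annotation, not part of the original log] The virConnectOpenInternal probe loop above splits "qemu:///system" into components and then asks each registered driver (Test, ESX, remote, QEMU, ...) whether it handles the scheme. The split itself is ordinary URI parsing, which can be checked against the components the log prints:

```python
from urllib.parse import urlparse

u = urlparse("qemu:///system")
# Matches what virConnectOpenInternal logs: scheme "qemu",
# no server (empty netloc), path "/system".
print(u.scheme, repr(u.netloc), u.path)
```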
>2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:1097 : Matched URI scheme 'qemu' >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectOpenInternal:1137 : driver 3 QEMU returned SUCCESS >2023-07-26 02:12:24.319+0000: 386914: debug : virConnectGetType:163 : conn=0x7fb73c0158d0 >2023-07-26 02:12:24.319+0000: 386914: debug : virThreadJobClear:118 : Thread 386914 (rpc-virtqemud) finished job remoteDispatchConnectOpen with ret=0 >2023-07-26 02:12:24.320+0000: 386922: debug : virThreadJobSet:93 : Thread 386922 (prio-rpc-virtqemud) is now running job remoteDispatchConnectGetURI >2023-07-26 02:12:24.320+0000: 386922: debug : virConnectGetURI:316 : conn=0x7fb73c0158d0 >2023-07-26 02:12:24.320+0000: 386922: debug : virThreadJobClear:118 : Thread 386922 (prio-rpc-virtqemud) finished job remoteDispatchConnectGetURI with ret=0 >2023-07-26 02:12:24.320+0000: 386916: debug : virThreadJobSet:93 : Thread 386916 (rpc-virtqemud) is now running job remoteDispatchConnectSupportsFeature >2023-07-26 02:12:24.320+0000: 386916: debug : virThreadJobClear:118 : Thread 386916 (rpc-virtqemud) finished job remoteDispatchConnectSupportsFeature with ret=0 >2023-07-26 02:12:24.320+0000: 386924: debug : virThreadJobSet:93 : Thread 386924 (prio-rpc-virtqemud) is now running job remoteDispatchConnectSupportsFeature >2023-07-26 02:12:24.320+0000: 386924: debug : virThreadJobClear:118 : Thread 386924 (prio-rpc-virtqemud) finished job remoteDispatchConnectSupportsFeature with ret=0 >2023-07-26 02:12:24.331+0000: 386925: debug : virThreadJobSet:93 : Thread 386925 (prio-rpc-virtqemud) is now running job remoteDispatchConnectRegisterCloseCallback >2023-07-26 02:12:24.331+0000: 386925: debug : virConnectRegisterCloseCallback:1501 : conn=0x7fb73c0158d0 >2023-07-26 02:12:24.331+0000: 386925: debug : virThreadJobClear:118 : Thread 386925 (prio-rpc-virtqemud) finished job 
remoteDispatchConnectRegisterCloseCallback with ret=0 >2023-07-26 02:12:24.331+0000: 386919: debug : virThreadJobSet:93 : Thread 386919 (rpc-virtqemud) is now running job remoteDispatchDomainLookupByName >2023-07-26 02:12:24.331+0000: 386919: debug : virDomainLookupByName:449 : conn=0x7fb73c0158d0, name=rhel-9.2 >2023-07-26 02:12:24.331+0000: 386919: debug : virThreadJobClear:118 : Thread 386919 (rpc-virtqemud) finished job remoteDispatchDomainLookupByName with ret=0 >2023-07-26 02:12:24.331+0000: 386920: debug : virThreadJobSet:93 : Thread 386920 (rpc-virtqemud) is now running job remoteDispatchDomainBlockJobAbort >2023-07-26 02:12:24.331+0000: 386920: debug : virDomainBlockJobAbort:10578 : dom=0x7fb718005150, (VM: name=rhel-9.2, uuid=e46b3d21-99dd-4be9-92ff-78556c7234c4), disk=vda, flags=0x0 >2023-07-26 02:12:30.990+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337550, "microseconds": 989945}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk0"}}] >2023-07-26 02:12:30.990+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337550, "microseconds": 989945}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk0"}} >2023-07-26 02:12:30.990+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77401a340 >2023-07-26 02:12:30.990+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:30.990+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:30.990+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb77401fc40 >2023-07-26 02:12:30.990+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:30.990+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : 
job 'drive-virtio-disk0'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'ready'(4) >2023-07-26 02:12:30.990+0000: 387763: debug : qemuProcessHandleJobStatusChange:915 : job 'drive-virtio-disk0' handled synchronously >2023-07-26 02:12:30.990+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337550, "microseconds": 990625}, "event": "BLOCK_JOB_READY", "data": {"device": "drive-virtio-disk0", "len": 10737418240, "offset": 10737418240, "speed": 9223372036853727232, "type": "mirror"}}] >2023-07-26 02:12:30.990+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337550, "microseconds": 990625}, "event": "BLOCK_JOB_READY", "data": {"device": "drive-virtio-disk0", "len": 10737418240, "offset": 10737418240, "speed": 9223372036853727232, "type": "mirror"}} >2023-07-26 02:12:30.990+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77401a340 >2023-07-26 02:12:30.990+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=BLOCK_JOB_READY >2023-07-26 02:12:30.990+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:30.991+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:726 : All disk mirrors are ready >2023-07-26 02:12:30.991+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:30.991+0000: 386911: debug : qemuMonitorGetAllBlockJobInfo:2933 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:30.991+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"query-block-jobs","id":"libvirt-464"} > fd=-1 >2023-07-26 02:12:30.991+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"query-block-jobs","id":"libvirt-464"} > len=51 ret=51 errno=0 >2023-07-26 
02:12:30.992+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [{"auto-finalize": true, "io-status": "ok", "device": "drive-virtio-disk1", "auto-dismiss": false, "busy": false, "len": 10485760, "offset": 10485760, "status": "ready", "paused": false, "speed": 9223372036853727232, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "drive-virtio-disk0", "auto-dismiss": false, "busy": false, "len": 10737418240, "offset": 10737418240, "status": "ready", "paused": false, "speed": 9223372036853727232, "ready": true, "type": "mirror"}], "id": "libvirt-464"}] >2023-07-26 02:12:30.992+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": [{"auto-finalize": true, "io-status": "ok", "device": "drive-virtio-disk1", "auto-dismiss": false, "busy": false, "len": 10485760, "offset": 10485760, "status": "ready", "paused": false, "speed": 9223372036853727232, "ready": true, "type": "mirror"}, {"auto-finalize": true, "io-status": "ok", "device": "drive-virtio-disk0", "auto-dismiss": false, "busy": false, "len": 10737418240, "offset": 10737418240, "status": "ready", "paused": false, "speed": 9223372036853727232, "ready": true, "type": "mirror"}], "id": "libvirt-464"} >2023-07-26 02:12:30.992+0000: 386911: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:30.992+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:30.995+0000: 386911: debug : qemuDomainLogAppendMessage:7202 : Append log message (vm='rhel-9.2' message='2023-07-26 02:12:30.995+0000: initiating migration >) stdioLogD=1 >2023-07-26 02:12:30.997+0000: 386911: debug : qemuMonitorMigrateToFd:2235 : fd=29 flags=0x0 >2023-07-26 02:12:30.997+0000: 386911: debug : qemuMonitorMigrateToFd:2237 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 
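[Editorial annotation, not part of the original log] The query-block-jobs reply above shows both mirrors with offset == len, i.e. fully synced, which is why the source then proceeds to start the RAM migration. The conspicuous "speed" value 9223372036853727232 is the "unlimited" bandwidth libvirt sets: it equals INT64_MAX rounded down to a whole MiB (an inference from the value itself, not something the log states). A quick arithmetic check plus the progress computation:

```python
MiB = 1 << 20
INT64_MAX = (1 << 63) - 1

# "Unlimited" bandwidth as INT64_MAX rounded down to a MiB boundary
# (assumption inferred from the value appearing throughout the log).
unlimited = (INT64_MAX // MiB) * MiB
print(unlimited)  # 9223372036853727232

jobs = [
    {"device": "drive-virtio-disk1", "len": 10485760, "offset": 10485760},
    {"device": "drive-virtio-disk0", "len": 10737418240, "offset": 10737418240},
]
progress = {j["device"]: j["offset"] / j["len"] for j in jobs}
print(progress)  # both 1.0: every mirror is fully caught up
```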
>2023-07-26 02:12:30.997+0000: 386911: debug : qemuMonitorSendFileHandle:2474 : fdname=migrate fd=29 >2023-07-26 02:12:30.997+0000: 386911: debug : qemuMonitorSendFileHandle:2476 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:30.997+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"getfd","arguments":{"fdname":"migrate"},"id":"libvirt-465"} > fd=29 >2023-07-26 02:12:30.997+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"getfd","arguments":{"fdname":"migrate"},"id":"libvirt-465"} > len=73 ret=73 errno=0 >2023-07-26 02:12:30.997+0000: 387763: info : qemuMonitorIOWrite:371 : QEMU_MONITOR_IO_SEND_FD: mon=0x7fb72003d010 fd=29 ret=73 errno=0 >2023-07-26 02:12:30.998+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-465"}] >2023-07-26 02:12:30.998+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-465"} >2023-07-26 02:12:30.998+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"migrate","arguments":{"detach":true,"resume":false,"uri":"fd:migrate"},"id":"libvirt-466"} > fd=-1 >2023-07-26 02:12:30.998+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"migrate","arguments":{"detach":true,"resume":false,"uri":"fd:migrate"},"id":"libvirt-466"} > len=104 ret=104 errno=0 >2023-07-26 02:12:30.999+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337550, "microseconds": 999705}, "event": "MIGRATION", "data": {"status": "setup"}}] >2023-07-26 02:12:30.999+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337550, "microseconds": 999705}, "event": "MIGRATION", "data": {"status": "setup"}} >2023-07-26 
02:12:30.999+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb7740200a0 >2023-07-26 02:12:30.999+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=MIGRATION >2023-07-26 02:12:30.999+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:30.999+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle MIGRATION handler=0x7fb7a872fd20 data=0x7fb774002e60 >2023-07-26 02:12:30.999+0000: 387763: debug : qemuMonitorEmitMigrationStatus:1325 : mon=0x7fb72003d010, status=setup >2023-07-26 02:12:30.999+0000: 387763: debug : qemuProcessHandleMigrationStatus:1455 : Migration of domain 0x7fb76008e840 rhel-9.2 changed state to setup >2023-07-26 02:12:31.000+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-466"}] >2023-07-26 02:12:31.000+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-466"} >2023-07-26 02:12:31.000+0000: 386911: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.000+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:726 : All disk mirrors are ready >2023-07-26 02:12:31.004+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 4089}, "event": "MIGRATION_PASS", "data": {"pass": 1}}] >2023-07-26 02:12:31.004+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 4089}, "event": "MIGRATION_PASS", "data": {"pass": 1}} >2023-07-26 02:12:31.004+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb7740291f0 >2023-07-26 02:12:31.004+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=MIGRATION_PASS >2023-07-26 02:12:31.004+0000: 387763: 
debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.004+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle MIGRATION_PASS handler=0x7fb7a8730610 data=0x7fb77400a4a0 >2023-07-26 02:12:31.004+0000: 387763: debug : qemuMonitorEmitMigrationPass:1336 : mon=0x7fb72003d010, pass=1 >2023-07-26 02:12:31.004+0000: 387763: debug : qemuProcessHandleMigrationPass:1571 : Migrating domain 0x7fb76008e840 rhel-9.2, iteration 1 >2023-07-26 02:12:31.004+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 4217}, "event": "MIGRATION", "data": {"status": "active"}}] >2023-07-26 02:12:31.004+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 4217}, "event": "MIGRATION", "data": {"status": "active"}} >2023-07-26 02:12:31.004+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb7740291f0 >2023-07-26 02:12:31.004+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=MIGRATION >2023-07-26 02:12:31.004+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.004+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle MIGRATION handler=0x7fb7a872fd20 data=0x7fb77400e640 >2023-07-26 02:12:31.004+0000: 387763: debug : qemuMonitorEmitMigrationStatus:1325 : mon=0x7fb72003d010, status=active >2023-07-26 02:12:31.004+0000: 387763: debug : qemuProcessHandleMigrationStatus:1455 : Migration of domain 0x7fb76008e840 rhel-9.2 changed state to active >2023-07-26 02:12:31.004+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:726 : All disk mirrors are ready >2023-07-26 02:12:31.466+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 466543}, "event": "MIGRATION_PASS", "data": {"pass": 2}}] >2023-07-26 02:12:31.466+0000: 387763: info 
: qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 466543}, "event": "MIGRATION_PASS", "data": {"pass": 2}} >2023-07-26 02:12:31.466+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb7740291f0 >2023-07-26 02:12:31.466+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=MIGRATION_PASS >2023-07-26 02:12:31.466+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.466+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle MIGRATION_PASS handler=0x7fb7a8730610 data=0x7fb77400a480 >2023-07-26 02:12:31.466+0000: 387763: debug : qemuMonitorEmitMigrationPass:1336 : mon=0x7fb72003d010, pass=2 >2023-07-26 02:12:31.466+0000: 387763: debug : qemuProcessHandleMigrationPass:1571 : Migrating domain 0x7fb76008e840 rhel-9.2, iteration 2 >2023-07-26 02:12:31.466+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 466830}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "drive-virtio-disk1"}}] >2023-07-26 02:12:31.467+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 466830}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "drive-virtio-disk1"}} >2023-07-26 02:12:31.467+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb7740291f0 >2023-07-26 02:12:31.467+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.467+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.467+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb77401b570 >2023-07-26 02:12:31.467+0000: 387763: debug : 
qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.467+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk1'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'standby'(5) >2023-07-26 02:12:31.467+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 466857}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk1"}}] >2023-07-26 02:12:31.467+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 466857}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk1"}} >2023-07-26 02:12:31.467+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb7740291f0 >2023-07-26 02:12:31.467+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.467+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.467+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb77401a6f0 >2023-07-26 02:12:31.467+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.467+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk1'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'ready'(4) >2023-07-26 02:12:31.467+0000: 387763: debug : qemuProcessHandleJobStatusChange:915 : job 'drive-virtio-disk1' handled synchronously >2023-07-26 02:12:31.467+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:726 : All disk mirrors are ready >2023-07-26 02:12:31.470+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 470785}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": 
"drive-virtio-disk0"}}] >2023-07-26 02:12:31.471+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 470785}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "drive-virtio-disk0"}} >2023-07-26 02:12:31.471+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb7740291f0 >2023-07-26 02:12:31.471+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.471+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.471+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb774009d90 >2023-07-26 02:12:31.471+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.471+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk0'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'standby'(5) >2023-07-26 02:12:31.471+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 470840}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk0"}}] >2023-07-26 02:12:31.471+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 470840}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk0"}} >2023-07-26 02:12:31.471+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb7740291f0 >2023-07-26 02:12:31.471+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.471+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.471+0000: 387763: debug : 
qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb774007220 >2023-07-26 02:12:31.471+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.471+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk0'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'ready'(4) >2023-07-26 02:12:31.471+0000: 387763: debug : qemuProcessHandleJobStatusChange:915 : job 'drive-virtio-disk0' handled synchronously >2023-07-26 02:12:31.471+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:726 : All disk mirrors are ready >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 477211}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "drive-virtio-disk0"}}] >2023-07-26 02:12:31.477+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 477211}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "drive-virtio-disk0"}} >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77400e640 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.477+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb7740200f0 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk0'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'standby'(5) >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : 
Line [{"timestamp": {"seconds": 1690337551, "microseconds": 477259}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "drive-virtio-disk1"}}] >2023-07-26 02:12:31.477+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 477259}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "drive-virtio-disk1"}} >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77400e640 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.477+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb774012d20 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk1'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'standby'(5) >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 477282}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk0"}}] >2023-07-26 02:12:31.477+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 477282}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk0"}} >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77400e640 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.477+0000: 
387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb774020680 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk0'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'ready'(4) >2023-07-26 02:12:31.477+0000: 387763: debug : qemuProcessHandleJobStatusChange:915 : job 'drive-virtio-disk0' handled synchronously >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 477298}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk1"}}] >2023-07-26 02:12:31.477+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 477298}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk1"}} >2023-07-26 02:12:31.477+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:726 : All disk mirrors are ready >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77400e640 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.477+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb7740200f0 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.477+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk1'(domain: 0x7fb76008e840,rhel-9.2) state 
changed to 'ready'(4) >2023-07-26 02:12:31.477+0000: 387763: debug : qemuProcessHandleJobStatusChange:915 : job 'drive-virtio-disk1' handled synchronously >2023-07-26 02:12:31.477+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:726 : All disk mirrors are ready >2023-07-26 02:12:31.485+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 485013}, "event": "STOP"}] >2023-07-26 02:12:31.485+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 485013}, "event": "STOP"} >2023-07-26 02:12:31.485+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77400e640 >2023-07-26 02:12:31.485+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=STOP >2023-07-26 02:12:31.485+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.485+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle STOP handler=0x7fb7a872e6e0 data=(nil) >2023-07-26 02:12:31.485+0000: 387763: debug : qemuMonitorEmitStop:1100 : mon=0x7fb72003d010 >2023-07-26 02:12:31.485+0000: 387763: debug : qemuProcessHandleStop:659 : Transitioned guest rhel-9.2 to paused state, reason migration, event detail 1 >2023-07-26 02:12:31.485+0000: 387763: debug : qemuProcessHandleStop:678 : Preserving lock state '<null>' >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 485071}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "drive-virtio-disk0"}}] >2023-07-26 02:12:31.486+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 485071}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "drive-virtio-disk0"}} >2023-07-26 
02:12:31.486+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77400e640 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.486+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb774017550 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk0'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'standby'(5) >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 485095}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "drive-virtio-disk1"}}] >2023-07-26 02:12:31.486+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 485095}, "event": "JOB_STATUS_CHANGE", "data": {"status": "standby", "id": "drive-virtio-disk1"}} >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77400e640 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.486+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb77400c310 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk1'(domain: 
0x7fb76008e840,rhel-9.2) state changed to 'standby'(5) >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 485116}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk0"}}] >2023-07-26 02:12:31.486+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 485116}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk0"}} >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77400e640 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.486+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb774009d90 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk0'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'ready'(4) >2023-07-26 02:12:31.486+0000: 387763: debug : qemuProcessHandleJobStatusChange:915 : job 'drive-virtio-disk0' handled synchronously >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 485132}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk1"}}] >2023-07-26 02:12:31.486+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 485132}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "drive-virtio-disk1"}} >2023-07-26 
02:12:31.486+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77400e640 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.486+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.486+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:726 : All disk mirrors are ready >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb774017550 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.486+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk1'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'ready'(4) >2023-07-26 02:12:31.486+0000: 387763: debug : qemuProcessHandleJobStatusChange:915 : job 'drive-virtio-disk1' handled synchronously >2023-07-26 02:12:31.486+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:726 : All disk mirrors are ready >2023-07-26 02:12:31.489+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 489574}, "event": "MIGRATION", "data": {"status": "pre-switchover"}}] >2023-07-26 02:12:31.489+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 489574}, "event": "MIGRATION", "data": {"status": "pre-switchover"}} >2023-07-26 02:12:31.489+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77400e640 >2023-07-26 02:12:31.489+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=MIGRATION >2023-07-26 02:12:31.489+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.489+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle 
MIGRATION handler=0x7fb7a872fd20 data=0x7fb7740219b0 >2023-07-26 02:12:31.489+0000: 387763: debug : qemuMonitorEmitMigrationStatus:1325 : mon=0x7fb72003d010, status=pre-switchover >2023-07-26 02:12:31.489+0000: 387763: debug : qemuProcessHandleMigrationStatus:1455 : Migration of domain 0x7fb76008e840 rhel-9.2 changed state to pre-switchover >2023-07-26 02:12:31.489+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:726 : All disk mirrors are ready >2023-07-26 02:12:31.489+0000: 386911: debug : qemuMigrationAnyCompleted:1966 : Migration paused before switchover >2023-07-26 02:12:31.489+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.489+0000: 386911: debug : qemuMonitorGetMigrationStats:2220 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:31.489+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"query-migrate","id":"libvirt-467"} > fd=-1 >2023-07-26 02:12:31.489+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"query-migrate","id":"libvirt-467"} > len=48 ret=48 errno=0 >2023-07-26 02:12:31.490+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {"expected-downtime": 300, "status": "pre-switchover", "setup-time": 4, "total-time": 491, "ram": {"total": 2165121024, "postcopy-requests": 0, "dirty-sync-count": 2, "multifd-bytes": 0, "pages-per-second": 2476757, "downtime-bytes": 0, "page-size": 4096, "remaining": 27029504, "postcopy-bytes": 0, "mbps": 3813.0990291262137, "transferred": 546664228, "dirty-sync-missed-zero-copy": 0, "precopy-bytes": 546664228, "duplicate": 389773, "dirty-pages-rate": 0, "skipped": 0, "normal-bytes": 542097408, "normal": 132348}}, "id": "libvirt-467"}] >2023-07-26 02:12:31.490+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": 
{"expected-downtime": 300, "status": "pre-switchover", "setup-time": 4, "total-time": 491, "ram": {"total": 2165121024, "postcopy-requests": 0, "dirty-sync-count": 2, "multifd-bytes": 0, "pages-per-second": 2476757, "downtime-bytes": 0, "page-size": 4096, "remaining": 27029504, "postcopy-bytes": 0, "mbps": 3813.0990291262137, "transferred": 546664228, "dirty-sync-missed-zero-copy": 0, "precopy-bytes": 546664228, "duplicate": 389773, "dirty-pages-rate": 0, "skipped": 0, "normal-bytes": 542097408, "normal": 132348}}, "id": "libvirt-467"} >2023-07-26 02:12:31.490+0000: 386911: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.490+0000: 386911: debug : qemuMigrationSrcNBDCopyCancel:886 : Cancelling drive mirrors for domain rhel-9.2 >2023-07-26 02:12:31.490+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.490+0000: 386911: debug : qemuMonitorBlockJobCancel:2908 : jobname=drive-virtio-disk0 force=0 >2023-07-26 02:12:31.490+0000: 386911: debug : qemuMonitorBlockJobCancel:2910 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:31.490+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"block-job-cancel","arguments":{"device":"drive-virtio-disk0"},"id":"libvirt-468"} > fd=-1 >2023-07-26 02:12:31.490+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"block-job-cancel","arguments":{"device":"drive-virtio-disk0"},"id":"libvirt-468"} > len=95 ret=95 errno=0 >2023-07-26 02:12:31.491+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-468"}] >2023-07-26 02:12:31.491+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-468"} >2023-07-26 02:12:31.491+0000: 386911: 
debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.491+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.492+0000: 386911: debug : qemuMonitorBlockJobCancel:2908 : jobname=drive-virtio-disk1 force=0 >2023-07-26 02:12:31.492+0000: 386911: debug : qemuMonitorBlockJobCancel:2910 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:31.492+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"block-job-cancel","arguments":{"device":"drive-virtio-disk1"},"id":"libvirt-469"} > fd=-1 >2023-07-26 02:12:31.492+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"block-job-cancel","arguments":{"device":"drive-virtio-disk1"},"id":"libvirt-469"} > len=95 ret=95 errno=0 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 493007}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "drive-virtio-disk0"}}] >2023-07-26 02:12:31.493+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 493007}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "drive-virtio-disk0"}} >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77401a710 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.493+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb774009b80 >2023-07-26 02:12:31.493+0000: 
387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk0'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'waiting'(6) >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 493048}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "drive-virtio-disk0"}}] >2023-07-26 02:12:31.493+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 493048}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "drive-virtio-disk0"}} >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77401a710 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.493+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb774017550 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk0'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'pending'(7) >2023-07-26 02:12:31.493+0000: 387763: debug : qemuProcessHandleJobStatusChange:915 : job 'drive-virtio-disk0' handled synchronously >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 493134}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk0", "len": 10737418240, "offset": 10737418240, "speed": 9223372036853727232, "type": "mirror"}}] >2023-07-26 
02:12:31.493+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 493134}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk0", "len": 10737418240, "offset": 10737418240, "speed": 9223372036853727232, "type": "mirror"}} >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77401a710 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=BLOCK_JOB_COMPLETED >2023-07-26 02:12:31.493+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 493154}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "drive-virtio-disk0"}}] >2023-07-26 02:12:31.493+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 493154}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "drive-virtio-disk0"}} >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77401a710 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.493+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb774009250 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk0'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'concluded'(9) >2023-07-26 
02:12:31.493+0000: 387763: debug : qemuProcessHandleJobStatusChange:915 : job 'drive-virtio-disk0' handled synchronously >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-469"}] >2023-07-26 02:12:31.493+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-469"} >2023-07-26 02:12:31.493+0000: 386911: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.493+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.493+0000: 386911: debug : qemuMonitorGetJobInfo:4132 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:31.493+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"query-jobs","id":"libvirt-470"} > fd=-1 >2023-07-26 02:12:31.493+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"query-jobs","id":"libvirt-470"} > len=45 ret=45 errno=0 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 493840}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "drive-virtio-disk1"}}] >2023-07-26 02:12:31.493+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 493840}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "drive-virtio-disk1"}} >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb774029bf0 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.493+0000: 387763: debug : 
qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb77401a6f0 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk1'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'waiting'(6) >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 493872}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "drive-virtio-disk1"}}] >2023-07-26 02:12:31.493+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 493872}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "drive-virtio-disk1"}} >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb774029bf0 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.493+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb7740200a0 >2023-07-26 02:12:31.493+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.494+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk1'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'pending'(7) >2023-07-26 02:12:31.494+0000: 387763: debug : qemuProcessHandleJobStatusChange:915 : job 'drive-virtio-disk1' handled synchronously >2023-07-26 02:12:31.494+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": 
{"seconds": 1690337551, "microseconds": 494054}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk1", "len": 10485760, "offset": 10485760, "speed": 9223372036853727232, "type": "mirror"}}] >2023-07-26 02:12:31.494+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 494054}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk1", "len": 10485760, "offset": 10485760, "speed": 9223372036853727232, "type": "mirror"}} >2023-07-26 02:12:31.494+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb774029bf0 >2023-07-26 02:12:31.494+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=BLOCK_JOB_COMPLETED >2023-07-26 02:12:31.494+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.494+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 494080}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "drive-virtio-disk1"}}] >2023-07-26 02:12:31.494+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 494080}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "drive-virtio-disk1"}} >2023-07-26 02:12:31.494+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb774029bf0 >2023-07-26 02:12:31.494+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.494+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.494+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb774009d90 >2023-07-26 02:12:31.494+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : 
mon=0x7fb72003d010 >2023-07-26 02:12:31.494+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk1'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'concluded'(9) >2023-07-26 02:12:31.494+0000: 387763: debug : qemuProcessHandleJobStatusChange:915 : job 'drive-virtio-disk1' handled synchronously >2023-07-26 02:12:31.494+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [{"current-progress": 10485760, "status": "concluded", "total-progress": 10485760, "type": "mirror", "id": "drive-virtio-disk1"}, {"current-progress": 10737418240, "status": "concluded", "total-progress": 10737418240, "type": "mirror", "id": "drive-virtio-disk0"}], "id": "libvirt-470"}] >2023-07-26 02:12:31.494+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": [{"current-progress": 10485760, "status": "concluded", "total-progress": 10485760, "type": "mirror", "id": "drive-virtio-disk1"}, {"current-progress": 10737418240, "status": "concluded", "total-progress": 10737418240, "type": "mirror", "id": "drive-virtio-disk0"}], "id": "libvirt-470"} >2023-07-26 02:12:31.494+0000: 386911: debug : qemuMonitorJobDismiss:2942 : jobname=drive-virtio-disk0 >2023-07-26 02:12:31.494+0000: 386911: debug : qemuMonitorJobDismiss:2944 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:31.494+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"job-dismiss","arguments":{"id":"drive-virtio-disk0"},"id":"libvirt-471"} > fd=-1 >2023-07-26 02:12:31.494+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"job-dismiss","arguments":{"id":"drive-virtio-disk0"},"id":"libvirt-471"} > len=86 ret=86 errno=0 >2023-07-26 02:12:31.495+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 495320}, "event": "JOB_STATUS_CHANGE", "data": {"status": 
"null", "id": "drive-virtio-disk0"}}] >2023-07-26 02:12:31.495+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 495320}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "drive-virtio-disk0"}} >2023-07-26 02:12:31.495+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb774009250 >2023-07-26 02:12:31.495+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.495+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.495+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb7740282d0 >2023-07-26 02:12:31.495+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.495+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk0'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'null'(11) >2023-07-26 02:12:31.495+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-471"}] >2023-07-26 02:12:31.495+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-471"} >2023-07-26 02:12:31.495+0000: 386911: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.496+0000: 386911: debug : qemuBlockJobEventProcessConcluded:1513 : handling job 'drive-virtio-disk0' state '3' newstate '0' >2023-07-26 02:12:31.496+0000: 386911: debug : qemuBlockJobProcessEventConcludedCopyAbort:1254 : copy job 'drive-virtio-disk0' on VM 'rhel-9.2' aborted >2023-07-26 02:12:31.500+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 
02:12:31.500+0000: 386911: debug : qemuMonitorGetJobInfo:4132 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:31.500+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"query-jobs","id":"libvirt-472"} > fd=-1 >2023-07-26 02:12:31.500+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"query-jobs","id":"libvirt-472"} > len=45 ret=45 errno=0 >2023-07-26 02:12:31.500+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [{"current-progress": 10485760, "status": "concluded", "total-progress": 10485760, "type": "mirror", "id": "drive-virtio-disk1"}], "id": "libvirt-472"}] >2023-07-26 02:12:31.500+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": [{"current-progress": 10485760, "status": "concluded", "total-progress": 10485760, "type": "mirror", "id": "drive-virtio-disk1"}], "id": "libvirt-472"} >2023-07-26 02:12:31.500+0000: 386911: debug : qemuMonitorJobDismiss:2942 : jobname=drive-virtio-disk1 >2023-07-26 02:12:31.500+0000: 386911: debug : qemuMonitorJobDismiss:2944 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:31.500+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"job-dismiss","arguments":{"id":"drive-virtio-disk1"},"id":"libvirt-473"} > fd=-1 >2023-07-26 02:12:31.500+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"job-dismiss","arguments":{"id":"drive-virtio-disk1"},"id":"libvirt-473"} > len=86 ret=86 errno=0 >2023-07-26 02:12:31.502+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 502002}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "drive-virtio-disk1"}}] >2023-07-26 02:12:31.502+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : 
QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 502002}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "drive-virtio-disk1"}} >2023-07-26 02:12:31.502+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb774020680 >2023-07-26 02:12:31.502+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=JOB_STATUS_CHANGE >2023-07-26 02:12:31.502+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.502+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle JOB_STATUS_CHANGE handler=0x7fb7a872f4a0 data=0x7fb77402ac10 >2023-07-26 02:12:31.502+0000: 387763: debug : qemuMonitorEmitJobStatusChange:1223 : mon=0x7fb72003d010 >2023-07-26 02:12:31.502+0000: 387763: debug : qemuProcessHandleJobStatusChange:900 : job 'drive-virtio-disk1'(domain: 0x7fb76008e840,rhel-9.2) state changed to 'null'(11) >2023-07-26 02:12:31.502+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-473"}] >2023-07-26 02:12:31.502+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-473"} >2023-07-26 02:12:31.502+0000: 386911: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.502+0000: 386911: debug : qemuBlockJobEventProcessConcluded:1513 : handling job 'drive-virtio-disk1' state '3' newstate '0' >2023-07-26 02:12:31.502+0000: 386911: debug : qemuBlockJobProcessEventConcludedCopyAbort:1254 : copy job 'drive-virtio-disk1' on VM 'rhel-9.2' aborted >2023-07-26 02:12:31.506+0000: 386911: debug : qemuMigrationSrcNBDCopyCancelled:811 : All disk mirrors are gone >2023-07-26 02:12:31.506+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.506+0000: 
386911: debug : qemuMonitorBlockdevDel:3994 : nodename=migration-vda-format >2023-07-26 02:12:31.506+0000: 386911: debug : qemuMonitorBlockdevDel:3996 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:31.506+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"blockdev-del","arguments":{"node-name":"migration-vda-format"},"id":"libvirt-474"} > fd=-1 >2023-07-26 02:12:31.506+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"blockdev-del","arguments":{"node-name":"migration-vda-format"},"id":"libvirt-474"} > len=96 ret=96 errno=0 >2023-07-26 02:12:31.507+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-474"}] >2023-07-26 02:12:31.507+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-474"} >2023-07-26 02:12:31.508+0000: 386911: debug : qemuMonitorBlockdevDel:3994 : nodename=migration-vda-storage >2023-07-26 02:12:31.508+0000: 386911: debug : qemuMonitorBlockdevDel:3996 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:31.508+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"blockdev-del","arguments":{"node-name":"migration-vda-storage"},"id":"libvirt-475"} > fd=-1 >2023-07-26 02:12:31.508+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"blockdev-del","arguments":{"node-name":"migration-vda-storage"},"id":"libvirt-475"} > len=97 ret=97 errno=0 >2023-07-26 02:12:31.509+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-475"}] >2023-07-26 02:12:31.509+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-475"} >2023-07-26 02:12:31.509+0000: 386911: debug : qemuDomainObjExitMonitor:6340 : 
Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.509+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.509+0000: 386911: debug : qemuMonitorBlockdevDel:3994 : nodename=migration-vdb-format >2023-07-26 02:12:31.509+0000: 386911: debug : qemuMonitorBlockdevDel:3996 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:31.509+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"blockdev-del","arguments":{"node-name":"migration-vdb-format"},"id":"libvirt-476"} > fd=-1 >2023-07-26 02:12:31.509+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"blockdev-del","arguments":{"node-name":"migration-vdb-format"},"id":"libvirt-476"} > len=96 ret=96 errno=0 >2023-07-26 02:12:31.510+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-476"}] >2023-07-26 02:12:31.510+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-476"} >2023-07-26 02:12:31.511+0000: 386911: debug : qemuMonitorBlockdevDel:3994 : nodename=migration-vdb-storage >2023-07-26 02:12:31.511+0000: 386911: debug : qemuMonitorBlockdevDel:3996 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:31.511+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"blockdev-del","arguments":{"node-name":"migration-vdb-storage"},"id":"libvirt-477"} > fd=-1 >2023-07-26 02:12:31.511+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"blockdev-del","arguments":{"node-name":"migration-vdb-storage"},"id":"libvirt-477"} > len=97 ret=97 errno=0 >2023-07-26 02:12:31.512+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": 
"libvirt-477"}] >2023-07-26 02:12:31.512+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-477"} >2023-07-26 02:12:31.512+0000: 386911: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.512+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.513+0000: 386911: debug : qemuMonitorMigrateContinue:3810 : status=pre-switchover >2023-07-26 02:12:31.513+0000: 386911: debug : qemuMonitorMigrateContinue:3812 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:31.513+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"migrate-continue","arguments":{"state":"pre-switchover"},"id":"libvirt-478"} > fd=-1 >2023-07-26 02:12:31.513+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"migrate-continue","arguments":{"state":"pre-switchover"},"id":"libvirt-478"} > len=90 ret=90 errno=0 >2023-07-26 02:12:31.514+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-478"}] >2023-07-26 02:12:31.514+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {}, "id": "libvirt-478"} >2023-07-26 02:12:31.514+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 514014}, "event": "MIGRATION", "data": {"status": "device"}}] >2023-07-26 02:12:31.514+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 514014}, "event": "MIGRATION", "data": {"status": "device"}} >2023-07-26 02:12:31.514+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 
obj=0x7fb774017550 >2023-07-26 02:12:31.514+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=MIGRATION >2023-07-26 02:12:31.514+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.514+0000: 386911: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.514+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle MIGRATION handler=0x7fb7a872fd20 data=0x7fb774019900 >2023-07-26 02:12:31.514+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:726 : All disk mirrors are ready >2023-07-26 02:12:31.514+0000: 387763: debug : qemuMonitorEmitMigrationStatus:1325 : mon=0x7fb72003d010, status=device >2023-07-26 02:12:31.514+0000: 387763: debug : qemuProcessHandleMigrationStatus:1455 : Migration of domain 0x7fb76008e840 rhel-9.2 changed state to device >2023-07-26 02:12:31.514+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:726 : All disk mirrors are ready >2023-07-26 02:12:31.514+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 514589}, "event": "MIGRATION_PASS", "data": {"pass": 3}}] >2023-07-26 02:12:31.514+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 514589}, "event": "MIGRATION_PASS", "data": {"pass": 3}} >2023-07-26 02:12:31.514+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77400b5a0 >2023-07-26 02:12:31.514+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=MIGRATION_PASS >2023-07-26 02:12:31.514+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.514+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle MIGRATION_PASS handler=0x7fb7a8730610 data=0x7fb774007620 >2023-07-26 02:12:31.514+0000: 387763: debug : 
qemuMonitorEmitMigrationPass:1336 : mon=0x7fb72003d010, pass=3 >2023-07-26 02:12:31.514+0000: 387763: debug : qemuProcessHandleMigrationPass:1571 : Migrating domain 0x7fb76008e840 rhel-9.2, iteration 3 >2023-07-26 02:12:31.577+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1690337551, "microseconds": 577364}, "event": "MIGRATION", "data": {"status": "completed"}}] >2023-07-26 02:12:31.577+0000: 387763: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x7fb72003d010 event={"timestamp": {"seconds": 1690337551, "microseconds": 577364}, "event": "MIGRATION", "data": {"status": "completed"}} >2023-07-26 02:12:31.577+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x7fb72003d010 obj=0x7fb77400b5a0 >2023-07-26 02:12:31.577+0000: 387763: debug : qemuMonitorEmitEvent:1072 : mon=0x7fb72003d010 event=MIGRATION >2023-07-26 02:12:31.577+0000: 387763: debug : qemuProcessHandleEvent:546 : vm=0x7fb76008e840 >2023-07-26 02:12:31.577+0000: 387763: debug : qemuMonitorJSONIOProcessEvent:177 : handle MIGRATION handler=0x7fb7a872fd20 data=0x7fb774009250 >2023-07-26 02:12:31.577+0000: 387763: debug : qemuMonitorEmitMigrationStatus:1325 : mon=0x7fb72003d010, status=completed >2023-07-26 02:12:31.577+0000: 387763: debug : qemuProcessHandleMigrationStatus:1455 : Migration of domain 0x7fb76008e840 rhel-9.2 changed state to completed >2023-07-26 02:12:31.577+0000: 386911: debug : qemuMigrationSrcNBDStorageCopyReady:726 : All disk mirrors are ready >2023-07-26 02:12:31.577+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.577+0000: 386911: debug : qemuMonitorGetMigrationStats:2220 : mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:31.577+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"query-migrate","id":"libvirt-479"} > fd=-1 >2023-07-26 
02:12:31.577+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"query-migrate","id":"libvirt-479"} > len=48 ret=48 errno=0 >2023-07-26 02:12:31.578+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {"status": "completed", "setup-time": 4, "downtime": 111, "total-time": 578, "ram": {"total": 2165121024, "postcopy-requests": 0, "dirty-sync-count": 3, "multifd-bytes": 0, "pages-per-second": 2476757, "downtime-bytes": 9768792, "page-size": 4096, "remaining": 0, "postcopy-bytes": 0, "mbps": 7763.9337003484325, "transferred": 556433020, "dirty-sync-missed-zero-copy": 0, "precopy-bytes": 546664228, "duplicate": 394001, "dirty-pages-rate": 0, "skipped": 0, "normal-bytes": 551809024, "normal": 134719}}, "id": "libvirt-479"}] >2023-07-26 02:12:31.578+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": {"status": "completed", "setup-time": 4, "downtime": 111, "total-time": 578, "ram": {"total": 2165121024, "postcopy-requests": 0, "dirty-sync-count": 3, "multifd-bytes": 0, "pages-per-second": 2476757, "downtime-bytes": 9768792, "page-size": 4096, "remaining": 0, "postcopy-bytes": 0, "mbps": 7763.9337003484325, "transferred": 556433020, "dirty-sync-missed-zero-copy": 0, "precopy-bytes": 546664228, "duplicate": 394001, "dirty-pages-rate": 0, "skipped": 0, "normal-bytes": 551809024, "normal": 134719}}, "id": "libvirt-479"} >2023-07-26 02:12:31.578+0000: 386911: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.578+0000: 386911: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.578+0000: 386911: debug : qemuMonitorBlockStatsUpdateCapacityBlockdev:1996 : stats=0x7fb78000d760 >2023-07-26 02:12:31.578+0000: 386911: debug : qemuMonitorBlockStatsUpdateCapacityBlockdev:1998 : 
mon:0x7fb72003d010 vm:0x7fb76008e840 fd:25 >2023-07-26 02:12:31.578+0000: 386911: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x7fb72003d010 msg={"execute":"query-named-block-nodes","arguments":{"flat":true},"id":"libvirt-480"} > fd=-1 >2023-07-26 02:12:31.578+0000: 387763: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x7fb72003d010 buf={"execute":"query-named-block-nodes","arguments":{"flat":true},"id":"libvirt-480"} > len=84 ret=84 errno=0 >2023-07-26 02:12:31.580+0000: 387763: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [{"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10485760, "filename": "nbd://10.0.79.60:10809/foo", "format": "raw"}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "nbd://10.0.79.60:10809/foo"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10485760, "filename": "nbd://10.0.79.60:10809/foo", "format": "nbd"}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-storage", "backing_file_depth": 0, "drv": "nbd", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "nbd://10.0.79.60:10809/foo"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10737418240, "filename": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}", "cluster-size": 65536, "format": "copy-on-read", "actual-size": 2300813312, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-CoR-vda", "backing_file_depth": 0, "drv": "copy-on-read", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": 
{"no-flush": false, "direct": false, "writeback": true}, "file": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10737418240, "filename": "/var/lib/libvirt/images/rhel-9.2.qcow2", "cluster-size": 65536, "format": "qcow2", "actual-size": 2300813312, "format-specific": {"type": "qcow2", "data": {"compat": "1.1", "compression-type": "zlib", "lazy-refcounts": false, "refcount-bits": 16, "corrupt": false, "extended-l2": false}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-2-format", "backing_file_depth": 0, "drv": "qcow2", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": true, "writeback": true}, "file": "/var/lib/libvirt/images/rhel-9.2.qcow2"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 2300837888, "filename": "/var/lib/libvirt/images/rhel-9.2.qcow2", "format": "file", "actual-size": 2300813312, "format-specific": {"type": "file", "data": {}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-2-storage", "backing_file_depth": 0, "drv": "file", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": true, "writeback": true}, "file": "/var/lib/libvirt/images/rhel-9.2.qcow2"}], "id": "libvirt-480"}] >2023-07-26 02:12:31.580+0000: 387763: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x7fb72003d010 reply={"return": [{"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10485760, "filename": "nbd://10.0.79.60:10809/foo", "format": "raw"}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": 
{"no-flush": false, "direct": false, "writeback": true}, "file": "nbd://10.0.79.60:10809/foo"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10485760, "filename": "nbd://10.0.79.60:10809/foo", "format": "nbd"}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-storage", "backing_file_depth": 0, "drv": "nbd", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "nbd://10.0.79.60:10809/foo"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10737418240, "filename": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}", "cluster-size": 65536, "format": "copy-on-read", "actual-size": 2300813312, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-CoR-vda", "backing_file_depth": 0, "drv": "copy-on-read", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "json:{\"driver\": \"copy-on-read\", \"file\": {\"driver\": \"qcow2\", \"file\": {\"driver\": \"file\", \"filename\": \"/var/lib/libvirt/images/rhel-9.2.qcow2\"}}}"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 10737418240, "filename": "/var/lib/libvirt/images/rhel-9.2.qcow2", "cluster-size": 65536, "format": "qcow2", "actual-size": 2300813312, "format-specific": {"type": "qcow2", "data": {"compat": "1.1", "compression-type": "zlib", "lazy-refcounts": false, "refcount-bits": 16, "corrupt": false, "extended-l2": false}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-2-format", "backing_file_depth": 0, "drv": "qcow2", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": true, "writeback": true}, "file": 
"/var/lib/libvirt/images/rhel-9.2.qcow2"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 2300837888, "filename": "/var/lib/libvirt/images/rhel-9.2.qcow2", "format": "file", "actual-size": 2300813312, "format-specific": {"type": "file", "data": {}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-2-storage", "backing_file_depth": 0, "drv": "file", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": true, "writeback": true}, "file": "/var/lib/libvirt/images/rhel-9.2.qcow2"}], "id": "libvirt-480"} >2023-07-26 02:12:31.580+0000: 386911: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x7fb72003d010 vm=0x7fb76008e840 name=rhel-9.2) >2023-07-26 02:12:31.580+0000: 386911: debug : qemuMigrationCookieFormat:1484 : cookielen=1290 cookie=<qemu-migration> > <name>rhel-9.2</name> > <uuid>e46b3d21-99dd-4be9-92ff-78556c7234c4</uuid> > <hostname>vm-10-0-79-60.hosted.upshift.rdu2.redhat.com</hostname> > <hostuuid>1e240ff7-4a95-4c66-b0a8-828a77d2394d</hostuuid> > <nbd> > <disk target='vda' capacity='10737418240'/> > <disk target='vdb' capacity='10485760'/> > </nbd> > <statistics> > <started>1690337539896</started> > <stopped>1690337551485</stopped> > <sent>1690337551578</sent> > <time_elapsed>11682</time_elapsed> > <downtime>93</downtime> > <setup_time>4</setup_time> > <memory_total>2165121024</memory_total> > <memory_processed>556433020</memory_processed> > <memory_remaining>0</memory_remaining> > <memory_bps>970491712</memory_bps> > <memory_constant>394001</memory_constant> > <memory_normal>134719</memory_normal> > <memory_normal_bytes>551809024</memory_normal_bytes> > <memory_dirty_rate>0</memory_dirty_rate> > <memory_iteration>3</memory_iteration> > <memory_postcopy_requests>0</memory_postcopy_requests> > <memory_page_size>4096</memory_page_size> > <disk_total>0</disk_total> > <disk_processed>0</disk_processed> > <disk_remaining>0</disk_remaining> > 
<disk_bps>0</disk_bps> > <auto_converge_throttle>0</auto_converge_throttle> > </statistics> ></qemu-migration> > >2023-07-26 02:12:31.580+0000: 386911: debug : qemuDomainObjSetJobPhase:558 : Setting 'migration out' phase to 'perform3_done' >2023-07-26 02:12:31.580+0000: 386911: debug : qemuDomainCleanupAdd:7612 : vm=rhel-9.2, cb=0x7fb7a8748560 >2023-07-26 02:12:31.580+0000: 386911: debug : qemuDomainObjReleaseAsyncJob:628 : Releasing ownership of 'migration out' async job >2023-07-26 02:12:31.580+0000: 386911: debug : virThreadJobClear:118 : Thread 386911 (rpc-virtqemud) finished job remoteDispatchDomainMigratePerform3Params with ret=0 >2023-07-26 02:12:31.906+0000: 386921: debug : virThreadJobSet:93 : Thread 386921 (rpc-virtqemud) is now running job remoteDispatchDomainMigrateConfirm3Params >2023-07-26 02:12:31.906+0000: 386921: debug : virDomainMigrateConfirm3Params:5468 : dom=0x7fb728009810, (VM: name=rhel-9.2, uuid=e46b3d21-99dd-4be9-92ff-78556c7234c4), params=0x7fb72800a7b0, nparams=3, cookiein=0x7fb728008a00, cookieinlen=1207, flags=0x141, cancelled=0 >2023-07-26 02:12:31.906+0000: 386921: debug : virDomainMigrateConfirm3Params:5471 : params["destination_xml"]=(string)<domain type='kvm'> > <name>rhel-9.2</name> > <uuid>e46b3d21-99dd-4be9-92ff-78556c7234c4</uuid> > <metadata> > <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0"> > <libosinfo:os id="http://libosinfo.org/unknown"/> > </libosinfo:libosinfo> > </metadata> > <memory unit='KiB'>2097152</memory> > <currentMemory unit='KiB'>2097152</currentMemory> > <vcpu placement='static' current='1'>8</vcpu> > <resource> > <partition>/machine/test</partition> > </resource> > <os> > <type arch='x86_64' machine='pc-q35-rhel9.2.0'>hvm</type> > <boot dev='hd'/> > </os> > <features> > <acpi/> > <apic/> > </features> > <cpu mode='host-passthrough' check='none' migratable='on'/> > <clock offset='utc'> > <timer name='rtc' tickpolicy='catchup'/> > <timer name='pit' tickpolicy='delay'/> > 
<timer name='hpet' present='no'/> > </clock> > <on_poweroff>destroy</on_poweroff> > <on_reboot>restart</on_reboot> > <on_crash>destroy</on_crash> > <pm> > <suspend-to-mem enabled='no'/> > <suspend-to-disk enabled='no'/> > </pm> > <devices> > <emulator>/usr/libexec/qemu-kvm</emulator> > <disk type='file' device='disk'> > <driver name='qemu' type='qcow2' cache='none' io='io_uring' copy_on_read='on' ats='on' packed='on'/> > <source file='/var/lib/libvirt/images/rhel-9.2.qcow2'/> > <backingStore/> > <target dev='vda' bus='virtio'/> > <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> > </disk> > <disk type='network' device='disk'> > <driver name='qemu' type='raw'/> > <source protocol='nbd' name='foo'> > <host name='10.0.79.60' port='10809'/> > <reconnect delay='10'/> > </source> > <target dev='vdb' bus='virtio'/> > <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> > </disk> > <controller type='scsi' index='0' model='virtio-scsi'> > <driver queues='3' cmd_per_lun='10' max_sectors='512'/> > <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> > </controller> > <controller type='usb' index='0' model='ich9-ehci1'> > <address type='pci' domain='0x0000' bus='0x10' slot='0x01' function='0x7'/> > </controller> > <controller type='usb' index='0' model='ich9-uhci1'> > <master startport='0'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x0' multifunction='on'/> > </controller> > <controller type='usb' index='1' model='ich9-ehci1'> > <address type='pci' domain='0x0000' bus='0x10' slot='0x02' function='0x7'/> > </controller> > <controller type='usb' index='1' model='ich9-uhci2'> > <master startport='1'/> > <address type='pci' domain='0x0000' bus='0x10' slot='0x02' function='0x1'/> > </controller> > <controller type='pci' index='0' model='pcie-root'/> > <controller type='pci' index='1' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='1' port='0x10'/> > 
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/> > </controller> > <controller type='pci' index='2' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='2' port='0x11'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> > </controller> > <controller type='pci' index='3' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='3' port='0x12'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> > </controller> > <controller type='pci' index='4' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='4' port='0x13'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> > </controller> > <controller type='pci' index='5' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='5' port='0x14'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/> > </controller> > <controller type='pci' index='6' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='6' port='0x15'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/> > </controller> > <controller type='pci' index='7' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='7' port='0x16'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/> > </controller> > <controller type='pci' index='8' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='8' port='0x17'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/> > </controller> > <controller type='pci' index='9' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='9' port='0x18'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/> > </controller> > <controller type='pci' index='10' model='pcie-root-port'> > <model name='pcie-root-port'/> 
> <target chassis='10' port='0x19'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/> > </controller> > <controller type='pci' index='11' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='11' port='0x1a'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/> > </controller> > <controller type='pci' index='12' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='12' port='0x1b'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/> > </controller> > <controller type='pci' index='13' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='13' port='0x1c'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/> > </controller> > <controller type='pci' index='14' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='14' port='0x1d'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/> > </controller> > <controller type='pci' index='15' model='pcie-root-port'> > <model name='pcie-root-port'/> > <target chassis='15' port='0x1e'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/> > </controller> > <controller type='pci' index='16' model='pcie-to-pci-bridge'> > <model name='pcie-pci-bridge'/> > <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> > </controller> > <controller type='sata' index='0'> > <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> > </controller> > <controller type='virtio-serial' index='0'> > <driver iommu='on' ats='on'/> > <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/> > </controller> > <controller type='virtio-serial' index='1'> > <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> > </controller> > <interface type='network'> > <mac address='52:54:00:aa:2b:86'/> > <source network='default'/> > <model 
type='e1000e'/> > <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> > </interface> > <serial type='pty'> > <log file='/var/log/libvirt/qemu/guestname-serial0.log' append='off'/> > <target type='isa-serial' port='0'> > <model name='isa-serial'/> > </target> > </serial> > <serial type='dev'> > <source path='/dev/ttyS0'/> > <target type='isa-serial' port='2'> > <model name='isa-serial'/> > </target> > </serial> > <console type='pty'> > <log file='/var/log/libvirt/qemu/guestname-serial0.log' append='off'/> > <target type='serial' port='0'/> > </console> > <channel type='unix'> > <target type='virtio' name='org.qemu.guest_agent.0'/> > <address type='virtio-serial' controller='0' bus='0' port='1'/> > </channel> > <input type='mouse' bus='ps2'/> > <input type='keyboard' bus='usb'> > <address type='usb' bus='0' port='1'/> > </input> > <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'> > <listen type='address' address='0.0.0.0'/> > </graphics> > <video> > <model type='vga' vram='16384' heads='1' primary='yes'/> > <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> > </video> > <hostdev mode='subsystem' type='usb' managed='no'> > <source autoAddress='yes'> > <vendor id='0x0627'/> > <product id='0x0001'/> > <address bus='1' device='2'/> > </source> > <alias name='ua-hostdev046b7e883-8517-4e49-b2db-3b3a8d96ccab'/> > <address type='usb' bus='0' port='2'/> > </hostdev> > <memballoon model='virtio'> > <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> > </memballoon> > </devices> > <seclabel type='dynamic' model='selinux' relabel='yes'/> ></domain> > >2023-07-26 02:12:31.906+0000: 386921: debug : virDomainMigrateConfirm3Params:5471 : params["migrate_uri"]=(string)tcp:vm-10-0-79-186.hosted.upshift.rdu2.redhat.com:49152 >2023-07-26 02:12:31.906+0000: 386921: debug : virDomainMigrateConfirm3Params:5471 : params["destination_name"]=(string)rhel-9.2 >2023-07-26 02:12:31.906+0000: 386921: debug : 
qemuMigrationSrcConfirm:4020 : vm=0x7fb76008e840, flags=0x141, cancelled=0 >2023-07-26 02:12:31.906+0000: 386921: debug : qemuDomainObjStartJobPhase:588 : Starting phase 'confirm3' of 'migration out' job >2023-07-26 02:12:31.906+0000: 386921: debug : qemuDomainObjSetJobPhase:558 : Setting 'migration out' phase to 'confirm3' >2023-07-26 02:12:31.907+0000: 386921: debug : qemuDomainCleanupRemove:7633 : vm=rhel-9.2, cb=0x7fb7a8748560 >2023-07-26 02:12:31.907+0000: 386921: debug : qemuMigrationSrcConfirmPhase:3918 : driver=0x7fb7600223f0, vm=0x7fb76008e840, cookiein=<qemu-migration> > <name>rhel-9.2</name> > <uuid>e46b3d21-99dd-4be9-92ff-78556c7234c4</uuid> > <hostname>vm-10-0-79-186.hosted.upshift.rdu2.redhat.com</hostname> > <hostuuid>e5563a9d-d6bf-4de6-bf56-b43b77fe1690</hostuuid> > <statistics> > <started>1690337539896</started> > <stopped>1690337551485</stopped> > <sent>1690337551578</sent> > <delta>1199</delta> > <time_elapsed>13191</time_elapsed> > <downtime>1602</downtime> > <setup_time>4</setup_time> > <memory_total>2165121024</memory_total> > <memory_processed>556433020</memory_processed> > <memory_remaining>0</memory_remaining> > <memory_bps>970491712</memory_bps> > <memory_constant>394001</memory_constant> > <memory_normal>134719</memory_normal> > <memory_normal_bytes>551809024</memory_normal_bytes> > <memory_dirty_rate>0</memory_dirty_rate> > <memory_iteration>3</memory_iteration> > <memory_postcopy_requests>0</memory_postcopy_requests> > <memory_page_size>4096</memory_page_size> > <disk_total>0</disk_total> > <disk_processed>0</disk_processed> > <disk_remaining>0</disk_remaining> > <disk_bps>0</disk_bps> > <auto_converge_throttle>0</auto_converge_throttle> > </statistics> ></qemu-migration> >, cookieinlen=1207, flags=0x141, retcode=0 >2023-07-26 02:12:31.907+0000: 386921: debug : qemuDomainObjStartJobPhase:588 : Starting phase 'confirm3' of 'migration out' job >2023-07-26 02:12:31.907+0000: 386921: debug : qemuDomainObjSetJobPhase:558 : Setting 'migration 
out' phase to 'confirm3' >2023-07-26 02:12:31.907+0000: 386921: debug : qemuMigrationCookieParse:1510 : cookielen=1207 cookie='<qemu-migration> > <name>rhel-9.2</name> > <uuid>e46b3d21-99dd-4be9-92ff-78556c7234c4</uuid> > <hostname>vm-10-0-79-186.hosted.upshift.rdu2.redhat.com</hostname> > <hostuuid>e5563a9d-d6bf-4de6-bf56-b43b77fe1690</hostuuid> > <statistics> > <started>1690337539896</started> > <stopped>1690337551485</stopped> > <sent>1690337551578</sent> > <delta>1199</delta> > <time_elapsed>13191</time_elapsed> > <downtime>1602</downtime> > <setup_time>4</setup_time> > <memory_total>2165121024</memory_total> > <memory_processed>556433020</memory_processed> > <memory_remaining>0</memory_remaining> > <memory_bps>970491712</memory_bps> > <memory_constant>394001</memory_constant> > <memory_normal>134719</memory_normal> > <memory_normal_bytes>551809024</memory_normal_bytes> > <memory_dirty_rate>0</memory_dirty_rate> > <memory_iteration>3</memory_iteration> > <memory_postcopy_requests>0</memory_postcopy_requests> > <memory_page_size>4096</memory_page_size> > <disk_total>0</disk_total> > <disk_processed>0</disk_processed> > <disk_remaining>0</disk_remaining> > <disk_bps>0</disk_bps> > <auto_converge_throttle>0</auto_converge_throttle> > </statistics> ></qemu-migration> >' >2023-07-26 02:12:31.907+0000: 386921: debug : qemuMigrationCookieXMLParseStr:1417 : xml=<qemu-migration> > <name>rhel-9.2</name> > <uuid>e46b3d21-99dd-4be9-92ff-78556c7234c4</uuid> > <hostname>vm-10-0-79-186.hosted.upshift.rdu2.redhat.com</hostname> > <hostuuid>e5563a9d-d6bf-4de6-bf56-b43b77fe1690</hostuuid> > <statistics> > <started>1690337539896</started> > <stopped>1690337551485</stopped> > <sent>1690337551578</sent> > <delta>1199</delta> > <time_elapsed>13191</time_elapsed> > <downtime>1602</downtime> > <setup_time>4</setup_time> > <memory_total>2165121024</memory_total> > <memory_processed>556433020</memory_processed> > <memory_remaining>0</memory_remaining> > 
<memory_bps>970491712</memory_bps> > <memory_constant>394001</memory_constant> > <memory_normal>134719</memory_normal> > <memory_normal_bytes>551809024</memory_normal_bytes> > <memory_dirty_rate>0</memory_dirty_rate> > <memory_iteration>3</memory_iteration> > <memory_postcopy_requests>0</memory_postcopy_requests> > <memory_page_size>4096</memory_page_size> > <disk_total>0</disk_total> > <disk_processed>0</disk_processed> > <disk_remaining>0</disk_remaining> > <disk_bps>0</disk_bps> > <auto_converge_throttle>0</auto_converge_throttle> > </statistics> ></qemu-migration> > >2023-07-26 02:12:31.907+0000: 386921: debug : qemuProcessStop:8259 : Shutting down vm=0x7fb76008e840 name=rhel-9.2 id=2 pid=387748, reason=migrated, asyncJob=migration out, flags=0x1 >2023-07-26 02:12:31.908+0000: 386921: debug : qemuDomainLogAppendMessage:7202 : Append log message (vm='rhel-9.2' message='2023-07-26 02:12:31.908+0000: shutting down, reason=migrated >) stdioLogD=1 >2023-07-26 02:12:31.909+0000: 386921: debug : qemuAgentClose:689 : agent=0x7fb76c015170 >2023-07-26 02:12:31.909+0000: 386921: debug : qemuAgentDispose:139 : agent=0x7fb76c015170 >2023-07-26 02:12:31.909+0000: 386921: info : qemuMonitorClose:785 : QEMU_MONITOR_CLOSE: mon=0x7fb72003d010 >2023-07-26 02:12:31.909+0000: 386921: debug : qemuMonitorDispose:214 : mon=0x7fb72003d010 >2023-07-26 02:12:31.910+0000: 386921: debug : qemuProcessKill:8175 : vm=0x7fb76008e840 name=rhel-9.2 pid=387748 flags=0x5 >2023-07-26 02:12:31.910+0000: 386921: debug : virProcessKillPainfullyDelay:377 : vpid=387748 force=1 extradelay=2 group=0 >2023-07-26 02:12:31.927+0000: 386821: debug : virNetlinkEventCallback:857 : dispatching to max 0 clients, called from event watch 6 >2023-07-26 02:12:31.927+0000: 386821: debug : virNetlinkEventCallback:873 : event not handled. 
>2023-07-26 02:12:31.927+0000: 386821: debug : virNetlinkEventCallback:857 : dispatching to max 0 clients, called from event watch 6 >2023-07-26 02:12:31.927+0000: 386821: debug : virNetlinkEventCallback:873 : event not handled. >2023-07-26 02:12:31.927+0000: 386821: debug : virNetlinkEventCallback:857 : dispatching to max 0 clients, called from event watch 6 >2023-07-26 02:12:31.927+0000: 386821: debug : virNetlinkEventCallback:873 : event not handled. >2023-07-26 02:12:31.937+0000: 386821: debug : virStreamRecv:275 : stream=0x7fb75c002860, data=0x55a9d4f9ae10, nbytes=262120 >2023-07-26 02:12:31.937+0000: 386821: debug : virStreamEventUpdateCallback:1128 : stream=0x7fb75c002860, events=0 >2023-07-26 02:12:31.937+0000: 386821: debug : virStreamEventUpdateCallback:1128 : stream=0x7fb75c002860, events=0 >2023-07-26 02:12:31.937+0000: 386821: debug : virStreamEventUpdateCallback:1128 : stream=0x7fb75c002860, events=2 >2023-07-26 02:12:31.937+0000: 386821: debug : virStreamEventRemoveCallback:1164 : stream=0x7fb75c002860 >2023-07-26 02:12:31.937+0000: 386821: debug : virStreamFinish:1210 : stream=0x7fb75c002860 >2023-07-26 02:12:31.938+0000: 386822: debug : virThreadJobSet:93 : Thread 386822 (rpc-virtqemud) is now running job remoteDispatchConnectUnregisterCloseCallback >2023-07-26 02:12:31.938+0000: 386822: debug : virConnectUnregisterCloseCallback:1538 : conn=0x7fb738006e60 >2023-07-26 02:12:31.938+0000: 386822: debug : virThreadJobClear:118 : Thread 386822 (rpc-virtqemud) finished job remoteDispatchConnectUnregisterCloseCallback with ret=0 >2023-07-26 02:12:31.938+0000: 386823: debug : virThreadJobSet:93 : Thread 386823 (rpc-virtqemud) is now running job remoteDispatchConnectClose >2023-07-26 02:12:31.938+0000: 386823: debug : virThreadJobClear:118 : Thread 386823 (rpc-virtqemud) finished job remoteDispatchConnectClose with ret=0 >2023-07-26 02:12:31.939+0000: 386821: debug : virConnectClose:1320 : conn=0x7fb738006e60 >2023-07-26 02:12:31.939+0000: 386821: debug : 
virCloseCallbacksDomainRunForConn:346 : conn=0x7fb738006e60 >2023-07-26 02:12:32.050+0000: 386821: debug : virNetlinkEventCallback:857 : dispatching to max 0 clients, called from event watch 6 >2023-07-26 02:12:32.050+0000: 386821: debug : virNetlinkEventCallback:873 : event not handled. >2023-07-26 02:12:32.196+0000: 386821: debug : virNetlinkEventCallback:857 : dispatching to max 0 clients, called from event watch 6 >2023-07-26 02:12:32.196+0000: 386821: debug : virNetlinkEventCallback:873 : event not handled. >2023-07-26 02:12:32.201+0000: 386821: debug : virNetlinkEventCallback:857 : dispatching to max 0 clients, called from event watch 6 >2023-07-26 02:12:32.201+0000: 386821: debug : virNetlinkEventCallback:873 : event not handled. >2023-07-26 02:12:32.201+0000: 386821: debug : virNetlinkEventCallback:857 : dispatching to max 0 clients, called from event watch 6 >2023-07-26 02:12:32.201+0000: 386821: debug : virNetlinkEventCallback:873 : event not handled. >2023-07-26 02:12:32.201+0000: 386821: debug : virNetlinkEventCallback:857 : dispatching to max 0 clients, called from event watch 6 >2023-07-26 02:12:32.201+0000: 386821: debug : virNetlinkEventCallback:873 : event not handled. >2023-07-26 02:12:32.202+0000: 386821: debug : virNetlinkEventCallback:857 : dispatching to max 0 clients, called from event watch 6 >2023-07-26 02:12:32.202+0000: 386821: debug : virNetlinkEventCallback:873 : event not handled. >2023-07-26 02:12:32.203+0000: 386821: debug : virNetlinkEventCallback:857 : dispatching to max 0 clients, called from event watch 6 >2023-07-26 02:12:32.203+0000: 386821: debug : virNetlinkEventCallback:873 : event not handled. >2023-07-26 02:12:32.203+0000: 386821: debug : virNetlinkEventCallback:857 : dispatching to max 0 clients, called from event watch 6 >2023-07-26 02:12:32.203+0000: 386821: debug : virNetlinkEventCallback:873 : event not handled. 
>2023-07-26 02:12:32.203+0000: 386821: debug : virNetlinkEventCallback:857 : dispatching to max 0 clients, called from event watch 6 >2023-07-26 02:12:32.203+0000: 386821: debug : virNetlinkEventCallback:873 : event not handled. >2023-07-26 02:12:32.219+0000: 386821: debug : virNetlinkEventCallback:857 : dispatching to max 0 clients, called from event watch 6 >2023-07-26 02:12:32.219+0000: 386821: debug : virNetlinkEventCallback:873 : event not handled. >2023-07-26 02:12:32.310+0000: 386921: debug : qemuDomainCleanupRun:7652 : driver=0x7fb7600223f0, vm=rhel-9.2 >2023-07-26 02:12:32.311+0000: 386921: debug : virFileIsSharedFSType:3488 : Check if path /var/lib/libvirt/images/rhel-9.2.qcow2 with FS magic 1481003842 is shared >2023-07-26 02:12:32.311+0000: 386921: debug : virUSBDeviceNew:346 : 1 2 001:002: initialized >2023-07-26 02:12:32.311+0000: 386921: debug : virUSBDeviceFree:356 : 1 2 001:002: freeing >2023-07-26 02:12:32.311+0000: 386921: debug : virFileIsSharedFSType:3488 : Check if path /var/lib/libvirt/images/rhel-9.2.qcow2 with FS magic 1481003842 is shared >2023-07-26 02:12:32.311+0000: 386921: debug : virUSBDeviceNew:346 : 1 2 001:002: initialized >2023-07-26 02:12:32.311+0000: 386921: debug : virUSBDeviceFree:356 : 1 2 001:002: freeing >2023-07-26 02:12:32.322+0000: 386921: debug : virUSBDeviceNew:346 : 1 2 001:002: initialized >2023-07-26 02:12:32.322+0000: 386921: debug : virHostdevReAttachUSBDevices:1839 : Removing 001.002 dom=rhel-9.2 from activeUSBHostdevs >2023-07-26 02:12:32.322+0000: 386921: debug : virUSBDeviceFree:356 : 1 2 001:002: freeing >2023-07-26 02:12:32.322+0000: 386921: debug : virUSBDeviceFree:356 : 1 2 001:002: freeing >2023-07-26 02:12:32.322+0000: 386921: debug : virConnectOpen:1204 : name=network:///system >2023-07-26 02:12:32.322+0000: 386921: debug : virConfLoadConfig:1515 : Loading config file '/etc/libvirt/libvirt.conf' >2023-07-26 02:12:32.322+0000: 386921: debug : virConfReadFile:723 : filename=/etc/libvirt/libvirt.conf 
>2023-07-26 02:12:32.323+0000: 386921: debug : virConfGetValueStringList:913 : Get value string list (nil) 0 >2023-07-26 02:12:32.323+0000: 386921: debug : virConnectOpenInternal:966 : Split "network:///system" to URI components: > scheme network > server <null> > user <null> > port 0 > path /system >2023-07-26 02:12:32.323+0000: 386921: debug : virConnectOpenInternal:1072 : trying driver 0 (Test) ... >2023-07-26 02:12:32.323+0000: 386921: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes >2023-07-26 02:12:32.323+0000: 386921: debug : virConnectOpenInternal:1103 : No matching URI scheme >2023-07-26 02:12:32.323+0000: 386921: debug : virConnectOpenInternal:1072 : trying driver 1 (ESX) ... >2023-07-26 02:12:32.323+0000: 386921: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes >2023-07-26 02:12:32.323+0000: 386921: debug : virConnectOpenInternal:1103 : No matching URI scheme >2023-07-26 02:12:32.323+0000: 386921: debug : virConnectOpenInternal:1072 : trying driver 2 (remote) ... 
>2023-07-26 02:12:32.323+0000: 386921: debug : virConnectOpenInternal:1111 : Matching any URI scheme for 'network' >2023-07-26 02:12:32.323+0000: 386921: debug : virConfGetValueString:865 : Get value string (nil) 0 >2023-07-26 02:12:32.323+0000: 386921: debug : virConfGetValueString:865 : Get value string (nil) 0 >2023-07-26 02:12:32.725+0000: 386921: debug : virConnectOpenInternal:1137 : driver 2 remote returned SUCCESS >2023-07-26 02:12:32.725+0000: 386921: debug : virConnectSetIdentity:99 : conn=0x7fb73c015ed0 params=0x7fb728019fe0 nparams=7 flags=0x0 >2023-07-26 02:12:32.725+0000: 386921: debug : virConnectSetIdentity:100 : params["user-name"]=(string)root >2023-07-26 02:12:32.725+0000: 386921: debug : virConnectSetIdentity:100 : params["unix-user-id"]=(ullong)0 >2023-07-26 02:12:32.725+0000: 386921: debug : virConnectSetIdentity:100 : params["group-name"]=(string)root >2023-07-26 02:12:32.725+0000: 386921: debug : virConnectSetIdentity:100 : params["unix-group-id"]=(ullong)0 >2023-07-26 02:12:32.725+0000: 386921: debug : virConnectSetIdentity:100 : params["process-id"]=(llong)389409 >2023-07-26 02:12:32.725+0000: 386921: debug : virConnectSetIdentity:100 : params["process-time"]=(ullong)77219259 >2023-07-26 02:12:32.725+0000: 386921: debug : virConnectSetIdentity:100 : params["selinux-context"]=(string)unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 >2023-07-26 02:12:32.726+0000: 386921: debug : virNetworkLookupByName:306 : conn=0x7fb73c015ed0, name=default >2023-07-26 02:12:32.726+0000: 386921: debug : virNetworkPortLookupByUUID:1439 : conn=0x7fb7280099a0, uuid=6b18b1c4-6f81-4951-8466-9db2661e490c >2023-07-26 02:12:32.726+0000: 386921: debug : virNetworkPortDelete:1789 : port=0x7fb780044950, flags=0x0 >2023-07-26 02:12:32.733+0000: 386921: debug : virSystemdTerminateMachine:585 : Attempting to terminate machine via systemd >2023-07-26 02:12:32.744+0000: 386921: debug : virThreadJobClear:118 : Thread 386921 (rpc-virtqemud) finished job 
remoteDispatchDomainMigrateConfirm3Params with ret=0 >2023-07-26 02:12:32.744+0000: 386920: debug : virDomainObjCheckActive:5229 : Requested operation is not valid: domain is not running >2023-07-26 02:12:32.744+0000: 386920: debug : virThreadJobClear:118 : Thread 386920 (rpc-virtqemud) finished job remoteDispatchDomainBlockJobAbort with ret=-1 >2023-07-26 02:12:32.746+0000: 386824: debug : virThreadJobSet:93 : Thread 386824 (rpc-virtqemud) is now running job remoteDispatchConnectUnregisterCloseCallback >2023-07-26 02:12:32.746+0000: 386824: debug : virConnectUnregisterCloseCallback:1538 : conn=0x7fb73c0158d0 >2023-07-26 02:12:32.746+0000: 386824: debug : virThreadJobClear:118 : Thread 386824 (rpc-virtqemud) finished job remoteDispatchConnectUnregisterCloseCallback with ret=0 >2023-07-26 02:12:32.746+0000: 386825: debug : virThreadJobSet:93 : Thread 386825 (rpc-virtqemud) is now running job remoteDispatchConnectClose >2023-07-26 02:12:32.746+0000: 386825: debug : virThreadJobClear:118 : Thread 386825 (rpc-virtqemud) finished job remoteDispatchConnectClose with ret=0 >2023-07-26 02:12:32.748+0000: 386821: debug : virConnectClose:1320 : conn=0x7fb73c0158d0 >2023-07-26 02:12:32.748+0000: 386821: debug : virCloseCallbacksDomainRunForConn:346 : conn=0x7fb73c0158d0 >2023-07-26 02:12:32.748+0000: 386926: debug : virThreadJobSet:93 : Thread 386926 (prio-rpc-virtqemud) is now running job remoteDispatchConnectUnregisterCloseCallback >2023-07-26 02:12:32.748+0000: 386926: debug : virConnectUnregisterCloseCallback:1538 : conn=0x7fb73c0152d0 >2023-07-26 02:12:32.748+0000: 386926: debug : virThreadJobClear:118 : Thread 386926 (prio-rpc-virtqemud) finished job remoteDispatchConnectUnregisterCloseCallback with ret=0 >2023-07-26 02:12:32.748+0000: 386922: debug : virThreadJobSet:93 : Thread 386922 (prio-rpc-virtqemud) is now running job remoteDispatchConnectClose >2023-07-26 02:12:32.748+0000: 386922: debug : virThreadJobClear:118 : Thread 386922 (prio-rpc-virtqemud) finished job 
remoteDispatchConnectClose with ret=0 >2023-07-26 02:12:32.749+0000: 386821: debug : virConnectClose:1320 : conn=0x7fb73c0152d0 >2023-07-26 02:12:32.749+0000: 386821: debug : virCloseCallbacksDomainRunForConn:346 : conn=0x7fb73c0152d0