During the implementation of post-copy migration we added code to handle libvirt's notification about the migration job. Unfortunately, it now reacts to every libvirt event, which is wrong, since there are other event types unrelated to migration. This is a very low-risk fix, so let's do it in 4.1.1 to avoid surprising regressions in other flows.
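The intended fix is to filter events by type before acting on them. A minimal sketch of that pattern in Python (VDSM's language); the event names and the handler below are illustrative stand-ins, not VDSM's actual API:

```python
# Sketch: act on a libvirt notification only when it is migration-related.
# The event identifiers and function names here are hypothetical examples,
# modeled loosely on libvirt's per-type domain events.

EVENT_MIGRATION_ITERATION = "migration-iteration"  # migration-related
EVENT_BLOCK_JOB = "block-job"                      # unrelated to migration
EVENT_LIFECYCLE = "lifecycle"                      # unrelated to migration

# Only events in this set should trigger the post-copy handling.
MIGRATION_EVENTS = {EVENT_MIGRATION_ITERATION}

def handle_event(event_type, switch_to_post_copy):
    """Invoke the post-copy switch only for migration-related events.

    Returns True if the handler acted on the event, False otherwise.
    """
    if event_type in MIGRATION_EVENTS:
        switch_to_post_copy()
        return True
    # All other event types are ignored, fixing the over-broad reaction.
    return False
```

With this guard in place, unrelated events (block jobs, lifecycle changes) pass through without triggering the migration code path.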
Verify with:

Engine: Red Hat Virtualization Manager Version: 4.1.1.4-0.1.el7
Host:
  OS Version: RHEL - 7.3 - 7.el7
  Kernel Version: 3.10.0 - 550.el7.x86_64
  KVM Version: 2.6.0 - 28.el7_3.3.1
  LIBVIRT Version: libvirt-2.0.0-10.el7_3.5
  VDSM Version: vdsm-4.19.7-1.el7ev

Steps: Migrate a VM with a post-copy policy and monitor the logs.

Vdsm log:

2017-03-14 16:42:36,637+0200 INFO (migmon/c41851df) [vdsm.api] START switch_migration_to_post_copy args=(<virt.vm.Vm object at 0x3f2b610>,) kwargs={} (api:37)
2017-03-14 16:42:36,637+0200 INFO (migmon/c41851df) [virt.vm] (vmId='c41851df-cf66-4f81-80d1-808ae579d2c5') Switching to post-copy migration (vm:1584)
2017-03-14 16:42:36,639+0200 INFO (migmon/c41851df) [vdsm.api] FINISH switch_migration_to_post_copy return=True (api:43)

Libvirt log:

2017-03-14 14:42:36.701+0000: 23668: info : qemuMonitorIOProcess:429 : QEMU_MONITOR_IO_PROCESS: mon=0x7f6f100064c0 buf={"return": {"expected-downtime": 1489, "status": "postcopy-active", "setup-time": 37, "total-time": 459555, "ram": {"total": 10813784064, "postcopy-requests": 0, "dirty-sync-count": 36, "remaining": 4214784, "mbps": 17.07408, "transferred": 979448465, "duplicate": 3535324, "dirty-pages-rate": 776, "skipped": 0, "normal-bytes": 945745920, "normal": 230895}}, "id": "libvirt-213"}
2017-03-14 14:42:36.701+0000: 23668: info : qemuMonitorJSONIOProcessLine:211 : QEMU_MONITOR_RECV_REPLY: mon=0x7f6f100064c0 reply={"return": {"expected-downtime": 1489, "status": "postcopy-active", "setup-time": 37, "total-time": 459555, "ram": {"total": 10813784064, "postcopy-requests": 0, "dirty-sync-count": 36, "remaining": 4214784, "mbps": 17.07408, "transferred": 979448465, "duplicate": 3535324, "dirty-pages-rate": 776, "skipped": 0, "normal-bytes": 945745920, "normal": 230895}}, "id": "libvirt-213"}
2017-03-14 14:42:36.785+0000: 23668: info : qemuMonitorIOProcess:429 : QEMU_MONITOR_IO_PROCESS: mon=0x7f6f100064c0 buf={"return": {"expected-downtime": 1489, "status": "postcopy-active", "setup-time": 37, "total-time": 459640, "ram": {"total": 10813784064, "postcopy-requests": 0, "dirty-sync-count": 37, "remaining": 0, "mbps": 17.07408, "transferred": 985881873, "duplicate": 3535590, "dirty-pages-rate": 776, "skipped": 0, "normal-bytes": 952164352, "normal": 232462}}, "id": "libvirt-214"}
2017-03-14 14:42:36.786+0000: 23668: info : qemuMonitorJSONIOProcessLine:211 : QEMU_MONITOR_RECV_REPLY: mon=0x7f6f100064c0 reply={"return": {"expected-downtime": 1489, "status": "postcopy-active", "setup-time": 37, "total-time": 459640, "ram": {"total": 10813784064, "postcopy-requests": 0, "dirty-sync-count": 37, "remaining": 0, "mbps": 17.07408, "transferred": 985881873, "duplicate": 3535590, "dirty-pages-rate": 776, "skipped": 0, "normal-bytes": 952164352, "normal": 232462}}, "id": "libvirt-214"}
2017-03-14 14:42:36.947+0000: 23671: info : virFirewallApplyRule:838 : Applying rule '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-P-vnet0'
2017-03-14 14:42:36.973+0000: 23671: info : virFirewallApplyRule:838 : Applying rule '/usr/sbin/iptables -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
2017-03-14 14:42:36.995+0000: 23671: info : virFirewallApplyRule:838 : Applying rule '/usr/sbin/ip6tables -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
2017-03-14 14:42:37.012+0000: 23671: info : virFirewallApplyRule:838 : Applying rule '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-O-vnet0'