Created attachment 1667711 [details]
source and target pods logs

Description of problem:
Migration of a VM with a container disk is failing with:
Message='no connection driver available for storage:///system'

From the virt-launcher log:
{"component":"virt-launcher","kind":"","level":"error","msg":"Live migration failed","name":"vmi-migratable","namespace":"default","pos":"manager.go:509","reason":"virError(Code=5, Domain=0, Message='no connection driver available for storage:///system')","timestamp":"2020-03-05T09:49:10.733367Z","uid":"71cc574e-1b80-4c83-81ed-00f1a61deaf8"}
{"component":"virt-launcher","level":"info","msg":"DomainLifecycle event 0 with reason 1 received","pos":"client.go:259","timestamp":"2020-03-05T09:49:10.740597Z"}
{"component":"virt-launcher","level":"info","msg":"kubevirt domain status: Running(1):Unknown(1)","pos":"client.go:180","timestamp":"2020-03-05T09:49:10.742229Z"}
{"component":"virt-launcher","kind":"","level":"error","msg":"Failed to migrate vmi","name":"vmi-migratable","namespace":"default","pos":"server.go:105","reason":"migration job already executed","timestamp":"2020-03-05T09:49:10.746950Z","uid":"71cc574e-1b80-4c83-81ed-00f1a61deaf8"}

Version-Release number of selected component (if applicable):
CNV 2.3

How reproducible:
100%

Steps to Reproduce:
1. Create a VM with a container disk (e.g. https://raw.githubusercontent.com/kubevirt/kubevirt/master/examples/vmi-migratable.yaml)
2. Migrate the VM

Actual results:
Migration fails; the target pod is in an error state.
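For reference, a container-disk VMI of the kind used in step 1 looks roughly like the sketch below. It is modeled on the upstream vmi-migratable example; the exact image, memory size, and apiVersion in the real file may differ. The masquerade interface binding is what keeps the VMI eligible for live migration on the pod network:

```yaml
# Sketch of a container-disk VMI, modeled on the upstream
# examples/vmi-migratable.yaml (exact fields may differ).
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: vmi-migratable
spec:
  domain:
    devices:
      disks:
        - name: containerdisk
          disk:
            bus: virtio
      interfaces:
        - name: default
          masquerade: {}   # masquerade binding keeps the VMI live-migratable
    resources:
      requests:
        memory: 64M
  networks:
    - name: default
      pod: {}
  volumes:
    - name: containerdisk
      containerDisk:
        image: kubevirt/cirros-container-disk-demo   # ephemeral container disk
```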
Created attachment 1667723 [details] source and target pods logs - nfs
Manually verified with:
$ oc get kv -n openshift-cnv -o yaml | grep -i operatorversion
    operatorVersion: v0.26.2

Steps:
1. Create a VM with a container disk (e.g. https://raw.githubusercontent.com/kubevirt/kubevirt/master/examples/vmi-migratable.yaml)
2. Migrate the VM

Results:
Migration passes.

Libvirt packages on the compute pod:
[root@virt-launcher-vmi-migratable-8jbqz /]# rpm -qa | grep libvirt
libvirt-daemon-driver-secret-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-nwfilter-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-storage-core-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-storage-iscsi-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-storage-rbd-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-storage-mpath-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-storage-iscsi-direct-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-kvm-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-bash-completion-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-libs-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-nodedev-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-interface-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-network-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-storage-disk-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-storage-logical-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-storage-scsi-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-storage-gluster-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-qemu-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-daemon-driver-storage-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
libvirt-client-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64

Will move to VERIFIED after the Tier1 and Tier2 migration tests pass.
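Step 2 above ("Migrate the VM") can be triggered with `virtctl migrate vmi-migratable`, or by creating a VirtualMachineInstanceMigration object. A minimal sketch (the object name here is illustrative, not from this report):

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job        # illustrative name
  namespace: default
spec:
  vmiName: vmi-migratable    # the VMI to live-migrate
```

Apply it with `oc create -f migration.yaml` and then watch the migration's phase, e.g. with `oc get vmim -n default` (assuming `vmim` is the short name registered for the CRD in this deployment).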
Based on the Tier1 results for KubeVirt v0.26.4, moving to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2011