Description of problem:
The VMware vmimport CR never reaches the expected "failed" status after the target VM is deleted during the disk copy/conversion stage of the import. In the UI the import still shows status "importing".

Version-Release number of selected component (if applicable):
CNV 2.5.0-413 (iib-24150)
OCP 4.6.1

How reproducible:
100%

Steps to Reproduce:
1. Have a running VM in VMware
2. Create a VMImport CR via the API
3. After the source VM is powered off and the disk copy has started, delete the importing VM: oc delete vm <vm-name>
   (a rough command sketch follows the attachment list below)

Actual results:
VMImport status is "Processing".
UI: the VMImport is displayed on the VM page with status "importing".
The VMImport stays in this state until the VMImport CR is deleted.

Expected results:
VMImport should have status "failed" because the target VM was deleted.

Additional info:
* Tested using an NFS storage class
* Regarding pods created during the import:
  - Deleting the VM in the disk-copy stage -> the importer pod keeps running and is eventually removed (as if it had completed), but no vmimport.v2v.kubevirt pod is created afterwards.
  - Deleting the VM in the conversion stage -> the vmimport.v2v.kubevirt pod keeps running and completes.
* The source VM stays powered off until the VMImport CR itself is deleted.

Attachments:
* logs: vm-import-controller, importer pod
* vm-import-controller yaml
* vmimport CR describe
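For reference, a rough sketch of the commands used to drive and observe the scenario. The CR and VM names match the attached logs (vmware-vmimport-1 / vmware-import-1); the full resource name virtualmachineimport is used in case no shortname is registered, and the jsonpath assumes the standard status.conditions layout:

  # Watch the import progress (CR name taken from the attached logs)
  oc get virtualmachineimport vmware-vmimport-1 -n default -o yaml

  # While the disk copy is running, delete the VM created by the import
  oc delete vm vmware-import-1 -n default

  # The import never moves to a failed condition: status stays "Processing"
  # and the importer pod keeps running
  oc get virtualmachineimport vmware-vmimport-1 -n default -o jsonpath='{.status.conditions}{"\n"}'
  oc get pods -n default | grep importer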
Created attachment 1726847 [details] vm-import-controller yaml
Created attachment 1726848 [details] vmware-vmimport-1-describe
Created attachment 1726849 [details] vm-import-controller log
Created attachment 1726850 [details] importer-vmware-import-1-harddisk1 pod log
In the vm-import-controller log, we can see the following message:

{"level":"error","ts":1604566865.2775548,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"virtualmachineimport-controller","name":"vmware-vmimport-1","namespace":"default","error":"VirtualMachine.kubevirt.io \"vmware-import-1\" not found","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:248\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:222\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:201\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/kubevirt/vm-import-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

So, the vm-import-controller knows that the VM has been deleted. It could then delete the DataVolume and mark the import as failed with a meaningful message.
When the DataVolume is deleted, I guess that the importer pod is terminated too. Something to verify.
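A quick way to check that last assumption, as a sketch: the DataVolume name is only inferred from the attached importer pod name (CDI names importer pods importer-<dv-name>, so the DV here should be vmware-import-1-harddisk1); adjust if it differs in your environment.

  # List DataVolumes and delete the one backing the import
  oc get dv -n default
  oc delete dv vmware-import-1-harddisk1 -n default

  # The importer pod should be terminated shortly after the DataVolume is gone
  oc get pods -n default -w | grep importer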
(In reply to Fabien Dupont from comment #5)
> So, the vm-import-controller knows that the VM has been deleted. It could
> then delete the DataVolume and mark the import as failed with a meaningful
> message.
> When the DataVolume is deleted, I guess that the importer pod is terminated
> too. Something to verify.

It seems that the importer pod is terminated too. Please see "Additional info" in the bug description.
This issue is not related to a specific provider. Marking BZ#1894900 as a duplicate to reduce admin work.
*** Bug 1894900 has been marked as a duplicate of this bug. ***
@slucidi, do you think this could be fixed in CNV 2.6.0? If not, do you think it is worth fixing in CNV at all?
I think I'll have time to fix it for 2.6.
The fix should be in hco-bundle-registry-container-v2.6.0-521 and onwards. Moving to ON_QA.
Verified on build iib-42945 (hco-v2.6.0-523), with both the oVirt and VMware providers. The import now fails with:
VMNotFound: target VM XXX-for-tests not found
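For anyone re-checking, the failure reason can be read straight from the CR, e.g. (sketch only; the CR name and namespace are placeholders, and the jsonpath assumes type/reason/message fields under status.conditions):

  oc get virtualmachineimport <vmimport-name> -n <namespace> \
    -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.reason}{"\t"}{.message}{"\n"}{end}'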
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 2.6.0 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:0799