Created attachment 1792627 [details]
vms_not_loaded

Description of problem:
UI: VM import from VMware. After entering the VMware provider connection details, the VMs list is not loaded. There is an endless rotating spinner beside a "checking vCenter credentials" message (see attached screenshot "vms_not_loaded"). I tried 2 different VMware providers and got the same result.

Version-Release number of selected component (if applicable):
OCP-4.8/CNV-4.8

Additional info:
For a RHV provider the VMs list is loaded successfully.
After a while this warning appeared in the UI, while the "checking vCenter credentials" check was still pending:

Danger alert: Could not load V2VVmware check-administrator-10-8-58-136-l7dct in default namespace
V2vvmwares.v2v.kubevirt.io "check-administrator-10-8-58-136-l7dct" not found

I'm not sure whether this is related to this bug. A "warning" screenshot showing it is attached.
I've checked the cluster and everything looks fine from a backend perspective. When I enter the credentials for vCenter and click "Check and Save", the UI creates the v2vvmware CR, and a v2v-vmware-xxxxx-xxx pod is created to populate it with the list of VMs. I created my own namespace, called "fdupont", and I can see the pod, and that the CR contains the list of VMs:

$ oc get v2vvmwares.v2v.kubevirt.io check-administrator-10-8-58-136-7p2jx -n fdupont -o yaml

From the OpenShift console, I've opened the developer tools and I can see that the UI retrieves the CR and that it's not truncated, so it seems that the wizard doesn't refresh itself.
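For reference, the populated CR should look roughly like this (a sketch only; the spec field names are assumptions based on the operator's observed behavior, not copied from the CRD, while the status phase value is taken from the controller log later in this thread):

apiVersion: v2v.kubevirt.io/v1alpha1
kind: V2VVmware
metadata:
  name: check-administrator-10-8-58-136-7p2jx
  namespace: fdupont
spec:
  connection: <vcenter-credentials-secret>  # assumed: reference to the secret created by the wizard
  vms:                                      # assumed: populated by the v2v-vmware-xxxxx-xxx pod
  - name: example-vm-1
  - name: example-vm-2
status:
  phase: ConnectionVerified                 # phase string seen in the controller log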
Ronen, could you please check whether this bug is a blocker?
Please provide container logs so I can understand what the issue is.
It seems like a status field is missing from the response. I've attached the response object and the logs of the v2v container.
I found this line in the logs; it might help:

{"level":"error","ts":1624444210.745817,"logger":"controller_v2vvmware","msg":"Failed to update V2VVmware status. Intended to write phase: 'ConnectionVerified'","error":"Operation cannot be fulfilled on v2vvmwares.v2v.kubevirt.io \"check-administrator-10-8-58-136-hvrz7\": the object has been modified; please apply your changes to the latest version and try again"}

Stack trace (reformatted for readability):
github.com/go-logr/zapr.(*zapLogger).Error
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/vendor/github.com/go-logr/zapr/zapr.go:128
github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware.updateStatusPhaseRetry
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware/actions.go:186
github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware.updateStatusPhase
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware/actions.go:167
github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware.readVmsList
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware/actions.go:66
github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware.(*ReconcileV2VVmware).Reconcile
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware/v2vvmware_controller.go:109
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
k8s.io/apimachinery/pkg/util/wait.Until
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
I was able to reproduce the issue. I think the problem is here: https://github.com/ManageIQ/manageiq-v2v-conversion_host/blob/master/vm-import-provider/pkg/controller/v2vvmware/actions.go#L184, and it should be fixed the same way as here: https://github.com/ManageIQ/manageiq-v2v-conversion_host/blob/master/vm-import-provider/pkg/controller/ovirtprovider/ovirtprovider_controller.go#L303
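For context, the error above is the apiserver's optimistic-concurrency rejection (409 Conflict): the status write uses an object copy fetched earlier in the reconcile, and that copy's resourceVersion is stale by the time the write happens, so the phase never reaches 'ConnectionVerified' and the UI spins forever. Below is a minimal sketch of the conflict-safe pattern referenced in the ovirtprovider controller, assuming a controller-runtime client; the struct layout, the updateStatusPhaseSafe name, and the v2vv1 import path are illustrative, not the exact upstream code.

package v2vvmware

import (
	"context"

	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	// Assumed import path for the V2VVmware API types.
	v2vv1 "github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/apis/v2v/v1alpha1"
)

// ReconcileV2VVmware stands in for the operator-sdk scaffolded reconciler;
// only the client field matters for this sketch.
type ReconcileV2VVmware struct {
	client client.Client
}

// updateStatusPhaseSafe retries the status write on 409 Conflict, re-reading
// the CR on every attempt so the update always targets the latest
// resourceVersion rather than a stale in-memory copy.
func (r *ReconcileV2VVmware) updateStatusPhaseSafe(request reconcile.Request, phase string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Fetch the current version of the object. The buggy path reused a
		// copy fetched earlier in the reconcile, which had already been
		// modified in the meantime (e.g. when the VMs list was written),
		// so the apiserver rejected the second write.
		instance := &v2vv1.V2VVmware{}
		if err := r.client.Get(context.TODO(), request.NamespacedName, instance); err != nil {
			return err
		}
		instance.Status.Phase = phase // assumes Status.Phase is string-typed
		return r.client.Update(context.TODO(), instance)
	})
}

The essential point is that the Get happens inside the retry closure, so every attempt works against a fresh resourceVersion; retry.RetryOnConflict (from k8s.io/client-go/util/retry) re-runs the closure only for conflict errors and surfaces any other failure immediately.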
https://github.com/ManageIQ/manageiq-v2v-conversion_host/pull/104
*** Bug 1977281 has been marked as a duplicate of this bug. ***
Verified on OCP-4.8-rc.1/CNV-4.8.0-451 (iib:86746). The VM import from VMware dialog in the UI now passes the "Check and Save" stage, and it is possible to view the VMware VMs listed in the UI.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 4.8.0 Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2920