Bug 1974297 - [v2v][VM import from VMware dialog via UI] VMs list is not loaded
Summary: [v2v][VM import from VMware dialog via UI] VMs list is not loaded
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: V2V
Version: 4.8.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.8.0
Assignee: Fabien Dupont
QA Contact: Daniel Gur
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-06-21 10:09 UTC by Ilanit Stein
Modified: 2021-08-22 09:28 UTC
CC: 12 users

Fixed In Version: kubevirt-vmware-container-v4.8.0-11
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-27 14:32:39 UTC
Target Upstream Version:
Embargoed:


Attachments
vms_not_loaded (55.32 KB, image/png), 2021-06-21 10:09 UTC, Ilanit Stein


Links
Red Hat Product Errata RHSA-2021:2920, last updated 2021-07-27 14:33:33 UTC

Description Ilanit Stein 2021-06-21 10:09:15 UTC
Created attachment 1792627 [details]
vms_not_loaded

Description of problem:
UI: In the VM import from VMware wizard, after entering the VMware provider connection details, the VMs list is not loaded. An endless rotating spinner is shown beside a "checking vCenter credentials" message.
(screenshot "vms_not_loaded" attached)
I tried 2 different VMware providers and got the same result. 

Version-Release number of selected component (if applicable):
OCP-4.8/CNV-4.8

Additional info:
For a RHV provider the VMs list is loaded successfully.

Comment 1 Ilanit Stein 2021-06-21 14:37:35 UTC
After a while, the following warning appeared in the UI while the "checking vCenter credentials" step was still pending:
Danger alert:Could not load V2VVmware check-administrator-10-8-58-136-l7dct in default namespace
V2vvmwares.v2v.kubevirt.io "check-administrator-10-8-58-136-l7dct" not found

I'm not sure whether this is related to this bug.
A "warning" screenshot showing it is attached.

Comment 3 Fabien Dupont 2021-06-22 15:42:34 UTC
I've checked the cluster and everything looks fine from a backend perspective.
When I enter the credentials for vCenter and click "Check and Save", the UI creates the v2vvmware CR, and a v2v-vmware-xxxxx-xxx pod is created to populate it with the list of VMs.
I have created my own namespace, called "fdupont", and I can see the pod; the CR contains the list of VMs:

$ oc get v2vvmwares.v2v.kubevirt.io check-administrator-10-8-58-136-7p2jx -n fdupont -o yaml

From the OpenShift console, I've opened the developer tools, and I can see that the CR is retrieved and is not truncated, so it seems the wizard doesn't refresh itself.

Comment 4 Ying Cui 2021-06-23 08:04:36 UTC
Ronen, could you please check whether this bug is a blocker?

Comment 5 Piotr Kliczewski 2021-06-23 13:55:49 UTC
Please provide container logs so I can understand what the issue is.

Comment 8 Matan Schatzman 2021-06-23 14:07:54 UTC
It seems that a status field is missing from the response. I've attached the response object and the logs of the v2v container.

Comment 10 Matan Schatzman 2021-06-23 14:11:48 UTC
I found this line in the logs; it might help:

{"level":"error","ts":1624444210.745817,"logger":"controller_v2vvmware","msg":"Failed to update V2VVmware status. Intended to write phase: 'ConnectionVerified'","error":"Operation cannot be fulfilled on v2vvmwares.v2v.kubevirt.io \"check-administrator-10-8-58-136-hvrz7\": the object has been modified; please apply your changes to the latest version and try again"}

Stacktrace (reflowed from the same log entry):
github.com/go-logr/zapr.(*zapLogger).Error
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/vendor/github.com/go-logr/zapr/zapr.go:128
github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware.updateStatusPhaseRetry
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware/actions.go:186
github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware.updateStatusPhase
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware/actions.go:167
github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware.readVmsList
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware/actions.go:66
github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware.(*ReconcileV2VVmware).Reconcile
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/pkg/controller/v2vvmware/v2vvmware_controller.go:109
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
k8s.io/apimachinery/pkg/util/wait.Until
	/go/src/github.com/ManageIQ/manageiq-v2v-conversion_host/kubevirt-vmware/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88

Comment 14 Daniel Gur 2021-06-29 12:26:57 UTC
*** Bug 1977281 has been marked as a duplicate of this bug. ***

Comment 15 Ilanit Stein 2021-07-01 11:13:07 UTC
Verified on OCP-4.8-rc.1/CNV-4.8.0-451 (iib:86746).

Now VM import from VMware dialog in UI passes the "check and save" stage, and it is possible to view the VMware VMs listed in UI.

Comment 18 errata-xmlrpc 2021-07-27 14:32:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.8.0 Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2920

