Bug 1870116 - Clicking on the Import VM card on the Developer Console displays a blank page
Summary: Clicking on the Import VM card on the Developer Console displays a blank page
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Console Kubevirt Plugin
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.6.0
Assignee: Tomas Jelinek
QA Contact: Guohua Ouyang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-08-19 11:36 UTC by Gajanan More
Modified: 2020-09-08 06:41 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-08 06:41:51 UTC
Target Upstream Version:


Attachments
Cluster-issue (139.78 KB, image/png)
2020-08-19 11:47 UTC, Gajanan More

Description Gajanan More 2020-08-19 11:36:03 UTC
From OCP 4.5, we have provided a feature to import a VM in the Dev Console from RHV or VMware, as those are the only options offered. Along with importing a VM, we have provided some actions that can be performed on the VM on the Dev Console Topology page. If you want to have a look at the test cases, please see [0].
We follow the steps in [1] to install the CNV operator.

Currently, we are facing some problems starting with CNV 2.4.

Steps to Reproduce:
1. Install the CNV 2.4 operator
2. Create a HyperConverged cluster
3. Go to the Developer perspective
4. Click on the Import VM card; a blank page is displayed

We are using the latest OCP 4.6 CI or nightly builds for testing. 

Even when we try to visit the Create Virtual Machine page from the Virtualization tab under the Workloads nav item in the Administrator perspective, we get the same error.

Build: 4.6.0-0.nightly-2020-08-02-091622
Browser: Google Chrome

[0] https://docs.google.com/spreadsheets/d/1vmaSWSE9syjWHX6haEcNs8lOTxaXA5MSWbjNVDbVCCU/edit?usp=sharing
[1] https://docs.google.com/document/d/1fgnOFDFs0l3gzYfR9EkkBn9qLpIKfhLUXIX1GLAHcVQ/edit?usp=sharing

Comment 1 Tomas Jelinek 2020-08-19 11:39:01 UTC
It's worth mentioning that the cluster was not installed completely: e.g. the VM CRD was already available, but parts of the cluster were not yet.

Comment 2 Gajanan More 2020-08-19 11:47:47 UTC
Created attachment 1711863 [details]
Cluster-issue

Comment 3 Simone Tiraboschi 2020-08-19 11:52:44 UTC
The deployment didn't complete because it was an AWS-based cluster without nested virtualization capabilities, so no nodes satisfied the scheduling constraints.

Comment 6 Yaacov Zamir 2020-08-19 14:19:19 UTC
Can't reproduce on current master (4.6)

Comment 7 cvogt 2020-08-19 16:47:59 UTC
We should not encounter a blank screen / type error in the application if there's a misconfiguration. Instead, the error case should be handled gracefully. I feel something needs to be done in the wizard to prevent the blank screen.
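The kind of graceful handling described here can be sketched as below. This is a minimal, hypothetical illustration, not the actual console-plugin code: the `VmTemplate` shape and `describeTemplates` name are invented for the example. The point is that optional chaining plus an explicit fallback message avoids the TypeError that a partially deployed cluster would otherwise trigger.

```typescript
// Hypothetical sketch: guard against incomplete cluster data instead of
// letting a TypeError blank the page. VmTemplate/describeTemplates are
// illustrative names, not the real plugin API.
interface VmTemplate {
  metadata?: { name?: string };
}

// Returns a user-facing message instead of throwing when data is missing.
function describeTemplates(templates: VmTemplate[] | undefined): string {
  if (!templates || templates.length === 0) {
    return "Virtualization is not fully installed on this cluster.";
  }
  // Optional chaining tolerates objects with missing fields, which is what a
  // half-installed cluster tends to return.
  return templates.map((t) => t.metadata?.name ?? "<unnamed>").join(", ");
}
```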

Comment 8 Matt 2020-08-19 21:59:23 UTC
I agree that if we can detect the error the user has made and provide an error that helps them solve the issue we should do this. Is this the result of trying to install CNV on a cluster that can't support it? If that's the case we need to ensure it's clear what the requirements are for the operator.

Comment 9 Tomas Jelinek 2020-08-20 07:08:08 UTC
> We should not encounter a blank screen / type error in the application if there's a misconfiguration. Instead, the error case should be handled gracefully. I feel something needs to be done in the wizard to prevent the blank screen.

Correct, and I believe it is fixed on the current master. Or do you still see the issue?

> If that's the case we need to ensure it's clear what the requirements are for the operator.

That's a good point; the operator should probably let you know that the installation is not possible. That sounds like a nice enhancement.
@Simone, do you think it is possible to detect this before the installation?

Comment 10 Simone Tiraboschi 2020-08-20 08:21:52 UTC
(In reply to Tomas Jelinek from comment #9)
> > If that's the case we need to ensure it's clear what the requirements are for the operator.
> @Simone, do you think this is possible to detect before the installation?

No: node details are collected by node-labeller, which is deployed with the operator and triggered by the operator only once the user creates the CR for the operator.
And deploying CNV first and adding virtualization-capable nodes later on may also be a valid use case.

In my opinion we should just make the issue more visible to the user.

Comment 11 Tomas Jelinek 2020-08-20 08:24:31 UTC
@Simone, how could the UI detect this so we can show a meaningful error/help? Especially for a non-admin user who cannot access the operator CRs.

Comment 14 Simone Tiraboschi 2020-08-20 12:05:58 UTC
(In reply to Tomas Jelinek from comment #11)
> @Simone, how could the UI detect this so we can show a meaningful
> error/help? Especially a non-admin user which can not access the operator CRs

I think it's more a question for Omer.
By the way, I'm also a bit confused, because in this case we detected it on virt-template-validator, not on node-labeller.

Comment 15 Omer Yahud 2020-08-30 08:38:30 UTC
(In reply to Tomas Jelinek from comment #11)
> @Simone, how could the UI detect this so we can show a meaningful
> error/help? Especially a non-admin user which can not access the operator CRs

If there are no schedulable nodes, none of the cluster nodes will have the 'kubevirt.io/schedulable: "true"' label (which is added by virt-handler, not node-labeller),
so maybe the UI can use this label when deciding whether the cluster is ready to accept VMs.
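The label check suggested here could be sketched as follows. The `Node` interface is a simplified slice of the Kubernetes Node object, and `clusterAcceptsVMs` is an illustrative name; the label key and value are the ones named in this comment.

```typescript
// Sketch: the cluster is ready for VMs only if at least one node carries
// the kubevirt.io/schedulable: "true" label (set by virt-handler).
// Node is simplified from the real k8s API object.
interface Node {
  metadata: { name: string; labels?: Record<string, string> };
}

function clusterAcceptsVMs(nodes: Node[]): boolean {
  // Optional chaining covers nodes with no labels at all.
  return nodes.some(
    (n) => n.metadata.labels?.["kubevirt.io/schedulable"] === "true"
  );
}
```

The UI could run this check before opening the Import VM wizard and show a "no virtualization-capable nodes" message instead of the wizard when it returns false.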

(In reply to Simone Tiraboschi from comment #14)
> (In reply to Tomas Jelinek from comment #11)
> > @Simone, how could the UI detect this so we can show a meaningful
> > error/help? Especially a non-admin user which can not access the operator CRs
> 
> I think it's more a question for Omer,
> by the way I'm also a bit confused because in this case we detected it on
> virt-template-validator, not on node-labeller.

node-labeller only adds CPU and KVM metadata (if KVM resources are exposed on the node by virt-handler) and is not responsible for exporting the virtualization ability of the node.
What do you mean by 'we detected it on virt-template-validator, not on node-labeller'?

Comment 16 Simone Tiraboschi 2020-08-31 10:31:37 UTC
> What do you mean by 'we detected it on virt-template-validator, not on node-labeller'?

In this specific case virt-template-validator wasn't progressing due to "0/5 nodes available: insufficient CPU...".

But the user can see this only by checking the conditions on the HCO CR, seeing that the HCO CR is not available because the template-validator one is not (HCO aggregates the conditions, providing "meaningful" error messages), then checking the conditions on the template-validator CR, and finally the events on the template-validator pod.
This is not very visible or easy to understand for new users.

We should find a way (an event on the CSV object, which is really visible?) to tell the user: the CNV deployment is not progressing because you don't have any virtualization-capable host.

Comment 17 Guohua Ouyang 2020-09-04 02:03:18 UTC
Cannot reproduce the original issue on CNV 2.4.1 + OCP 4.5 and the console master branch.
@Tomas, the bug has been moved to ON_QA but has no actual fixes; how should QE process the issue?

Comment 18 Tomas Jelinek 2020-09-04 06:59:29 UTC
@Guohua: on CNV 2.4 + OCP 4.5 it should be possible to reproduce, but you need a CNV installation that did not complete correctly (like on AWS). E.g. if you try it on an environment where CNV cannot be installed, but you try anyway, you should be able to reproduce it. And on 4.6 you should not be able to.
Was this what you were doing?

Comment 19 Guohua Ouyang 2020-09-08 02:21:16 UTC
I was trying it on a healthy cluster; I will try it again next time on a cluster where CNV is not installed completely.

Comment 20 Tomas Jelinek 2020-09-08 06:41:51 UTC
Actually, in light of https://bugzilla.redhat.com/show_bug.cgi?id=1876377 this can be closed as WONTFIX, i.e. this functionality will have to be removed from the dev perspective.

