Bug 1870116
| Field | Value |
|---|---|
| Summary | Clicking on Import VM card on the Developer Console is displaying blank page |
| Product | OpenShift Container Platform |
| Component | Console Kubevirt Plugin |
| Version | 4.6 |
| Status | CLOSED WONTFIX |
| Severity | high |
| Priority | high |
| Reporter | Gajanan More <gamore> |
| Assignee | Tomas Jelinek <tjelinek> |
| QA Contact | Guohua Ouyang <gouyang> |
| CC | aos-bugs, cvogt, gouyang, mcarleto, ncredi, oyahud, stirabos, tjelinek, yzamir |
| Target Milestone | --- |
| Target Release | 4.6.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Doc Type | If docs needed, set a value |
| Type | Bug |
| Last Closed | 2020-09-08 06:41:51 UTC |
Description

Gajanan More, 2020-08-19 11:36:03 UTC
Created attachment 1711863 [details]: Cluster-issue

It's worth mentioning that the cluster was not installed completely. E.g. the VM CRD was already available, but parts of the cluster were not yet.
The deployment didn't complete because it was an AWS-based cluster without nested virtualization capabilities, so no nodes satisfied the scheduling constraints.

Can't reproduce on current master (4.6).

We should not encounter a blank screen / type error in the application if there's a misconfiguration. Instead the error case should be handled gracefully. I feel something needs to be done in the wizard to prevent the blank screen.

I agree that if we can detect the error the user has made and provide an error that helps them solve the issue, we should do this. Is this the result of trying to install CNV on a cluster that can't support it? If that's the case, we need to ensure it's clear what the requirements are for the operator.

> We should not encounter a blank screen / type error in the application if there's a misconfiguration. Instead the error case should be handled gracefully. I feel something needs to be done in the wizard to prevent the blank screen.

Correct, and I believe it is fixed on the current master. Or do you still see the issue?

> If that's the case we need to ensure it's clear what the requirements are for the operator.

That's a good point; probably the operator should let you know that the installation is not possible. That sounds like a nice enhancement. @Simone, do you think this is possible to detect before the installation?

(In reply to Tomas Jelinek from comment #9)
> > If that's the case we need to ensure it's clear what the requirements are for the operator.
> @Simone, do you think this is possible to detect before the installation?

No, node details are collected by node-labeller, which is deployed with the operator and triggered by the operator only once the user creates the CR for the operator. And maybe deploying CNV and adding virtualization-capable nodes later on is also a valid use case. In my opinion we should just make the issue more visible to the user.

@Simone, how could the UI detect this so we can show a meaningful error/help?
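On the "handle the error case gracefully instead of a blank screen" point above, the general defensive pattern could be sketched like this. This is a hypothetical illustration, not the actual console plugin code; `safeRender` and the card shape are invented for the example:

```typescript
// Hypothetical sketch: wrap a rendering step so an unexpected runtime
// TypeError produces a readable fallback message rather than a blank page.
function safeRender(render: () => string): string {
  try {
    return render();
  } catch (e) {
    // Surface the failure to the user instead of letting it blank the UI.
    return `Something went wrong: ${(e as Error).message}`;
  }
}

// Simulate the failure mode: a card object that is unexpectedly undefined.
const brokenCard = undefined as unknown as { title: string };
console.log(safeRender(() => brokenCard.title)); // prints "Something went wrong: …" instead of crashing
```

In a React console the equivalent mechanism would be an error boundary around the wizard, but the idea is the same: the type error is caught and converted into a message the user can act on.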
Especially for a non-admin user who cannot access the operator CRs.

(In reply to Tomas Jelinek from comment #11)
> @Simone, how could the UI detect this so we can show a meaningful error/help? Especially a non-admin user which can not access the operator CRs

I think it's more a question for Omer. By the way, I'm also a bit confused, because in this case we detected it on virt-template-validator, not on node-labeller.

(In reply to Tomas Jelinek from comment #11)
> @Simone, how could the UI detect this so we can show a meaningful error/help? Especially a non-admin user which can not access the operator CRs

If there are no schedulable nodes, none of the cluster nodes would have the `kubevirt.io/schedulable: "true"` label (which is added by virt-handler, not node-labeller), so maybe the UI can use this label when deciding if the cluster is ready to accept VMs.

(In reply to Simone Tiraboschi from comment #14)
> I think it's more a question for Omer, by the way I'm also a bit confused because in this case we detected it on virt-template-validator, not on node-labeller.

node-labeller only adds CPU and KVM metadata (if KVM resources are exposed on the node by virt-handler) and is not responsible for exporting the virtualization ability of the node. What do you mean by "we detected it on virt-template-validator, not on node-labeller"?
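The label-based readiness check suggested above could look roughly like this in UI code. A minimal hypothetical sketch: the `K8sNode` shape and `hasVirtCapableNode` helper are invented for illustration; only the `kubevirt.io/schedulable` label name comes from the discussion:

```typescript
// Hypothetical sketch: decide whether the cluster can accept VMs by
// looking for the kubevirt.io/schedulable="true" label that
// virt-handler sets on virtualization-capable nodes.
interface K8sNode {
  metadata: { name: string; labels?: Record<string, string> };
}

const SCHEDULABLE_LABEL = 'kubevirt.io/schedulable';

function hasVirtCapableNode(nodes: K8sNode[]): boolean {
  return nodes.some((n) => n.metadata.labels?.[SCHEDULABLE_LABEL] === 'true');
}

// Example: only the second node carries the label.
const exampleNodes: K8sNode[] = [
  { metadata: { name: 'worker-0', labels: {} } },
  { metadata: { name: 'worker-1', labels: { [SCHEDULABLE_LABEL]: 'true' } } },
];
console.log(hasVirtCapableNode(exampleNodes)); // → true
```

This check only needs list access to Nodes, which fits the concern about non-admin users who cannot read the operator CRs.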
In this specific case virt-template-validator wasn't progressing due to "0/5 nodes available: insufficient CPU...".
But the user can see this only by checking the conditions on the HCO CR, seeing that the HCO CR is not available because the template-validator one is not (HCO aggregates the conditions, providing "meaningful" error messages); so the user should check the conditions on the template-validator CR and then finally the events on the template-validator pod.
This is not that visible/easy to understand for new users.
We should find a way (an event on the CSV object, so that it is really visible?) to tell the user: CNV deployment is not progressing because you don't have any virtualization-capable host.
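The condition-surfacing idea above could be sketched as a small helper that turns aggregated HCO-style status conditions into one user-facing message. A hypothetical illustration, assuming the standard Kubernetes condition shape (`type`/`status`/`reason`/`message`); the function name and sample condition are invented:

```typescript
// Hypothetical sketch: extract a single readable message from a CR's
// status.conditions instead of making the user chase template-validator
// CRs and pod events by hand.
interface Condition {
  type: string;
  status: 'True' | 'False' | 'Unknown';
  reason?: string;
  message?: string;
}

function explainUnavailability(conditions: Condition[]): string | null {
  const available = conditions.find((c) => c.type === 'Available');
  if (!available || available.status === 'True') {
    return null; // deployment looks healthy; nothing to surface
  }
  // Prefer the aggregated human-readable message, fall back to the reason.
  return available.message ?? available.reason ?? 'CNV deployment is not progressing';
}

// Sample condition shaped like what HCO might aggregate (illustrative).
const hcoConditions: Condition[] = [
  {
    type: 'Available',
    status: 'False',
    reason: 'TemplateValidatorNotReady',
    message: 'CNV deployment is not progressing because you do not have any virtualization-capable host',
  },
];
console.log(explainUnavailability(hcoConditions));
```

The UI (or an event emitted on the CSV object) could then show this one string directly to the user.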
Cannot reproduce the original issue on CNV 2.4.1 + OCP 4.5 and console master branch. @Tomas, the bug is moved to ON_QA but has no actual fixes; how should QE process the issue?

@Guohua: on 2.4 + OCP 4.5 it should be possible to reproduce, but you need to have CNV not correctly installed (like on AWS). E.g. if you try it on some env where CNV cannot be installed, but you try anyway, you should be able to reproduce. And in 4.6 you should not be able to. Was this what you were doing?

I was trying it on a healthy cluster; I will try it again on a cluster where CNV is not installed completely next time.

Actually, in light of https://bugzilla.redhat.com/show_bug.cgi?id=1876377 this can be closed as WONTFIX; this functionality will have to be removed from the dev perspective.