None of the 4.8 builds so far show the menu "Workloads -> Virtualization" in the Web console when the "OpenShift Virtualization" operator (KubeVirt) is installed. I couldn't find the menu anywhere. In 4.7 everything works as expected. Greetings, Josef
hi, when testing with current code (e.g. OCP 4.8.z + CNV 4.8.z, using kubevirt API version v1 VirtualMachines) the Virtualization menu works for me. Can you specify the versions of CNV + OCP you tested? For example:

CNV version | OCP version | VirtualMachine API | show Virtualization menu item
4.8         | 4.8         | v1                 | yes / no
waiting with blocker +/- ; if we can reproduce, we may ask for this to be a blocker.

@gouyang hi, can you reproduce this bug?
OpenShift Virtualization          | OKD (!) version        | VirtualMachine API                                                | show Virtualization menu item
v2.6.5 (from OperatorHub catalog) | 4.8 nightly from today | v1alpha3 (gets installed with the OpenShift Virtualization Operator) | no
Note about reproducing: I did not check, because I don't have an env with 2.6.5.

We decide whether to show the menu item based on the existence of VirtualMachine v1:
https://github.com/openshift/console/blob/master/frontend/packages/kubevirt-plugin/console-extensions.json#L7

cc:// @ycui do you think this is a blocker (e.g. UI for 4.8 broken with CNV 2.6)?
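For reference, the check is a model-based feature flag in the plugin's console-extensions.json: the flag is only set when the cluster actually serves VirtualMachine in kubevirt.io/v1, and the Virtualization nav item is gated on that flag. Roughly like the following (a paraphrased sketch rendered as YAML for readability; the real file is JSON and the flag name here is a placeholder, see the linked file for the actual entry):

# Sketch of a model-based feature-flag extension (illustrative, not copied
# verbatim from the repo). The flag is enabled only when the cluster serves
# VirtualMachine in API version v1, so a CNV 2.6 cluster (v1alpha3 only)
# never sets it and the Virtualization nav item stays hidden.
- type: console.flag/model
  properties:
    flag: KUBEVIRT_V1        # placeholder flag name
    model:
      group: kubevirt.io
      version: v1            # only v1 matches; v1alpha3 does not
      kind: VirtualMachine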
https://github.com/openshift/console/issues/9227
Created attachment 1790763 [details]
Workaround: Create CRD for VirtualMachine with two versions: v1 and v1alpha3

A workaround for this problem is:
- install the OpenShift Virtualization operator
- create the "OpenShift Virtualization Deployment" object
- wait until everything is installed
- copy the CRD "VirtualMachine"
- delete the OpenShift Virtualization Operator
- add a new version "v1" to the VirtualMachine CRD text file (spec.versions)
- set "spec.versions.v1.storage = false"
- create a new CRD "VirtualMachine" with these two versions in it

-> the Web Console shows the menu entry "Workloads -> Virtualization" and the installation of the OpenShift Virtualization operator works.
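To make the CRD edit concrete, the relevant part of the modified virtualmachines.kubevirt.io CRD would look roughly like this (a trimmed sketch only; the real CRD carries full per-version schemas and many more fields, and v1alpha3 must remain the storage version):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: virtualmachines.kubevirt.io
spec:
  group: kubevirt.io
  names:
    kind: VirtualMachine
    plural: virtualmachines
  scope: Namespaced
  versions:
    - name: v1alpha3
      served: true
      storage: true          # keep v1alpha3 as the storage version
      # schema: copied unchanged from the original CRD (omitted here)
    - name: v1
      served: true
      storage: false         # the added version, per the workaround above
      # schema: copied unchanged from the original CRD (omitted here)

As the next comment notes, this only restores the menu entry; the create-VM wizard still expects the v1 API.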
(In reply to Josef Meier from comment #6)
> Created attachment 1790763 [details]
> Workaround: Create CRD for VirtualMachine with two versions: v1 and v1alpha3
>
> A workaround for this problem is:
> - install the OpenShift Virtualization operator
> - create the "OpenShift Virtualization Deployment" object
> - wait until everything is installed
> - copy the CRD "VirtualMachine"
> - delete the OpenShift Virtualization Operator
> - add a new version "v1" to the VirtualMachine CRD text file (spec.versions)
> - set "spec.versions.v1.storage = false"
> - create a new CRD "VirtualMachine" with these two versions in it
>
> -> the Web Console shows the menu entry "Workloads -> Virtualization" and
> the installation of the OpenShift Virtualization operator works.

But if I use the wizard to create a VM, it tells me at the end that it expects VirtualMachine API version v1 and not v1alpha3. So the problem has to be fixed in some other way.
Note:

OpenShift Virtualization 4.8 will be released on 2021-06-29.

Tal hi, this BZ will be auto-fixed once 4.8 is GA in two weeks ... is this a bug (or just a mismatch between the released OpenShift Virtualization and a mismatched upstream version)?

cc:// @ycui
If the OpenShift Virtualization Operator automatically upgrades the CRDs to v1, I think we have no problem. The result of upgrading OCP to 4.8 will be that the Virtualization menu items disappear until the OpenShift Virtualization operator has also been upgraded. Users will be confused by that and may need a hint to upgrade the operator. What's your opinion on that?
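For anyone hitting this during the upgrade window, one way to see which side you are on is to check which versions the VirtualMachine CRD currently serves (a trimmed sketch of the output; the command in the comment is plain oc/kubectl, nothing CNV-specific):

# Output of: oc get crd virtualmachines.kubevirt.io -o yaml   (trimmed)
# If only v1alpha3 is served, CNV has not been upgraded yet and the 4.8
# console will hide the Virtualization menu until it is.
spec:
  versions:
    - name: v1alpha3
      served: true
      storage: true
status:
  storedVersions:
    - v1alpha3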
blocker - this bug will be fixed on release, it should not block release :-)
May be connected to: https://bugzilla.redhat.com/show_bug.cgi?id=1967885

Off topic: KubeVirt and KubeVirt-UI get released on different dates using different release mechanisms.
a - Supporting both API versions is not trivial.
b - I don't know of a support matrix of kubevirt + kubevirt-plugin versions that we should support (test?) together.

We may want to look at mechanisms to release the KubeVirt UI together with KubeVirt.

FYI: @danken @tnisan @stirabos
Our typical upgrade process starts with OCP moving from 4.y to 4.y+1. At some point later, automatically or with a human approval, CNV would upgrade from 4.y to 4.y+1. The thinnest possible support matrix is hence:

CNV\UI | 4.y | 4.y+1
4.y    |  +  |  +
4.y+1  |  ?  |  +

There is no way around it: a distributed system such as OpenShift cannot upgrade instantaneously (it's a corollary of Special Relativity!), so each component must support at least two versions of the API of all components upgrading with and after it. This would not change even when kubevirt + kubevirt-plugin are deployed by HCO (it may be less apparent, but we shouldn't be deceived by that).

I am not in the position of declaring this a 4.8.0 blocker, but I wish I was. This is a big bug. It is an OpenShift design principle that services continue to serve during upgrade. We shouldn't tolerate a downtime that can span days or weeks (e.g. if we fail to ship CNV on time). As I see it, the GUI, like any other component, must fall back to v1alpha3 if it cannot find v1.
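To make the fallback concrete: the console can ask the API server which VirtualMachine versions exist through ordinary API group discovery and prefer v1 only when it is actually served. On a CNV 2.6 cluster the discovery response for the kubevirt.io group only lists v1alpha3 (a trimmed sketch rendered as YAML; the real response is JSON and contains more fields):

# GET /apis/kubevirt.io on a CNV 2.6 cluster (trimmed)
kind: APIGroup
apiVersion: v1
name: kubevirt.io
versions:
  - groupVersion: kubevirt.io/v1alpha3
    version: v1alpha3
preferredVersion:
  groupVersion: kubevirt.io/v1alpha3
  version: v1alpha3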
The issue can be reproduced on a CNV 2.6 cluster with the OCP 4.8 UI. I think we should fix this: if the UI cannot find the current API version, it should fall back to the previous one. Besides showing the "Workloads -> Virtualization" menu, it should also be possible to use the UI to create/delete virtual machines.
changing back to blocker ? ; we will discuss this in the next bug scrub.

changing to urgent: this will create a downtime that can span two weeks until versions align.
(In reply to Yaacov Zamir from comment #8)
> Note:
>
> OpenShift Virtualization 4.8 will be released on 2021-06-29.
>
> Tal hi, this BZ will be auto-fixed once 4.8 is GA in two weeks ... is this
> a bug (or just a mismatch between the released OpenShift Virtualization
> and a mismatched upstream version)?
>
> cc:// @ycui

Yes, from the QE point of view, it is a blocker and a regression.
fixed by: https://github.com/openshift/console/pull/9258 - merged upstream
The PR is in master but not in release-4.8 yet. Confirmed that the bug is gone on the master branch.
The fix exists but is not merged to 4.8 yet, so moving the bug to POST.
It's in upstream release-4.8 now and works with old CNV. Moving the bug to VERIFIED based on that.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days