Bug 1781336 - CDI CRD is being deleted from the cluster with no record in OLM
Summary: CDI CRD is being deleted from the cluster with no record in OLM
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 2.2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 2.2.0
Assignee: Maya Rashish
QA Contact: Ying Cui
URL:
Whiteboard:
: 1751193 1786476 (view as bug list)
Depends On:
Blocks: 1782241
TreeView+ depends on / blocked
 
Reported: 2019-12-09 19:53 UTC by Israel Pinto
Modified: 2020-01-30 16:27 UTC (History)
15 users (show)

Fixed In Version: hco-bundle-registry-container-v2.2.0-225
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-01-30 16:27:36 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
hco-operator log (4.62 MB, text/plain)
2019-12-21 19:11 UTC, Israel Pinto
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Github kubevirt hyperconverged-cluster-operator pull 393 0 None closed Don't create problematic ownerReferences for CDI and NetworkAddonsConfig 2021-01-29 16:55:17 UTC
Red Hat Product Errata RHEA-2020:0307 0 None None None 2020-01-30 16:27:52 UTC

Description Israel Pinto 2019-12-09 19:53:47 UTC
Description of problem:
While working with CNV 2.2, I found that the CDI CRD had been deleted, and there is no record of why it happened.
The cdi-operator and hco-operator pods are in CrashLoopBackOff state.

In the CSV we can see that it was created, and after one day it no longer exists:
$ oc describe csv kubevirt-hyperconverged-operator.v2.2.0
- lastTransitionTime: "2019-12-04T20:59:51Z"
    lastUpdateTime: "2019-12-04T20:59:51Z"
    message: install strategy completed with no errors
    phase: Succeeded
    reason: InstallSucceeded
  - lastTransitionTime: "2019-12-05T11:10:32Z"
    lastUpdateTime: "2019-12-05T11:10:32Z"
    message: requirements no longer met
    phase: Failed
    reason: RequirementsNotMet
  - lastTransitionTime: "2019-12-05T11:10:36Z"
    lastUpdateTime: "2019-12-05T11:10:36Z"
    message: requirements not met
    phase: Pending
    reason: RequirementsNotMet
  lastTransitionTime: "2019-12-05T11:10:36Z"
  lastUpdateTime: "2019-12-05T11:10:39Z"
  message: one or more requirements couldn't be found
  phase: Pending
  reason: RequirementsNotMet
  requirementStatus:
  - group: apiextensions.k8s.io
    kind: CustomResourceDefinition
    message: CRD is not present
    name: cdis.cdi.kubevirt.io
    status: NotPresent
    version: v1beta1
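The requirementStatus block above is what OLM reports once a required CRD disappears. As a minimal sketch, the failing requirement can be pulled out of a CSV status dump like so; the JSON below is a trimmed, hypothetical stand-in for `oc get csv kubevirt-hyperconverged-operator.v2.2.0 -o json` output, mirroring the status shown above:

```shell
# Hypothetical sample, standing in for the real CSV status from the cluster.
cat > /tmp/csv-status.json <<'EOF'
{"status": {"requirementStatus": [
  {"group": "apiextensions.k8s.io", "kind": "CustomResourceDefinition",
   "name": "cdis.cdi.kubevirt.io", "status": "NotPresent",
   "message": "CRD is not present", "version": "v1beta1"}]}}
EOF
# Print every requirement that OLM does not report as Present.
python3 - <<'EOF'
import json
with open("/tmp/csv-status.json") as f:
    status = json.load(f)["status"]
for req in status["requirementStatus"]:
    if req["status"] != "Present":
        print(f'{req["kind"]}/{req["name"]}: {req["message"]}')
EOF
```

For the sample status above this prints `CustomResourceDefinition/cdis.cdi.kubevirt.io: CRD is not present`, matching the Pending/RequirementsNotMet phase OLM records in the CSV.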


Version-Release number of selected component (if applicable):
CDI:
OPERATOR_VERSION:          v2.2.0-3
      CONTROLLER_IMAGE:          registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-controller:v2.2.0-3
      IMPORTER_IMAGE:            registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-importer:v2.2.0-3
      CLONER_IMAGE:              registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-cloner:v2.2.0-3
      APISERVER_IMAGE:           registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-apiserver:v2.2.0-3
      UPLOAD_SERVER_IMAGE:       registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-uploadserver:v2.2.0-3
      UPLOAD_PROXY_IMAGE:        registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-uploadproxy:v2.2.0-3

HCO:
container-native-virtualization-hyperconverged-cluster-operator:v2.2.0-9

Cluster status:
$ oc get crd | grep kubevirt
hostpathprovisioners.hostpathprovisioner.kubevirt.io             2019-12-03T18:24:49Z
hyperconvergeds.hco.kubevirt.io                                  2019-12-03T18:24:49Z
kubevirtcommontemplatesbundles.kubevirt.io                       2019-12-03T18:24:49Z
kubevirtmetricsaggregations.kubevirt.io                          2019-12-03T18:24:49Z
kubevirtnodelabellerbundles.kubevirt.io                          2019-12-03T18:24:49Z
kubevirts.kubevirt.io                                            2019-12-03T18:24:49Z
kubevirttemplatevalidators.kubevirt.io                           2019-12-03T18:24:49Z
networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io   2019-12-03T18:24:49Z
nodemaintenances.kubevirt.io                                     2019-12-03T18:24:49Z
v2vvmwares.kubevirt.io                                           2019-12-03T18:24:49Z
virtualmachineinstancemigrations.kubevirt.io                     2019-12-04T04:09:22Z
virtualmachineinstancepresets.kubevirt.io                        2019-12-04T04:09:22Z
virtualmachineinstancereplicasets.kubevirt.io                    2019-12-04T04:09:22Z
virtualmachineinstances.kubevirt.io                              2019-12-04T04:09:22Z
virtualmachines.kubevirt.io                                      2019-12-04T04:09:22Z

$ oc logs  hco-operator-db6966d6-7wt96  -n openshift-cnv            
{"level":"info","ts":1575919283.594882,"logger":"cmd","msg":"Go Version: go1.12.8"}
{"level":"info","ts":1575919283.594966,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1575919283.5949717,"logger":"cmd","msg":"Version of operator-sdk: v0.10.0+git"}
{"level":"info","ts":1575919283.5957468,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1575919283.8036225,"logger":"leader","msg":"Found existing lock with my name. I was likely restarted."}
{"level":"info","ts":1575919283.803672,"logger":"leader","msg":"Continuing as the leader."}
{"level":"info","ts":1575919283.9728885,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1575919283.9732366,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"hyperconverged-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1575919283.9735765,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"hyperconverged-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1575919283.9737594,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"hyperconverged-controller","source":"kind source: /, Kind="}
{"level":"error","ts":1575919284.1081762,"logger":"kubebuilder.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"CDI.cdi.kubevirt.io","error":"no matches for kind \"CDI\" in version \"cdi.kubevirt.io/v1alpha1\"","stacktrace":"github.com/kubevirt/hyperconverged-cluster-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/kubevirt/hyperconverged-cluster-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/kubevirt/hyperconverged-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start\n\t/go/src/github.com/kubevirt/hyperconverged-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:89\ngithub.com/kubevirt/hyperconverged-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Watch\n\t/go/src/github.com/kubevirt/hyperconverged-cluster-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:122\ngithub.com/kubevirt/hyperconverged-cluster-operator/pkg/controller/hyperconverged.add\n\t/go/src/github.com/kubevirt/hyperconverged-cluster-operator/pkg/controller/hyperconverged/hyperconverged_controller.go:96\ngithub.com/kubevirt/hyperconverged-cluster-operator/pkg/controller/hyperconverged.Add\n\t/go/src/github.com/kubevirt/hyperconverged-cluster-operator/pkg/controller/hyperconverged/hyperconverged_controller.go:64\ngithub.com/kubevirt/hyperconverged-cluster-operator/pkg/controller.AddToManager\n\t/go/src/github.com/kubevirt/hyperconverged-cluster-operator/pkg/controller/controller.go:13\nmain.main\n\t/go/src/github.com/kubevirt/hyperconverged-cluster-operator/cmd/hyperconverged-cluster-operator/main.go:138\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:200"}
{"level":"error","ts":1575919284.1083436,"logger":"cmd","msg":"","error":"no matches for kind \"CDI\" in version \"cdi.kubevirt.io/v1alpha1\"","stacktrace":"github.com/kubevirt/hyperconverged-cluster-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/kubevirt/hyperconverged-cluster-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nmain.main\n\t/go/src/github.com/kubevirt/hyperconverged-cluster-operator/cmd/hyperconverged-cluster-operator/main.go:139\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:200"}

$ oc logs cdi-operator-599c5ddb9f-m82lm -n openshift-cnv                            
{"level":"info","ts":1575919174.1708534,"logger":"cmd","msg":"Go Version: go1.12.8"}
{"level":"info","ts":1575919174.1709845,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1575919174.3811693,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1575919174.3822324,"logger":"cdi-operator","msg":"","VARS":"{OperatorVersion:v2.2.0-3 ControllerImage:registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-controller:v2.2.0-3 DeployClusterResources:true ImporterImage:registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-importer:v2.2.0-3 ClonerImage:registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-cloner:v2.2.0-3 APIServerImage:registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-apiserver:v2.2.0-3 UploadProxyImage:registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-uploadproxy:v2.2.0-3 UploadServerImage:registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-uploadserver:v2.2.0-3 Verbosity:1 PullPolicy:IfNotPresent Namespace:openshift-cnv}"}
{"level":"info","ts":1575919174.382469,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"cdi-operator-controller","source":"kind source: /, Kind="}
{"level":"error","ts":1575919174.3825982,"logger":"kubebuilder.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"CDI.cdi.kubevirt.io","error":"no matches for kind \"CDI\" in version \"cdi.kubevirt.io/v1alpha1\"","stacktrace":"kubevirt.io/containerized-data-importer/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/kubevirt.io/containerized-data-importer/vendor/github.com/go-logr/zapr/zapr.go:128\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start\n\t/go/src/kubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:89\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Watch\n\t/go/src/kubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:122\nkubevirt.io/containerized-data-importer/pkg/operator/controller.(*ReconcileCDI).watch\n\t/go/src/kubevirt.io/containerized-data-importer/pkg/operator/controller/controller.go:612\nkubevirt.io/containerized-data-importer/pkg/operator/controller.(*ReconcileCDI).add\n\t/go/src/kubevirt.io/containerized-data-importer/pkg/operator/controller/controller.go:603\nkubevirt.io/containerized-data-importer/pkg/operator/controller.Add\n\t/go/src/kubevirt.io/containerized-data-importer/pkg/operator/controller/controller.go:75\nmain.main\n\t/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-operator/operator.go:103\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:200"}
{"level":"error","ts":1575919174.3827302,"logger":"cmd","msg":"","error":"no matches for kind \"CDI\" in version \"cdi.kubevirt.io/v1alpha1\"","stacktrace":"kubevirt.io/containerized-data-importer/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/kubevirt.io/containerized-data-importer/vendor/github.com/go-logr/zapr/zapr.go:128\nmain.main\n\t/go/src/kubevirt.io/containerized-data-importer/cmd/cdi-operator/operator.go:104\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:200"}

I tried to collect must-gather logs, but it failed:

$ oc adm must-gather --image=registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-cnv-must-gather-rhel8:v2.2.0-6 --dest-dir=/home/cnv-qe-jenkins/
[must-gather      ] OUT Using must-gather plugin-in image: registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-cnv-must-gather-rhel8:v2.2.0-6
[must-gather      ] OUT namespace/openshift-must-gather-j4hqq created
[must-gather      ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-9ljv8 created
[must-gather      ] OUT pod for plug-in image registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-cnv-must-gather-rhel8:v2.2.0-6 created
[must-gather-h75c8] POD WARNING: openshift-must-gather has been DEPRECATED. Use `oc adm inspect` instead.
[must-gather-h75c8] POD error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 'oc get resource/<resource_name>' instead of 'oc get resource resource/<resource_name>'
[must-gather-h75c8] POD WARNING: openshift-must-gather has been DEPRECATED. Use `oc adm inspect` instead.
[must-gather-h75c8] POD error: the server doesn't have a resource type "inspect"
[... the two WARNING/error pairs above repeat many more times in the log; trimmed for readability ...]
[must-gather-h75c8] OUT gather logs unavailable: unexpected EOF                                             
[must-gather-h75c8] OUT waiting for gather to complete
[must-gather-h75c8] OUT gather never finished: timed out waiting for the condition
[must-gather      ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-9ljv8 deleted
[must-gather      ] OUT namespace/openshift-must-gather-j4hqq deleted
error: gather never finished for pod must-gather-h75c8: timed out waiting for the condition

I will keep this bug updated if I see this issue again on other setups.

Comment 1 Israel Pinto 2019-12-10 06:52:31 UTC
must-gather BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1781044

Comment 2 Ryan Hallisey 2019-12-10 13:51:40 UTC
The HCO doesn't manage any CRDs; it only watches the HCO CR and creates and watches the component CRs. OLM is responsible for creating and maintaining the CDI CRD.

If you're able to reproduce, can you post the CSV and OLM's logs? If the CDI CRD is in the CSV, then I think this should be re-targeted to OLM.
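For context on the eventual fix (PR 393, linked above, "Don't create problematic ownerReferences for CDI and NetworkAddonsConfig"): under Kubernetes garbage-collection rules, a cluster-scoped object may only name cluster-scoped owners, and if it carries an ownerReference to a namespaced resource the garbage collector can conclude the owner does not exist and delete the dependent. A hedged diagnostic sketch — the manifest below is a hypothetical stand-in for `oc get crd cdis.cdi.kubevirt.io -o yaml` on an affected cluster:

```shell
# Hypothetical metadata, standing in for the real CRD object on the cluster.
cat > /tmp/crd-meta.yaml <<'EOF'
metadata:
  name: cdis.cdi.kubevirt.io
  ownerReferences:
  - apiVersion: hco.kubevirt.io/v1alpha1
    kind: HyperConverged
    name: hyperconverged-cluster
EOF
# A cluster-scoped CRD should not list any ownerReferences to a namespaced
# owner; if it does, the garbage collector may treat the owner as missing
# and delete the CRD (and with it, every CDI custom resource).
if grep -q 'ownerReferences:' /tmp/crd-meta.yaml; then
  echo "WARNING: cluster-scoped CRD carries ownerReferences; GC may delete it"
fi
```

On a healthy cluster the CRD's metadata would carry no ownerReferences at all, so the check above would print nothing.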

Comment 5 Israel Pinto 2019-12-21 19:09:48 UTC
We saw it again on another setup (bare metal).

ConfigMaps and CRDs have been deleted:

$ oc get storageclass
NAME                             PROVISIONER                        AGE
hostpath-provisioner (default)   kubevirt.io/hostpath-provisioner   10d
[root@f04-h07-000-1029u ~]# oc get dv -n openshift-cnv  
error: the server doesn't have a resource type "dv"
[root@f04-h07-000-1029u ~]# oc get crd -n openshift-cnv  
NAME                                                             CREATED AT
alertmanagers.monitoring.coreos.com                              2019-12-05T19:33:17Z
apiservers.config.openshift.io                                   2019-12-05T19:23:57Z
authentications.config.openshift.io                              2019-12-05T19:23:58Z
authentications.operator.openshift.io                            2019-12-05T19:24:32Z
baremetalhosts.metal3.io                                         2019-12-05T19:24:51Z
builds.config.openshift.io                                       2019-12-05T19:23:58Z
catalogsourceconfigs.operators.coreos.com                        2019-12-05T19:24:30Z
catalogsources.operators.coreos.com                              2019-12-05T19:24:38Z
cdis.cdi.kubevirt.io                                             2019-12-07T05:57:58Z
clusterautoscalers.autoscaling.openshift.io                      2019-12-05T19:24:30Z
clusternetworks.network.openshift.io                             2019-12-05T19:25:11Z
clusteroperators.config.openshift.io                             2019-12-05T19:23:56Z
clusterresourcequotas.quota.openshift.io                         2019-12-05T19:23:57Z
clusterserviceversions.operators.coreos.com                      2019-12-05T19:24:36Z
clusterversions.config.openshift.io                              2019-12-05T19:23:56Z
configs.imageregistry.operator.openshift.io                      2019-12-05T19:24:29Z
configs.samples.operator.openshift.io                            2019-12-05T19:24:29Z
consoleclidownloads.console.openshift.io                         2019-12-05T19:24:29Z
consoleexternalloglinks.console.openshift.io                     2019-12-05T19:24:29Z
consolelinks.console.openshift.io                                2019-12-05T19:24:29Z
consolenotifications.console.openshift.io                        2019-12-05T19:24:29Z
consoles.config.openshift.io                                     2019-12-05T19:23:58Z
consoles.operator.openshift.io                                   2019-12-05T19:24:29Z
consoleyamlsamples.console.openshift.io                          2019-12-05T19:24:29Z
containerruntimeconfigs.machineconfiguration.openshift.io        2019-12-05T19:28:36Z
controllerconfigs.machineconfiguration.openshift.io              2019-12-05T19:28:33Z
credentialsrequests.cloudcredential.openshift.io                 2019-12-05T19:24:01Z
dnses.config.openshift.io                                        2019-12-05T19:23:58Z
dnses.operator.openshift.io                                      2019-12-05T19:24:31Z
dnsrecords.ingress.operator.openshift.io                         2019-12-05T19:24:31Z
egressnetworkpolicies.network.openshift.io                       2019-12-05T19:25:11Z
featuregates.config.openshift.io                                 2019-12-05T19:23:58Z
hostpathprovisioners.hostpathprovisioner.kubevirt.io             2019-12-09T14:31:41Z
hostsubnets.network.openshift.io                                 2019-12-05T19:25:11Z
hyperconvergeds.hco.kubevirt.io                                  2019-12-07T05:57:58Z
imagecontentsourcepolicies.operator.openshift.io                 2019-12-05T19:23:59Z
images.config.openshift.io                                       2019-12-05T19:23:59Z
infrastructures.config.openshift.io                              2019-12-05T19:23:59Z
ingresscontrollers.operator.openshift.io                         2019-12-05T19:24:01Z
ingresses.config.openshift.io                                    2019-12-05T19:23:59Z
installplans.operators.coreos.com                                2019-12-05T19:24:36Z
kubeapiservers.operator.openshift.io                             2019-12-05T19:24:30Z
kubecontrollermanagers.operator.openshift.io                     2019-12-05T19:24:30Z
kubeletconfigs.machineconfiguration.openshift.io                 2019-12-05T19:28:35Z
kubeschedulers.operator.openshift.io                             2019-12-05T19:24:31Z
kubevirtcommontemplatesbundles.kubevirt.io                       2019-12-07T05:57:58Z
kubevirtmetricsaggregations.kubevirt.io                          2019-12-07T05:57:58Z
kubevirtnodelabellerbundles.kubevirt.io                          2019-12-07T05:57:58Z
kubevirts.kubevirt.io                                            2019-12-07T05:57:58Z
kubevirttemplatevalidators.kubevirt.io                           2019-12-07T05:57:58Z
machineautoscalers.autoscaling.openshift.io                      2019-12-05T19:24:33Z
machineconfigpools.machineconfiguration.openshift.io             2019-12-05T19:28:34Z
machineconfigs.machineconfiguration.openshift.io                 2019-12-05T19:28:32Z
machinehealthchecks.machine.openshift.io                         2019-12-05T19:24:51Z
machines.machine.openshift.io                                    2019-12-05T19:24:51Z
machinesets.machine.openshift.io                                 2019-12-05T19:24:51Z
mcoconfigs.machineconfiguration.openshift.io                     2019-12-05T19:24:35Z
netnamespaces.network.openshift.io                               2019-12-05T19:25:11Z
network-attachment-definitions.k8s.cni.cncf.io                   2019-12-05T19:25:05Z
networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io   2019-12-07T05:57:58Z
networks.config.openshift.io                                     2019-12-05T19:23:59Z
networks.operator.openshift.io                                   2019-12-05T19:24:01Z
nodemaintenances.kubevirt.io                                     2019-12-07T05:57:58Z
nodenetworkconfigurationpolicies.nmstate.io                      2019-12-21T13:28:18Z
nodenetworkstates.nmstate.io                                     2019-12-21T13:28:17Z
oauths.config.openshift.io                                       2019-12-05T19:24:00Z
openshiftapiservers.operator.openshift.io                        2019-12-05T19:24:30Z
openshiftcontrollermanagers.operator.openshift.io                2019-12-05T19:24:32Z
operatorgroups.operators.coreos.com                              2019-12-05T19:24:39Z
operatorhubs.config.openshift.io                                 2019-12-05T19:23:57Z
operatorpkis.network.operator.openshift.io                       2019-12-05T19:24:42Z
operatorsources.operators.coreos.com                             2019-12-05T19:24:32Z
podmonitors.monitoring.coreos.com                                2019-12-05T19:33:17Z
projects.config.openshift.io                                     2019-12-05T19:24:00Z
prometheuses.monitoring.coreos.com                               2019-12-05T19:33:17Z
prometheusrules.monitoring.coreos.com                            2019-12-05T19:33:17Z
proxies.config.openshift.io                                      2019-12-05T19:23:57Z
rolebindingrestrictions.authorization.openshift.io               2019-12-05T19:23:56Z
schedulers.config.openshift.io                                   2019-12-05T19:24:00Z
securitycontextconstraints.security.openshift.io                 2019-12-05T19:23:57Z
servicecas.operator.openshift.io                                 2019-12-05T19:24:32Z
servicecatalogapiservers.operator.openshift.io                   2019-12-05T19:24:30Z
servicecatalogcontrollermanagers.operator.openshift.io           2019-12-05T19:24:31Z
servicemonitors.monitoring.coreos.com                            2019-12-05T19:33:17Z
subscriptions.operators.coreos.com                               2019-12-05T19:24:37Z
tuneds.tuned.openshift.io                                        2019-12-05T19:24:31Z
v2vvmwares.kubevirt.io                                           2019-12-07T05:57:58Z
virtualmachines.kubevirt.io                                      2019-12-10T18:17:38Z


oc get cm -n openshift-cnv
NAME                                               DATA   AGE
cdi-operator-leader-election-helper                0      10d
cluster-network-addons-operator-lock               0      10d
cluster-networks-addons-operator-applied-cluster   1      3h57m
hostpath-provisioner-operator-lock                 0      10d
hyperconverged-cluster-operator-lock               0      28m
kubemacpool-election                               0      10d
kubemacpool-mac-range-config                       2      3h57m
kubemacpool-vm-configmap                           0      10d
kubevirt-config                                    2      3h57m
kubevirt-cpu-plugin-configmap                      1      3h57m
kubevirt-install-strategy-5w4xz                    1      10d
kubevirt-ssp-operator-lock                         0      10d
kubevirt-storage-class-defaults                    4      3h57m
nmstate-config                                     2      3h57m
node-maintenance-operator-lock                     0      10d
v2v-vmware                                         3      3h57m


# oc logs cdi-operator-677b7c4744-7q7qq -n openshift-cnv
{"level":"info","ts":1576947305.2494988,"logger":"cmd","msg":"Go Version: go1.12.8"}
{"level":"info","ts":1576947305.2495549,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1576947305.3441696,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1576947305.3445733,"logger":"cdi-operator","msg":"","VARS":"{OperatorVersion:v2.2.0-3 ControllerImage:registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-controller:v2.2.0-3 DeployClusterResources:true ImporterImage:registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-importer:v2.2.0-3 ClonerImage:registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-cloner:v2.2.0-3 APIServerImage:registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-apiserver:v2.2.0-3 UploadProxyImage:registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-uploadproxy:v2.2.0-3 UploadServerImage:registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-uploadserver:v2.2.0-3 Verbosity:1 PullPolicy:IfNotPresent Namespace:openshift-cnv}"}
{"level":"info","ts":1576947305.3446798,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"cdi-operator-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1576947305.3449297,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"cdi-operator-controller","source":"kind source: rbac.authorization.k8s.io/v1, Kind=ClusterRole"}
{"level":"info","ts":1576947305.3449984,"logger":"cdi-operator","msg":"Watching","type":"*v1.ClusterRole"}
{"level":"info","ts":1576947305.3450089,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"cdi-operator-controller","source":"kind source: rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"}
{"level":"info","ts":1576947305.3450952,"logger":"cdi-operator","msg":"Watching","type":"*v1.ClusterRoleBinding"}
{"level":"info","ts":1576947305.3451023,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"cdi-operator-controller","source":"kind source: apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition"}
{"level":"info","ts":1576947305.3451588,"logger":"cdi-operator","msg":"Watching","type":"*v1beta1.CustomResourceDefinition"}
{"level":"info","ts":1576947305.3451655,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"cdi-operator-controller","source":"kind source: /v1, Kind=ServiceAccount"}
{"level":"info","ts":1576947305.3452399,"logger":"cdi-operator","msg":"Watching","type":"*v1.ServiceAccount"}
{"level":"info","ts":1576947305.3452458,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"cdi-operator-controller","source":"kind source: apps/v1, Kind=Deployment"}
{"level":"info","ts":1576947305.3452995,"logger":"cdi-operator","msg":"Watching","type":"*v1.Deployment"}
{"level":"info","ts":1576947305.3453043,"logger":"cdi-operator","msg":"NOT Watching","type":"*v1.ConfigMap"}
{"level":"info","ts":1576947305.345308,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"cdi-operator-controller","source":"kind source: /v1, Kind=Service"}
{"level":"info","ts":1576947305.345359,"logger":"cdi-operator","msg":"Watching","type":"*v1.Service"}
{"level":"info","ts":1576947305.3453653,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"cdi-operator-controller","source":"kind source: rbac.authorization.k8s.io/v1, Kind=RoleBinding"}
{"level":"info","ts":1576947305.3454223,"logger":"cdi-operator","msg":"Watching","type":"*v1.RoleBinding"}
{"level":"info","ts":1576947305.3454287,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"cdi-operator-controller","source":"kind source: rbac.authorization.k8s.io/v1, Kind=Role"}
{"level":"info","ts":1576947305.3454814,"logger":"cdi-operator","msg":"Watching","type":"*v1.Role"}
{"level":"info","ts":1576947305.345488,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"cdi-operator-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1576947305.3455396,"logger":"cdi-operator","msg":"Watching","type":"*v1.Route"}
{"level":"info","ts":1576947305.34555,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"cdi-operator-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1576947305.345616,"logger":"cmd","msg":"Starting the Manager."}
{"level":"info","ts":1576947322.8387024,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"cdi-operator-controller"}
{"level":"info","ts":1576947322.9389231,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"cdi-operator-controller","worker count":1}
{"level":"info","ts":1576947322.939124,"logger":"cdi-operator","msg":"Reconciling CDI","Request.Namespace":"","Request.Name":"cdi-hyperconverged-cluster"}
{"level":"info","ts":1576947323.2394652,"logger":"cdi-operator","msg":"Reconciling to error state, no configmap","Request.Namespace":"","Request.Name":"cdi-hyperconverged-cluster"}
{"level":"info","ts":1576947323.2443163,"logger":"cdi-operator","msg":"Reconciling CDI","Request.Namespace":"","Request.Name":"cdi-hyperconverged-cluster"}
{"level":"info","ts":1576947323.2443535,"logger":"cdi-operator","msg":"Reconciling to error state, no configmap","Request.Namespace":"","Request.Name":"cdi-hyperconverged-cluster"}


$ oc describe cdi.cdi.kubevirt.io/cdi-hyperconverged-cluster -n openshift-cnv
Name:         cdi-hyperconverged-cluster
Namespace:    
Labels:       app=hyperconverged-cluster
Annotations:  <none>
API Version:  cdi.kubevirt.io/v1alpha1
Kind:         CDI
Metadata:
  Creation Timestamp:  2019-12-21T13:28:10Z
  Generation:          14
  Owner References:
    API Version:           hco.kubevirt.io/v1alpha1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  HyperConverged
    Name:                  hyperconverged-cluster
    UID:                   0bdc57c2-f5f8-40c7-ab8b-b0bb71b777fd
  Resource Version:        18499642
  Self Link:               /apis/cdi.kubevirt.io/v1alpha1/cdis/cdi-hyperconverged-cluster
  UID:                     9e10d734-2a16-4dcb-a977-4dbf96024875
Spec:
Status:
  Conditions:
    Last Heartbeat Time:   2019-12-21T16:55:23Z
    Last Transition Time:  2019-12-21T13:28:16Z
    Status:                False
    Type:                  Available
    Last Heartbeat Time:   2019-12-21T16:55:23Z
    Last Transition Time:  2019-12-21T13:28:16Z
    Status:                False
    Type:                  Progressing
    Last Heartbeat Time:   2019-12-21T16:55:23Z
    Last Transition Time:  2019-12-21T13:28:16Z
    Message:               Reconciling to error state, no configmap
    Reason:                ConfigError
    Status:                True
    Type:                  Degraded
  Phase:                   Error
Events:                    <none>

Comment 6 Israel Pinto 2019-12-21 19:11:27 UTC
Created attachment 1647070 [details]
hco-operator log

Comment 8 Oren Cohen 2019-12-25 17:13:11 UTC
The deletion of resources and CRDs is caused by cluster-scoped resources being owned by namespace-scoped resources.
This is not allowed, and the Kubernetes garbage collector will eventually delete such resources (including all instances of the affected custom resource).

For example ("=>" means "owns"):

hyperconverged-cluster (namespace: openshift-cnv, kind: HyperConverged) => cdi-hyperconverged-cluster (cluster scope, kind CDI) => datavolumes.cdi.kubevirt.io (cluster scoped - CRD)
hyperconverged-cluster (namespace: openshift-cnv, kind: HyperConverged) => cdi-hyperconverged-cluster (cluster scope, kind CDI) => cdiconfigs.cdi.kubevirt.io (cluster scoped - CRD)

kubevirt-hyperconverged-cluster (namespace: openshift-cnv, kind: KubeVirt) => virtualmachines.kubevirt.io (cluster-scope - CRD)
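The failure mode in these chains can be sketched in a few lines. This is a simplified model, not actual Kubernetes code (the function and data layout are illustrative): an ownerReference is resolved in the dependent's own scope, so a cluster-scoped dependent can never resolve a namespace-scoped owner, and the garbage collector eventually treats the dependent as orphaned and deletes it.

```python
def find_orphaned(objects):
    """Return names of objects whose owner cannot be resolved in their scope.

    Each object is a dict with "name", "namespace" (None = cluster-scoped),
    and "owners" (names referenced by its ownerReferences).
    """
    by_name = {o["name"]: o for o in objects}
    orphaned = []
    for obj in objects:
        for owner_name in obj.get("owners", []):
            owner = by_name.get(owner_name)
            # A cluster-scoped dependent looks its owner up at cluster scope,
            # so a namespaced owner is unresolvable -> dependent gets collected.
            if owner is None or (
                obj["namespace"] is None and owner["namespace"] is not None
            ):
                orphaned.append(obj["name"])
    return orphaned


objects = [
    # HyperConverged CR lives in openshift-cnv (namespace-scoped).
    {"name": "hyperconverged-cluster", "namespace": "openshift-cnv", "owners": []},
    # CDI CR is cluster-scoped but lists the namespaced HCO CR as its owner.
    {"name": "cdi-hyperconverged-cluster", "namespace": None,
     "owners": ["hyperconverged-cluster"]},
]
print(find_orphaned(objects))  # ['cdi-hyperconverged-cluster']
```

With the ownerReference removed (or the owner made cluster-scoped, as proposed below in the thread), the same check returns no orphans.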


-------------------------------

We think that hyperconverged-cluster and kubevirt-hyperconverged-cluster should be cluster-scoped.

Comment 9 Dan Kenigsberg 2019-12-26 10:33:28 UTC
> We think that hyperconverged-cluster and kubevirt-hyperconverged-cluster should be cluster-scoped.

This is one option. Another option is that hyperconverged-cluster is no longer listed as an owner of the CRD. I don't know which option is better, but I do know that it is quite difficult to move HCO out of its namespace during upgrade.

Comment 10 Dan Kenigsberg 2019-12-26 11:29:57 UTC
For the second option, please see https://github.com/kubevirt/hyperconverged-cluster-operator/pull/122 as a reference of how to remove cross-namespace objects.

Comment 11 Fabian Deutsch 2019-12-27 08:51:07 UTC
> We think that hyperconverged-cluster and kubevirt-hyperconverged-cluster should be cluster-scoped.

Here I wonder if we will get into trouble with OLM. I'm not sure it easily supports non-namespaced objects (in other words, I think I recall that it does not).

Another option to work around this issue for the time being is to create a cluster-scoped resource (i.e. HCOOwner) to act as the owner of all HCO resources.
Then there would be no ownerReference between the HCO CR (namespaced) and HCOOwner (non-namespaced).

Comment 13 Ryan Hallisey 2020-01-02 12:16:24 UTC
Does CDI's CR need to be globally scoped?

Comment 14 Adam Litke 2020-01-06 23:04:16 UTC
@Michael, can we fix this by scoping the CDI CR to the install namespace?

Comment 15 Adam Litke 2020-01-06 23:04:57 UTC
*** Bug 1786476 has been marked as a duplicate of this bug. ***

Comment 16 Michael Henriksen 2020-01-06 23:26:04 UTC
This is an HCO bug and is being addressed there.  See https://github.com/kubevirt/hyperconverged-cluster-operator/pull/393

And yes, @Ryan the CDI CRD should be cluster scoped.

Comment 17 Adam Litke 2020-01-07 19:31:46 UTC
*** Bug 1751193 has been marked as a duplicate of this bug. ***

Comment 18 Maya Rashish 2020-01-09 12:31:57 UTC
Fix merged and was backported to HCO release-2.2 branch

Comment 20 Israel Pinto 2020-01-15 09:44:20 UTC
Failed QA
We still see CDI resources disappear, this time in the kubevirt case.

Comment 30 Maya Rashish 2020-01-20 15:33:58 UTC
From Vasiliy: the kubevirt tests were run with --deploy-testing-infra; this flag installs the CDI CRD and some other components for the test setup.
When the tests are done, it deletes everything it installed, including the CDI CRD.

Comment 31 Adam Litke 2020-01-20 15:34:19 UTC
This bug has been moved back to ON_QA.  In comment #20 this was marked FailedQA but we have determined that the original issue has in fact been resolved and the issue experienced by Israel was caused by a bug in the kubevirt functional tests.  That issue will be resolved in a separate bug because it is not related to the original bug report covered here.

Comment 32 Ying Cui 2020-01-23 12:30:21 UTC
(In reply to Adam Litke from comment #31)
> This bug has been moved back to ON_QA.  In comment #20 this was marked
> FailedQA but we have determined that the original issue has in fact been
> resolved and the issue experienced by Israel was caused by a bug in the
> kubevirt functional tests.  That issue will be resolved in a separate bug
> because it is not related to the original bug report covered here.

Separate bug: Bug 1793087 - [Test-only] Fix deployOrWipeTestingInfrastrucure in BeforeTestSuitSetup for CDI

Comment 35 errata-xmlrpc 2020-01-30 16:27:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0307

