Description of problem:
Using v1 CRDs in an SLO's manifests causes the CVO to panic.

Version:
4.5

How reproducible:
Always

Steps to Reproduce:
1. Include a v1 CRD in /manifests of an SLO
2. Try to start the CVO

Actual results:

E0418 07:39:47.803483       1 runtime.go:78] Observed a panic: &errors.errorString{s:"converting (v1.CustomResourceDefinition).Group to (v1beta1.CustomResourceDefinition).Group: Version not present in src"} (converting (v1.CustomResourceDefinition).Group to (v1beta1.CustomResourceDefinition).Group: Version not present in src)
goroutine 252 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x161e1c0, 0xc000cc61d0)
	/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x161e1c0, 0xc000cc61d0)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/openshift/cluster-version-operator/lib/resourceread.ReadCustomResourceDefinitionOrDie(0xc000cea000, 0x1484, 0x1500, 0xc000cfeb70, 0xc000d0bc20)
	/go/src/github.com/openshift/cluster-version-operator/lib/resourceread/apiext.go:28 +0x188
github.com/openshift/cluster-version-operator/lib/resourcebuilder.(*crdBuilder).Do(0xc0019c7410, 0x1ab95c0, 0xc000d277c0, 0xc0019c7410, 0xc0019c7410)
	/go/src/github.com/openshift/cluster-version-operator/lib/resourcebuilder/apiext.go:46 +0x44
github.com/openshift/cluster-version-operator/pkg/cvo.(*resourceBuilder).Apply(0xc00087aab0, 0x1ab95c0, 0xc000d277c0, 0xc0015cf4e0, 0x2, 0x16159c0, 0xc0019c7320)
	/go/src/github.com/openshift/cluster-version-operator/pkg/cvo/cvo.go:688 +0xc2
github.com/openshift/cluster-version-operator/pkg/payload.(*Task).Run(0xc000a911d0, 0x1ab95c0, 0xc000d277c0, 0xc000afbbc0, 0x17, 0x1a75040, 0xc00087aab0, 0x2, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-version-operator/pkg/payload/task.go:75 +0xaf
github.com/openshift/cluster-version-operator/pkg/cvo.(*SyncWorker).apply.func2(0x1ab95c0, 0xc000d277c0, 0xc000c6ac68, 0x16, 0x273, 0x2, 0x2)
	/go/src/github.com/openshift/cluster-version-operator/pkg/cvo/sync_worker.go:654 +0x3a9
github.com/openshift/cluster-version-operator/pkg/payload.RunGraph.func2(0xc000f21cc0, 0x1ab95c0, 0xc000d277c0, 0xc000727740, 0xc000ce5170, 0xc000f18000, 0xc0007277a0, 0x26)
	/go/src/github.com/openshift/cluster-version-operator/pkg/payload/task_graph.go:576 +0x2b1
created by github.com/openshift/cluster-version-operator/pkg/payload.RunGraph
	/go/src/github.com/openshift/cluster-version-operator/pkg/payload/task_graph.go:562 +0x26c
panic: converting (v1.CustomResourceDefinition).Group to (v1beta1.CustomResourceDefinition).Group: Version not present in src [recovered]
	panic: converting (v1.CustomResourceDefinition).Group to (v1beta1.CustomResourceDefinition).Group: Version not present in src
goroutine 252 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
panic(0x161e1c0, 0xc000cc61d0)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/openshift/cluster-version-operator/lib/resourceread.ReadCustomResourceDefinitionOrDie(0xc000cea000, 0x1484, 0x1500, 0xc000cfeb70, 0xc000d0bc20)
	/go/src/github.com/openshift/cluster-version-operator/lib/resourceread/apiext.go:28 +0x188
github.com/openshift/cluster-version-operator/lib/resourcebuilder.(*crdBuilder).Do(0xc0019c7410, 0x1ab95c0, 0xc000d277c0, 0xc0019c7410, 0xc0019c7410)
	/go/src/github.com/openshift/cluster-version-operator/lib/resourcebuilder/apiext.go:46 +0x44
github.com/openshift/cluster-version-operator/pkg/cvo.(*resourceBuilder).Apply(0xc00087aab0, 0x1ab95c0, 0xc000d277c0, 0xc0015cf4e0, 0x2, 0x16159c0, 0xc0019c7320)
	/go/src/github.com/openshift/cluster-version-operator/pkg/cvo/cvo.go:688 +0xc2
github.com/openshift/cluster-version-operator/pkg/payload.(*Task).Run(0xc000a911d0, 0x1ab95c0, 0xc000d277c0, 0xc000afbbc0, 0x17, 0x1a75040, 0xc00087aab0, 0x2, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-version-operator/pkg/payload/task.go:75 +0xaf
github.com/openshift/cluster-version-operator/pkg/cvo.(*SyncWorker).apply.func2(0x1ab95c0, 0xc000d277c0, 0xc000c6ac68, 0x16, 0x273, 0x2, 0x2)
	/go/src/github.com/openshift/cluster-version-operator/pkg/cvo/sync_worker.go:654 +0x3a9
github.com/openshift/cluster-version-operator/pkg/payload.RunGraph.func2(0xc000f21cc0, 0x1ab95c0, 0xc000d277c0, 0xc000727740, 0xc000ce5170, 0xc000f18000, 0xc0007277a0, 0x26)
	/go/src/github.com/openshift/cluster-version-operator/pkg/payload/task_graph.go:576 +0x2b1
created by github.com/openshift/cluster-version-operator/pkg/payload.RunGraph
	/go/src/github.com/openshift/cluster-version-operator/pkg/payload/task_graph.go:562 +0x26c

Expected results:
v1 CRDs are created
The failed e2e log is here: https://gcsweb-ci.svc.ci.openshift.org/gcs/origin-ci-test/pr-logs/pull/operator-framework_operator-lifecycle-manager/1453/pull-ci-operator-framework-operator-lifecycle-manager-master-e2e-aws-olm/5047/artifacts/e2e-aws-olm/installer/

OLM shipped a v1 CRD in 4.5, which the CVO should support. I will check the latest e2e job to verify the bug.
The issue has not occurred in the latest e2e job against PR 1453: https://storage.googleapis.com/origin-ci-test/pr-logs/pull/operator-framework_operator-lifecycle-manager/1453/pull-ci-operator-framework-operator-lifecycle-manager-master-e2e-aws-olm/5085/artifacts/e2e-aws-olm/installer/.openshift_install.log

time="2020-04-19T15:11:01Z" level=info msg="API v1.18.0-rc.1 up"
time="2020-04-19T15:11:01Z" level=info msg="Waiting up to 40m0s for bootstrapping to complete..."
time="2020-04-19T15:19:34Z" level=debug msg="Bootstrap status: complete"
time="2020-04-19T15:19:34Z" level=info msg="Destroying the bootstrap resources..."
...
time="2020-04-19T15:21:01Z" level=info msg="Waiting up to 30m0s for the cluster at https://api.ci-op-1yqlkmk7-0af52.origin-ci-int-aws.dev.rhcloud.com:6443 to initialize..."
time="2020-04-19T15:21:02Z" level=debug msg="Still waiting for the cluster to initialize: Working towards 0.0.1-2020-04-19-150125: 75% complete"
time="2020-04-19T15:21:28Z" level=debug msg="Still waiting for the cluster to initialize: Working towards 0.0.1-2020-04-19-150125: 76% complete"
time="2020-04-19T15:21:43Z" level=debug msg="Still waiting for the cluster to initialize: Working towards 0.0.1-2020-04-19-150125: 82% complete"
time="2020-04-19T15:21:58Z" level=debug msg="Still waiting for the cluster to initialize: Working towards 0.0.1-2020-04-19-150125: 83% complete"
time="2020-04-19T15:22:13Z" level=debug msg="Still waiting for the cluster to initialize: Working towards 0.0.1-2020-04-19-150125: 85% complete"
time="2020-04-19T15:23:13Z" level=debug msg="Still waiting for the cluster to initialize: Working towards 0.0.1-2020-04-19-150125: 86% complete, waiting on authentication, console, csi-snapshot-controller, image-registry, ingress, kube-storage-version-migrator, monitoring"
time="2020-04-19T15:26:28Z" level=debug msg="Still waiting for the cluster to initialize: Working towards 0.0.1-2020-04-19-150125: 86% complete, waiting on authentication, console, monitoring"
time="2020-04-19T15:30:13Z" level=debug msg="Still waiting for the cluster to initialize: Working towards 0.0.1-2020-04-19-150125: 87% complete, waiting on authentication"
time="2020-04-19T15:32:13Z" level=debug msg="Still waiting for the cluster to initialize: Working towards 0.0.1-2020-04-19-150125: 98% complete"
time="2020-04-19T15:32:28Z" level=debug msg="Cluster is initialized"
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409