Description of problem:
When upgrading from 4.1.7 to a 4.2 nightly, the upgrade reports the error "a required extension is not available to update".

Version-Release number of the following components:
$ oc version
Client Version: version.Info{Major:"4", Minor:"2+", GitVersion:"v4.2.0", GitCommit:"b3e2c8a2b", GitTreeState:"clean", BuildDate:"2019-07-24T02:29:03Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.4+3a25c9b", GitCommit:"3a25c9b", GitTreeState:"clean", BuildDate:"2019-07-18T00:10:31Z", GoVersion:"go1.11.6", Compiler:"gc", Platform:"linux/amd64"}

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.7     True        False         15h     Cluster version is 4.1.7

How reproducible:
Always

Steps to Reproduce:
1. Install a 4.1.7 cluster with OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE: quay.io/openshift-release-dev/ocp-release:4.1.7 and try to upgrade to a 4.2 nightly:

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.7     True        False         15h     Cluster version is 4.1.7

$ oc adm upgrade --to-image=registry.svc.ci.openshift.org/ocp/release:4.2.0-0.nightly-2019-07-24-000310 --force=true
Updating to release image registry.svc.ci.openshift.org/ocp/release:4.2.0-0.nightly-2019-07-24-000310

2. Check the upgrade progress:

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.7     True        True          13h     Unable to apply 4.2.0-0.nightly-2019-07-24-000310: a required extension is not available to update

$ oc get clusterversion -o yaml
apiVersion: v1
items:
- apiVersion: config.openshift.io/v1
  kind: ClusterVersion
....
  status:
    availableUpdates: null
    conditions:
    - lastTransitionTime: "2019-07-24T01:46:00Z"
      message: Done applying 4.1.7
      status: "True"
      type: Available
    - lastTransitionTime: "2019-07-25T00:56:20Z"
      message: 'Could not update proxy "cluster" (23 of 428): the server does not
        recognize this resource, check extension API servers'
      reason: UpdatePayloadResourceTypeMissing
      status: "True"
      type: Failing
    - lastTransitionTime: "2019-07-24T11:33:30Z"
      message: 'Unable to apply 4.2.0-0.nightly-2019-07-24-000310: a required extension
        is not available to update'
      reason: UpdatePayloadResourceTypeMissing
      status: "True"
      type: Progressing
    - lastTransitionTime: "2019-07-24T02:41:29Z"
      status: "True"
      type: RetrievedUpdates
    desired:
      force: true
      image: registry.svc.ci.openshift.org/ocp/release:4.2.0-0.nightly-2019-07-24-000310
      version: 4.2.0-0.nightly-2019-07-24-000310
.........

CVO logs:
I0724 23:47:43.696978       1 reflector.go:169] Listing and watching *v1.Proxy from github.com/openshift/client-go/config/informers/externalversions/factory.go:101
E0724 23:47:43.697968       1 reflector.go:134] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: Failed to list *v1.Proxy: the server could not find the requested resource (get proxies.config.openshift.io)

Actual results:
The error is reported and the upgrade cannot continue.

Expected results:
The upgrade should start without error.
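The Failing condition above carries the actionable detail (a missing extension API type, here the Proxy resource). As a rough illustration, not part of the original report, the reason of that condition can be pulled out of saved clusterversion YAML with awk; the sample below embeds the relevant condition lines so it runs standalone, but in a real cluster you would pipe `oc get clusterversion -o yaml` in instead:

```shell
# Illustrative sketch only: extract the Failing condition's reason from
# clusterversion YAML. The YAML fragment is embedded here as a sample;
# against a live cluster, pipe `oc get clusterversion -o yaml` instead.
yaml='- lastTransitionTime: "2019-07-25T00:56:20Z"
  message: Could not update proxy "cluster" (23 of 428)
  reason: UpdatePayloadResourceTypeMissing
  status: "True"
  type: Failing'
# Match the indented "reason:" key and print its value.
printf '%s\n' "$yaml" | awk '/^  reason:/ {print $2}'
```

This prints `UpdatePayloadResourceTypeMissing`, which matches the reason seen in both the Failing and Progressing conditions.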
Upgrading to a newer 4.2 nightly build, 4.2.0-0.nightly-2019-07-24-220922 (which includes PR https://github.com/openshift/cluster-config-operator/pull/72), also hits the same issue:

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.7     True        True          17h     Working towards 4.2.0-0.nightly-2019-07-24-220922: 5% complete
PR: https://github.com/openshift/cluster-config-operator/pull/77
Tested the upgrade from 4.1.8 to 4.2.0-0.nightly-2019-07-28-222114. The original issue described in this bug has been fixed; most operators upgraded successfully.

$ oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.2.0-0.nightly-2019-07-28-222114   True        False         False      7h12m
cloud-credential                           4.2.0-0.nightly-2019-07-28-222114   True        False         False      7h29m
cluster-autoscaler                         4.2.0-0.nightly-2019-07-28-222114   True        False         False      7h29m
console                                    4.2.0-0.nightly-2019-07-28-222114   True        False         False      7h19m
dns                                        4.1.8                               True        False         False      7h29m
image-registry                             4.2.0-0.nightly-2019-07-28-222114   True        False         False      4h41m
ingress                                    4.1.8                               True        False         False      7h22m
kube-apiserver                             4.2.0-0.nightly-2019-07-28-222114   True        False         False      7h28m
kube-controller-manager                    4.2.0-0.nightly-2019-07-28-222114   True        False         False      7h27m
kube-scheduler                             4.2.0-0.nightly-2019-07-28-222114   True        False         False      7h27m
machine-api                                4.2.0-0.nightly-2019-07-28-222114   True        False         False      7h29m
machine-config                             4.1.8                               True        False         False      7h28m
marketplace                                4.2.0-0.nightly-2019-07-28-222114   True        False         False      45m
monitoring                                 4.2.0-0.nightly-2019-07-28-222114   True        False         False      42m
network                                    4.1.8                               True        False         False      7h29m
node-tuning                                4.2.0-0.nightly-2019-07-28-222114   True        False         False      45m
openshift-apiserver                        4.2.0-0.nightly-2019-07-28-222114   True        False         False      4h38m
openshift-controller-manager               4.2.0-0.nightly-2019-07-28-222114   True        False         False      7h28m
openshift-samples                          4.2.0-0.nightly-2019-07-28-222114   True        False         False      45m
operator-lifecycle-manager                 4.2.0-0.nightly-2019-07-28-222114   True        False         False      7h28m
operator-lifecycle-manager-catalog         4.2.0-0.nightly-2019-07-28-222114   True        False         False      7h28m
operator-lifecycle-manager-packageserver   4.2.0-0.nightly-2019-07-28-222114   True        False         False      44m
service-ca                                 4.2.0-0.nightly-2019-07-28-222114   True        False         False      7h29m
service-catalog-apiserver                  4.2.0-0.nightly-2019-07-28-222114   True        False         False      7h26m
service-catalog-controller-manager         4.2.0-0.nightly-2019-07-28-222114   True        False         False      7h26m
storage                                    4.2.0-0.nightly-2019-07-28-222114   True        False         False      46m
support                                    4.2.0-0.nightly-2019-07-28-222114   True        False         False      45m

The operators that did not upgrade successfully are tracked in bug 1731323.

Verified on 4.2.0-0.nightly-2019-07-28-222114:

$ oc adm release info --commits "registry.svc.ci.openshift.org/ocp/release:4.2.0-0.nightly-2019-07-28-222114" | grep cluster-config
cluster-config-operator   https://github.com/openshift/cluster-config-operator   7d44f45dcc2eab96e95628bd7459e4776260b01f

[yapei@dhcp-141-192 cluster-config-operator]$ git log 7d44f45dcc2eab96e95628bd7459e4776260b01f | grep '#77'
    Merge pull request #77 from stlaz/update_crds
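A quick way to spot which operators are still on the old version is to filter the `oc get co` output on the VERSION column. This is a hypothetical helper, not from the verification above; it embeds a few sample rows so it runs standalone, whereas in practice you would pipe live `oc get co --no-headers` output in:

```shell
# Sketch (assumed workflow): print operators whose VERSION column is not
# yet on a 4.2 build. A saved sample of `oc get co` rows is embedded
# here; pipe live output from the cluster in real use.
co_output='dns            4.1.8                               True   False   False   7h29m
ingress        4.1.8                               True   False   False   7h22m
machine-config 4.1.8                               True   False   False   7h28m
monitoring     4.2.0-0.nightly-2019-07-28-222114   True   False   False   42m'
# Column 2 is VERSION; keep rows where it does not start with "4.2".
printf '%s\n' "$co_output" | awk '$2 !~ /^4\.2/ {print $1}'
```

On the sample rows this prints dns, ingress, and machine-config, matching the operators tracked in bug 1731323.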
*** Bug 1732992 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922