tracks https://github.com/openshift/cluster-image-registry-operator/pull/408
The issue can still be reproduced when the image-registry operator is set to Unmanaged while upgrading from 4.2.4 to the latest payload; I will check how things go with it set to Removed tomorrow:

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.4     True        True          3h20m   Unable to apply 4.2.0-0.nightly-2019-11-11-233305: the cluster operator image-registry has not yet successfully rolled out
That's OK; Unmanaged isn't a supported configuration for upgrades. The operator doesn't manage the operand, so the operand doesn't get upgraded.
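For reference, the management state being toggled in these tests lives on the cluster-scoped image registry operator config. A minimal sketch of how it might be set (assuming cluster-admin access to a live cluster; not runnable outside one):

```shell
# Set the image-registry operator to Unmanaged (the unsupported state above);
# substitute "Removed" or "Managed" as needed.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge \
  -p '{"spec":{"managementState":"Unmanaged"}}'

# Confirm the current state.
oc get configs.imageregistry.operator.openshift.io cluster \
  -o jsonpath='{.spec.managementState}'
```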
The upgrade still fails after setting the image-registry operator to Removed:

[wzheng@openshift-qe 4.2.zupgrade]$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.4     True        True          4h16m   Unable to apply 4.2.0-0.nightly-2019-11-11-233305: the cluster operator image-registry has not yet successfully rolled out

[wzheng@openshift-qe 4.2.zupgrade]$ oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h18m
cloud-credential                           4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h30m
cluster-autoscaler                         4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h24m
console                                    4.2.0-0.nightly-2019-11-11-233305   True        False         False      4h4m
dns                                        4.2.4                               True        False         False      5h29m
image-registry                             4.2.0-0.nightly-2019-11-11-233305   False       False         False      4h17m
ingress                                    4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h24m
insights                                   4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h30m
kube-apiserver                             4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h29m
kube-controller-manager                    4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h27m
kube-scheduler                             4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h27m
machine-api                                4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h30m
machine-config                             4.2.4                               True        False         False      5h29m
marketplace                                4.2.0-0.nightly-2019-11-11-233305   True        False         False      4h6m
monitoring                                 4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h19m
network                                    4.2.4                               True        False         False      5h29m
node-tuning                                4.2.0-0.nightly-2019-11-11-233305   True        False         False      4h7m
openshift-apiserver                        4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h26m
openshift-controller-manager               4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h28m
openshift-samples                          4.2.0-0.nightly-2019-11-11-233305   True        False         False      3h59m
operator-lifecycle-manager                 4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h29m
operator-lifecycle-manager-catalog         4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h29m
operator-lifecycle-manager-packageserver   4.2.0-0.nightly-2019-11-11-233305   True        False         False      4h6m
service-ca                                 4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h30m
service-catalog-apiserver                  4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h26m
service-catalog-controller-manager         4.2.0-0.nightly-2019-11-11-233305   True        False         False      5h26m
storage                                    4.2.0-0.nightly-2019-11-11-233305   True        False         False      4h7m
I cannot collect must-gather because of the error below:

[must-gather-4p64p] OUT host_service_logs/masters/kubelet_service.log
rsync: connection unexpectedly closed (100033472 bytes received so far) [receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(605) [receiver=3.0.9]
rsync: connection unexpectedly closed (145 bytes received so far) [generator]
rsync error: error in rsync protocol data stream (code 12) at io.c(605) [generator=3.0.9]
[must-gather-4p64p] OUT gather output not downloaded: exit status 12
[must-gather-4p64p] OUT Delete https://api.wzhengbug.qe.devcluster.openshift.com:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/must-gather-nxdzd: read tcp 10.66.140.240:49334->3.19.169.15:6443: read: connection timed out
[must-gather] OUT namespace/openshift-must-gather-qjx2d deleted
error: unable to download output from pod must-gather-4p64p: exit status 12
When describing the image-registry operator:

Status:
  Conditions:
    Last Transition Time:  2019-11-14T07:04:50Z
    Message:               The deployment does not exist
    Reason:                DeploymentNotFound
    Status:                False
    Type:                  Available
    Last Transition Time:  2019-11-14T09:01:14Z
    Message:               All registry resources are removed
    Reason:                Removed
    Status:                False
    Type:                  Progressing
    Last Transition Time:  2019-11-14T06:42:58Z
    Status:                False
    Type:                  Degraded
  Extension:               <nil>
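The conditions above can also be pulled out individually with jsonpath instead of scanning the full describe output. A sketch, assuming a live cluster with the same operator config object:

```shell
# Print the Available condition's status and message for the image registry,
# e.g. to see "DeploymentNotFound" without the rest of the describe output.
oc get configs.imageregistry.operator.openshift.io cluster \
  -o jsonpath='{range .status.conditions[?(@.type=="Available")]}{.status}{" "}{.reason}{": "}{.message}{"\n"}{end}'
```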
Verified on 4.2.0-0.nightly-2019-12-02-165545.
*** Bug 1769690 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:4093
*** Bug 1779196 has been marked as a duplicate of this bug. ***