I experienced the same issue on 4.11

# oc --kubeconfig cluster-deploys/ipi-dev-storage-34/auth/kubeconfig get co
NAME                       VERSION                         AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication             4.11.0-0.ci-2022-04-29-080325   True        False         False      11m
baremetal                  4.11.0-0.ci-2022-04-29-080325   True        False         False      30m
cloud-controller-manager   4.11.0-0.ci-2022-04-29-080325   True        False         False      36m
cloud-credential           4.11.0-0.ci-2022-04-29-080325   True        False         False      29m
cluster-autoscaler         4.11.0-0.ci-2022-04-29-080325   True        False         False      32m
config-operator            4.11.0-0.ci-2022-04-29-080325   True        False         False      33m
console                    4.11.0-0.ci-2022-04-29-080325   True        False         False      13m
csi-snapshot-controller    4.11.0-0.ci-2022-04-29-080325   True        False         False      31m
dns                        4.11.0-0.ci-2022-04-29-080325   True        False         False      29m
etcd                       4.11.0-0.ci-2022-04-29-080325   True        False         False      31m
image-registry                                             False       True          True       23m     Available: The deployment does not exist...
ingress                    4.11.0-0.ci-2022-04-29-080325   True        False         False      17m
...

# oc --kubeconfig cluster-deploys/ipi-dev-storage-34/auth/kubeconfig logs -n openshift-image-registry cluster-image-registry-operator-7d8d5cd787-6xbpg
Overwriting root TLS certificate authority trust store
I0506 00:30:28.261561       1 observer_polling.go:159] Starting file observer
...
I0506 00:35:22.293992       1 recorder_logging.go:44] &Event{ObjectMeta:{dummy.16ec5ce9bc4521e4 dummy 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetUpdated,Message:Updated DaemonSet.apps/node-ca -n openshift-image-registry because it changed,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-05-06 00:35:22.293912036 +0000 UTC m=+295.012638762,LastTimestamp:2022-05-06 00:35:22.293912036 +0000 UTC m=+295.012638762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
I0506 00:35:22.295398       1 generator.go:60] object *v1.DaemonSet, Namespace=openshift-image-registry, Name=node-ca updated:
W0506 00:35:23.521966       1 reflector.go:442] github.com/openshift/client-go/image/informers/externalversions/factory.go:101: watch of *v1.ImageStream ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 127; INTERNAL_ERROR") has prevented the request from succeeding
W0506 00:35:23.531006       1 reflector.go:442] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 75; INTERNAL_ERROR") has prevented the request from succeeding
E0506 00:35:24.785121       1 controller.go:373] unable to sync: unable to apply objects: failed to create object *v1.Secret, Namespace=openshift-image-registry, Name=image-registry-private-configuration: specified resource key credentials does not contain HMAC keys, requeuing
E0506 00:35:31.461876       1 controller.go:373] unable to sync: unable to apply objects: failed to create object *v1.Secret, Namespace=openshift-image-registry, Name=image-registry-private-configuration: specified resource key credentials does not contain HMAC keys, requeuing
I0506 00:35:33.895112       1 caconfig.go:75] unable to get the service name to add service-ca.crt
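The error string points at the COS resource key (service credential) the operator was given: it has no HMAC portion, so the operator cannot build image-registry-private-configuration. A rough way to confirm this from the cluster and from the IBM Cloud side is sketched below; the secret name installer-cloud-credentials and the <cos-instance>/<key-name> placeholders are assumptions for illustration, not taken from this cluster.

# Key names (not values) of the cloud credentials secret the operator consumes
$ oc -n openshift-image-registry describe secret installer-cloud-credentials

# List the service credentials (resource keys) on the COS instance and check
# whether the one in use contains a cos_hmac_keys section
$ ibmcloud resource service-keys --instance-name <cos-instance>
$ ibmcloud resource service-key <key-name>

# A credential with HMAC enabled should be creatable like this
$ ibmcloud resource service-key-create <key-name> Writer --instance-name <cos-instance> --parameters '{"HMAC": true}'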
Checked with ocp-release:4.10.9-x86_64, which hits the same error:

$ oc get co image-registry -o yaml
....
    message: 'Progressing: Unable to apply resources: unable to apply objects: failed to create object *v1.Secret, Namespace=openshift-image-registry, Name=image-registry-private-configuration: specified resource key credentials does not contain HMAC keys'
    reason: Error
    status: "True"
    type: Progressing

However, the nightly CI test record shows 4.10.9-x86_64 passing:
http://virt-openshift-05.lab.eng.nay.redhat.com/ci-logs/Flexy-install/93096/log
Sounds like the recent change to IBM COS permissions is likely the cause:
https://cloud.ibm.com/docs/overview?topic=overview-whatsnew

Tested adding the Admin role to the CIRO CR (https://github.com/openshift/cluster-image-registry-operator/pull/776), and things look to be working again. I tested on 4.11; we'll need to cherry-pick it back to 4.10 as well.

# oc --kubeconfig cluster-deploys/cjs-test-72/auth/kubeconfig get co
NAME                       VERSION                         AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication             4.11.0-0.ci-2022-05-09-061049   True        False         False      6m23s
baremetal                  4.11.0-0.ci-2022-05-09-061049   True        False         False      24m
cloud-controller-manager   4.11.0-0.ci-2022-05-09-061049   True        False         False      29m
cloud-credential           4.11.0-0.ci-2022-05-09-061049   True        False         False      23m
cluster-autoscaler         4.11.0-0.ci-2022-05-09-061049   True        False         False      23m
config-operator            4.11.0-0.ci-2022-05-09-061049   True        False         False      26m
console                    4.11.0-0.ci-2022-05-09-061049   True        False         False      12m
csi-snapshot-controller    4.11.0-0.ci-2022-05-09-061049   True        False         False      25m
dns                        4.11.0-0.ci-2022-05-09-061049   True        False         False      23m
etcd                       4.11.0-0.ci-2022-05-09-061049   True        False         False      23m
image-registry             4.11.0-0.ci-2022-05-09-061049   True        False         False      16m
ingress                    4.11.0-0.ci-2022-05-09-061049   True        False         False      14m
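For anyone verifying the carry on their own cluster, a minimal check that the updated CredentialsRequest was reconciled and the operator recovered could look like the following; the grep filter is only illustrative and the exact CredentialsRequest name may differ, so list them first.

# Find the registry CredentialsRequest and confirm the extra role is present in its provider spec
$ oc -n openshift-cloud-credential-operator get credentialsrequests | grep image-registry
$ oc -n openshift-cloud-credential-operator get credentialsrequest <name-from-above> -o yaml

# Confirm the operator and its operands have settled
$ oc get clusteroperator image-registry
$ oc -n openshift-image-registry get pods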
Tested https://github.com/openshift/cluster-image-registry-operator/pull/776 along with openshift/cluster-cloud-controller-manager-operator/pull/189: the image registry could be installed during installation, and images could be pushed to and pulled from the registry.
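For reference, a rough version of the push/pull check using the registry's default route; the project name test-registry, the busybox image, and the use of the kubeadmin token are placeholders, and --tls-verify=false is only needed for the self-signed route certificate.

# Expose the registry through the default route (if not already exposed)
$ oc patch configs.imageregistry.operator.openshift.io/cluster --type merge --patch '{"spec":{"defaultRoute":true}}'
$ HOST=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')

# Log in and push/pull a test image
$ oc new-project test-registry
$ podman login -u kubeadmin -p "$(oc whoami -t)" --tls-verify=false "$HOST"
$ podman pull docker.io/library/busybox
$ podman tag docker.io/library/busybox "$HOST/test-registry/busybox:latest"
$ podman push --tls-verify=false "$HOST/test-registry/busybox:latest"
$ podman pull --tls-verify=false "$HOST/test-registry/busybox:latest"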
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5069