Description of problem:
The registry pod cannot start up after changing managementState from Managed -> Removed -> Managed:

  Warning  Failed  4m (x24 over 9m)  kubelet, ip-10-0-175-144.ec2.internal  Error: Couldn't find key REGISTRY_STORAGE_S3_ACCESSKEY in Secret openshift-image-registry/image-registry-private-configuration

$ oc get pods
NAME                                               READY   STATUS                       RESTARTS   AGE
cluster-image-registry-operator-68cf5cbc58-qqltr   1/1     Running                      0          3h
image-registry-fc5df884-45lh5                      0/1     CreateContainerConfigError   0          9m
registry-ca-hostmapper-49dfh                       1/1     Running                      0          3h
registry-ca-hostmapper-7q99b                       1/1     Running                      0          3h

Events when describing the pod:

Events:
  Type     Reason     Age               From                                    Message
  ----     ------     ----              ----                                    -------
  Normal   Scheduled  9m                default-scheduler                       Successfully assigned openshift-image-registry/image-registry-fc5df884-45lh5 to ip-10-0-175-144.ec2.internal
  Normal   Pulled     6m (x13 over 9m)  kubelet, ip-10-0-175-144.ec2.internal   Container image "registry.svc.ci.openshift.org/openshift/origin-v4.0-20181128050751@sha256:c8c110b8733d0d352ddc5fe35ba9eeac913b7609c2c9c778586f2bb74f281681" already present on machine
  Warning  Failed     4m (x24 over 9m)  kubelet, ip-10-0-175-144.ec2.internal   Error: Couldn't find key REGISTRY_STORAGE_S3_ACCESSKEY in Secret openshift-image-registry/image-registry-private-configuration

Version-Release number of selected component (if applicable):
registry.svc.ci.openshift.org/openshift/origin-v4.0-20181128050751@sha256:c8c110b8733d0d352ddc5fe35ba9eeac913b7609c2c9c778586f2bb74f281681

How reproducible:
Always

Steps to Reproduce:
1. Change managementState from Managed -> Removed -> Managed in the imageregistries resource
2. Check the registry pod status

Actual results:
The pod cannot start up; it is stuck in CreateContainerConfigError.

Expected results:
The image registry pod should start up.

Additional info:
The image-registry pods were running well before the change.
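The reproduction steps above can be sketched with `oc` against a live cluster. This is a hypothetical sketch: the resource name (`imageregistries`) comes from the report, and the instance name `instance` is an assumption about this operator build.

```shell
# Hypothetical reproduction; "instance" as the CR name is an assumption.
# Disable the registry by setting managementState to Removed:
oc patch imageregistries instance --type merge \
  -p '{"spec":{"managementState":"Removed"}}'

# Wait for the registry deployment to be torn down, then re-enable it:
oc patch imageregistries instance --type merge \
  -p '{"spec":{"managementState":"Managed"}}'

# Check whether the registry pod comes back up or sits in
# CreateContainerConfigError:
oc get pods -n openshift-image-registry
```

These commands require cluster-admin access to an affected 4.0 cluster; they only toggle the operator's managementState and do not change the storage configuration itself.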
If the same steps are performed with the emptyDir registry storage backend, the image-registry pod does not appear at all after changing managementState from Removed back to Managed.
Looks like the secret/image-registry-private-configuration is being recreated but does not contain the AWS keys. Looking into this.
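One way to confirm what the recreated secret contains (a sketch; requires access to the affected cluster) is to list its data keys:

```shell
# List the data keys present in the recreated secret. On an affected cluster,
# REGISTRY_STORAGE_S3_ACCESSKEY / REGISTRY_STORAGE_S3_SECRETKEY are expected
# to be missing from the output.
oc get secret image-registry-private-configuration \
  -n openshift-image-registry \
  -o go-template='{{range $k, $v := .data}}{{$k}}{{"\n"}}{{end}}'
```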
https://github.com/openshift/cluster-image-registry-operator/pull/106 has merged, can you please check this again and see if it's still an issue?
Looks like this bug has been fixed in v0.9.1. Could you please move this bug to ON_QA?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758