The ingress operator exits with 'failed to get aws creds from secret /: secrets \"cloud-credentials\" not found' on vSphere if the cluster infrastructure config's status.platformStatus field is unset:

$ oc logs ingress-operator-5bcb978d8d-5957c
2019-07-30T10:27:32.387Z  INFO  operator  log/log.go:26  started zapr logger
2019-07-30T10:27:35.267Z  INFO  operator.entrypoint  ingress-operator/main.go:61  using operator namespace  {"namespace": "openshift-ingress-operator"}
2019-07-30T10:27:35.281Z  ERROR  operator.entrypoint  ingress-operator/main.go:104  failed to create DNS manager  {"error": "failed to get aws creds from secret /: secrets \"cloud-credentials\" not found"}

+++ This bug was initially created as a clone of Bug #1731323 +++
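The log above shows the operator reaching for AWS cloud credentials on a vSphere cluster, i.e. treating a missing status.platformStatus as AWS. A minimal Go sketch of the kind of guard that avoids this follows; the types and the dnsManagerType helper are simplified, hypothetical stand-ins (the real types live in github.com/openshift/api/config/v1), not the operator's actual fix:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the cluster infrastructure
// config types (the real ones live in github.com/openshift/api/config/v1).
type PlatformType string

const (
	AWSPlatformType     PlatformType = "AWS"
	VSpherePlatformType PlatformType = "VSphere"
)

type PlatformStatus struct {
	Type PlatformType
}

type InfrastructureStatus struct {
	// PlatformStatus can be nil, e.g. on clusters upgraded from a
	// release that predates the field.
	PlatformStatus *PlatformStatus
}

// dnsManagerType sketches the guard the operator needs: only attempt
// to read AWS cloud credentials when the platform is known to be AWS;
// a nil platformStatus or any non-AWS platform gets a no-op DNS
// manager instead of a hard failure at startup.
func dnsManagerType(status InfrastructureStatus) string {
	if status.PlatformStatus == nil || status.PlatformStatus.Type != AWSPlatformType {
		return "noop"
	}
	// Only here would the real operator read the cloud-credentials secret.
	return "aws"
}

func main() {
	vsphere := InfrastructureStatus{PlatformStatus: &PlatformStatus{Type: VSpherePlatformType}}
	fmt.Println(dnsManagerType(vsphere)) // noop

	unset := InfrastructureStatus{} // status.platformStatus missing entirely
	fmt.Println(dnsManagerType(unset)) // noop, rather than failing on AWS creds
}
```

With this shape of check, a vSphere cluster (or one whose platformStatus was never populated) starts with a no-op DNS manager instead of exiting with the "cloud-credentials not found" error.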
Verified with 4.2.0-0.nightly-2019-08-01-113533 and the issue has been fixed.

$ oc adm upgrade --to-image=registry.svc.ci.openshift.org/ocp/release:4.2.0-0.nightly-2019-08-01-113533 --force=true
$ oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.2.0-0.nightly-2019-08-01-113533   True        False         False      100m
cloud-credential                           4.2.0-0.nightly-2019-08-01-113533   True        False         False      114m
cluster-autoscaler                         4.2.0-0.nightly-2019-08-01-113533   True        False         False      114m
console                                    4.2.0-0.nightly-2019-08-01-113533   True        False         False      103m
dns                                        4.2.0-0.nightly-2019-08-01-113533   True        False         False      114m
image-registry                             4.2.0-0.nightly-2019-08-01-113533   True        False         False      38m
ingress                                    4.2.0-0.nightly-2019-08-01-113533   True        False         False      38m
kube-apiserver                             4.2.0-0.nightly-2019-08-01-113533   True        False         False      110m
kube-controller-manager                    4.2.0-0.nightly-2019-08-01-113533   True        False         False      109m
kube-scheduler                             4.2.0-0.nightly-2019-08-01-113533   True        False         False      108m
machine-api                                4.2.0-0.nightly-2019-08-01-113533   True        False         False      114m
machine-config                             4.2.0-0.nightly-2019-08-01-113533   True        False         False      2m7s
marketplace                                4.2.0-0.nightly-2019-08-01-113533   True        False         False      13m
monitoring                                 4.2.0-0.nightly-2019-08-01-113533   True        False         False      60m
network                                    4.2.0-0.nightly-2019-08-01-113533   True        False         False      115m
node-tuning                                4.2.0-0.nightly-2019-08-01-113533   True        False         False      26m
openshift-apiserver                        4.2.0-0.nightly-2019-08-01-113533   True        False         False      3m46s
openshift-controller-manager               4.2.0-0.nightly-2019-08-01-113533   True        False         False      114m
openshift-samples                          4.2.0-0.nightly-2019-08-01-113533   True        False         False      52m
operator-lifecycle-manager                 4.2.0-0.nightly-2019-08-01-113533   True        False         False      112m
operator-lifecycle-manager-catalog         4.2.0-0.nightly-2019-08-01-113533   True        False         False      112m
operator-lifecycle-manager-packageserver   4.2.0-0.nightly-2019-08-01-113533   True        False         False      14m
service-ca                                 4.2.0-0.nightly-2019-08-01-113533   True        False         False      114m
service-catalog-apiserver                  4.2.0-0.nightly-2019-08-01-113533   True        False         False      108m
service-catalog-controller-manager         4.2.0-0.nightly-2019-08-01-113533   True        False         False      108m
storage                                    4.2.0-0.nightly-2019-08-01-113533   True        False         False      61m
support                                    4.2.0-0.nightly-2019-08-01-113533   True        False         False      61m
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922