This bug was initially created as a copy of Bug #1807128

I am copying this bug because:

I have an install stuck at:

level=debug msg="Still waiting for the cluster to initialize: Cluster operator operator-lifecycle-manager-catalog has not yet reported success"

oc get clusteroperators does not show operator-lifecycle-manager-catalog, and the logs show:

$ oc logs $POD -n openshift-operator-lifecycle-manager
time="2020-02-25T14:58:32Z" level=info msg="log level info"
time="2020-02-25T14:58:32Z" level=info msg="TLS keys set, using https for metrics"
W0225 14:58:32.552916       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
time="2020-02-25T14:58:32Z" level=info msg="Using in-cluster kube client config"
time="2020-02-25T14:58:32Z" level=info msg="Using in-cluster kube client config"
W0225 14:58:32.557542       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
time="2020-02-25T14:58:32Z" level=info msg="Using in-cluster kube client config"
time="2020-02-25T14:58:32Z" level=info msg="operator not ready: communicating with server failed: Get https://172.30.0.1:443/version?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused"
time="2020-02-25T14:58:32Z" level=info msg="ClusterOperator api not present, skipping update (Get https://172.30.0.1:443/api?timeout=32s: dial tcp 172.30.0.1:443: connect: connection refused)"

However, the API is now available:

$ oc rsh -n openshift-operator-lifecycle-manager $POD
sh-4.2$ curl -k https://172.30.0.1:443/api?timeout=32s:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/api\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
sh-4.2$

But it appears the operator is not retrying.
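For illustration of what "retrying" would mean here: the log above shows a single "connection refused" against https://172.30.0.1:443/version, after which the operator apparently never re-checks the API server. The following Go sketch is not the actual OLM source; the poll interval, timeout, and structure are assumptions. It shows a client-go readiness check that keeps polling the same /version endpoint until the API server answers or a deadline passes.

// Minimal sketch (assumed, not OLM's code): retry the API server /version
// check instead of giving up after one "connection refused".
package main

import (
	"log"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Same in-cluster config path the operator log mentions.
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("building in-cluster config: %v", err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("building kube client: %v", err)
	}

	// Poll /version every 5s for up to 10 minutes (illustrative values).
	err = wait.PollImmediate(5*time.Second, 10*time.Minute, func() (bool, error) {
		if _, verr := client.Discovery().ServerVersion(); verr != nil {
			log.Printf("operator not ready: communicating with server failed: %v (will retry)", verr)
			return false, nil // swallow the error so polling continues
		}
		return true, nil
	})
	if err != nil {
		log.Fatalf("API server never became reachable: %v", err)
	}
	log.Print("API server reachable, continuing startup")
}

With a loop like this, the transient "connection refused" during install would be retried until the API server at 172.30.0.1:443 comes up, which is the behavior the reporter expected.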
Installed a cluster, left it running for approximately one day, and the OLM cluster operators are running as expected. Marking as VERIFIED.

OCP Cluster Version: 4.4.0-0.nightly-2020-03-03-110909

oc get clusteroperators | grep "operator-lifecycle-manager*"
operator-lifecycle-manager                 4.4.0-0.nightly-2020-03-03-110909   True   False   False   23h
operator-lifecycle-manager-catalog         4.4.0-0.nightly-2020-03-03-110909   True   False   False   23h
operator-lifecycle-manager-packageserver   4.4.0-0.nightly-2020-03-03-110909   True   False   False   23h

oc get pods -n openshift-operator-lifecycle-manager
NAME                                READY   STATUS    RESTARTS   AGE
catalog-operator-79fd684bbd-jfftt   1/1     Running   0          9h
olm-operator-7796cb5d6c-5mk8w       1/1     Running   0          9h
packageserver-75f9d47c9-65tlk       1/1     Running   0          9h
packageserver-75f9d47c9-99nhg       1/1     Running   0          9h
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581