Verified this bug with payload 4.2.0-0.nightly-2019-10-08-223531.

* On the bootstrap node, "metadata,metadata.google.internal,metadata.google.internal." was added to the default noProxy:

# env | grep NO_PROXY
NO_PROXY=.cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.qe-gpei-42rc31.qe.gcp.devcluster.openshift.com,api.qe-gpei-42rc31.qe.gcp.devcluster.openshift.com,etcd-0.qe-gpei-42rc31.qe.gcp.devcluster.openshift.com,etcd-1.qe-gpei-42rc31.qe.gcp.devcluster.openshift.com,etcd-2.qe-gpei-42rc31.qe.gcp.devcluster.openshift.com,localhost,metadata,metadata.google.internal,metadata.google.internal.,test.no-proxy.com

The kubelet service was running well, and bootkube.service completed successfully.

* In proxy/cluster, "metadata,metadata.google.internal,metadata.google.internal." was also added to the default noProxy:

# oc get proxy cluster -o jsonpath='{.status.noProxy}' --config=/var/opt/openshift/auth/kubeconfig
.cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.qe-gpei-42rc31.qe.gcp.devcluster.openshift.com,api.qe-gpei-42rc31.qe.gcp.devcluster.openshift.com,etcd-0.qe-gpei-42rc31.qe.gcp.devcluster.openshift.com,etcd-1.qe-gpei-42rc31.qe.gcp.devcluster.openshift.com,etcd-2.qe-gpei-42rc31.qe.gcp.devcluster.openshift.com,localhost,metadata,metadata.google.internal,metadata.google.internal.,test.no-proxy.com

Also added egress-allow firewall rules to enable the cluster's access to www.googleapis.com (to work around https://bugzilla.redhat.com/show_bug.cgi?id=1759400), after which we got a successful installation.
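To illustrate why these entries matter: a client consulting NO_PROXY skips the cluster-wide proxy for any host that matches an entry, so requests to the GCP metadata service go direct. The sketch below is a simplified model of common NO_PROXY matching semantics (exact host match, plus suffix match for leading-dot entries); real clients such as curl, Go's net/http, and the kubelet differ in details like CIDR and port handling, which this sketch omits.

```python
def bypasses_proxy(host, no_proxy):
    """Return True if `host` should skip the proxy per a NO_PROXY list.

    Simplified sketch: exact match, or suffix match for entries that
    start with a dot (e.g. ".svc" matches "foo.svc"). CIDR entries
    like 10.0.0.0/16 are not handled here.
    """
    for entry in no_proxy.split(","):
        entry = entry.strip()
        if not entry:
            continue
        if host == entry:
            return True
        if entry.startswith(".") and host.endswith(entry):
            return True
    return False

# The GCP metadata hostnames added by this fix:
no_proxy = "metadata,metadata.google.internal,metadata.google.internal."
print(bypasses_proxy("metadata.google.internal", no_proxy))  # True
print(bypasses_proxy("example.com", no_proxy))               # False
```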
# oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.2.0-0.nightly-2019-10-08-223531   True        False         False      9m3s
cloud-credential                           4.2.0-0.nightly-2019-10-08-223531   True        False         False      27m
cluster-autoscaler                         4.2.0-0.nightly-2019-10-08-223531   True        False         False      20m
console                                    4.2.0-0.nightly-2019-10-08-223531   True        False         False      10m
dns                                        4.2.0-0.nightly-2019-10-08-223531   True        False         False      27m
image-registry                             4.2.0-0.nightly-2019-10-08-223531   True        False         False      13m
ingress                                    4.2.0-0.nightly-2019-10-08-223531   True        False         False      14m
insights                                   4.2.0-0.nightly-2019-10-08-223531   True        False         False      27m
kube-apiserver                             4.2.0-0.nightly-2019-10-08-223531   True        False         False      25m
kube-controller-manager                    4.2.0-0.nightly-2019-10-08-223531   True        False         False      25m
kube-scheduler                             4.2.0-0.nightly-2019-10-08-223531   True        False         False      25m
machine-api                                4.2.0-0.nightly-2019-10-08-223531   True        False         False      27m
machine-config                             4.2.0-0.nightly-2019-10-08-223531   True        False         False      27m
marketplace                                4.2.0-0.nightly-2019-10-08-223531   True        False         False      21m
monitoring                                 4.2.0-0.nightly-2019-10-08-223531   True        False         False      13m
network                                    4.2.0-0.nightly-2019-10-08-223531   True        False         False      26m
node-tuning                                4.2.0-0.nightly-2019-10-08-223531   True        False         False      22m
openshift-apiserver                        4.2.0-0.nightly-2019-10-08-223531   True        False         False      22m
openshift-controller-manager               4.2.0-0.nightly-2019-10-08-223531   True        False         False      25m
openshift-samples                          4.2.0-0.nightly-2019-10-08-223531   True        False         False      21m
operator-lifecycle-manager                 4.2.0-0.nightly-2019-10-08-223531   True        False         False      26m
operator-lifecycle-manager-catalog         4.2.0-0.nightly-2019-10-08-223531   True        False         False      26m
operator-lifecycle-manager-packageserver   4.2.0-0.nightly-2019-10-08-223531   True        False         False      24m
service-ca                                 4.2.0-0.nightly-2019-10-08-223531   True        False         False      27m
service-catalog-apiserver                  4.2.0-0.nightly-2019-10-08-223531   True        False         False      22m
service-catalog-controller-manager         4.2.0-0.nightly-2019-10-08-223531   True        False         False      23m
storage                                    4.2.0-0.nightly-2019-10-08-223531   True        False         False      21m
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922