3 instances of openshift-apiserver failing:

https://openshift-gce-devel.appspot.com/build/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade-4.0/908
https://openshift-gce-devel.appspot.com/build/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade-4.0/907
https://openshift-gce-devel.appspot.com/build/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade-4.0/905
https://openshift-gce-devel.appspot.com/build/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade-4.0/904

with errors like

```
{
    "lastTransitionTime": "2019-04-04T11:20:38Z",
    "message": "Available: v1.quota.openshift.io is not ready: 503",
    "reason": "Available",
    "status": "False",
    "type": "Available"
},
```

or

```
{
    "lastTransitionTime": "2019-04-04T09:31:00Z",
    "message": "Available: v1.image.openshift.io is not ready: 500\nAvailable: v1.route.openshift.io is not ready: 500\nAvailable: v1.security.openshift.io is not ready: 500",
    "reason": "AvailableMultiple",
    "status": "False",
    "type": "Available"
},
```
Other CI runs that saw this failure:

https://openshift-gce-devel.appspot.com/build/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade-4.0/928
https://openshift-gce-devel.appspot.com/build/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade-4.0/927
https://openshift-gce-devel.appspot.com/build/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade-4.0/926

Same error, for example:

```
$ curl https://storage.googleapis.com/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade-4.0/928/artifacts/e2e-aws-upgrade/clusteroperators.json | jq '.items[] | select(.status.conditions[] | .type == "Available" and .status != "True") | [.metadata.name, .status.conditions]'
[
  "openshift-apiserver",
  [
    {
      "lastTransitionTime": "2019-04-04T20:43:13Z",
      "reason": "AsExpected",
      "status": "False",
      "type": "Failing"
    },
    {
      "lastTransitionTime": "2019-04-04T20:43:13Z",
      "reason": "AsExpected",
      "status": "False",
      "type": "Progressing"
    },
    {
      "lastTransitionTime": "2019-04-04T21:01:33Z",
      "message": "Available: v1.quota.openshift.io is not ready: 503",
      "reason": "Available",
      "status": "False",
      "type": "Available"
    },
    {
      "lastTransitionTime": "2019-04-04T20:43:13Z",
      "reason": "AsExpected",
      "status": "True",
      "type": "Upgradeable"
    }
  ]
]
```
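The same `jq` filter used against the CI artifact above can be checked locally. This is a minimal sketch against a hypothetical sample file (`/tmp/co-sample.json` and its contents are made up here to mirror the `clusteroperators.json` shape, not taken from the actual artifact):

```shell
# Hypothetical sample mirroring the clusteroperators.json structure from the CI artifacts:
# one operator with Available=False, one with Available=True.
cat > /tmp/co-sample.json <<'EOF'
{"items":[
  {"metadata":{"name":"openshift-apiserver"},
   "status":{"conditions":[{"type":"Available","status":"False","message":"Available: v1.quota.openshift.io is not ready: 503"}]}},
  {"metadata":{"name":"kube-apiserver"},
   "status":{"conditions":[{"type":"Available","status":"True"}]}}
]}
EOF

# Same select() expression as the curl | jq pipeline above, reduced to just the
# operator name: only operators whose Available condition is not "True" pass.
jq -r '.items[] | select(.status.conditions[] | .type == "Available" and .status != "True") | .metadata.name' /tmp/co-sample.json
```

With this sample the filter prints only `openshift-apiserver`, confirming it isolates operators that are reporting unavailable.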
This is somewhat known. I suspect https://github.com/openshift/cluster-config-operator/pull/25 started this. Once https://github.com/openshift/origin/pull/22425 merges, this failure should go away.
In a fresh environment on the latest payload, 4.0.0-0.nightly-2019-04-05-165550, I also see:

```
$ oc get co
...
openshift-apiserver    4.0.0-0.nightly-2019-04-05-165550    False    ...
...
```

```
$ oc get co openshift-apiserver -o yaml
...
status:
  conditions:
  - lastTransitionTime: 2019-04-08T02:41:44Z
    reason: AsExpected
    status: "False"
    type: Failing
  - lastTransitionTime: 2019-04-08T02:41:00Z
    reason: AsExpected
    status: "False"
    type: Progressing
  - lastTransitionTime: 2019-04-08T03:11:03Z
    message: 'Available: v1.quota.openshift.io is not ready: 503'
    reason: Available
    status: "False"
    type: Available
...
```

```
$ oc get openshiftapiserver -o yaml
...
status:
  conditions:
  - lastTransitionTime: 2019-04-08T02:32:47Z
    reason: NoUnsupportedConfigOverrides
    status: "True"
    type: UnsupportedConfigOverridesUpgradeable
  - lastTransitionTime: 2019-04-08T02:32:48Z
    status: "False"
    type: ConfigObservationFailing
  - lastTransitionTime: 2019-04-08T02:36:32Z
    status: "False"
    type: ResourceSyncControllerFailing
  - lastTransitionTime: 2019-04-08T03:26:32Z
    message: 'v1.quota.openshift.io is not ready: 503'
    status: "False"
    type: Available
...
```
Continuing from comment 3:

```
$ watch oc get clusteroperator/openshift-apiserver
openshift-apiserver   4.0.0-0.nightly-2019-04-05-165550   False   False   False   0s
...
openshift-apiserver   4.0.0-0.nightly-2019-04-05-165550   True    False   False   0s
```

openshift-apiserver keeps flapping between True and False.
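To quantify the flapping rather than eyeball the `watch` output, a tiny sketch like this counts status transitions. The `samples` list here is hypothetical stand-in data (as if sampled once per `watch` interval), not from the actual cluster:

```shell
# Hypothetical sequence of Available statuses, one per sampling interval.
samples="True False True False True"

# Count how many times the status changes between consecutive samples.
prev=""
transitions=0
for s in $samples; do
  if [ -n "$prev" ] && [ "$s" != "$prev" ]; then
    transitions=$((transitions + 1))
  fi
  prev=$s
done
echo "transitions: $transitions"
# -> transitions: 4
```

A healthy operator should show zero transitions once it settles; a steadily climbing count is the flapping described above.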
*** This bug has been marked as a duplicate of bug 1694226 ***