https://testgrid.k8s.io/redhat-openshift-informing#release-openshift-ocp-installer-e2e-aws-upgrade-4.8&include-filter-by-regex=Kubernetes%20APIs

Seeing about a 30% failure rate of [sig-api-machinery] Kubernetes APIs remain available for new connections:

Feb 23 20:26:42.124: API "kubernetes-api-available-new-connections" was unreachable during disruption for at least 2s of 34m45s (0%):

Feb 23 20:18:03.262 E kube-apiserver-new-connection kube-apiserver-new-connection started failing: Get "https://api.ci-op-ivyvzgrr-0b477.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 3.19.2.121:6443: connect: connection refused
Feb 23 20:18:04.212 - 1s E kube-apiserver-new-connection kube-apiserver-new-connection is not responding to GET requests
Feb 23 20:18:05.318 I kube-apiserver-new-connection kube-apiserver-new-connection started responding to GET requests

Deeper detail from the node log shows that one of the apiserver instances finishes its graceful shutdown right as the error occurs:

Feb 23 20:18:02.505 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-203-7.us-east-2.compute.internal node/ip-10-0-203-7 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 1m10s finished
Feb 23 20:18:02.509 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-203-7.us-east-2.compute.internal node/ip-10-0-203-7 reason/TerminationStoppedServing Server has stopped listening
Feb 23 20:18:03.148 I ns/openshift-console-operator deployment/console-operator reason/OperatorStatusChanged Status for clusteroperator/console changed: Degraded message changed from "CustomRouteSyncDegraded: the server is currently unable to handle the request (delete routes.route.openshift.io console-custom)\nSyncLoopRefreshDegraded: the server is currently unable to handle the request (get routes.route.openshift.io console)" to "SyncLoopRefreshDegraded: the server is currently unable to handle the request (get routes.route.openshift.io console)" (2 times)
Feb 23 20:18:03.880 E kube-apiserver-reused-connection kube-apiserver-reused-connection started failing: Get "https://api.ci-op-ivyvzgrr-0b477.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 3.21.250.132:6443: connect: connection refused

This looks like the load balancer did not remove the kube-apiserver from its backend pool and kept sending traffic to an instance that had already stopped listening, so new connections were refused instead of being cleanly drained. Did something regress in how apiserver termination is coordinated with the load balancer?

Setting to high because we had *just* gotten green when this regressed.
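For anyone wanting to reproduce this outside CI, here is a minimal sketch of what the "new connections" sampler does (the endpoint URL, 1s interval, and skipped authentication are assumptions for illustration; this is not the actual origin monitor code). Keep-alives are disabled so every request opens a fresh TCP connection, which is exactly the load-balancer path that returned "connection refused" above.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Fresh TCP connection per request: keep-alives disabled, so each GET
	// goes through the external load balancer to whichever apiserver
	// backend it currently routes to.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			DisableKeepAlives: true,
			// e2e-style shortcut for the sketch; the real monitor uses the
			// cluster CA and credentials.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// Placeholder host; the real monitor targets the cluster's external API
	// endpoint, e.g. https://api.<cluster-domain>:6443.
	url := "https://api.example.test:6443/api/v1/namespaces/default"

	for {
		resp, err := client.Get(url)
		if err != nil {
			// A backend the load balancer has not yet removed shows up here
			// immediately as "connect: connection refused".
			fmt.Printf("%s E kube-apiserver-new-connection started failing: %v\n",
				time.Now().UTC().Format("Jan 02 15:04:05.000"), err)
		} else {
			resp.Body.Close()
		}
		time.Sleep(1 * time.Second)
	}
}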
This has been green for 5 test runs in a row... Let's see if it clears up.
Also happening in regular runs: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/25918/pull-ci-openshift-origin-master-e2e-aws-csi/1364310688415092736 (yesterday at 2pm EST):

pod "pod-7961e5d1-283d-4e09-816f-995a9ffbfba0" was not deleted: Get "https://api.ci-op-7cbd82h9-2550a.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/e2e-fsgroupchangepolicy-7286/pods/pod-7961e5d1-283d-4e09-816f-995a9ffbfba0": dial tcp 3.209.36.244:6443: connect: connection refused occurred

This error should never happen.
Happened 4 minutes ago:

https://search.ci.openshift.org/?search=6443%3A+connect%3A+connection+refused&maxAge=48h&context=1&type=junit&name=.*-aws-.*&maxMatches=1&maxBytes=20971520&groupBy=job
https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_sdn/263/pull-ci-openshift-sdn-master-e2e-aws-upgrade/1364633756488437760

Feb 24 18:43:59.292 E kube-apiserver-new-connection kube-apiserver-new-connection started failing: Get "https://api.ci-op-hhxdyjmp-550b5.origin-ci-int-aws.dev.rhcloud.com:6443/api/v1/namespaces/default": dial tcp 54.200.130.251:6443: connect: connection refused
Feb 24 18:44:00.153 E kube-apiserver-new-connection kube-apiserver-new-connection is not responding to GET requests
OpenShift backport pending upstream merge.
Looks like this is still an issue, as I still see connection refused failures:

https://search.ci.openshift.org/?search=6443%3A+connect%3A+connection+refused&maxAge=48h&context=1&type=junit&name=.*-aws-.*&maxMatches=1&maxBytes=20971520&groupBy=job
https://search.ci.openshift.org/?search=kubelet+terminates+kube-apiserver+gracefully&maxAge=48h&context=1&type=junit&name=.*-aws-.*&excludeName=&maxMatches=1&maxBytes=20971520&groupBy=job
@Sunil I think this is a multi-part issue, and this was just one of the fixes.
Sorry, hit send too soon. Adding a "Depends On" for the bug where we are continuing to track down the root of the issue. For this bug, what we found and what I submitted a fix for (and hence what should be verified) is that while the apiserver is gracefully terminating as part of pod deletion, it is not killed due to liveness probe failures. Representative log entries are in https://bugzilla.redhat.com/show_bug.cgi?id=1932097#c4. This can be verified by looking at the node journal.
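To make the behavior being verified concrete, here is an illustrative sketch (not the actual kubelet change; the helper name and the pod object are made up for this example) of the rule: once a pod already has a deletion timestamp it is in its graceful shutdown window, so a failing liveness probe should not cause the kubelet to kill it again.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// shouldKillOnLivenessFailure is a made-up helper illustrating the rule:
// a pod that already has a deletion timestamp is gracefully terminating,
// so a liveness probe failure must not trigger an additional kill.
func shouldKillOnLivenessFailure(pod *corev1.Pod) bool {
	return pod.DeletionTimestamp == nil
}

func main() {
	now := metav1.NewTime(time.Now())
	terminating := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:              "kube-apiserver-ip-10-0-203-7.us-east-2.compute.internal",
			DeletionTimestamp: &now,
		},
	}
	// Expected: false -- leave the apiserver alone so it can finish draining
	// connections during its minimal shutdown duration.
	fmt.Println(shouldKillOnLivenessFailure(terminating))
}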
Thanks @Elana, I checked the node journal logs while the apiserver was gracefully terminating and do not see it being killed.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438