This bug was initially created as a copy of Bug #1811202
I am copying this bug because:

+++ This bug was initially created as a clone of Bug #1811169 +++

Description of problem:
Currently, /readyz starts reporting failure only after ShutdownDelayDuration elapses. The load balancer(s) use /readyz for health checks, so they are not aware of the shutdown until ShutdownDelayDuration has already elapsed. This does not give the load balancer(s) enough time to detect the shutdown and react to it.

We expect /readyz to start returning failure as soon as apiserver shutdown is initiated (SIGTERM received). This gives the load balancer a window (defined by ShutdownDelayDuration) to detect that /readyz is red and stop sending traffic to this server.

How reproducible:
Always

Upstream PR: https://github.com/kubernetes/kubernetes/pull/88911
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale". If you have further information on the current state of the bug, please update it, otherwise this bug will be automatically closed in 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant.
Not sure why the "Target Release" of this BZ has been reset. Looking at the history, it was set to "4.3.z".

The /readyz fix did not make it into openshift-apiserver 4.3: openshift/kubernetes-apiserver:openshift-apiserver-4.3-kubernetes-1.17.3 does not contain the upstream fix for /readyz. The fix landed upstream in 1.17.4. Stefan created a new branch "1.17.4" on April 15, 2020: https://github.com/openshift/kubernetes-apiserver/tree/openshift-apiserver-4.3-kubernetes-1.17.4

We need a new PR to move openshift-apiserver 4.3 to "1.17.4".
Verified with OCP 4.3.0-0.nightly-2020-05-25-153254 env, checked as below.

$ oc -n openshift-apiserver get po -o wide | grep apiserver | head -1 | awk '{print $6}'   # get pod IP

In one terminal, enter a master node:

$ master=$(oc get node | grep master | awk '{print $1}' | head -1)
$ oc debug node/$master

After logging in to the master debug pod:

sh-4.2# chroot /host
sh-4.4# while true; do curl -k --silent --show-error https://<pod IP>:8443/readyz ; done
okokokokokokokokokokokokokokokokokokokokokokokokokokokokokok

In another terminal:

$ oc rsh pod/ip-...-31ap-south-1computeinternal-debug
sh-4.2# chroot /host
sh-4.4# ps aux | grep "openshift-apiserver start"
root  30545  2.1  1.1  567144 196100 ?  Ssl  04:13  0:35  openshift-apiserver start --config=/var/run/configmaps/config/config.yaml -v=2
sh-4.4# kill -INT 30545

In the first terminal, immediately after the kill above, the output shows:

curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 10...40:8443
curl: (7) Failed to connect to 10.129.0.38 port 8443: Connection refused
[+]ping ok
[+]log ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/image.openshift.io-apiserver-caches ok
[-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
[-]poststarthook/authorization.openshift.io-ensureopenshift-infra failed: reason withheld
[+]poststarthook/project.openshift.io-projectcache ok
[+]poststarthook/project.openshift.io-projectauthorizationcache ok
[-]poststarthook/security.openshift.io-bootstrapscc failed: reason withheld
[+]poststarthook/openshift.io-startinformers ok
[+]poststarthook/openshift.io-restmapperupdater ok
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
[+]shutdown ok
healthz check failed
...

The /readyz endpoint starts returning failure as soon as openshift-apiserver shutdown is initiated, so the load balancer can detect that /readyz is red.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2256