Bug 1989461 - kube-apiserver does not use the SO_REUSEPORT properly
Summary: kube-apiserver does not use the SO_REUSEPORT properly
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-apiserver
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.9.0
Assignee: Michal Fojtik
QA Contact: Ke Wang
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2021-08-03 09:32 UTC by Michal Fojtik
Modified: 2021-10-18 17:44 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-10-18 17:44:09 UTC
Target Upstream Version:
Embargoed:




Links
System ID  Summary  Last Updated
Github openshift cluster-kube-apiserver-operator pull 1191  2021-08-03 09:33:40 UTC
Red Hat Product Errata RHSA-2021:3759  2021-10-18 17:44:18 UTC

Description Michal Fojtik 2021-08-03 09:32:54 UTC
Description of problem:

The kube-apiserver process is started with the --permit-address-sharing argument; however, its "setup" container still waits for the kernel to release the ports before the new process starts.
That means the argument is effectively unused, and every kube-apiserver rollout takes an extra 60-90s while the kernel fully releases the port.
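
For reference, a minimal sketch of how a Go server can opt into port sharing by setting SO_REUSEPORT before bind(). This is illustrative only, not the kube-apiserver source; the helper name and the use of golang.org/x/sys/unix are assumptions.

package main

import (
	"context"
	"fmt"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

// reusePortListen binds a TCP listener with SO_REUSEPORT set before bind(),
// so a restarting server can take over the port immediately instead of
// waiting for the kernel to release the previous process's sockets.
func reusePortListen(addr string) (net.Listener, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			if err := c.Control(func(fd uintptr) {
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			}); err != nil {
				return err
			}
			return sockErr
		},
	}
	return lc.Listen(context.Background(), "tcp", addr)
}

func main() {
	ln, err := reusePortListen(":6443")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	fmt.Println("listening on", ln.Addr())
}

Note that SO_REUSEPORT only helps across restarts if both the old and the new process set it on their sockets.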


Version-Release number of selected component (if applicable):

4.6
4.7
4.8
4.9

How reproducible:

Reproduced on every kube-apiserver revision rollout.

Actual results:

The kube-apiserver setup container waits for ports 6443 and 6080 to be released by the kernel. SO_REUSEPORT would allow the new process to bind those ports immediately, regardless of whether the kernel still considers them in use.
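
For illustration, the wasteful wait amounts to a polling loop like the sketch below. This is hypothetical; the bug does not quote the actual setup container logic.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPortRelease polls until addr can be bound, which is roughly what
// the setup container's wait amounts to. With SO_REUSEPORT on the server
// socket, this entire wait is unnecessary.
func waitForPortRelease(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ln, err := net.Listen("tcp", addr)
		if err == nil {
			ln.Close() // port is free again
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("port %s not released within %s", addr, timeout)
}

func main() {
	if err := waitForPortRelease(":6443", 90*time.Second); err != nil {
		fmt.Println(err)
	}
}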

Expected results:

kube-apiserver should not wait for the kernel to release the port. However, it must be 100% sure that no two kube-apiserver processes are running on the system: file locking is used to guarantee this, and a fallback mechanism is in place to handle the case where a kubelet bug runs two processes in parallel.
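
One common way to get that single-process guarantee is an exclusive, non-blocking file lock held for the lifetime of the process. A minimal sketch follows; the helper name and lock path are hypothetical, not the actual implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// acquireProcessLock takes an exclusive, non-blocking flock. If a second
// kube-apiserver process tries to start, the call fails immediately rather
// than letting two servers share the port via SO_REUSEPORT.
func acquireProcessLock(lockPath string) (*os.File, error) {
	f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX|unix.LOCK_NB); err != nil {
		f.Close()
		return nil, fmt.Errorf("another instance holds %s: %w", lockPath, err)
	}
	// The kernel releases the lock automatically when the process exits.
	return f, nil
}

func main() {
	lock, err := acquireProcessLock("/var/lock/kube-apiserver.lock") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer lock.Close()
	fmt.Println("lock acquired; safe to bind with SO_REUSEPORT")
}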

Additional info:

Comment 2 Ke Wang 2021-08-19 10:54:24 UTC
Verification steps are as follows.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2021-08-16-154237   True        False         91m     Cluster version is 4.9.0-0.nightly-2021-08-16-154237

In terminal A, forced the kube-apiserver to roll out a new revision.
$ oc patch kubeapiserver/cluster --type=json -p '[ {"op": "replace", "path": "/spec/forceRedeploymentReason", "value": "roll-'"$( date --rfc-3339=ns )"'"} ]'
kubeapiserver.operator.openshift.io/cluster patched

In terminal B,
$ cat test.sh 
#!/usr/bin/env bash
# Poll the apiserver readiness endpoint in a loop.
while true
do
  date; curl -ks https://api....:6443/readyz
  echo
done

$ bash ./test.sh
...
Tue 17 Aug 2021 06:49:15 PM CST
ok
Tue 17 Aug 2021 06:49:16 PM CST

Tue 17 Aug 2021 06:49:22 PM CST

Tue 17 Aug 2021 06:49:27 PM CST
...
Tue 17 Aug 2021 06:49:54 PM CST

Tue 17 Aug 2021 06:49:59 PM CST

Tue 17 Aug 2021 06:50:05 PM CST
[+]ping ok
[+]log ok
[+]etcd ok
[+]api-openshift-apiserver-available ok
[+]api-openshift-oauth-apiserver-available ok
[-]informer-sync failed: reason withheld
[+]poststarthook/openshift.io-startkubeinformers ok
[+]poststarthook/openshift.io-openshift-apiserver-reachable ok
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
[+]poststarthook/openshift.io-deprecated-api-requests-filter ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[+]poststarthook/crd-informer-synced ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-wait-for-first-sync ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]shutdown ok
readyz check failed
...
Tue 17 Aug 2021 06:50:21 PM CST
ok

Running the above script, the test results show that
the kube-apiserver was unavailable from 06:49:16 to 06:50:21, about one minute (65s).

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Did the same test as above on OCP 4.8, which does not have this PR fix:

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-08-17-004424   True        False         10m     Cluster version is 4.8.0-0.nightly-2021-08-17-004424

Wed 18 Aug 2021 01:25:11 PM CST
ok
Wed 18 Aug 2021 01:25:12 PM CST
[+]ping ok
[+]log ok
[+]etcd ok
[+]api-openshift-apiserver-available ok
[+]api-openshift-oauth-apiserver-available ok
[+]informer-sync ok
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
[+]poststarthook/openshift.io-deprecated-api-requests-filter ok
[+]poststarthook/openshift.io-startkubeinformers ok
[+]poststarthook/openshift.io-openshift-apiserver-reachable ok
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-wait-for-first-sync ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[-]shutdown failed: reason withheld
readyz check failed
...
Wed 18 Aug 2021 01:28:33 PM CST
[+]ping ok
[+]log ok
[+]etcd ok
[+]api-openshift-apiserver-available ok
[+]api-openshift-oauth-apiserver-available ok
[+]informer-sync ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
[+]poststarthook/openshift.io-deprecated-api-requests-filter ok
[+]poststarthook/openshift.io-startkubeinformers ok
[+]poststarthook/openshift.io-openshift-apiserver-reachable ok
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-wait-for-first-sync ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]shutdown ok
readyz check failed

Wed 18 Aug 2021 01:28:42 PM CST
ok

From the above test results, we can see that
the kube-apiserver was unavailable from 01:25:12 to 01:28:42, about three and a half minutes (210s).

- Tested on a non-SNO cluster; got the IP of one of the kube-apiserver instances.


$ oc patch kubeapiserver/cluster --type=json -p '[ {"op": "replace", "path": "/spec/forceRedeploymentReason", "value": "roll-'"$( date --rfc-3339=ns )"'"} ]'

Logged into a bastion server and ran the following script:
# cat test.sh
#!/usr/bin/env bash
# Poll the apiserver readiness endpoint once per second.
while true
do
  date; curl -ks https://10.0.0.7:6443/readyz
  echo
  sleep 1
done

# ./test.sh | tee test.log

After the kube-apiserver rollout completed:
$ cat test.log
...
Thu Aug 19 06:22:53 EDT 2021
ok
Thu Aug 19 06:22:54 EDT 2021
[+]ping ok
[+]log ok
[+]etcd ok
[+]api-openshift-apiserver-available ok
[+]api-openshift-oauth-apiserver-available ok
[+]informer-sync ok
[+]poststarthook/openshift.io-openshift-apiserver-reachable ok
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
[+]poststarthook/openshift.io-deprecated-api-requests-filter ok
[+]poststarthook/openshift.io-startkubeinformers ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-wait-for-first-sync ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[-]shutdown failed: reason withheld
readyz check failed
...
Thu Aug 19 06:25:13 EDT 2021
ok 

The kube-apiserver was unavailable from 06:22:54 to 06:25:13, 139s in total, which is a normal GracefulTerminationDuration. The PR fix still saves significant time on SNO clusters. All is well, so moving the bug to VERIFIED.

Comment 5 errata-xmlrpc 2021-10-18 17:44:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759

