Bug 1984635 - openshift-config-operator needs to handle 60 seconds downtime of API server gracefully in SNO
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: config-operator
Version: 4.9
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.9.0
Assignee: Antonio Ojea
QA Contact: Rahul Gangwar
URL:
Whiteboard: chaos
Depends On:
Blocks: 1984730
 
Reported: 2021-07-21 19:45 UTC by Naga Ravi Chaitanya Elluri
Modified: 2021-10-18 17:40 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-10-18 17:40:27 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-config-operator pull 211 0 None open Bug 1984635: use leader election values to handle apiserver rollout on SNO 2021-07-26 16:27:21 UTC
Github openshift cluster-config-operator pull 213 0 None open Bug 1984635: use new default leader election values to handle SNO environments 2021-07-28 15:57:35 UTC
Red Hat Product Errata RHSA-2021:3759 0 None None None 2021-10-18 17:40:57 UTC

Description Naga Ravi Chaitanya Elluri 2021-07-21 19:45:57 UTC
Description of problem:
The openshift-config-operator leader election lease duration is set to 60 seconds, which causes it to go through leader election and restart during a kube-apiserver rollout. The rollout currently takes around 60 seconds now that shutdown-delay-duration and gracefulTerminationDuration are set to 0 and 15 seconds respectively (https://github.com/openshift/cluster-kube-apiserver-operator/pull/1168 and https://github.com/openshift/library-go/pull/1104). The openshift-config-operator leader election lease duration should be set to more than 60 seconds to handle this downtime gracefully in SNO.

Recommended leader election values to consider, for reference, as noted in https://github.com/openshift/enhancements/pull/832/files#diff-2e28754e69aa417e5b6d89e99e42f05bfb6330800fa823753383db1d170fbc2fR183:

LeaseDuration=137s, RenewDeadline=107s, RetryPeriod=26s.
These are the configurable values in k8s.io/client-go based leases, and controller-runtime exposes them as well (see the sketch after this list). This gives us:
   1. clock skew tolerance == 30s
   2. kube-apiserver downtime tolerance == 78s
   3. worst non-graceful lease reacquisition == 163s
   4. worst graceful lease reacquisition == 26s
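
For reference, here is a minimal sketch of how those values map onto a k8s.io/client-go leader election loop. This is not the operator's actual wiring: the Lease-based lock (the operator's log below shows a configmap lock), the POD_NAME identity, and the callbacks are assumptions for illustration.

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lock name/namespace taken from the log messages in this bug.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "config-operator-lock",
			Namespace: "openshift-config-operator",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock: lock,
		// Recommended SNO values: ride out a ~60s kube-apiserver rollout
		// (downtime tolerance ~78s) with ~30s of clock skew tolerance.
		LeaseDuration: 137 * time.Second,
		RenewDeadline: 107 * time.Second,
		RetryPeriod:   26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Start the operator's controllers here (placeholder).
			},
			OnStoppedLeading: func() {
				klog.Info("leader election lost")
				os.Exit(1)
			},
		},
	})
}

The key relation is LeaseDuration > RenewDeadline > expected kube-apiserver downtime, so the current holder keeps retrying its renewals through the rollout instead of giving up the lease and restarting.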


Here is the trace of the events during the rollout: http://dell-r510-01.perf.lab.eng.rdu2.redhat.com/chaos/sno/openshift-config-operator/cerberus_api_rollout_trace.json. The leader lease failures can be seen in the log: http://dell-r510-01.perf.lab.eng.rdu2.redhat.com/chaos/sno/openshift-config-operator/openshift-config-operator.log. Leader election could also be disabled entirely, given that there is no HA in SNO (see the sketch below).
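
Since controller-runtime is mentioned above as exposing the same knobs, here is a hedged sketch of both alternatives on a controller-runtime manager: longer timings, or no leader election at all. It only illustrates the available options, not the config operator's actual setup, and the singleNode flag is a hypothetical parameter.

package operator

import (
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

// newManager either stretches the lease timings past the kube-apiserver
// downtime or, for single-node topologies, skips leader election entirely,
// since there is no second replica to fail over to.
func newManager(singleNode bool) (ctrl.Manager, error) {
	lease := 137 * time.Second
	renew := 107 * time.Second
	retry := 26 * time.Second

	return ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		LeaderElection:          !singleNode,
		LeaderElectionID:        "config-operator-lock",
		LeaderElectionNamespace: "openshift-config-operator",
		LeaseDuration:           &lease,
		RenewDeadline:           &renew,
		RetryPeriod:             &retry,
	})
}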


Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-07-19-192457

How reproducible:
Always

Steps to Reproduce:
1. Install a SNO cluster using the latest nightly payload.
2. Trigger a kube-apiserver rollout or outage that lasts at least 60 seconds (a kube-apiserver rollout on a cluster built from a payload that includes https://github.com/openshift/cluster-kube-apiserver-operator/pull/1168 should take ~60 seconds): $ oc patch kubeapiserver/cluster --type merge -p '{"spec":{"forceRedeploymentReason":"ITERATIONX"}}', where X can be 1,2,...,n.
3. Observe the state of openshift-config-operator.

Actual results:
openshift-config-operator goes through leader election and restarts.

Expected results:
openshift-config-operator should handle the API rollout/outage gracefully.

Additional info:
Logs including must-gather: http://dell-r510-01.perf.lab.eng.rdu2.redhat.com/chaos/sno/openshift-config-operator/

Comment 6 Antonio Ojea 2021-07-28 14:31:57 UTC
I had updated the wrong dependency, sorry; this is the correct one, with the new leader election timeouts:
https://github.com/openshift/cluster-config-operator/pull/213
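
As a rough illustration of the SNO-aware idea (not the code in either PR above): inspect the cluster's control-plane topology and switch to the longer timings when the control plane is a single replica. The helper name chooseLeaderElectionTimings and its wiring are assumptions.

package operator

import (
	"context"
	"time"

	configv1 "github.com/openshift/api/config/v1"
	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// chooseLeaderElectionTimings returns the SNO-friendly timings when the
// Infrastructure resource reports a single-replica control plane; otherwise
// it keeps whatever defaults the caller passed in.
func chooseLeaderElectionTimings(ctx context.Context, cfg *rest.Config, lease, renew, retry time.Duration) (time.Duration, time.Duration, time.Duration, error) {
	client, err := configclient.NewForConfig(cfg)
	if err != nil {
		return lease, renew, retry, err
	}
	infra, err := client.ConfigV1().Infrastructures().Get(ctx, "cluster", metav1.GetOptions{})
	if err != nil {
		return lease, renew, retry, err
	}
	if infra.Status.ControlPlaneTopology == configv1.SingleReplicaTopologyMode {
		// Values from the enhancement referenced in the description.
		return 137 * time.Second, 107 * time.Second, 26 * time.Second, nil
	}
	return lease, renew, retry, nil
}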

Comment 8 Rahul Gangwar 2021-08-06 09:20:43 UTC




In the 4.8 version, a restart is observed after the rollout/outage:

oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-08-05-031749   True        False         146m    Cluster version is 4.8.0-0.nightly-2021-08-05-031749


oc get pods -n openshift-config-operator
NAME                                         READY   STATUS    RESTARTS   AGE
openshift-config-operator-6d4c79bb47-pznxh   1/1     Running   5          173m

 oc patch kubeapiserver/cluster --type merge -p '{"spec":{"forceRedeploymentReason":"ITERATION1"}}'
kubeapiserver.operator.openshift.io/cluster patched


 oc get pods -n openshift-config-operator
NAME                                         READY   STATUS             RESTARTS   AGE
openshift-config-operator-6d4c79bb47-pznxh   0/1     CrashLoopBackOff   8          3h8m


In the 4.9 version, no restart is observed after the rollout/outage:

oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2021-08-04-131508   True        False         9m28s   Cluster version is 4.9.0-0.nightly-2021-08-04-131508

oc get pods -n openshift-config-operator
NAME                                         READY   STATUS    RESTARTS   AGE
openshift-config-operator-79544949d7-4rm4z   1/1     Running   1          24m

 oc patch kubeapiserver/cluster --type merge -p '{"spec":{"forceRedeploymentReason":"ITERATION1"}}'
kubeapiserver.operator.openshift.io/cluster patched

oc get pods -n openshift-config-operator
NAME                                         READY   STATUS    RESTARTS   AGE
openshift-config-operator-79544949d7-4rm4z   1/1     Running   1          26m


Also note in the config-operator logs that the apiserver is out for less than 1 minute (approx. 25-30 secs):

Start

2021-08-06T08:59:01.626086475+00:00 stderr F E0806 08:59:01.626043       1 leaderelection.go:325] error retrieving resource lock openshift-config-operator/config-operator-lock: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-config-operator/configmaps/config-operator-lock?timeout=1m47s": dial tcp 172.30.0.1:443: connect: connection refused
2021-08-06T08:59:16.534920451+00:00 stderr F E0806 08:59:16.534873       1 webhook.go:127] Failed to make webhook authenticator request: Post "https://172.30.0.1:443/apis/authentication.k8s.io/v1/tokenrev...



End
2021-08-06T08:59:44.167122190+00:00 stderr F I0806 08:59:44.165264       1 trace.go:205] Trace[677221414]: "Reflector ListAndWatch" name:k8s.io/client-go.1/tools/cache/reflector.go:167 (06-Aug-2021 08:59:30.861) (total time: 13304ms):
2021-08-06T08:59:44.167122190+00:00 stderr F Trace[677221414]: ---"Objects listed" 13304ms (08:59:00.165)
2021-08-06T08:59:44.167122190+00:00 stderr F Trace[677221414]: [13.304097358s] [13.304097358s] END

Comment 11 errata-xmlrpc 2021-10-18 17:40:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759

