Bug 1986560 - etcd-operator needs to handle API server downtime gracefully in SNO
Summary: etcd-operator needs to handle API server downtime gracefully in SNO
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Etcd
Version: 4.9
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.9.0
Assignee: Sam Batschelet
QA Contact: ge liu
URL:
Whiteboard: chaos
Depends On:
Blocks: 1984730
 
Reported: 2021-07-27 18:59 UTC by Naga Ravi Chaitanya Elluri
Modified: 2021-10-18 17:42 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-10-18 17:42:20 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-etcd-operator pull 638 0 None None None 2021-08-11 15:07:56 UTC
Red Hat Product Errata RHSA-2021:3759 0 None None None 2021-10-18 17:42:37 UTC

Description Naga Ravi Chaitanya Elluri 2021-07-27 18:59:30 UTC
Description of problem:
etcd-operator is crashing/restarting and going through leader elections during the kube-apiserver rollout, which now takes ~60 seconds with shutdown-delay-duration and gracefulTerminationDuration set to 0 and 15 seconds respectively (https://github.com/openshift/cluster-kube-apiserver-operator/pull/1168 and https://github.com/openshift/library-go/pull/1104). The etcd-operator leader election timeout should be set to > 60 seconds to handle the downtime gracefully in SNO.

Recommended lease duration values to consider, as noted in https://github.com/openshift/enhancements/pull/832/files#diff-2e28754e69aa417e5b6d89e99e42f05bfb6330800fa823753383db1d170fbc2fR183:

LeaseDuration=137s, RenewDeadline=107s, RetryPeriod=26s.
These are the configurable values in k8s.io/client-go based leases, and controller-runtime exposes them (see the sketch after the list below).
This gives us:
   1. clock skew tolerance == 30s
   2. kube-apiserver downtime tolerance == 78s
   3. worst non-graceful lease reacquisition == 163s
   4. worst graceful lease reacquisition == 26s

 
We can see the etcd-operator leader lease failures in the log: http://dell-r510-01.perf.lab.eng.rdu2.redhat.com/chaos/sno/openshift-etcd-operator/leader-lease-failure.log. Leader election could also be disabled entirely, given that there is no HA in SNO.
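
If leader election were disabled for the single-node case, a controller-runtime based operator could do it via manager options. A minimal sketch, assuming controller-runtime (the etcd-operator itself is built on library-go, so this is purely illustrative):

package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// Single node: there is no second replica to contend with, so leader
	// election buys nothing and can be switched off.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		LeaderElection: false,
	})
	if err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}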


Version-Release number of selected component (if applicable):
4.9.0-0.nightly-2021-07-26-031621

How reproducible:
Always

Steps to Reproduce:
1. Install a SNO cluster using the latest nightly payload.
2. Trigger a kube-apiserver rollout/outage lasting ~60 seconds: $ oc patch kubeapiserver/cluster --type merge -p '{"spec":{"forceRedeploymentReason":"ITERATIONX"}}' (where X can be 1, 2, ..., n)
3. Observe the state of etcd-operator.

Actual results:
etcd-operator is crashing/restarting and going through leader elections.

Expected results:
etcd-operator should handle the API rollout/outage gracefully.

Additional info:
Logs including must-gather: http://dell-r510-01.perf.lab.eng.rdu2.redhat.com/chaos/sno/openshift-etcd-operator/.

Comment 1 Sam Batschelet 2021-08-11 15:32:11 UTC
The attached PR bumps library-go, which adjusts the defaults [1].

[1] https://github.com/openshift/library-go/pull/1104
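
A rough sketch of how an operator consuming library-go would pick up the adjusted defaults, assuming the LeaderElectionDefaulting helper in pkg/config/leaderelection and the post-bump values of 137s/107s/26s; the namespace and lease name below are hypothetical:

package main

import (
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
	"github.com/openshift/library-go/pkg/config/leaderelection"
)

func main() {
	// Start from an empty config; the helper fills in the library defaults.
	le := leaderelection.LeaderElectionDefaulting(
		configv1.LeaderElection{},
		"openshift-etcd-operator", // hypothetical default namespace
		"example-operator-lock",   // hypothetical default lease name
	)
	fmt.Println(le.LeaseDuration.Duration, le.RenewDeadline.Duration, le.RetryPeriod.Duration)
}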

Comment 3 Sandeep 2021-08-24 05:04:33 UTC
Verification steps followed:

1. Installed an SNO cluster with 4.9.0-0.ci-2021-08-22-162505.
2. Triggered a kube-apiserver rollout using: oc patch kubeapiserver/cluster --type merge -p '{"spec":{"forceRedeploymentReason":"ITERATIONX"}}' (with X=3)
3. No restart/CrashLoopBackOff of etcd-operator was observed.

Comment 4 Sandeep 2021-08-24 06:16:52 UTC
Please find the test logs:

[skundu@skundu ~]$ oc patch kubeapiserver/cluster --type merge -p '{"spec":{"forceRedeploymentReason":"ITERATION2"}}'
kubeapiserver.operator.openshift.io/cluster patched



After 2 minutes, check the cluster operators:
[skundu@skundu ~]$ oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE

etcd                                       4.9.0-0.nightly-2021-08-22-070405   True        False         False      78m     
kube-apiserver                             4.9.0-0.nightly-2021-08-22-070405   True        False         False      74m     
kube-controller-manager                    4.9.0-0.nightly-2021-08-22-070405   True        False         False      78m     
kube-scheduler                             4.9.0-0.nightly-2021-08-22-070405   True        False         False      77m     
kube-storage-version-migrator              4.9.0-0.nightly-2021-08-22-070405   True        False         False      80m      
openshift-apiserver                        4.9.0-0.nightly-2021-08-22-070405   True        False         False      70m     
openshift-controller-manager               4.9.0-0.nightly-2021-08-22-070405   True        False         False      79m     
openshift-samples                          4.9.0-0.nightly-2021-08-22-070405   True        False         False      71m    


[skundu@skundu ~]$ oc project openshift-etcd-operator
Now using project "openshift-etcd-operator" on server "https://api.skundu-at.qe.devcluster.openshift.com:6443".
[skundu@skundu ~]$ oc get pods
NAME                             READY   STATUS    RESTARTS      AGE
etcd-operator-7578997597-6bdhm   1/1     Running   1 (81m ago)   101m

Comment 7 errata-xmlrpc 2021-10-18 17:42:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759

