Description of problem:

etcd-operator is crashing/restarting and going through leader elections during the kube-apiserver rollout, which takes around ~60 seconds now that shutdown-delay-duration and gracefulTerminationDuration are set to 0 and 15 seconds respectively (https://github.com/openshift/cluster-kube-apiserver-operator/pull/1168 and https://github.com/openshift/library-go/pull/1104). The etcd-operator leader election timeout should be set to > 60 seconds so that it handles the downtime gracefully in SNO.

Recommended lease duration values for reference, as noted in https://github.com/openshift/enhancements/pull/832/files#diff-2e28754e69aa417e5b6d89e99e42f05bfb6330800fa823753383db1d170fbc2fR183: LeaseDuration=137s, RenewDeadline=107s, RetryPeriod=26s. These are the configurable values in k8s.io/client-go based leases, and controller-runtime exposes them. This gives us:
1. clock skew tolerance == 30s
2. kube-apiserver downtime tolerance == 78s
3. worst non-graceful lease reacquisition == 163s
4. worst graceful lease reacquisition == 26s

The etcd-operator leader lease failures can be seen in the log: http://dell-r510-01.perf.lab.eng.rdu2.redhat.com/chaos/sno/openshift-etcd-operator/leader-lease-failure.log. Alternatively, leader election could be disabled entirely, given that there is no HA in SNO.

Version-Release number of selected component (if applicable):
4.9.0-0.nightly-2021-07-26-031621

How reproducible:
Always

Steps to Reproduce:
1. Install a SNO cluster using the latest nightly payload.
2. Trigger a kube-apiserver rollout/outage lasting ~60 seconds:
   $ oc patch kubeapiserver/cluster --type merge -p '{"spec":{"forceRedeploymentReason":"ITERATIONX"}}'
   where X can be 1, 2, ..., n
3. Observe the state of etcd-operator.

Actual results:
etcd-operator is crashing/restarting and going through leader elections.

Expected results:
etcd-operator should handle the API rollout/outage gracefully.

Additional info:
Logs including must-gather: http://dell-r510-01.perf.lab.eng.rdu2.redhat.com/chaos/sno/openshift-etcd-operator/
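For reference, below is a minimal sketch of how the recommended lease values above map onto a k8s.io/client-go leader election configuration. This is illustrative only and is not the etcd-operator's actual wiring: the namespace, lock name, identity, and callback bodies are placeholder assumptions.

package main

import (
	"context"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// In-cluster client; error handling kept minimal for brevity.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease-based lock; namespace, name, and identity are placeholders.
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"openshift-etcd-operator",
		"etcd-operator-lock",
		client.CoreV1(),
		client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
	)
	if err != nil {
		panic(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock: lock,
		// Values recommended in the enhancement for SNO:
		// ~30s clock skew tolerance, ~78s kube-apiserver downtime tolerance,
		// worst non-graceful reacquisition 163s, graceful reacquisition 26s.
		LeaseDuration:   137 * time.Second,
		RenewDeadline:   107 * time.Second,
		RetryPeriod:     26 * time.Second,
		ReleaseOnCancel: true, // release the lease on clean shutdown so a restart reacquires quickly
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// start the operator's controllers here
			},
			OnStoppedLeading: func() {
				// exit so the pod restarts and re-runs the election
				os.Exit(0)
			},
		},
	})
}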
The attached PR bumps library-go, which adjusts the defaults [1].

[1] https://github.com/openshift/library-go/pull/1104
Verification steps followed:
1. Installed a SNO cluster with 4.9.0-0.ci-2021-08-22-162505.
2. Triggered the kube-apiserver rollout using:
   $ oc patch kubeapiserver/cluster --type merge -p '{"spec":{"forceRedeploymentReason":"ITERATIONX"}}'
   (repeated with X up to 3)
3. No restart/CrashLoopBackOff of the etcd-operator was observed.
Please find the test logs:

[skundu@skundu ~]$ oc patch kubeapiserver/cluster --type merge -p '{"spec":{"forceRedeploymentReason":"ITERATION2"}}'
kubeapiserver.operator.openshift.io/cluster patched

After 2 mins check:

[skundu@skundu ~]$ oc get co
NAME                            VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
etcd                            4.9.0-0.nightly-2021-08-22-070405    True        False         False      78m
kube-apiserver                  4.9.0-0.nightly-2021-08-22-070405    True        False         False      74m
kube-controller-manager         4.9.0-0.nightly-2021-08-22-070405    True        False         False      78m
kube-scheduler                  4.9.0-0.nightly-2021-08-22-070405    True        False         False      77m
kube-storage-version-migrator   4.9.0-0.nightly-2021-08-22-070405    True        False         False      80m
openshift-apiserver             4.9.0-0.nightly-2021-08-22-070405    True        False         False      70m
openshift-controller-manager    4.9.0-0.nightly-2021-08-22-070405    True        False         False      79m
openshift-samples               4.9.0-0.nightly-2021-08-22-070405    True        False         False      71m

[skundu@skundu ~]$ oc project openshift-etcd-operator
Now using project "openshift-etcd-operator" on server "https://api.skundu-at.qe.devcluster.openshift.com:6443".

[skundu@skundu ~]$ oc get pods
NAME                             READY   STATUS    RESTARTS      AGE
etcd-operator-7578997597-6bdhm   1/1     Running   1 (81m ago)   101m
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759