Description of problem:

The openshift-service-ca-operator leader election lease duration is set to 60 seconds, which causes it to go through leader elections and restart during the kube-apiserver rollout. The rollout currently takes around 60 seconds now that shutdown-delay-duration and gracefulTerminationDuration are set to 0 and 15 seconds respectively (https://github.com/openshift/cluster-kube-apiserver-operator/pull/1168 and https://github.com/openshift/library-go/pull/1104). The openshift-service-ca-operator leader election lease duration needs to be set to > 60 seconds to handle the downtime gracefully in SNO.

Recommended lease duration values to consider, as noted in https://github.com/openshift/enhancements/pull/832/files#diff-2e28754e69aa417e5b6d89e99e42f05bfb6330800fa823753383db1d170fbc2fR183: LeaseDuration=137s, RenewDeadline=107s, RetryPeriod=26s. These are the configurable values in k8s.io/client-go based leases, and controller-runtime exposes them. This gives us:
1. clock skew tolerance == 30s
2. kube-apiserver downtime tolerance == 78s
3. worst non-graceful lease reacquisition == 163s
4. worst graceful lease reacquisition == 26s
(A configuration sketch follows at the end of this report.)

Here is the trace of the events during the rollout: http://dell-r510-01.perf.lab.eng.rdu2.redhat.com/chaos/sno/openshift-service-ca-operator/cerberus_api_rollout_trace.json. The leader lease failures can be seen in the log: http://dell-r510-01.perf.lab.eng.rdu2.redhat.com/chaos/sno/openshift-service-ca-operator/openshift-service-ca-operator.log. Leader election could also be disabled entirely, given that there is no HA in SNO.

Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-07-19-192457

How reproducible:
Always

Steps to Reproduce:
1. Install a SNO cluster using the latest nightly payload.
2. Trigger a kube-apiserver rollout or outage lasting at least 60 seconds (a kube-apiserver rollout on a cluster built from a payload including https://github.com/openshift/cluster-kube-apiserver-operator/pull/1168 should take ~60 seconds):
   $ oc patch kubeapiserver/cluster --type merge -p '{"spec":{"forceRedeploymentReason":"ITERATIONX"}}'
   where X can be 1,2...n
3. Observe the state of openshift-service-ca-operator.

Actual results:
openshift-service-ca-operator goes through leader election and restarts.

Expected results:
openshift-service-ca-operator should handle the API rollout/outage gracefully.

Additional info:
Logs including must-gather: http://dell-r510-01.perf.lab.eng.rdu2.redhat.com/chaos/sno/openshift-service-ca-operator/
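For reference, a minimal sketch of how the recommended timings plug into a k8s.io/client-go based lease. The lock name and namespace match the operator's lease lock mentioned in the log above; the rest of the wiring (in-cluster config, hostname identity, callbacks) is illustrative and not the operator's actual code:

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // illustrative identity; any unique string works

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "service-ca-operator-lock",
			Namespace: "openshift-service-ca-operator",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock: lock,
		// Recommended SNO-tolerant values from openshift/enhancements#832:
		LeaseDuration: 137 * time.Second, // how long a lease is valid before others may claim it
		RenewDeadline: 107 * time.Second, // leader gives up if renewal hasn't succeeded in this window
		RetryPeriod:   26 * time.Second,  // spacing between acquire/renew attempts
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Print("acquired lease, starting controllers") },
			OnStoppedLeading: func() { log.Print("lost lease, shutting down") },
		},
	})
}

With these values, a ~60s kube-apiserver outage falls within the renew deadline, so the incumbent leader keeps its lease instead of restarting.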
Sprint review: we have a PR out.
Verified in a SNO env of version 4.9.0-0.nightly-2021-08-23-224104. Tried several times:
$ oc patch kubeapiserver/cluster --type merge -p '{"spec":{"forceRedeploymentReason":"'"`date`"'"}}'
Waited for kube-apiserver to finish the rollout while watching oc get po -n openshift-service-ca-operator; did not see the openshift-service-ca-operator pod restart. Also watched its logs:
$ oc logs -f service-ca-operator-69587858c6-mdcd6 -n openshift-service-ca-operator | grep lease
Did not see "failed to acquire lease openshift-service-ca-operator/service-ca-operator-lock".
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:3759