Server Version: 4.5.0-0.nightly-2020-03-11-050258

After cert recovery testing, QA found the cluster working but saw an x509 error in the kube-apiserver log. During the cert recovery flow all certs were regenerated, but the cluster-policy-controller container did not reload the new client certificates, while the kube-controller-manager container did (both share the same kubeconfig).

must-gather.local.5401326841172208534/quay-io-openshift-release-dev-ocp-v4-0-art-dev-sha256-237daa2c76ddf92deea9f46b304797e62acfe8c9f76c534a863b1dc4e7ef509e/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-xxia03-7cblr-m-0.c.openshift-qe.internal/cluster-policy-controller/cluster-policy-controller/logs/current.log:582:
2020-03-11T13:04:43.72597951Z E0311 13:04:43.725876 1 leaderelection.go:331] error retrieving resource lock openshift-kube-controller-manager/cluster-policy-controller: Unauthorized

must-gather.local.5401326841172208534/quay-io-openshift-release-dev-ocp-v4-0-art-dev-sha256-237daa2c76ddf92deea9f46b304797e62acfe8c9f76c534a863b1dc4e7ef509e/namespaces/openshift-kube-apiserver/pods/kube-apiserver-xxia03-7cblr-m-0.c.openshift-qe.internal/kube-apiserver/kube-apiserver/logs/current.log:1658:
2020-03-11T13:04:43.725314803Z E0311 13:04:43.725256 1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
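For context on the failure mode: a client that loads its certificate once at startup keeps presenting the stale cert after rotation, which is what the Unauthorized / "certificate has expired" pair above shows. The sketch below is a minimal illustration of one common Go pattern for avoiding this (re-reading the key pair from disk on each TLS handshake via tls.Config.GetClientCertificate); it is not the actual fix that shipped in the payload, and the function name, file paths, and URL are placeholders:

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
    )

    // newReloadingClient returns an HTTP client whose TLS layer re-reads the
    // client certificate key pair from disk on every handshake, so a cert
    // rotated on disk (e.g. after cert recovery) is picked up without a
    // process restart. certFile/keyFile are placeholder paths.
    func newReloadingClient(certFile, keyFile string) *http.Client {
        tlsCfg := &tls.Config{
            // GetClientCertificate is called for each handshake that
            // requests a client cert, so rotation takes effect on the next
            // new connection instead of being frozen at startup.
            GetClientCertificate: func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
                cert, err := tls.LoadX509KeyPair(certFile, keyFile)
                if err != nil {
                    return nil, err
                }
                return &cert, nil
            },
        }
        return &http.Client{Transport: &http.Transport{TLSClientConfig: tlsCfg}}
    }

    func main() {
        client := newReloadingClient("/etc/kubernetes/client.crt", "/etc/kubernetes/client.key")
        resp, err := client.Get("https://kubernetes.default.svc/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println("status:", resp.Status)
    }

Note that already-established connections keep the certificate they were handshaked with; only new handshakes pick up the rotated cert, so a client must also close or recycle long-lived connections (or restart, as kube-controller-manager effectively did here) for the rotation to fully take effect.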
Confirmed with payload 4.5.0-0.nightly-2020-03-13-001017: the issue has been fixed.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409