This bug was initially created as a copy of Bug #1721586.

I am copying this bug because: we need to make sure we account for this on BYOR nodes, which may not be using the `worker` machineconfigpool. Hosts in that machineconfigpool will already apply the change, either when the MCD is running or when an upgrade playbook is run. For hosts not in that MCP, we need to pull the CA out of the cluster and update it when upgrading those hosts. It would also be nice to have a standalone playbook that performs the same update.

Description of problem:

`/etc/kubernetes/ca.crt` is not managed, yet looking at the default content it contains:

- root-ca (10y)
- admin-kubeconfig-signer (10y)
- kubelet-signer (1d)
- kube-control-plane-signer (1y)
- kube-apiserver-to-kubelet-signer (1y)
- kubelet-bootstrap-kubeconfig-signer (10y)

If `kube-apiserver-to-kubelet-signer` is rotated, `/etc/kubernetes/ca.crt` must be updated or logs will stop working.

I am not sure which of the CAs in this bundle is actually used, but if one other than `kube-apiserver-to-kubelet-signer` is also in use, then this recovery step (from https://docs.openshift.com/container-platform/4.1/disaster_recovery/scenario-3-expired-certs.html):

`oc get configmap kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator --template='{{ index .data "ca-bundle.crt" }}' > /etc/kubernetes/ca.crt`

is probably not a great way to solve it, as it overwrites the file and wipes the other CAs from the bundle. Also, when the cert is rotated in the normal flow (roughly every half year), logs will stop working.

Setting severity to urgent, as the suspected outcome is logs ceasing to work. Feel free to adjust if this proves not to be the case, but I have force-rotated the `kube-apiserver-to-kubelet-signer` and logs stopped working until I executed the recovery step above.
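As a sketch of what a safer standalone refresh could look like: instead of redirecting the configmap contents straight over `/etc/kubernetes/ca.crt`, the new CA could be merged into the existing bundle so the other signers are preserved. The configmap name and paths below come from the recovery step quoted above; the merge helper itself is my assumption, not anything the MCO or the playbooks currently do.

```shell
# merge_pem_bundles FILE... — concatenate PEM bundles on stdout, dropping
# duplicate certificate blocks, so a refreshed signer can be added to
# /etc/kubernetes/ca.crt without wiping the other CAs in the bundle.
merge_pem_bundles() {
  cat "$@" | awk '
    /-----BEGIN CERTIFICATE-----/ { cert = "" }
    { cert = cert $0 "\n" }
    /-----END CERTIFICATE-----/   { if (!(cert in seen)) { seen[cert] = 1; printf "%s", cert } }
  '
}
```

Hypothetical use on a host outside the worker MCP (requires cluster access; written to a temp file first so the live bundle is replaced atomically):

```shell
oc get configmap kube-apiserver-to-kubelet-client-ca \
  -n openshift-kube-apiserver-operator \
  --template='{{ index .data "ca-bundle.crt" }}' > /tmp/new-ca.crt
merge_pem_bundles /etc/kubernetes/ca.crt /tmp/new-ca.crt > /etc/kubernetes/ca.crt.new
mv /etc/kubernetes/ca.crt.new /etc/kubernetes/ca.crt
```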