Description of problem: The etcd signer is not available after the bootstrap node is destroyed.

Expected results: The etcd signer remains available after the bootstrap node is destroyed; it is required for disaster recovery.
I have no problem with getting the signer image into the release payload for 4.0.0, but I'm very sceptical about actually wiring it up to run. Will that just happen via docs and admin intervention? If etcd is broken, does it matter whether the signer was part of the release payload or not? You certainly don't want it running during regular operations, since it would let anyone who wants access into the etcd cluster (or maybe it does some client authentication, I dunno).
Also in this space: https://github.com/coreos/kubecsr/pull/19
I think it's fair to say human intervention is required today in a DR scenario (I'm sure Derek will correct me if I'm wrong about this). I imagine one benefit of getting it into the release image (aside from consistency) is that it's important for disconnected installs.
Pull request is in flight here: https://github.com/openshift/installer/pull/1363

This does not add the signer, but instead adds the CA.
> This does not add the signer, but instead adds the CA. Wouldn't we want both the signer and the CA? I think we're going to need the signer referenced from the release image for disconnected installs anyway.
The kube-system namespace should contain ConfigMaps named etcd-signer-client, etcd-signer, etcd-ca-bundle, and etcd-client-ca-deprecated, which can be used to sign certificates for additional etcd members during disaster recovery operations (see the sketch below).
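A minimal sketch of how the presence of these ConfigMaps could be checked programmatically, assuming a recent client-go and a kubeconfig at the default ~/.kube/config path. The ConfigMap names come from the verification step above; the package layout, kubeconfig path, and output format are illustrative assumptions, not part of the product:

package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: kubeconfig lives at the default ~/.kube/config path.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("building client: %v", err)
	}

	// ConfigMaps the verification step expects in kube-system.
	expected := []string{
		"etcd-signer-client",
		"etcd-signer",
		"etcd-ca-bundle",
		"etcd-client-ca-deprecated",
	}
	for _, name := range expected {
		_, err := clientset.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("MISSING %s: %v\n", name, err)
			continue
		}
		fmt.Printf("found %s\n", name)
	}
}

The same check can of course be done with `oc get configmap -n kube-system`; the sketch just shows the equivalent API-level lookup.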
Please check whether this can be verified.
I will verify it after Bug 1698456 is fixed.
Verified it: the etcd signer is valid in disaster recovery (tested with a crashed etcd leader).
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758