Hi Stefan and Lukasz, I saw https://github.com/openshift/enhancements/pull/131/files#diff-29a58870b4078595bb0b7d5a2a3bee18R279 : "encryption-config ... mounted via host mount as ... in the kube-apiserver pod" and "A restore must put ... the backup in place ... before starting up kube-apiserver". I have a question about the restore: for etcd restore there is a documented procedure, https://docs.openshift.com/container-platform/4.2/backup_and_restore/disaster_recovery/scenario-2-restoring-cluster-state.html , but for an encryption-config restore, what are the right steps to do the host mount into the static pods? Modify /etc/kubernetes/manifests/kube-apiserver-pod.yaml on each master? Or modify /etc/kubernetes/static-pod-resources/kube-apiserver-pod-$LATEST_REVISION/kube-apiserver-pod.yaml? Or something else? Thanks.
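For context on what I'm asking, here is a rough sketch of the kind of manual restore I have in mind, run on each master host. The backup location /root/backup/encryption-config and the static-pod-resources directory layout below are assumptions on my part, not confirmed or supported steps:

  # 1. Find which host path the running kube-apiserver static pod mounts for the encryption config.
  grep -B2 -A6 'encryption-config' /etc/kubernetes/manifests/kube-apiserver-pod.yaml

  # 2. Work out the latest revision directory (assumed naming: kube-apiserver-pod-<revision>).
  LATEST_REVISION=$(ls -d /etc/kubernetes/static-pod-resources/kube-apiserver-pod-* | sed 's/.*-//' | sort -n | tail -1)

  # 3. Copy the backed-up file onto the host path found in step 1 (the exact path here is an assumption).
  cp /root/backup/encryption-config \
    "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-${LATEST_REVISION}/secrets/encryption-config/encryption-config"

  # 4. Force the static pod to restart so it picks up the restored file,
  #    e.g. by temporarily moving the manifest out of the manifests directory.
  mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp/
  sleep 30
  mv /tmp/kube-apiserver-pod.yaml /etc/kubernetes/manifests/

Is that roughly the intended flow, or should the manifest itself be edited instead?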
Tried a 4.3.0-0.nightly-2020-01-06-185654 env twice: one run did not hit the above issue, the other did. For the run that hit it, I restarted the pods with: oc delete po router-default-6b44978bc4-mrslh router-default-6b44978bc4-z6st7 -n openshift-ingress . After waiting several minutes, the issue was gone.
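In case it helps others hitting this, the same workaround can be applied without copying the exact pod names, assuming (as the pod names above suggest) that all router pods live in openshift-ingress and come from the router-default deployment:

  # delete every pod in the namespace and let the deployment recreate them
  oc -n openshift-ingress delete pods --all

  # or, on a recent enough oc, trigger a rolling restart of the deployment
  oc -n openshift-ingress rollout restart deployment/router-default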
Sam, I filed a doc bug to track this workaround: https://bugzilla.redhat.com/show_bug.cgi?id=1788895
Closing this for now, because the documentation update (https://bugzilla.redhat.com/show_bug.cgi?id=1788895) is an appropriate fix. We can consider a backport if we work out what, if anything, we can do to fix it on the master bug.