When I SSH to a master node and want to test or use the local apiserver, there is no kubeconfig present. Providing one would really help with debugging and recovery. It could also help recover access when a user deletes the installer's state directory and is left with a running cluster but no credentials.
Some of the upstream discussion is referenced in this comment: https://bugzilla.redhat.com/show_bug.cgi?id=1660273#c5
This bug is actively worked on.
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant.
The LifecycleStale keyword was removed because the bug got commented on recently. The bug assignee was notified.
Verification steps:

$ oc version
Client Version: 4.6.0-202009040605.p0-f2a4a03
Server Version: 4.6.0-0.nightly-2020-09-15-063156
Kubernetes Version: v1.19.0+35ab7c5

$ oc debug node/<master node>
sh-4.4# chroot /host
sh-4.4# pwd
/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs
sh-4.4# ls
lb-ext.kubeconfig  lb-int.kubeconfig  localhost-recovery.kubeconfig  localhost.kubeconfig

sh-4.4# export KUBECONFIG=`pwd`/localhost.kubeconfig
sh-4.4# oc get nodes
NAME                        STATUS   ROLES    AGE   VERSION
kewang1565-9n24f-master-0   Ready    master   14h   v1.19.0+35ab7c5
kewang1565-9n24f-master-1   Ready    master   14h   v1.19.0+35ab7c5
kewang1565-9n24f-master-2   Ready    master   14h   v1.19.0+35ab7c5
kewang1565-9n24f-worker-0   Ready    worker   14h   v1.19.0+35ab7c5
kewang1565-9n24f-worker-1   Ready    worker   14h   v1.19.0+35ab7c5
kewang1565-9n24f-worker-2   Ready    worker   14h   v1.19.0+35ab7c5

sh-4.4# export KUBECONFIG=`pwd`/localhost-recovery.kubeconfig
sh-4.4# oc get nodes
NAME                        STATUS   ROLES    AGE   VERSION
kewang1565-9n24f-master-0   Ready    master   14h   v1.19.0+35ab7c5
kewang1565-9n24f-master-1   Ready    master   14h   v1.19.0+35ab7c5
kewang1565-9n24f-master-2   Ready    master   14h   v1.19.0+35ab7c5
kewang1565-9n24f-worker-0   Ready    worker   14h   v1.19.0+35ab7c5
kewang1565-9n24f-worker-1   Ready    worker   14h   v1.19.0+35ab7c5
kewang1565-9n24f-worker-2   Ready    worker   14h   v1.19.0+35ab7c5

sh-4.4# export KUBECONFIG=`pwd`/lb-int.kubeconfig
sh-4.4# oc get nodes
NAME                        STATUS   ROLES    AGE   VERSION
kewang1565-9n24f-master-0   Ready    master   14h   v1.19.0+35ab7c5
kewang1565-9n24f-master-1   Ready    master   14h   v1.19.0+35ab7c5
kewang1565-9n24f-master-2   Ready    master   14h   v1.19.0+35ab7c5
kewang1565-9n24f-worker-0   Ready    worker   14h   v1.19.0+35ab7c5
kewang1565-9n24f-worker-1   Ready    worker   14h   v1.19.0+35ab7c5
kewang1565-9n24f-worker-2   Ready    worker   14h   v1.19.0+35ab7c5

sh-4.4# export KUBECONFIG=`pwd`/lb-ext.kubeconfig
sh-4.4# oc get nodes
NAME                        STATUS   ROLES    AGE   VERSION
kewang1565-9n24f-master-0   Ready    master   14h   v1.19.0+35ab7c5
kewang1565-9n24f-master-1   Ready    master   14h   v1.19.0+35ab7c5
kewang1565-9n24f-master-2   Ready    master   14h   v1.19.0+35ab7c5
kewang1565-9n24f-worker-0   Ready    worker   14h   v1.19.0+35ab7c5
kewang1565-9n24f-worker-1   Ready    worker   14h   v1.19.0+35ab7c5
kewang1565-9n24f-worker-2   Ready    worker   14h   v1.19.0+35ab7c5

All four kubeconfigs work as expected; moving the bug to VERIFIED.
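The four kubeconfig files differ mainly in which apiserver endpoint their `server:` field targets (localhost, the localhost recovery listener, the internal load balancer, or the external load balancer). A quick way to see which endpoint a given file points at is to read that field. The sketch below runs against a minimal hand-written sample file, not a real cluster kubeconfig; on a master you would point the same `grep` at the files under /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs.

```shell
#!/bin/sh
# Sketch: inspect which apiserver endpoint a kubeconfig targets.
# /tmp/sample.kubeconfig is an illustrative stand-in for one of the
# node kubeconfigs, e.g. localhost.kubeconfig.
cat > /tmp/sample.kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://localhost:6443
  name: localhost
EOF

# Print the server URL the kubeconfig would connect to.
grep 'server:' /tmp/sample.kubeconfig | awk '{print $2}'
```

Comparing this field across lb-ext.kubeconfig, lb-int.kubeconfig, localhost.kubeconfig, and localhost-recovery.kubeconfig makes it clear why all four produce identical `oc get nodes` output: they authenticate to the same cluster through different endpoints.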
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196