Fresh install of 3.9.14 in HA mode (disconnected). I ssh into master01.ocp.nicknach.net, run 'oc get nodes', and everything is as expected. I log out. Now I point a browser at https://master01.ocp.nicknach.net:9090 and log in to Cockpit as the 'root' user. I then ssh into master01.ocp.nicknach.net again, only this time my kube config is busted and I cannot use 'oc' (see output below).

To mitigate this, you have to log back in as system:admin using these two lines:

    export KUBECONFIG=/etc/origin/master/admin.kubeconfig
    oc login -u system:admin

What exactly is Cockpit doing to the 'root' user's kube config? Is it corrupting or modifying these files somehow as root? This has the potential to be a major security flaw. What other files is Cockpit allowed to change as root?

    [root@master01 ~]# oc get nodes --loglevel=10
    I0417 07:28:12.053471   97414 factory_object_mapping.go:83] Unable to get a discovery client to find server resources, falling back to hardcoded types: Missing or incomplete configuration info.  Please login or point to an existing, complete config file:
      1. Via the command-line flag --config
      2. Via the KUBECONFIG environment variable
      3. In your home directory as ~/.kube/config
    To view or setup config directly use the 'config' command.
    F0417 07:28:12.054082   97414 helpers.go:119] error: Missing or incomplete configuration info.  Please login or point to an existing, complete config file:
      1. Via the command-line flag --config
      2. Via the KUBECONFIG environment variable
      3. In your home directory as ~/.kube/config
    To view or setup config directly use the 'config' command.
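One way to confirm whether the Cockpit login is actually rewriting root's kube config is to snapshot the file before logging in and compare checksums afterwards. This is a diagnostic sketch, not part of the original report; the default CFG path (root's ~/.kube/config) is an assumption and can be overridden via the environment:

```shell
#!/bin/sh
# Snapshot the kube config before logging in to Cockpit, then compare
# afterwards to see whether (and how) the file was modified.
CFG="${CFG:-$HOME/.kube/config}"   # assumed location of root's kube config
SNAP="$(mktemp)"

cp "$CFG" "$SNAP"                              # byte-for-byte backup
BEFORE="$(md5sum "$CFG" | cut -d' ' -f1)"

# ... log in to Cockpit at https://master01.ocp.nicknach.net:9090 here ...

AFTER="$(md5sum "$CFG" | cut -d' ' -f1)"
if [ "$BEFORE" != "$AFTER" ]; then
    echo "kube config changed; diff against the snapshot:"
    diff -u "$SNAP" "$CFG"
else
    echo "kube config untouched"
fi
```

If the file did change, the diff shows exactly which stanzas were rewritten, and the snapshot doubles as a backup to restore from without falling back to admin.kubeconfig.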
openshift v3.9.22
kubernetes v1.9.1+a0ce1bc657

    # rpm -qa | grep cockpit
    cockpit-system-160-3.el7.noarch
    cockpit-bridge-160-3.el7.x86_64
    cockpit-docker-160-3.el7.x86_64
    cockpit-ws-160-3.el7.x86_64
    cockpit-kubernetes-160-3.el7.x86_64

Checked on an OCP HA environment with the above versions: after logging in to Cockpit as "root" and sshing into the first master again, I could run oc commands normally. Could not reproduce the bug.
So this is fixed in 3.9.22? I've had multiple installs replicate this behavior on 3.9.14.
Is it possible to share one of the environments that reproduces the issue?
> ssh into master01.ocp.nicknach.net, 'oc get nodes' and everything is as expected.

Just to be clear, did you also do that as root? Could it be that you did this ssh as a normal user, which has a valid ~/.kube/config, while the root user doesn't?

> Now i point a browser to https://master01.ocp.nicknach.net:9090 and login to cockpint as 'root' user.

What exact steps did you take there? In particular, did you get a "Couldn't connect to server" error and use the "Troubleshoot" button? That would be one step that explicitly logs into the cluster again and thus changes your kube config.
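To help answer the first question, it may be worth inventorying the kube configs on the master to see which users actually have one and when each was last written. A small sketch using only coreutils; the path list is an assumption (root plus home directories) and can be overridden:

```shell
#!/bin/sh
# List candidate kube configs with size and mtime, so it is obvious
# which users have a config and when each file was last modified.
for cfg in ${CFG_PATHS:-/root/.kube/config /home/*/.kube/config}; do
    if [ -f "$cfg" ]; then
        printf '%s: %s bytes, modified %s\n' \
            "$cfg" "$(stat -c %s "$cfg")" "$(stat -c %y "$cfg")"
    else
        printf '%s: missing\n' "$cfg"
    fi
done
```

A modification time on /root/.kube/config that postdates the Cockpit login would point at Cockpit (or its troubleshoot flow) as the writer; a missing or stale root config would instead support the normal-user-vs-root explanation.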