Bug 1642149
| Field | Value |
|---|---|
| Summary: | project "default" sometimes not reflected in kubeconfig |
| Product: | OpenShift Container Platform |
| Component: | oc |
| Version: | 3.11.0 |
| Target Release: | 3.11.z |
| Status: | CLOSED NOTABUG |
| Severity: | unspecified |
| Priority: | unspecified |
| Reporter: | Aleksandar Kostadinov <akostadi> |
| Assignee: | Juan Vallejo <jvallejo> |
| QA Contact: | Xingxing Xia <xxia> |
| CC: | akostadi, aos-bugs, bmeng, cryan, hongli, jokerman, mmccomas, yufchang |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Type: | Bug |
| Last Closed: | 2018-12-04 22:31:44 UTC |
Could you post loglevel 8 output? Also, are you getting any "permission denied" errors when changing projects within the pod? How are you executing commands on the pod: `oc exec`? `oc rsh`? Can you replicate this with a newer client, such as oc v3.12? I was only able to replicate this issue by removing "write" permissions on my kubeconfig file before copying it to my pod.

Created attachment 1499650 [details]
level 8 log from `oc get pod`

You are right that it doesn't happen with the master-generated admin kubeconfig. It happens with a config file that has been generated like this:

> oc config set-credentials admin --client-certificate=/home/user/workdir/koTak-user/clcert20181031-11132-1mbkep5 --client-key=/home/user/workdir/koTak-user/clkey20181031-11132-75bi7e --embed-certs=true --server=https://host-8-244-186.host.centralci.eng.rdu2.redhat.com:8443 --config=/home/user/workdir/koTak-user/ose_admin.kubeconfig --insecure-skip-tls-verify=true
> oc config set-cluster default --server=https://host-8-244-186.host.centralci.eng.rdu2.redhat.com:8443 --config=/home/user/workdir/koTak-user/ose_admin.kubeconfig --insecure-skip-tls-verify=true
> oc config set-context default --cluster=default --user=admin --config=/home/user/workdir/koTak-user/ose_admin.kubeconfig --insecure-skip-tls-verify=true
> oc config use-context default --config=/home/user/workdir/koTak-user/ose_admin.kubeconfig --insecure-skip-tls-verify=true

See the config I attached earlier. In a context created this way the `namespace` element is missing, and `oc project default` does not set it for some reason. Here is the log:

> sh-4.2$ oc project default --config=/tmp/ose_admin.kubeconfig --loglevel=8
> I1031 21:00:00.622336 33 loader.go:357] Config loaded from file /tmp/ose_admin.kubeconfig
> I1031 21:00:00.624493 33 loader.go:357] Config loaded from file /tmp/ose_admin.kubeconfig
> Now using context named "default" on server "https://host-8.example.com:8443".

See the attached log of `oc get pod`.
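For reference, running `oc config set-context` without a `--namespace` flag, as in the commands above, produces a context stanza with no `namespace` key. A rough illustrative sketch of the resulting file (cluster/user entries and certificate data abbreviated; this is not the attached config itself):

```yaml
apiVersion: v1
kind: Config
current-context: default
contexts:
- name: default
  context:
    cluster: default
    user: admin
    # no "namespace" key here, because --namespace was not passed
    # to `oc config set-context`
```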
You can see that the pods from the project of the rsh pod are displayed.

> sh-4.2$ oc version
> oc v3.11.0-alpha.0+1ff2229-9
> kubernetes v1.10.0+b81c8f8
> features: Basic-Auth GSSAPI Kerberos SPNEGO
>
> Server https://172.30.0.1:443
> openshift v3.11.34
> kubernetes v1.11.0+d4cacc0

To run within the pod, I use `oc rsh`. Regarding a newer version, I don't see `openshift/origin:v3.12` existing. What tag do you suggest using?

Thanks for all the info, will try to replicate locally.

> To run within the pod, I use `oc rsh`. Regarding a newer version, I don't see `openshift/origin:v3.12` existing. What tag do you suggest using?

Try using origin:v4.0. For the client, there's no stable release yet, but you could try compiling this tag from source if you'd like to give that a try: https://github.com/openshift/origin/releases/tag/v4.0.0-alpha.0

Also, I noticed that you've attached log-level 8 output from `oc get pod`; however, it seems that all of the log output was printed to stderr, but only stdout was saved to the attached file.

Based on what I have seen locally, because (in this particular case) the name of your context ("default") is the same as the name of the namespace you wish to use, this branch [1] in the `project` command is taken. That branch checks whether you've provided a context name as an argument to `oc project`, rather than a namespace name. If a context is successfully matched based on your argument, the namespace is left intact and the config is not modified in any way. By naming your context something different, such as "default2", we are able to hit the second branch [2], check project access, and ultimately modify your kubeconfig with the namespace "default". Note that for this to work, however, you must have access to project "default".
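The two branches described above can be sketched as follows. This is a simplified illustration of the decision, not the actual origin code; the function name and the context map are made up for the example:

```go
package main

import "fmt"

// switchProject sketches the branch logic of `oc project <arg>`:
// if the argument matches an existing context name, oc switches to that
// context as-is and leaves the kubeconfig untouched (branch [1]); only
// otherwise is the argument treated as a namespace and written into the
// current context after an access check (branch [2]).
func switchProject(arg string, contexts map[string]string) string {
	if ns, ok := contexts[arg]; ok {
		// Branch [1]: arg names an existing context. Its namespace field,
		// possibly empty, is left intact; the config is not modified.
		return fmt.Sprintf("Now using context named %q (namespace %q)", arg, ns)
	}
	// Branch [2]: arg is treated as a project name; the real command would
	// verify access and then set namespace=arg on the current context.
	return fmt.Sprintf("Now using project %q", arg)
}

func main() {
	// A context named "default" with no namespace, as in the reported config.
	contexts := map[string]string{"default": ""}
	fmt.Println(switchProject("default", contexts))  // hits branch [1]
	fmt.Println(switchProject("default2", contexts)) // hits branch [2]
}
```

This is why renaming the context to "default2" (or "generated", as tried later in the thread) changes the behaviour: the argument no longer matches a context name, so the namespace-setting branch runs.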
Based on the config file you have linked (unless you've taken out certificate and key data), you'll be logged in as "system:anonymous" and won't have access to project "default" anyway, but in that case you'll get an error explicitly stating so.

1. https://github.com/openshift/origin/blob/master/pkg/oc/cli/project/project.go#L207
2. https://github.com/openshift/origin/blob/master/pkg/oc/cli/project/project.go#L212

I didn't remove any data from the kubeconfig. The file was created with the commands listed in comment 2. Maybe we shouldn't allow a context to be created without a namespace element. What is the point of having a context without a namespace specified within it? Maybe the issue is with how the commands generate a kubeconfig?

By the way, the kubeconfig logged me into my test environment as system:admin, not anonymous. But because certificates generated on my test cluster will not match yours, this can't happen on your cluster.

Regarding the log level: my bad, I obviously didn't use the proper pipe (`|&`). Will update the log tomorrow.

> I didn't remove any data from the kubeconfig. The file was created with the commands listed in comment 2. Maybe we shouldn't allow a context to be created without a namespace element. What is the point of having a context without a namespace specified within it?

Agree, will have a look.

> Maybe the issue is with how the commands generate a kubeconfig?

Could you try generating the kubeconfig again, but setting the context name to something other than "default"?

If I change the name to `generated`, then `oc project default` works as expected:

* a new context is created
* the context has a `namespace: default` element added (unlike the situation with context `default`)

I also tried oc v4.0.0-0.49.0 and it has the same behaviour as 3.11. And you are right: when the context is `default`, I see:

> Now using context named "default" on server "https://host-8-249-84.host.centralci.eng.rdu2.redhat.com:8443".
When the context is "generated", I see:

> Now using project "default" on server "https://host-8-249-84.host.centralci.eng.rdu2.redhat.com:8443".

So IMO `oc project whatever` should not switch to a context. We have a separate command for switching contexts. This is unexpected behaviour, and as a user I don't like to be surprised. Additionally, I also don't see how generating a context without `namespace` helps. On the other hand, the issue doesn't seem critical, given that multiple-context usage has many other rough edges. See bug 1515505.

> So IMO `oc project whatever` should not switch to a context. We have a separate command for switching contexts.

Believe it or not, it has been possible for quite some time to pass a "fully qualified" context name to `oc project` :) It's just that in this particular case, the "fully qualified" context name happens to match a namespace's name as well. I don't believe we will remove this functionality, in the spirit of backwards compatibility with that command.

> Additionally, I also don't see how generating a context without `namespace` helps.

I think that could very likely be a bug when setting a context that shares the same name as its namespace field. I'll have a look at this.

Based on this line from comment 2:

> oc config set-context default --cluster=default --user=admin --config=/home/user/workdir/koTak-user/ose_admin.kubeconfig --insecure-skip-tls-verify=true

You'll need to specify a namespace via the `--namespace` flag in order to create a context with a namespace field set. For example:

```
oc config set-context default --cluster=default --user=admin --config=/home/user/workdir/koTak-user/ose_admin.kubeconfig --insecure-skip-tls-verify=true --namespace=default
```
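With `--namespace=default` added, the generated context stanza would carry the namespace field that `oc get pod` needs. An illustrative sketch of the resulting fragment (not the actual generated file):

```yaml
contexts:
- name: default
  context:
    cluster: default
    user: admin
    namespace: default   # written because --namespace=default was passed
```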
Created attachment 1496785 [details]
/root/.kube/config

Description of problem:
*Sometimes* (it happens reproducibly with `/root/.kube/config` from the first master) the command `oc project default` has no effect: the `namespace` element is not added to the context. This is a problem when running `oc` inside a container where a service account secret is mounted (which is the default).

Version-Release number of selected component (if applicable): oc v3.11.28

How reproducible: always

Steps to Reproduce:
1. have an OpenShift cluster
2. have a project
3. oc run 311 --image=openshift/origin:v3.11 --command=true sleep 36000
4. copy /root/.kube/config from master to the running pod
5. execute on pod: oc project default --config=/tmp/admin.kubeconfig
6. execute on pod: oc get pod --config=/tmp/admin.kubeconfig

Actual results: the list of pods in the user's project is displayed

Expected results: the list of pods in project `default` is displayed