Bug 1826676 - OC status shows wrong project name
Summary: OC status shows wrong project name
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 4.3.z
Hardware: Unspecified
OS: Linux
Target Milestone: ---
Target Release: 4.5.0
Assignee: Jan Chaloupka
QA Contact: zhou ying
Depends On:
Blocks: 1838614
Reported: 2020-04-22 10:04 UTC by Noam Manos
Modified: 2020-07-13 17:30 UTC
CC: 7 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1835011 1838614 (view as bug list)
Last Closed: 2020-07-13 17:30:08 UTC
Target Upstream Version:

Attachments

System ID Private Priority Status Summary Last Updated
Github openshift oc pull 413 0 None closed bug 1826676: oc status: check if the current project exists before printing status 2020-08-09 11:30:18 UTC
Red Hat Product Errata RHBA-2020:2409 0 None None None 2020-07-13 17:30:22 UTC

Description Noam Manos 2020-04-22 10:04:14 UTC
Description of problem:
When running "oc status", it shows the project name from cached/environment data instead of the default project from the current kubeconfig.
This can lead to executing oc actions against a non-existent project, which will fail altogether.

Version-Release number of selected component (if applicable):
Client Version: 4.3.12
Server Version: 4.2.0
Kubernetes Version: v1.14.6+2e5ed54

How reproducible:

Steps to Reproduce:
1) Login to an existing POD inside "Cluster_1" - "Project_A".
2) export KUBECONFIG="Cluster_2" (Cluster_2 does not include "Project_A")
3) Run: oc status 

Does it print the default project in Cluster_2, or the wrong project name?
In project Project_A on server https://api.Cluster_2.openshift.com:6443
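The expected behavior can be modeled as follows (a minimal sketch with hypothetical helper names, not oc's actual Go code): oc should resolve the project from the current context's `namespace` field in the kubeconfig, falling back to "default" when no namespace is set, rather than reusing a value cached from another cluster's config.

```python
def project_from_kubeconfig(cfg):
    """Resolve the project oc should report from a parsed kubeconfig dict.

    Expected behavior: use the current context's namespace, falling
    back to "default" when the context sets no namespace.
    """
    current = cfg.get("current-context")
    for entry in cfg.get("contexts", []):
        if entry.get("name") == current:
            return entry.get("context", {}).get("namespace", "default")
    return "default"


# A context without a namespace field (like Cluster_2's kubeconfig in the
# reproduction steps) should resolve to "default".
cfg = {
    "current-context": "admin",
    "contexts": [
        {"name": "admin", "context": {"cluster": "nmanos-cl1", "user": "admin"}},
    ],
}
print(project_from_kubeconfig(cfg))  # -> default
```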

Actual results when running it inside a pod in "Cluster_1" - "Project_A":

$ oc config view

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://api.nmanos-cl1.devcluster.openshift.com:6443
  name: nmanos-cl1
contexts:
- context:
    cluster: nmanos-cl1
    user: admin
  name: admin
current-context: admin
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

$ oc status

In project jenkins-csb-skynet on server https://api.nmanos-cl1.devcluster.openshift.com:6443

You have no services, deployment configs, or build configs.
Run 'oc new-app' to create an application.

$ oc project jenkins-csb-skynet

error: A project named "jenkins-csb-skynet" does not exist on "https://api.nmanos-cl1.devcluster.openshift.com:6443".

Your projects are:
* default
* kube-node-lease
* kube-public
* kube-system
* openshift
* openshift-apiserver
* openshift-apiserver-operator
* openshift-authentication
* openshift-authentication-operator
* openshift-cloud-credential-operator
* openshift-cluster-machine-approver
* openshift-cluster-node-tuning-operator
* openshift-cluster-samples-operator
* openshift-cluster-storage-operator
* openshift-cluster-version
* openshift-config
* openshift-config-managed
* openshift-console
* openshift-console-operator
* openshift-controller-manager
* openshift-controller-manager-operator
* openshift-dns
* openshift-dns-operator
* openshift-etcd
* openshift-image-registry
* openshift-infra
* openshift-ingress
* openshift-ingress-operator
* openshift-insights
* openshift-kni-infra
* openshift-kube-apiserver
* openshift-kube-apiserver-operator
* openshift-kube-controller-manager
* openshift-kube-controller-manager-operator
* openshift-kube-scheduler
* openshift-kube-scheduler-operator
* openshift-machine-api
* openshift-machine-config-operator
* openshift-marketplace
* openshift-monitoring
* openshift-multus
* openshift-network-operator
* openshift-node
* openshift-openstack-infra
* openshift-operator-lifecycle-manager
* openshift-operators
* openshift-sdn
* openshift-service-ca
* openshift-service-ca-operator
* openshift-service-catalog-apiserver-operator
* openshift-service-catalog-controller-manager-operator

$ oc projects | grep jenkins

$ oc get ns | grep jenkins

Expected results - oc status should print:

In project default on server https://api.nmanos-cl1.devcluster.openshift.com:6443

Comment 1 Jan Chaloupka 2020-05-11 10:42:13 UTC
Hi Noam,

can you help me with the steps to reproduce the issue? I am not sure what you mean by `Login to an existing POD inside "Cluster_1" - "Project_A".` Are you saying you have two clusters, Cluster_1 and Cluster_2, running, and you are switching from Cluster_1 to Cluster_2 the following way?

1. KUBECONFIG=Cluster_1
2. oc project jenkins-csb-skynet
3. KUBECONFIG=Cluster_2
4. oc status

Are you expecting oc status not to display the status of jenkins-csb-skynet in Cluster_2 (since the project does not exist in that cluster)?

Comment 2 Jan Chaloupka 2020-05-11 11:30:13 UTC
I can reproduce the same response without changing the cluster when following these steps:

1. oc create ns jenkins-csb-skynet
2. oc project jenkins-csb-skynet
3. oc status
4. oc delete ns jenkins-csb-skynet
5. oc status
6. oc project

Question is what to expect when the project/namespace is removed but kubeconfig's context still refers to non-existing namespace.

Snippet from my kubeconfig:

- context:
    cluster: jchaloup-20200511
    user: admin
  name: admin
- context:
    cluster: api-jchaloup-20200511-group-b-devcluster-openshift-com:6443
    namespace: jenkins-csb-skynet
    user: system:admin
  name: jenkins-csb-skynet/api-jchaloup-20200511-group-b-devcluster-openshift-com:6443/system:admin

Contexts do not get updated when the jenkins-csb-skynet namespace gets deleted. Given that projects are deleted via `oc/kubectl delete ns ...` or an equivalent API request, it's impossible to update the list of contexts every time a namespace is deleted. Any caller with sufficient permissions can delete the namespace, which makes it impossible to track the location of every kubeconfig and update its contexts accordingly.

Switching between two clusters and running oc status/oc project is just another instance of the situation.
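The fix that eventually landed (PR 413 in the tracker links above: "check if the current project exists before printing status") can be modeled roughly as below. This is a hypothetical Python sketch of the logic, not the actual Go implementation in oc: before printing the status header, verify the configured project against the projects visible to the user, and fail with an error otherwise.

```python
def status_line(current_project, visible_projects, server):
    """Model of the fixed `oc status` behavior: verify the configured
    project is visible before printing the status header."""
    if current_project not in visible_projects:
        raise ValueError(
            f'you do not have rights to view project "{current_project}" '
            "specified in your config or the project doesn't exist"
        )
    return f"In project {current_project} on server {server}"


print(status_line("default", ["default", "openshift"], "https://api.example.com:6443"))
# -> In project default on server https://api.example.com:6443
```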

Comment 3 Jan Chaloupka 2020-05-11 11:38:20 UTC
Noam, can you please confirm if my understanding of the reported issue is correct? If so, we can print an error message when running oc status to inform the user that the project/namespace no longer exists, as oc project does.

Comment 4 Noam Manos 2020-05-11 12:04:53 UTC
Hi Jan,
Your assumption in comment 2 is probably the same issue: the oc client doesn't actually check the cluster via the API.

Regarding your first comment, here are more details:

1. Login to an existing POD inside "Cluster_1" - "jenkins-csb-skynet" (might not be required to reproduce bug)
2. KUBECONFIG=Cluster_1
3. oc project jenkins-csb-skynet
   # Now using project "jenkins-csb-skynet" on server "".

4. Make sure kubeconfig of Cluster_2 does NOT have a default namespace. 
   For example, remove "namespace: default" from kubeconfig of Cluster_2

5. KUBECONFIG=Cluster_2
6. oc status
   # In project jenkins-csb-skynet on server https://api.default-cl1.devcluster.openshift.com:6443

This ^ is wrong info - the "jenkins-csb-skynet" project does NOT exist on Cluster_2.

Comment 5 Noam Manos 2020-05-11 12:07:46 UTC
Also relevant for oc version 4.4.3:

Client Version: version.Info{Major:"", Minor:"", GitVersion:"v4.2.0-alpha.0-4-g38b0f09", GitCommit:"38b0f09", GitTreeState:"clean", BuildDate:"2019-08-12T19:05:43Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.1", GitCommit:"b9b84e0", GitTreeState:"clean", BuildDate:"2020-04-26T20:16:35Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
OpenShift Version: 4.4.3

Comment 6 Jan Chaloupka 2020-05-12 08:01:00 UTC
Thanks Noam for the validation.

Comment 11 zhou ying 2020-05-15 05:44:58 UTC
Confirmed with the latest oc; can't reproduce the issue now:

[root@dhcp-140-138 roottest]# oc version 
Client Version: 4.5.0-202005140917-9547330
Server Version: 4.5.0-0.nightly-2020-05-14-231228
Kubernetes Version: v1.18.2

[root@dhcp-140-138 roottest]# oc get project --kubeconfig=/root/.kube/config 
zhouyt                  Active
[root@dhcp-140-138 roottest]# oc status
In project ztest on server https://api.kewang1551.qe.gcp.devcluster.openshift.com:6443

You have no services, deployment configs, or build configs.
Run 'oc new-app' to create an application.
[root@dhcp-140-138 roottest]# oc status --kubeconfig=/root/.kube/config
In project zhouyt on server https://api.yz-5141.qe.devcluster.openshift.com:6443

You have no services, deployment configs, or build configs.

Comment 12 Jan Chaloupka 2020-05-18 10:22:58 UTC
`oc status` command is expected to return the following error:

error: the project "XXX" specified in your config does not exist.

Zhou Ying, can you please re-verify the fix one more time? It's sufficient to follow https://bugzilla.redhat.com/show_bug.cgi?id=1826676#c2.

Comment 13 zhou ying 2020-05-25 06:08:12 UTC
[root@dhcp-140-138 roottest]# oc version -o yaml
clientVersion:
  buildDate: "2020-05-23T15:25:26Z"
  compiler: gc
  gitCommit: 44354e2c9621e62b46d1854fd2d868f46fcdffff
  gitTreeState: clean
  gitVersion: 4.5.0-202005231517-44354e2
  goVersion: go1.13.4
  major: ""
  minor: ""
  platform: linux/amd64

[root@dhcp-140-138 roottest]# oc create ns jenkins-csb-skynet
namespace/jenkins-csb-skynet created
[root@dhcp-140-138 roottest]# oc project jenkins-csb-skynet
Now using project "jenkins-csb-skynet" on server "https://api.xxx.openshift.com:6443".
[root@dhcp-140-138 roottest]# oc status
In project jenkins-csb-skynet on server https://api.xxx.openshift.com:6443

You have no services, deployment configs, or build configs.
Run 'oc new-app' to create an application.

[root@dhcp-140-138 roottest]# oc delete project jenkins-csb-skynet
project.project.openshift.io "jenkins-csb-skynet" deleted
[root@dhcp-140-138 roottest]# oc status
error: you do not have rights to view project "jenkins-csb-skynet" specified in your config or the project doesn't exist
[root@dhcp-140-138 roottest]# oc project
error: you do not have rights to view project "jenkins-csb-skynet" specified in your config or the project doesn't exist

Comment 14 Jan Chaloupka 2020-05-25 08:38:43 UTC
Looks good now. Thank you!!!

Comment 15 errata-xmlrpc 2020-07-13 17:30:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

