Bug 1906121

Summary: [oc] After new-project creation, the kubeconfig file does not set the project
Product: OpenShift Container Platform
Reporter: RamaKasturi <knarra>
Component: oc
Assignee: Maciej Szulik <maszulik>
Status: CLOSED ERRATA
QA Contact: RamaKasturi <knarra>
Severity: high
Priority: high
Version: 4.7
CC: akostadi, aos-bugs, jhou, jokerman, lxia, mfojtik, sasha, somalley, yprokule
Target Milestone: ---
Keywords: Regression, TestBlocker
Target Release: 4.7.0
Hardware: Unspecified
OS: Unspecified
Doc Type: No Doc Update
Last Closed: 2021-02-24 15:41:51 UTC
Type: Bug

Description RamaKasturi 2020-12-09 17:36:21 UTC
Description of problem:
Always hits the error: cannot create resource "pods" in API group "" in the namespace "aos-qe-ci".

On further discussion, it looks like when a service account (SA) token is mounted, oc 4.7 reads the SA's default in-cluster config instead of the kubeconfig passed via the CLI --kubeconfig option. This bug was reported and fixed in the past, and has now reappeared.
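The expected kubeconfig resolution order can be illustrated with a small sketch. This is a simplified model of the usual kubectl/client-go precedence rules, not oc's actual code: an explicit --kubeconfig flag must win over the mounted service-account credentials.

```python
def resolve_kubeconfig(flag_path=None, env_path=None, default_path=None,
                       in_cluster_available=False):
    """Simplified model of kubeconfig precedence: an explicit --kubeconfig
    flag wins over $KUBECONFIG, the default path, and in-cluster
    service-account credentials. The 4.7 regression behaved as if the
    in-cluster credentials won even when --kubeconfig was given.
    """
    if flag_path:                 # --kubeconfig=<path>
        return ("file", flag_path)
    if env_path:                  # $KUBECONFIG
        return ("file", env_path)
    if default_path:              # ~/.kube/config
        return ("file", default_path)
    if in_cluster_available:      # mounted SA token in the pod
        return ("in-cluster", None)
    return (None, None)
```

Inside a pod with a mounted SA token, `resolve_kubeconfig(flag_path="/tmp/testuser-config", in_cluster_available=True)` should resolve to the explicit file, never to the SA credentials.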

Version-Release number of selected component (if applicable):
clientVersion:
  buildDate: "2020-12-09T12:16:57Z"
  compiler: gc
  gitCommit: 0229219f5520aa9e9e4946cf15ef5b2bc0b8abe2
  gitTreeState: clean
  gitVersion: 4.7.0-202012091125.p0-0229219
  goVersion: go1.15.2
  major: ""
  minor: ""
  platform: linux/amd64

How reproducible:
Always

Steps to Reproduce:
1. Run the test suite via automation; the bug is hit every time.

Actual results:
oc create -f 10.json --kubeconfig=/home/jenkins/ws/workspace/Runner-v3/workdir/ocp4_testuser-22.kubeconfig

STDERR:
Error from server (Forbidden): error when creating "10.json": pods is forbidden: User "testuser-22" cannot create resource "pods" in API group "" in the namespace "aos-qe-ci"

Expected results:
The pod should be created successfully; no Forbidden error should occur.

Additional info:
The same cases work fine with a 4.6 client; no such error is seen.

Comment 2 Aleksandar Kostadinov 2020-12-09 22:58:33 UTC
Hi Sally, the steps to reproduce manually are:
1. Create a pod in the cluster with a secret mounted (the service account secret should already be mounted by default).
2. Copy a kubeconfig for *another* set of credentials and project name into the pod.
3. Inside the pod, run `oc create -f pod.yaml --kubeconfig /tmp/testuser-config` with oc 4.7.
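For step 1, a minimal pod manifest along these lines should work (the name and image are placeholders; `automountServiceAccountToken` is true by default, so the SA token is mounted without any extra configuration):

```yaml
# Hypothetical minimal pod for the reproduction; any image with oc 4.7
# available works. The SA token is auto-mounted by default.
apiVersion: v1
kind: Pod
metadata:
  name: repro-pod
spec:
  containers:
  - name: tools
    image: registry.example.com/openshift-client:4.7   # placeholder image
    command: ["sleep", "infinity"]
```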

Comment 4 RamaKasturi 2020-12-10 07:19:32 UTC
Adding the TestBlocker keyword as this is blocking all automated tests, thanks!

Comment 5 Maciej Szulik 2020-12-10 09:32:46 UTC
I see where the problem is; thanks, Rama, for the detailed explanation. It's related to new-project and the --kubeconfig flag.

Comment 7 RamaKasturi 2020-12-11 19:06:58 UTC
Verified with the payload below; the issue no longer exists, i.e. after new-project the kubeconfig file sets the project.

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-12-11-135127]$ ./oc version -o yaml
clientVersion:
  buildDate: "2020-12-11T02:26:19Z"
  compiler: gc
  gitCommit: 4ebfe9cad4c3b7286e1a3d722c502e201b1a8a2b
  gitTreeState: clean
  gitVersion: 4.7.0-202012110053.p0-4ebfe9c
  goVersion: go1.15.2
  major: ""
  minor: ""
  platform: linux/amd64
openshiftVersion: 4.7.0-0.nightly-2020-12-11-135127
releaseClientVersion: 4.7.0-0.nightly-2020-12-11-135127
serverVersion:
  buildDate: "2020-11-25T00:18:44Z"
  compiler: gc
  gitCommit: ad738ba548b6d6b5cd2e83351951ccd7019afa4c
  gitTreeState: clean
  gitVersion: v1.19.2+ad738ba
  goVersion: go1.15.2
  major: "1"
  minor: "19"
  platform: linux/amd64

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-12-11-135127]$ ./oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2020-12-11-135127   True        False         17m     Cluster version is 4.7.0-0.nightly-2020-12-11-135127


[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-12-11-135127]$ ./oc login -u testuser-3 -p vu36laYhVeiE --server https://api.knarra1211.qe.devcluster.openshift.com:6443 --insecure-skip-tls-verify --kubeconfig /tmp/ocp_testuser3.kubeconfig
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-12-11-135127]$ ./oc new-project testuser-3-proj --kubeconfig /tmp/ocp_testuser3.kubeconfig 
Now using project "testuser-3-proj" on server "https://api.knarra1211.qe.devcluster.openshift.com:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-12-11-135127]$ ./oc config view --kubeconfig /tmp/ocp_testuser3.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://api.knarra1211.qe.devcluster.openshift.com:6443
  name: api-knarra1211-qe-devcluster-openshift-com:6443
contexts:
- context:
    cluster: api-knarra1211-qe-devcluster-openshift-com:6443
    user: testuser-3/api-knarra1211-qe-devcluster-openshift-com:6443
  name: /api-knarra1211-qe-devcluster-openshift-com:6443/testuser-3
- context:
    cluster: api-knarra1211-qe-devcluster-openshift-com:6443
    namespace: testuser-3-proj
    user: testuser-3/api-knarra1211-qe-devcluster-openshift-com:6443
  name: testuser-3-proj/api-knarra1211-qe-devcluster-openshift-com:6443/testuser-3
current-context: testuser-3-proj/api-knarra1211-qe-devcluster-openshift-com:6443/testuser-3
kind: Config
preferences: {}
users:
- name: testuser-3/api-knarra1211-qe-devcluster-openshift-com:6443
  user:
    token: REDACTED
[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-12-11-135127]$ ./oc get pods -o wide
No resources found in default namespace.

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-12-11-135127]$ ./oc create -f /tmp/pod.yaml 
pod/mypod-constrained created
[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-12-11-135127]$ ./oc get pods -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
mypod-constrained   1/1     Running   0          6s    10.129.2.31   ip-10-0-135-33.us-east-2.compute.internal   <none>           <none>


Based on the above, moving the bug to the verified state.
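The verification above (checking that `oc config view` shows a `namespace` on the current context) can be sketched as a small check over a parsed kubeconfig. The sample data below is hypothetical, loosely modeled on the `oc config view` output above with shortened names:

```python
def current_namespace(kubeconfig):
    """Return the namespace of the kubeconfig's current context, or None
    if the context does not set one (the buggy pre-fix behavior)."""
    current = kubeconfig.get("current-context")
    for ctx in kubeconfig.get("contexts", []):
        if ctx.get("name") == current:
            return ctx.get("context", {}).get("namespace")
    return None

# Hypothetical parsed kubeconfig resembling the output above.
cfg = {
    "current-context": "testuser-3-proj/api-example:6443/testuser-3",
    "contexts": [
        {"name": "/api-example:6443/testuser-3",
         "context": {"cluster": "api-example:6443",
                     "user": "testuser-3/api-example:6443"}},
        {"name": "testuser-3-proj/api-example:6443/testuser-3",
         "context": {"cluster": "api-example:6443",
                     "namespace": "testuser-3-proj",
                     "user": "testuser-3/api-example:6443"}},
    ],
}
```

After a successful `oc new-project`, `current_namespace(cfg)` returns the new project name; with the regression, the current context had no namespace and the check would return None.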

Comment 9 Aleksandar Kostadinov 2020-12-18 15:16:01 UTC
RamaKasturi, do you know what the situation is with 4.6? Shall we file an issue for it as well?

Comment 10 RamaKasturi 2020-12-18 15:59:33 UTC
(In reply to Aleksandar Kostadinov from comment #9)
> RamaKasturi, do you know how is the situation with 4.6? Shall we file an
> issue for it as well?

Hello Aleksandar,

  4.6 looks good; there is no such issue. For more info, refer to my comment here: https://bugzilla.redhat.com/show_bug.cgi?id=1906121#c3

Thanks
kasturi

Comment 12 errata-xmlrpc 2021-02-24 15:41:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633