Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1601362

Summary: OpenShift API 403 error at Jenkins start
Product: OpenShift Container Platform
Reporter: Christian Stark <cstark>
Component: ImageStreams
Assignee: Gabe Montero <gmontero>
Status: CLOSED NOTABUG
QA Contact: Dongbo Yan <dyan>
Severity: medium
Priority: unspecified
Version: 3.9.0
CC: aos-bugs, bparees, cstark, gmontero, jokerman, mkhan, mmccomas
Target Release: 3.9.z
Hardware: Unspecified
OS: Unspecified
Last Closed: 2018-08-16 12:56:49 UTC
Type: Bug
Attachments: logs

Description Christian Stark 2018-07-16 07:50:51 UTC
Description of problem:

While Jenkins is starting, various plugins and startup scripts that try to access the OpenShift API (using the Fabric8 Kubernetes Client) all fail with an HTTP 403 error (logs attached). This issue could be ignored so far, because Jenkins worked fine after startup - API access is fine once startup completes.
However, there is now a requirement to implement functionality that needs earlier API access, so the 403 errors have become a problem.


First preanalysis/thoughts on this:

The Fabric8 Kubernetes client used by both the Kubernetes plugin from CloudBees and the OpenShift Sync plugin does not have the proper
credentials for connecting to the cluster, and is hence getting a 403.

Both are configured to use a token to connect.  Which token is used depends on 
setup, topology, and configuration.  

For example,

1) If the OpenShift Jenkins image is launched in a pod using the template,
the token of the SA mounted into the pod is used for the project Jenkins is running in.

However, if the system is configured to look up resources in other projects, for example, 403s can occur.

If the image is run outside a pod, as a plain docker container, or if these plugins were manually imported into an external Jenkins image, the tokens, certs, etc. must be set up manually.

2) The Jenkins configuration panels allow any OpenShift token mapped into Jenkins to be used in lieu of the default SA token.  It needs to be checked whether the token that has been imported has insufficient permissions.
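
Both token sources above can be inspected from the command line. A minimal sketch, assuming the template-created "jenkins" SA and a project named "jenkins-build" (both names are examples; the pod name is a placeholder):

```shell
# Which token is mounted into the running pod (source 1)?
oc rsh <jenkins-pod-name> cat /run/secrets/kubernetes.io/serviceaccount/token

# Does the "jenkins" SA appear among the subjects allowed to
# list builds in the target project?
oc adm policy who-can list builds -n jenkins-build
```

If the SA is missing from the who-can output, the 403s at startup are expected regardless of which token Jenkins selects.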



Version-Release number of selected component (if applicable):

OpenShift 3.9.30

How reproducible:



Actual results:

403 Errors at Startup

Expected results:

No 403 errors at startup

Additional info:

logs are attached

Comment 1 Christian Stark 2018-07-16 07:58:07 UTC
Created attachment 1459062 [details]
logs

Comment 2 Gabe Montero 2018-07-16 13:46:17 UTC
Christian and I had an email exchange based on some stack traces he provided.  Based on that data:

This on the surface is a configuration error.  The maven kubernetes client for both the
kubernetes plugin from CloudBees and the OpenShift Sync plugin do not have the proper
credentials for connecting to the cluster, and are hence getting a 403.

Both are configured to use a token to connect.  Which token is used depends on your
setup, topology, and configuration.  So you'll need to track that down.  For example,

1) if you launch the openshift jenkins image in a pod using our template, we use
the token for the SA mounted into the pod for the given project jenkins is running in

but if you then configure the system to look up resources in other projects for example,
you can get 403s

or if you are running the image outside a pod, as only a docker container, or you manually
imported these plugins into an external jenkins image, you have to manually set up
the tokens, certs, etc.

2) you can configure through the jenkins config panels to use any OpenShift token
mapped into Jenkins, in lieu of the default SA token.  You need to see if this is going on
as well, and if a token they have imported has insufficient permissions.

Bottom line - gather all these details, and we can see about helping you getting the config
correct.

Comment 5 Gabe Montero 2018-07-19 15:54:00 UTC
Ok, the jenkins log shows 403's for the sync plugin accessing build configs in the 'jenkins-build' namespace.

It also shows the "jenkins" SA has been mounted in the pod for the "jenkins-build" namespace.

The jenkins pod yaml shows the name of the SA is "jenkins"

So, again, points to a configuration issue.  More follow up:
1) if you run "oc adm policy who-can list builds" in the jenkins-build project, what is the output?
2) Run "oc describe sa jenkins" in the jenkins-build project, note the mountable secrets and tokens ... one name, like jenkins-token-53sdf should be common to both lists
3) Run "oc describe secret <token name>" and note the raw, decoded token
4) Now log into Jenkins.  Go to "Manage Jenkins -> Configure System" ... in the "OpenShift Jenkins Sync" section, I assume "jenkins-build" is in the Namespace field, right?   Are there any Credentials selected in the "Credentials" field?  If there are none, it is using the token for the jenkins SA mounted into the pod.  If another token is listed, it is using that one (and it doesn't have permissions for the "jenkins-build" namespace).
5) If no Credentials are listed in the Jenkins config panel noted in 4), run "oc rsh <jenkins pod name> /bin/bash", or alternatively, log into the openshift console, and open a terminal into the jenkins pod from Applications -> Pods, select the jenkins pod, then select the "Terminal" tab
6) from the command prompt, cd to /run/secrets/kubernetes.io/serviceaccount
7) cat the contents of the token ... does it line up with what has permissions from the prior oc invocations above?
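
As an aid for step 7: the mounted token is a JWT, and its payload records which namespace and service account it was issued for, so decoding it shows directly whether it lines up. A sketch; the TOKEN value below is fabricated for illustration, and inside the pod you would instead use `TOKEN=$(cat /run/secrets/kubernetes.io/serviceaccount/token)`:

```shell
# Fabricated sample payload, mimicking the claims a SA token carries.
PAYLOAD_JSON='{"kubernetes.io/serviceaccount/namespace":"jenkins-build","kubernetes.io/serviceaccount/service-account.name":"jenkins"}'
# Build a fake JWT around it (real tokens have a signed header/signature).
TOKEN="header.$(printf '%s' "$PAYLOAD_JSON" | base64 | tr -d '\n=' | tr '+/' '-_').signature"

# JWTs are three dot-separated base64url segments; the second is the payload.
SEG=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
# Restore padding to a multiple of 4, then decode.
while [ $(( ${#SEG} % 4 )) -ne 0 ]; do SEG="${SEG}="; done
DECODED=$(printf '%s' "$SEG" | base64 -d)
echo "$DECODED"
```

The decoded JSON should name the namespace the sync plugin is configured to watch ("jenkins-build" here) and the expected SA.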

Comment 7 Gabe Montero 2018-07-30 18:46:51 UTC
Yep I agree that looks right, at least from the jenkins / sync plugin perspective.

And yes, being able to run oc with those mounted sa credentials is a good sanity check.

Unfortunately, at this point, we are going to need api server logs to get the details on the denials that are resulting in the 403s in Jenkins.

Log level 2 should be sufficient on the master to capture this.
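
For reference, on OCP 3.x masters the log level is typically raised by editing the master's sysconfig file and restarting the service; the exact file name depends on the install topology, so treat the path below as an assumption to verify:

```
# /etc/sysconfig/atomic-openshift-master-api   (HA installs; on a
# single-master install the file may be atomic-openshift-master)
OPTIONS=--loglevel=2
```

followed by restarting the matching systemd unit, e.g. `systemctl restart atomic-openshift-master-api`.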

Comment 17 Gabe Montero 2018-08-16 12:56:49 UTC
thanks Christian