Bug 1601362
| Summary: | Openshift api 403 error at jenkins start | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Christian Stark <cstark> |
| Component: | ImageStreams | Assignee: | Gabe Montero <gmontero> |
| Status: | CLOSED NOTABUG | QA Contact: | Dongbo Yan <dyan> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.9.0 | CC: | aos-bugs, bparees, cstark, gmontero, jokerman, mkhan, mmccomas |
| Target Milestone: | --- | | |
| Target Release: | 3.9.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-08-16 12:56:49 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Christian Stark
2018-07-16 07:50:51 UTC
Created attachment 1459062 [details]
logs
Christian and I had an email exchange based on some stack traces he provided. Based on that data: this is, on the surface, a configuration error. The Maven Kubernetes client used by both the Kubernetes plugin from CloudBees and the OpenShift Sync plugin does not have the proper credentials for connecting to the cluster, and is hence getting a 403. Both are configured to use a token to connect. Which token is used depends on your setup, topology, and configuration, so you'll need to track that down. For example:

1) If you launch the OpenShift Jenkins image in a pod using our template, we use the token for the SA mounted into the pod for the project Jenkins is running in. But if you then configure the system to look up resources in other projects, you can get 403s. Or, if you are running the image outside a pod (as only a docker container), or you manually imported these plugins into an external Jenkins image, you have to manually set up the tokens, certs, etc.

2) You can configure, through the Jenkins config panels, any OpenShift token mapped into Jenkins to be used in lieu of the default SA token. You need to check whether this is going on as well, and whether a token you have imported has insufficient permissions.

Bottom line: gather all these details, and we can see about helping you get the config correct.

Ok, the Jenkins log shows 403s for the sync plugin accessing build configs in the "jenkins-build" namespace. It also shows the "jenkins" SA has been mounted in the pod for the "jenkins-build" namespace, and the Jenkins pod YAML shows the name of the SA is "jenkins". So, again, this points to a configuration issue. More follow-up:

1) If you run "oc adm policy who-can list builds" in the jenkins-build project, what is the output?

2) Run "oc describe sa jenkins" in the jenkins-build project and note the mountable secrets and tokens; one name, like jenkins-token-53sdf, should be common to both lists.

3) Run "oc describe secret <token name>" and note the raw, decoded token.

4) Now log into Jenkins and go to "Manage Jenkins -> Configure System". In the "OpenShift Jenkins Sync" section, I assume "jenkins-build" is in the Namespace field, right? Are there any Credentials selected in the "Credentials" field? If there are none, it is using the token for the jenkins SA mounted into the pod. If another token is listed, it is using that one (and it doesn't have permissions for the "jenkins-build" namespace).

5) If no Credentials are listed in the Jenkins config panel noted in 4), run "oc rsh <jenkins pod name> /bin/bash"; alternatively, log into the OpenShift console and open a terminal into the Jenkins pod: go to Applications -> Pods, select the Jenkins pod, then select the "Terminal" tab.

6) From the command prompt, cd to /run/secrets/kubernetes.io/serviceaccount.

7) cat the contents of the token; does it line up with what has permissions from the prior oc invocations above?

Yep, I agree that looks right, at least from the Jenkins / sync plugin perspective. And yes, being able to run oc with those mounted SA credentials is a good sanity check. Unfortunately, at this point we are going to need API server logs to get the details on the denials that are resulting in the 403s in Jenkins. Log level 2 should be sufficient on the master to capture this.

Thanks, Christian
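The token-verification steps above can be sketched as a shell session. This is only a sketch under assumptions: the namespace and secret name (`jenkins-build`, `jenkins-token-53sdf`) follow the examples in the comment, and the `name=jenkins` pod label matches the stock OpenShift Jenkins template; all of these will vary per install.

```shell
#!/bin/sh
# Sketch of the diagnostic steps above. Namespace, secret name, and
# pod label are illustrative and will differ on a real cluster.
NS=jenkins-build

# 1) Which subjects may list builds in the project?
oc adm policy who-can list builds -n "$NS"

# 2) Note the jenkins SA's mountable secrets and tokens.
oc describe sa jenkins -n "$NS"

# 3) Extract and base64-decode the token from the matching secret.
TOKEN=$(oc get secret jenkins-token-53sdf -n "$NS" \
          -o jsonpath='{.data.token}' | base64 -d)

# 6/7) Compare against the token mounted into the running Jenkins pod.
POD=$(oc get pod -n "$NS" -l name=jenkins -o name | head -1)
POD_TOKEN=$(oc rsh -n "$NS" "$POD" \
          cat /run/secrets/kubernetes.io/serviceaccount/token)
[ "$TOKEN" = "$POD_TOKEN" ] && echo "tokens match"
```

If the tokens match but 403s persist, that narrows the problem to the permissions bound to that SA, which is exactly what step 1 and the API server logs are meant to reveal.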