Bug 1530484
| Summary: | Don't use the default token in the first master in playbook | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Anping Li <anli> |
| Component: | Logging | Assignee: | Jeff Cantrill <jcantril> |
| Status: | CLOSED ERRATA | QA Contact: | Anping Li <anli> |
| Severity: | low | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.9.0 | CC: | aos-bugs, ewolinet, jcantril, rekhan, rmeggins, sdodson, xtian |
| Target Milestone: | --- | Keywords: | Reopened |
| Target Release: | 3.9.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-12-13 19:26:48 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Scott, are you able to comment?

Multiple changes have been made to ensure that we're using a specific kubeconfig file throughout the installation and upgrade process. Requesting QE verify that to be true.

The bug was fixed in openshift-ansible-3.9.51-1.git.0.c4968ca.el7.noarch

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3748
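To illustrate what "using a specific kubeconfig file" means here, the following is a minimal Ansible task sketch, not the actual openshift-ansible change: the task name and registered variable are hypothetical, while `openshift_client_binary` and the `/etc/origin/master/admin.kubeconfig` path are the ones quoted in this report.

```yaml
# Minimal sketch, not the merged openshift-ansible patch: the oc call is
# pinned to the cluster-admin kubeconfig instead of whatever token happens
# to be in $HOME/.kube/config on the first master.
# "openshift_client_binary" comes from this report; the task name and the
# "es_pods" register are hypothetical.
- name: List Elasticsearch pods with the admin kubeconfig
  command: >
    {{ openshift_client_binary }} --config=/etc/origin/master/admin.kubeconfig
    get pod -l component=es,provider=openshift
    -n logging
    -o jsonpath={.items[*].metadata.name}
  register: es_pods
  changed_when: false
```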
Description of problem:
The logging playbooks use the default token from $HOME/.kube/config. There are some issues with this token: the job may fail if the token expires during the deployment, and the token may belong to a common (non-admin) user. The correct way is to use /etc/origin/master/admin.kubeconfig, for example:

    openshift_service_catalog/tasks/install.yml: {{ openshift_client_binary }} --config=/etc/origin/master/admin.kubeconfig replace -f {{ mktemp.stdout }}/admin_sc_patch.yml

Version-Release number of selected component (if applicable):
openshift-ansible-3.9.0-0.13.0
openshift-ansible-3.7
openshift-ansible-3.6
openshift-ansible-3.5

How reproducible:

Steps to Reproduce:
1. Log in as a common user on the first master
2. Deploy logging

Or:
1. Log in as a cluster-admin user on the first master
2. Deploy logging when the user's token is close to its expiration time

Actual results:
1) Playbook fails when the default user is not cluster-admin on the first master

    RUNNING HANDLER [openshift_logging_elasticsearch : Restarting logging-{{ _cluster_component }} cluster] ***********************************************************************************************************
    included: /usr/share/ansible/openshift-ansible/roles/openshift_logging_elasticsearch/tasks/restart_cluster.yml for openshift-181.lab.eng.nay.redhat.com

    RUNNING HANDLER [openshift_logging_elasticsearch : command] *******************************************************************************************************************************************************
    fatal: [openshift-181.lab.eng.nay.redhat.com]: FAILED! => {"changed": true, "cmd": ["oc", "get", "pod", "-l", "component=es,provider=openshift", "-n", "logging", "-o", "jsonpath={.items[*].metadata.name}"], "delta": "0:00:00.267391", "end": "2018-01-03 01:04:01.553493", "failed": true, "msg": "non-zero return code", "rc": 1, "start": "2018-01-03 01:04:01.286102", "stderr": "Error from server (Forbidden): User \"anli\" cannot list pods in the namespace \"logging\": User \"anli\" cannot list pods in project \"logging\" (get pods)", "stderr_lines": ["Error from server (Forbidden): User \"anli\" cannot list pods in the namespace \"logging\": User \"anli\" cannot list pods in project \"logging\" (get pods)"], "stdout": "", "stdout_lines": []}

    RUNNING HANDLER [openshift_logging_elasticsearch : command] *******************************************************************************************************************************************************
    RUNNING HANDLER [openshift_logging_elasticsearch : command] *******************************************************************************************************************************************************

2) Playbook fails when the token is expired

    RUNNING HANDLER [openshift_logging_elasticsearch : Restarting logging-{{ _cluster_component }} cluster] ***********************************************************************************************************
    included: /usr/share/ansible/openshift-ansible/roles/openshift_logging_elasticsearch/tasks/restart_cluster.yml for openshift-181.lab.eng.nay.redhat.com

    RUNNING HANDLER [openshift_logging_elasticsearch : command] *******************************************************************************************************************************************************
    fatal: [openshift-181.lab.eng.nay.redhat.com]: FAILED! => {"changed": true, "cmd": ["oc", "get", "pod", "-l", "component=es,provider=openshift", "-n", "logging", "-o", "jsonpath={.items[*].metadata.name}"], "delta": "0:00:00.295316", "end": "2018-01-03 00:38:42.693575", "failed": true, "msg": "non-zero return code", "rc": 1, "start": "2018-01-03 00:38:42.398259", "stderr": "error: You must be logged in to the server (the server has asked for the client to provide credentials (get pods))", "stderr_lines": ["error: You must be logged in to the server (the server has asked for the client to provide credentials (get pods))"], "stdout": "", "stdout_lines": []}

    RUNNING HANDLER [openshift_logging_elasticsearch : command] *******************************************************************************************************************************************************
    RUNNING HANDLER [openshift_logging_elasticsearch : command] *******************************************************************************************************************************************************

Expected results:
Each oc invocation should use the admin kubeconfig explicitly:

    {{ openshift_client_binary }} --config=/etc/origin/master/admin.kubeconfig ****

Additional info:
The following tasks currently call oc without an explicit --config:

    openshift_logging_elasticsearch/tasks/restart_cluster.yml: oc get pod -l component={{ _cluster_component }},provider=openshift -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[*].metadata.name}
    openshift_logging_elasticsearch/tasks/restart_cluster.yml: oc get dc -l component={{ _cluster_component }},provider=openshift -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[*].metadata.name}
    openshift_logging_elasticsearch/tasks/restart_cluster.yml: oc get pod -l component={{ _cluster_component }},provider=openshift -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[*].metadata.name}
    openshift_logging_elasticsearch/tasks/restart_es_node.yml: oc get pods -l deploymentconfig={{ _es_node }} -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[*].metadata.name}
    openshift_logging_elasticsearch/tasks/main.yaml: oc get dc -l component="{{ es_component }}" -n "{{ openshift_logging_elasticsearch_namespace }}" -o name | cut -d'/' -f2