Bug 1530484 - Don't use the default token in the first master in playbook
Summary: Don't use the default token in the first master in playbook
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 3.9.0
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-01-03 06:20 UTC by Anping Li
Modified: 2021-09-09 12:59 UTC
CC List: 7 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-12-13 19:26:48 UTC
Target Upstream Version:
Embargoed:


Attachments:


Links:
System ID: Red Hat Product Errata RHBA-2018:3748 | Private: 0 | Priority: None | Status: None | Summary: None | Last Updated: 2018-12-13 19:26:58 UTC

Description Anping Li 2018-01-03 06:20:01 UTC
Description of problem:
The logging playbooks use the default kubeconfig at $HOME/.kube/config. There are several problems with this token: the run may fail if the token expires during the deployment, and the token may belong to a common (non-admin) user. The correct approach is to use the admin kubeconfig at /etc/origin/master/admin.kubeconfig, as the service catalog role already does.

For example: 
openshift_service_catalog/tasks/install.yml:    {{ openshift_client_binary }} --config=/etc/origin/master/admin.kubeconfig replace -f {{ mktemp.stdout }}/admin_sc_patch.yml
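
A rough sketch of the same pattern applied to a logging task (illustrative only; the task name and registered variable below are assumptions, not the actual openshift-ansible change):

  # Sketch: pass the admin kubeconfig explicitly instead of relying on $HOME/.kube/config
  - name: Get Elasticsearch pod names using the admin kubeconfig
    command: >
      {{ openshift_client_binary | default('oc') }}
      --config=/etc/origin/master/admin.kubeconfig
      get pod -l component=es,provider=openshift
      -n logging
      -o jsonpath={.items[*].metadata.name}
    register: _es_pod_names
    changed_when: false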



Version-Release number of selected component (if applicable):
openshift-ansible-3.9.0-0.13.0
openshift-ansible-3.7
openshift-ansible-3.6
openshift-ansible-3.5

How reproducible:


Steps to Reproduce:
1. Log in as a common (non-cluster-admin) user on the first master
2. Deploy logging

Or
1. Log in as a cluster-admin user on the first master
2. Deploy logging when the user's token is close to its expiry time
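
Before deploying, the identity and token that the playbooks will pick up from the default kubeconfig can be checked with standard oc commands (a quick sanity check, assuming oc is on the PATH):

  oc config current-context   # context read from $HOME/.kube/config
  oc whoami                   # user the playbook's oc calls would run as
  oc whoami -t                # token that would be used; user session tokens expire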

Actual results:
1) Playbook fails when the default user on the first master is not a cluster-admin
RUNNING HANDLER [openshift_logging_elasticsearch : Restarting logging-{{ _cluster_component }} cluster] ***********************************************************************************************************
included: /usr/share/ansible/openshift-ansible/roles/openshift_logging_elasticsearch/tasks/restart_cluster.yml for openshift-181.lab.eng.nay.redhat.com

RUNNING HANDLER [openshift_logging_elasticsearch : command] *******************************************************************************************************************************************************
fatal: [openshift-181.lab.eng.nay.redhat.com]: FAILED! => {"changed": true, "cmd": ["oc", "get", "pod", "-l", "component=es,provider=openshift", "-n", "logging", "-o", "jsonpath={.items[*].metadata.name}"], "delta": "0:00:00.267391", "end": "2018-01-03 01:04:01.553493", "failed": true, "msg": "non-zero return code", "rc": 1, "start": "2018-01-03 01:04:01.286102", "stderr": "Error from server (Forbidden): User \"anli\" cannot list pods in the namespace \"logging\": User \"anli\" cannot list pods in project \"logging\" (get pods)", "stderr_lines": ["Error from server (Forbidden): User \"anli\" cannot list pods in the namespace \"logging\": User \"anli\" cannot list pods in project \"logging\" (get pods)"], "stdout": "", "stdout_lines": []}

RUNNING HANDLER [openshift_logging_elasticsearch : command] *******************************************************************************************************************************************************

RUNNING HANDLER [openshift_logging_elasticsearch : command] *******************************************************************************************************************************************************

2) Playbook fails when the token has expired
RUNNING HANDLER [openshift_logging_elasticsearch : Restarting logging-{{ _cluster_component }} cluster] ***********************************************************************************************************
included: /usr/share/ansible/openshift-ansible/roles/openshift_logging_elasticsearch/tasks/restart_cluster.yml for openshift-181.lab.eng.nay.redhat.com

RUNNING HANDLER [openshift_logging_elasticsearch : command] *******************************************************************************************************************************************************
fatal: [openshift-181.lab.eng.nay.redhat.com]: FAILED! => {"changed": true, "cmd": ["oc", "get", "pod", "-l", "component=es,provider=openshift", "-n", "logging", "-o", "jsonpath={.items[*].metadata.name}"], "delta": "0:00:00.295316", "end": "2018-01-03 00:38:42.693575", "failed": true, "msg": "non-zero return code", "rc": 1, "start": "2018-01-03 00:38:42.398259", "stderr": "error: You must be logged in to the server (the server has asked for the client to provide credentials (get pods))", "stderr_lines": ["error: You must be logged in to the server (the server has asked for the client to provide credentials (get pods))"], "stdout": "", "stdout_lines": []}

RUNNING HANDLER [openshift_logging_elasticsearch : command] *******************************************************************************************************************************************************

RUNNING HANDLER [openshift_logging_elasticsearch : command] *******************************************************************************************************************************************************


Expected results:
All oc invocations in the logging roles should pass the admin kubeconfig explicitly, for example:
{{ openshift_client_binary }} --config=/etc/origin/master/admin.kubeconfig ****

Additional info:
openshift_logging_elasticsearch/tasks/restart_cluster.yml:    oc get pod -l component={{ _cluster_component }},provider=openshift -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[*].metadata.name}
openshift_logging_elasticsearch/tasks/restart_cluster.yml:    oc get dc -l component={{ _cluster_component }},provider=openshift -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[*].metadata.name}
openshift_logging_elasticsearch/tasks/restart_cluster.yml:    oc get pod -l component={{ _cluster_component }},provider=openshift -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[*].metadata.name}
openshift_logging_elasticsearch/tasks/restart_es_node.yml:    oc get pods -l deploymentconfig={{ _es_node }} -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[*].metadata.name}
openshift_logging_elasticsearch/tasks/main.yaml:        oc get dc -l component="{{ es_component }}" -n "{{ openshift_logging_elasticsearch_namespace }}" -o name | cut -d'/' -f2
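
For illustration, the first restart_cluster.yml command above rewritten to the expected form (a sketch of the pattern this bug asks for, not necessarily the exact merged change):

    {{ openshift_client_binary }} --config=/etc/origin/master/admin.kubeconfig get pod -l component={{ _cluster_component }},provider=openshift -n {{ openshift_logging_elasticsearch_namespace }} -o jsonpath={.items[*].metadata.name}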

Comment 4 Jeff Cantrill 2018-11-01 19:55:26 UTC
Scott, are you able to comment?

Comment 7 Scott Dodson 2018-11-06 15:48:51 UTC
Multiple changes have been made to ensure that we're using a specific kubeconfig file throughout the installation and upgrade process.

Requesting that QE verify this is the case.
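
One way to spot-check this on an installed openshift-ansible (a hedged sketch; the grep pattern is approximate and will both miss some invocations and produce false positives):

  grep -rnE 'oc (get|replace|patch|create|delete|rollout)' \
      /usr/share/ansible/openshift-ansible/roles/openshift_logging* \
      | grep -v 'admin.kubeconfig'

Any remaining hits are candidate client calls that may still rely on the default $HOME/.kube/config.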

Comment 8 Anping Li 2018-11-12 08:26:06 UTC
The bug was fixed in openshift-ansible-3.9.51-1.git.0.c4968ca.el7.noarch

Comment 11 errata-xmlrpc 2018-12-13 19:26:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3748

