Bug 1468760

Summary: ansible logging check can have unexpected failure during module execution
Product: OpenShift Container Platform
Component: Installer
Version: 3.6.0
Severity: low
Priority: medium
Status: CLOSED ERRATA
Reporter: Luke Meyer <lmeyer>
Assignee: Juan Vallejo <jvallejo>
QA Contact: Johnny Liu <jialiu>
CC: aos-bugs, jokerman, mmccomas, smunilla
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2017-08-10 05:29:50 UTC

Description Luke Meyer 2017-07-07 20:31:55 UTC
Description of problem:
When running the logging checks via openshift-ansible, it is possible to encounter the following traceback.


Version-Release number of the following components:
I ran the installation with the latest pre-release ose-ansible build, v3.6.137-1:

rpm -q openshift-ansible: openshift-ansible-3.6.137-1.git.0.12654fb.el7.noarch
rpm -q ansible: ansible-2.2.3.0-1.el7.noarch
FWIW, I installed against CentOS 7, containerized, with the Origin 3.6.0-alpha.1 images specified.


How reproducible:
100%

Steps to Reproduce:
1. Deploy cluster with logging.
2. Break a component so that its pod(s) are defined but not scheduled. One way that probably works is to mark all nodes unschedulable and delete an existing pod (a data-level sketch of this state follows the list).
3. Run the logging checks.
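
For illustration, the failure can also be reproduced at the data level: a pod that is defined but never scheduled has a status stanza with no containerStatuses key, which is exactly what the comprehension in logging.py trips over. A minimal Python sketch follows; the pod shape and name below are hypothetical, not taken from this cluster.

# Hypothetical minimal shape of a defined-but-unscheduled pod:
# the status stanza carries no 'containerStatuses' key at all.
unscheduled_pod = {
    "metadata": {"name": "logging-kibana-1-example"},
    "status": {
        "phase": "Pending",
        "conditions": [{"type": "PodScheduled", "status": "False"}],
    },
}

# Indexing the key directly, as the check does, raises the error seen below.
try:
    statuses = [c for c in unscheduled_pod["status"]["containerStatuses"]]
except KeyError as err:
    print("KeyError: %s" % err)  # -> KeyError: 'containerStatuses'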

Expected results:
The check should report on the broken components rather than failing with a traceback.

Actual results:
Will attach the full output privately. The traceback is as follows:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 125, in run
    res = self._execute()
  File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 522, in _execute
    result = self._handler.run(task_vars=variables)
  File "/home/lmeyer/go/src/github.com/openshift/openshift-ansible/roles/openshift_health_checker/action_plugins/openshift_health_check.py", line 66, in run
    r = check.run(tmp, task_vars)
  File "/home/lmeyer/go/src/github.com/openshift/openshift-ansible/roles/openshift_health_checker/openshift_checks/logging/kibana.py", line 39, in run
    check_error = self.check_kibana(kibana_pods)
  File "/home/lmeyer/go/src/github.com/openshift/openshift-ansible/roles/openshift_health_checker/openshift_checks/logging/kibana.py", line 105, in check_kibana
    not_running = self.not_running_pods(pods)
  File "/home/lmeyer/go/src/github.com/openshift/openshift-ansible/roles/openshift_health_checker/openshift_checks/logging/logging.py", line 59, in not_running_pods
    for container in pod['status']['containerStatuses']
KeyError: 'containerStatuses'
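
The crash is in not_running_pods() in roles/openshift_health_checker/openshift_checks/logging/logging.py, which indexes pod['status']['containerStatuses'] directly. A minimal sketch of a defensive version, assuming the guard is a .get() with an empty-list default and that a pod with no container statuses is simply treated as not running (the actual change is in the PRs linked in the comments below):

def not_running_pods(self, pods):
    """Sketch only: return pods that are not fully running.

    A pod with no 'containerStatuses' (e.g. defined but never scheduled)
    is counted as not running instead of raising KeyError.
    """
    return [
        pod for pod in pods
        if not pod['status'].get('containerStatuses')
        or any(
            not container.get('ready')
            for container in pod['status']['containerStatuses']
        )
    ]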

Comment 4 Juan Vallejo 2017-07-10 19:40:31 UTC
Related PR: https://github.com/openshift/openshift-ansible/pull/4728

Comment 5 Luke Meyer 2017-07-12 18:35:25 UTC
Fix merged in https://github.com/openshift/openshift-ansible/pull/4737

Comment 9 errata-xmlrpc 2017-08-10 05:29:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1716