Bug 1419811 - [IntService_public_324] Failed in running handler "[openshift_logging : restart master]" after task "[openshift_logging : Delete temp directory]"
Summary: [IntService_public_324] Failed in running handler "[openshift_logging : restart master]" after task "[openshift_logging : Delete temp directory]"
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: ewolinet
QA Contact: Xia Zhao
Depends On:
Reported: 2017-02-07 06:37 UTC by Xia Zhao
Modified: 2017-07-24 14:11 UTC
CC List: 5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Last Closed: 2017-04-12 18:49:35 UTC
Target Upstream Version:

Attachments
ansible_log (729.93 KB, text/plain)
2017-02-07 06:51 UTC, Xia Zhao

Red Hat Product Errata RHBA-2017:0903 (Priority: normal, Status: SHIPPED_LIVE)
Summary: OpenShift Container Platform atomic-openshift-utils bug fix and enhancement
Last Updated: 2017-04-12 22:45:42 UTC

Description Xia Zhao 2017-02-07 06:37:00 UTC
Description of problem:
Deploying logging with Ansible fails in the handler "[openshift_logging : restart master]", which runs after the task "[openshift_logging : Delete temp directory]":

RUNNING HANDLER [openshift_logging : restart master] ***************************
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/systemd.py
<$master> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/root/libra.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r $master '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
ERROR! The requested handler 'Verify API Server' was not found in either the main handlers list nor in the listening handlers list
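
For context: when an Ansible task or handler notifies another handler, the target is resolved against handler names and "listen" topics when notifications are flushed; if nothing matches, the play aborts with exactly this error. A minimal sketch of the failure mode follows (illustrative YAML, not the actual role source; the service name is an assumption):

# roles/openshift_logging/handlers/main.yaml (illustrative sketch)
# 'restart master' itself notifies 'Verify API Server'; if no handler of
# that name is defined in the role and none listens on it, the play
# aborts with the error shown above.
- name: restart master
  systemd:
    name: atomic-openshift-master   # service name is an assumption
    state: restarted
  notify: Verify API Server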

At the same time, the EFK stack pods are actually deployed in the logging project:
# oc get po -n xiazhao
NAME                          READY     STATUS             RESTARTS   AGE
logging-curator-1-m8mfv       1/1       Running            0          1m
logging-es-3okue3pz-1-fbvf9   0/1       CrashLoopBackOff   3          1m
logging-fluentd-0k0pg         1/1       Running            0          2m
logging-fluentd-tgsck         1/1       Running            0          1m
logging-kibana-1-8h9w1        2/2       Running            0          1m
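
Note: the Elasticsearch pod above is in CrashLoopBackOff, which is separate from the handler error; if needed, it can be inspected with standard oc commands (pod name taken from the listing above):

# oc logs logging-es-3okue3pz-1-fbvf9 -n xiazhao
# oc describe pod logging-es-3okue3pz-1-fbvf9 -n xiazhao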

Version-Release number of selected component (if applicable):
# openshift version
openshift v3.5.0.17+c55cf2b
kubernetes v1.5.2+43a9be4
etcd 3.1.0

How reproducible:
Reproduces the first time logging is deployed from a given Ansible control machine (see comment 3).

Steps to Reproduce:
1. Deploy logging 3.5.0 by running the openshift-ansible playbooks, as in the example command below.
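
A hedged example of such an invocation (the inventory and playbook paths are assumptions; the playbook location varied across openshift-ansible releases):

# ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml \
    -e openshift_logging_install_logging=true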

Actual results:
The Ansible deployment fails with the "Verify API Server" handler error shown above.

Expected results:
The Ansible deployment should complete successfully.

Additional info:
The full Ansible log is attached.

Comment 1 Xia Zhao 2017-02-07 06:51:07 UTC
Created attachment 1248289: ansible_log

Comment 2 Xia Zhao 2017-02-07 07:09:50 UTC
This issue did not reproduce on the third logging deployment attempt. Changing priority and severity to low.

Comment 3 Xia Zhao 2017-02-08 06:38:23 UTC
We have determined that this issue reproduces only the first time logging is deployed from a given Ansible control machine; second and later attempts from the same machine do not hit it. Changing the priority back to medium.

Comment 4 ewolinet 2017-02-17 19:15:06 UTC
I was able to recreate this; the openshift_logging role is missing a handler. Its "restart master" handler notifies "Verify API Server", but no handler with that name is defined in the role or listening for it. A sketch of a possible fix follows.
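
For illustration, a minimal sketch of what the missing handler could look like, assuming the fix mirrors the API-verification handlers used elsewhere in openshift-ansible (the module arguments, health endpoint, and variable names are assumptions, not the merged patch):

# roles/openshift_logging/handlers/main.yaml (sketch; not the merged patch)
- name: Verify API Server
  # poll the master health endpoint until it reports ready
  command: >
    curl --silent --max-time 2
    --cacert /etc/origin/master/ca.crt
    {{ openshift.master.api_url | default('https://localhost:8443') }}/healthz/ready
  register: api_health
  until: api_health.stdout == 'ok'
  retries: 120
  delay: 1
  changed_when: false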

Comment 7 Xia Zhao 2017-02-23 04:42:38 UTC
Verified as fixed with the latest code from the openshift-ansible repo. Deployed logging several times today with no recurrence. Setting status to VERIFIED.

Comment 9 errata-xmlrpc 2017-04-12 18:49:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

