Bug 1609131 - ES and Kibana pods fail to become ready.
Summary: ES and Kibana pods fail to become ready.
Keywords:
Status: CLOSED DUPLICATE of bug 1609138
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.11.0
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2018-07-27 05:23 UTC by Qiaoling Tang
Modified: 2018-07-27 10:11 UTC (History)
2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-07-27 10:11:42 UTC
Target Upstream Version:



Comment 2 Anping Li 2018-07-27 10:10:48 UTC
It is a playbook bug: the PLAY [Update vm.max_map_count for ES 5.x] was skipped.
 
The elasticsearch pod became ready after I configured vm.max_map_count manually.
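The manual workaround can be sketched as follows. The comment does not record the exact commands or value used; 262144 is Elasticsearch's documented minimum for vm.max_map_count, and the file name 99-elasticsearch.conf is assumed to mirror the "99-elasticsearch sysctl" that the playbook itself manages (see the later PLAY in the log).

```shell
# On each node running Elasticsearch (requires root).
# 262144 is the minimum Elasticsearch documents for vm.max_map_count;
# the file name matches the 99-elasticsearch sysctl the playbook manages.
echo "vm.max_map_count = 262144" > /etc/sysctl.d/99-elasticsearch.conf

# Apply immediately without a reboot, then verify.
sysctl -w vm.max_map_count=262144
sysctl vm.max_map_count
```

Once the kernel setting is in place, the Elasticsearch pod can pass its readiness check without any change to the pod itself, which is consistent with the skipped "Updating vm.max_map_count value" task being the root cause.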

PLAY [Update vm.max_map_count for ES 5.x] **************************************
META: ran handlers

TASK [Checking vm max_map_count value] *****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/private/config.yml:85
Friday 27 July 2018  08:53:10 +0000 (0:00:00.053)       0:00:37.868 *********** 
skipping: [qe-qitang-311-gcenode-1] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [Updating vm.max_map_count value] *****************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/private/config.yml:90
Friday 27 July 2018  08:53:10 +0000 (0:00:00.023)       0:00:37.892 *********** 
skipping: [qe-qitang-311-gcenode-1] => {"changed": false, "skip_reason": "Conditional result was False"}
META: ran handlers
META: ran handlers

PLAY [Remove created 99-elasticsearch sysctl] **********************************
META: ran handlers

Comment 3 Anping Li 2018-07-27 10:11:42 UTC

*** This bug has been marked as a duplicate of bug 1609138 ***

