
Bug 1704667

Summary: [ES] kibana.xxxx index is not replicated by default
Product: OpenShift Container Platform
Component: Logging
Version: 3.11.0
Target Release: 3.11.z
Hardware: x86_64
OS: Linux
Status: CLOSED DUPLICATE
Severity: high
Priority: unspecified
Reporter: Victor Hernando <vhernand>
Assignee: Jeff Cantrill <jcantril>
QA Contact: Anping Li <anli>
CC: aos-bugs, rmeggins
Type: Bug
Last Closed: 2019-04-30 12:04:09 UTC

Description Victor Hernando 2019-04-30 10:06:41 UTC
Description of problem:
When deploying a new EFK stack on a fresh OCP 3.11 cluster, the .kibana.xxxx index is not replicated by default, even though logging was deployed with a 1-replica setting.

# oc exec logging-es-data-master-2kpt6ybk-2-vsmdc -c elasticsearch -- env|grep SHAR
PRIMARY_SHARDS=1
REPLICA_SHARDS=1
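
For reference (not part of the original report), one way to check what replica count the Elasticsearch index templates will apply to newly created indices, assuming $curl_es is set up the same way as in the _cat/indices commands further down, is to dump the installed templates and look for number_of_replicas:

# oc exec logging-es-data-master-2kpt6ybk-2-vsmdc -c elasticsearch -- $curl_es/_template?pretty | grep number_of_replicas

The REPLICA_SHARDS value above is presumably what those templates are rendered from, so the expectation is to see number_of_replicas set to 1 there.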

Version-Release number of selected component (if applicable):
# rpm -qa|grep openshift-3.11
atomic-openshift-3.11.82-1.git.0.08bc31b.el7.x86_64

# oc exec logging-es-data-master-2kpt6ybk-2-vsmdc -c elasticsearch -- env|grep ES_VER
ES_VER=5.6.13
OSE_ES_VER=5.6.13.2-redhat-1

How reproducible:

Steps to Reproduce:
1. Deploy a new EFK stack on top of OCP 3.11.
# Ansible logging variables.
# EFK
openshift_logging_install_logging=True
openshift_logging_es_cluster_size=3
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_logging_es_number_of_replicas=1

2. Check indices status:
#  oc exec -c elasticsearch logging-es-data-master-2kpt6ybk-2-vsmdc -- $curl_es/_cat/indices?v\&bytes=m
health status index                                                                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana                                                              n8kQpd86T9-25PhU9makyQ   1   1          1            0          0              0
green  open   .searchguard                                                         F20noqtCQIeo4UGhl1xNHQ   1   1          5            1          0              0
green  open   project.newtesthttpd.9092fe0d-459c-11e9-8a24-525400451bb0.2019.04.30 1MZKqGsuQruUapuuRQdf8Q   1   1         31            0          0              0
green  open   .operations.2019.04.30                                               b47a5R4SR9CqDcsKi5iU3w   1   1       2181            0          7              4

3. Inspect your logs in the Kibana web UI.
4. Check the indices status again and observe that a new Kibana index has been created without replicas (a settings check for that index is sketched after the listing below).
#  oc exec -c elasticsearch logging-es-data-master-2kpt6ybk-2-vsmdc -- $curl_es/_cat/indices?v\&bytes=m
health status index                                                                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana                                                              n8kQpd86T9-25PhU9makyQ   1   1          1            0          0              0
green  open   .searchguard                                                         F20noqtCQIeo4UGhl1xNHQ   1   1          5            1          0              0
green  open   project.newtesthttpd.9092fe0d-459c-11e9-8a24-525400451bb0.2019.04.30 1MZKqGsuQruUapuuRQdf8Q   1   1         31            0          0              0
green  open   .operations.2019.04.30                                               b47a5R4SR9CqDcsKi5iU3w   1   1       2181            0          7              4
green  open   .kibana.d033e22ae348aeb5660fc2140aec35850c4da997                     mVGZNoB8Qgmp0Cg5v-xXKA   1   0          5            0          0              0
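
The missing replica can also be confirmed directly from the index settings (a quick check using the same $curl_es helper; the index name is the one reported in the listing above):

# oc exec -c elasticsearch logging-es-data-master-2kpt6ybk-2-vsmdc -- $curl_es/.kibana.d033e22ae348aeb5660fc2140aec35850c4da997/_settings?pretty | grep number_of_replicas

This is expected to show number_of_replicas set to 0 for that index.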
 

Actual results:
A new Kibana index is created without a replica immediately after inspecting logs in the Kibana web UI.
If one ES node is lost, the ES cluster becomes RED and unavailable, despite 1 replica having been configured for Elasticsearch.
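
As a stopgap (a sketch only, not something verified in this report), the replica count on the already-created index can be raised by hand so that losing a single node does not turn the cluster RED; the index name below is the one from the listing above and must be adjusted per environment:

# oc exec -c elasticsearch logging-es-data-master-2kpt6ybk-2-vsmdc -- $curl_es/.kibana.d033e22ae348aeb5660fc2140aec35850c4da997/_settings -XPUT -H 'Content-Type: application/json' -d '{"index":{"number_of_replicas":1}}'

This only patches the existing index; any per-user kibana index created later would presumably still come up with 0 replicas until the underlying template/seeding behavior is fixed.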


Expected results:
All indices should have the number of replicas that was configured for the cluster.
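
One way to verify that expectation in a running cluster (using the pri/rep columns already shown in the listings above) is to print just the index name and replica count and look for any index reporting rep 0:

# oc exec -c elasticsearch logging-es-data-master-2kpt6ybk-2-vsmdc -- $curl_es/_cat/indices?h=index,rep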

Additional info:

Comment 1 Jeff Cantrill 2019-04-30 12:04:09 UTC

*** This bug has been marked as a duplicate of bug 1667801 ***