Bug 1665700 - REPLICA_SHARDS should be 0 when es-node-data=1
Summary: REPLICA_SHARDS should be 0 when es-node-data=1
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Target Release: 4.1.0
Assignee: Josef Karasek
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-13 06:07 UTC by Anping Li
Modified: 2019-06-04 10:41 UTC
CC List: 3 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:41:49 UTC
Target Upstream Version:
Embargoed:




Links:
* Github openshift/elasticsearch-operator pull 84 (closed): Bug 1665700: REPLICA_SHARDS should be 0 when es-node-data=1 (last updated 2020-10-01 15:11:48 UTC)
* Red Hat Product Errata RHBA-2019:0758 (last updated 2019-06-04 10:41:55 UTC)

Description Anping Li 2019-01-13 06:07:50 UTC
Description of problem:
Currently REPLICA_SHARDS=1 by default. The replica shards cannot be allocated when there is only one Elasticsearch data node.
It would be better to default REPLICA_SHARDS to 0 and have the operator raise the value when there is more than one Elasticsearch data node.
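
For illustration only (not part of the original report): the same effect can be achieved manually on an existing single-data-node cluster by dropping the replica count through the Elasticsearch index settings API. Assuming es_util forwards extra arguments to curl and <elasticsearch-pod> is a placeholder for the actual pod name:

$ oc exec <elasticsearch-pod> -- es_util --query=_all/_settings -X PUT -d '{"index":{"number_of_replicas":0}}'

This only adjusts existing indices; newly created indices would still pick up the default replica count, which is why the report asks for the default (REPLICA_SHARDS) itself to change.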


Version-Release number of selected component (if applicable):
registry.reg-aws.openshift.com:443/openshift/ose-cluster-logging-operator:v4.0
registry.reg-aws.openshift.com:443/openshift/ose-logging-elasticsearch5:v4.0

How reproducible:
always

Steps to Reproduce:
1. Deploy logging using operator
2. Check the pod status
oc get pods --selector component=elasticsearch
NAME                                                 READY     STATUS    RESTARTS   AGE
elasticsearch-clientdatamaster-0-1-d4ddc458f-6vg95   1/1       Running   0          1h

3. Check the Elasticsearch shards
$es_util --query=_cat/shards
.operations.2019.01.13                                           0 p STARTED    659798 586.7mb 10.131.0.13 elasticsearch-clientdatamaster-0-1
.operations.2019.01.13                                           0 r UNASSIGNED                            
.searchguard                                                     0 p STARTED         5  32.8kb 10.131.0.13 elasticsearch-clientdatamaster-0-1
.searchguard                                                     0 r UNASSIGNED                            
.kibana                                                          0 p STARTED         5  56.8kb 10.131.0.13 elasticsearch-clientdatamaster-0-1
.kibana                                                          0 r UNASSIGNED                            
project.logtest1.9e42894c-16f1-11e9-a208-02e3476c20e4.2019.01.13 0 p STARTED      3085     2mb 10.131.0.13 elasticsearch-clientdatamaster-0-1
project.logtest1.9e42894c-16f1-11e9-a208-02e3476c20e4.2019.01.13 0 r UNASSIGNED                            

4. Check REPLICA_SHARDS in the configmap

$ oc get configmap elasticsearch -o yaml|grep REPLICA
    REPLICA_SHARDS=1
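
(Added for reference, not in the original steps.) The unassigned replicas also leave the cluster in yellow health; run inside the Elasticsearch pod as in step 3, the health query shows a non-zero unassigned_shards count:

$ es_util --query=_cluster/health?pretty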


Actual results:
The replica shards can't be allocated when there is only one Elasticsearch data node.


Expected results:
REPLICA_SHARDS defaults to 0.

Additional info:

Comment 1 Jeff Cantrill 2019-01-18 15:08:48 UTC
Moving this to low since the expectation is that you will generally deploy a 3+ node cluster. We will resolve this by:

* Logging a warning message in the operator that there are not enough nodes to support the replication policy (advise adding more ES nodes?)
* Adding a status message with the same information as the warning message
* Taking no additional action to modify the replication settings
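
(Illustrative, not part of the plan above.) Once such a warning and status message exist, they should be visible in the operator log and on the Elasticsearch custom resource. Assuming the default resource name elasticsearch in the openshift-logging namespace and the usual operator deployment name, with <operator-namespace> as a placeholder:

$ oc -n openshift-logging get elasticsearch elasticsearch -o yaml
$ oc -n <operator-namespace> logs deployment/elasticsearch-operator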

Comment 2 Josef Karasek 2019-02-28 18:38:25 UTC
When an invalid combination of RedundancyPolicy and topology is configured, the operator will not attempt to create the Elasticsearch cluster.

Instead it will print an error message warning the user that such a request is invalid.
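
As an illustration (not from the comment above): with the single-data-node topology from this report, a combination the operator should accept requests zero replicas, for example in the ClusterLogging custom resource (a sketch based on the documented 4.x fields):

  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 1
      redundancyPolicy: ZeroRedundancy

By contrast, nodeCount: 1 with SingleRedundancy asks for a replica per shard that can never be allocated on a single data node, which is the kind of invalid request described above.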

Comment 3 Anping Li 2019-03-01 02:56:48 UTC
If the RedundancyPolicy is set correctly, the replica shards are created as expected, so moving to VERIFIED.

Comment 5 errata-xmlrpc 2019-06-04 10:41:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

