Bug 1463046 - Change the Elasticsearch setting "node.max_local_storage_nodes" to 1 to prevent sharing EBS volumes
Summary: Change the Elasticsearch setting "node.max_local_storage_nodes" to 1 to prevent sharing EBS volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.5.1
Hardware: All
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.5.z
Assignee: Jeff Cantrill
QA Contact: Xia Zhao
URL:
Whiteboard:
Duplicates: 1462281 (view as bug list)
Depends On: 1460564 1462277
Blocks: 1462281
 
Reported: 2017-06-20 02:04 UTC by Jeff Cantrill
Modified: 2017-07-11 10:47 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The Elasticsearch default value for sharing storage between ES instances was incorrect.
Consequence: The incorrect default allowed an ES pod that was starting up (while another ES pod was shutting down, e.g. during DC redeployments) to create a new location on the PV for managing the storage volume, duplicating data and, in some instances, potentially causing data loss.
Fix: All ES pods now run with "node.max_local_storage_nodes" set to 1.
Result: ES pods starting up or shutting down no longer share the same storage, which prevents the data duplication and/or data loss.
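For context, a sketch of the failure mode described above (paths illustrative; the exact data directory layout depends on the image and PV mount): when two ES processes are allowed to share one data path, Elasticsearch keeps them apart by creating numbered node directories, so the duplication shows up roughly as
  <path.data>/nodes/0    <- data written by the original pod
  <path.data>/nodes/1    <- created by the second pod while the first was still shutting down
With node.max_local_storage_nodes set to 1, the second process instead fails to obtain the node lock for nodes/0 and exits, so no second copy of the data is created.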
Clone Of: 1462277
Environment:
Last Closed: 2017-07-11 10:47:38 UTC
Target Upstream Version:
Embargoed:




Links:
  System:       Red Hat Product Errata
  ID:           RHBA-2017:1640
  Private:      0
  Priority:     normal
  Status:       SHIPPED_LIVE
  Summary:      OpenShift Container Platform 3.5 and 3.4 bug fix update
  Last Updated: 2017-07-11 14:47:16 UTC

Description Jeff Cantrill 2017-06-20 02:04:43 UTC
+++ This bug was initially created as a clone of Bug #1462277 +++

+++ This bug was initially created as a clone of Bug #1460564 +++

Change the setting for node.max_local_storage_nodes to 1 for all ES pods, as this would prevent us from seeing problems where two ES pods end up sharing the same EBS volume if one pod does not shut down properly.

For an example of this, see https://bugzilla.redhat.com/show_bug.cgi?id=1443350#c33

See discussion from https://discuss.elastic.co/t/multiple-folders-inside-nodes-folder/85358, and the documentation at https://www.elastic.co/guide/en/elasticsearch/reference/2.4/modules-node.html#max-local-storage-nodes.
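
For reference, a minimal sketch of what the resulting setting looks like in the elasticsearch.yml carried by the logging-elasticsearch configmap; the nesting shown here is illustrative (the flat key form node.max_local_storage_nodes: 1 is equivalent), but the setting itself is the documented Elasticsearch option linked above:

node:
  max_local_storage_nodes: 1

Either form caps the number of Elasticsearch nodes allowed to share a single data path at one.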

--- Additional comment from Jeff Cantrill on 2017-06-19 21:57:37 EDT ---

merged in https://github.com/openshift/openshift-ansible/pull/4466/

--- Additional comment from Jeff Cantrill on 2017-06-19 22:03:29 EDT ---

Modifying this BZ to reference 3.4.1, since it clones the BZ that is referenced by the PR in comment 1.

Comment 1 Jeff Cantrill 2017-06-20 02:09:21 UTC
backport PR https://github.com/openshift/openshift-ansible/pull/4502

Comment 3 Xia Zhao 2017-06-30 05:50:24 UTC
The testing work is blocked by this new regression bug: https://bugzilla.redhat.com/show_bug.cgi?id=1466626

Comment 4 Jeff Cantrill 2017-06-30 17:18:09 UTC
*** Bug 1462281 has been marked as a duplicate of this bug. ***

Comment 5 Xia Zhao 2017-07-03 05:45:14 UTC
max_local_storage_nodes is 1 now
# oc get configmap logging-elasticsearch -o yaml | grep -i max_local_storage_nodes
      max_local_storage_nodes: 1
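
Besides the configmap check above, the live node settings can also be inspected from inside an ES pod. This is only a hedged sketch, not part of the verification: the component=es label and the /etc/elasticsearch/secret/* certificate paths are assumptions about the 3.5 logging image, while the _nodes settings API itself is standard Elasticsearch.
# es_pod=$(oc get pods -l component=es -o jsonpath='{.items[0].metadata.name}')
# oc exec $es_pod -- curl -s \
    --cacert /etc/elasticsearch/secret/admin-ca \
    --cert /etc/elasticsearch/secret/admin-cert \
    --key /etc/elasticsearch/secret/admin-key \
    "https://localhost:9200/_nodes/settings?pretty" | grep max_local_storage_nodes
If the setting was picked up, max_local_storage_nodes should appear with value 1 in the output.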

Testing env:
# openshift version
openshift v3.5.5.31
kubernetes v1.5.2+43a9be4
etcd 3.1.0

Ansible version:
openshift-ansible-playbooks-3.5.91-1.git.0.28b3ddb.el7.noarch

Worked around bug #1466626 by adding the configuration in https://github.com/openshift/openshift-ansible/pull/4657/files.

Images from brew registry:
openshift3/logging-kibana    277c4a616a5a
openshift3/logging-elasticsearch    a7989e457354
openshift3/logging-fluentd    c09565262cad
openshift3/logging-curator    0aa259fbc36e
openshift3/logging-auth-proxy    d79212db0381

Comment 7 errata-xmlrpc 2017-07-11 10:47:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1640

