+++ This bug was initially created as a clone of Bug #1460564 +++

Change the setting node.max_local_storage_nodes to 1 for all ES pods. This prevents two ES pods from ending up sharing the same EBS volume when one pod does not shut down properly; for an example of this, see https://bugzilla.redhat.com/show_bug.cgi?id=1443350#c33.

See also the discussion at https://discuss.elastic.co/t/multiple-folders-inside-nodes-folder/85358 and the documentation at https://www.elastic.co/guide/en/elasticsearch/reference/2.4/modules-node.html#max-local-storage-nodes.
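For reference, a minimal sketch of what the change amounts to in elasticsearch.yml; only the setting name and value come from the documentation linked above, and the surrounding layout of the logging-elasticsearch configmap is not reproduced here:

  # elasticsearch.yml (ES 2.4) - flat and nested forms are equivalent
  node.max_local_storage_nodes: 1

  # or, nested:
  node:
    max_local_storage_nodes: 1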
merged in https://github.com/openshift/openshift-ansible/pull/4466/
Modifying this BZ to reference 3.4.1, since it is a clone of the BZ that the PR in comment 1 references.
Upstream fix: https://github.com/openshift/origin-aggregated-logging/pull/49
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:rhaos-3.4-rhel-7-docker-candidate-88845-20170620200020
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:3.4.1
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:latest
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:v3.4
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:v3.4.1.41
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-deployer:v3.4.1.41-2
max_local_storage_nodes is 1 now:

# oc get configmap logging-elasticsearch -o yaml | grep -i max_local_storage_nodes
      max_local_storage_nodes: 1

Testing env:

# openshift version
openshift v3.4.1.42
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

Images from the brew registry:

# docker images | grep logging
logging-deployer        3.4.1   80ca9c90d261   35 hours ago   857.5 MB
logging-kibana          3.4.1   0c2759ddfcd9   35 hours ago   338.8 MB
logging-elasticsearch   3.4.1   2240ae237369   35 hours ago   399.6 MB
logging-fluentd         3.4.1   059b92a39419   35 hours ago   232.7 MB
logging-curator         3.4.1   46fd26ad9a8b   35 hours ago   244.5 MB
logging-auth-proxy      3.4.1   990787824baf   35 hours ago   215.3 MB
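A quick additional check on a running ES pod is to confirm that only one local node lock directory exists; the pod name, data path, and cluster name below are placeholders, not values taken from this deployment:

  # ES 2.x keeps one lock directory per local node under <path.data>/<cluster.name>/nodes/.
  # With node.max_local_storage_nodes: 1 there should only ever be a single "0" entry.
  oc exec <es-pod> -- ls <path.data>/<cluster.name>/nodes
  # expected: 0
  # Before this change, a pod that did not shut down cleanly could keep "0" locked, and the
  # replacement pod would silently start a second local node in "1" on the same EBS volume.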
@Jeff - We have a situation with the errata https://errata.devel.redhat.com/advisory/29143: the release date is tomorrow (29th June) and the customer has been waiting for this fix for quite some time. The customer has also escalated this several times, and Mustafa, Sudhir, Satish, and many others from senior management are directly involved in getting the issues taken care of for the customer. I just received an update from Xiaoli Tan that if these bugs are fixed today, we could still have the timely release tomorrow.

Thanks,
Praveen
Escalation Manager
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1640