Description of problem:
After upgrading from 3.11.117 to 3.11.157, the Elasticsearch cluster fails to reach 100% active shards during the logging stack upgrade.

Memory of ES pod --> 55 GB per pod
CPU of ES pod --> 4 cores per ES pod
PRIMARY_SHARDS --> 1
REPLICA_SHARDS --> 2

The workaround applied to get the cluster back to 100% was to set every shard that was in yellow state to one replica. With no more shards left to sync, the percentage returns to 100% and the cluster turns green, so the next member (ES node) can be redeployed by the playbook. There are no shards in RED state.

Version-Release number of selected component (if applicable):

How reproducible:
Upgrade the EFK stack from 3.11.117 to 3.11.157

Steps to Reproduce:
1.
2.
3.

Actual results:
- The cluster percentage shows 99.92128197919595
- Some shards remain in yellow state

Expected results:
- The cluster percentage should be 100%
- No shards remain in yellow or RED state

The logs show a version mismatch between nodes:
~~~
"target node version [5.6.13] is older than the source node version [5.6.16]"
~~~

Additional info:
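For reference, a minimal sketch of the checks and the workaround described above, assuming the standard openshift-logging admin certificate paths inside the Elasticsearch container; $es_pod and <index_name> are placeholders to be replaced for your environment:

~~~
# Helper: run curl inside the ES container with the admin certs
# (paths assume the standard openshift-logging secret mounts).
es_curl() {
  oc exec $es_pod -n openshift-logging -c elasticsearch -- \
    curl -s --key /etc/elasticsearch/secret/admin-key \
            --cert /etc/elasticsearch/secret/admin-cert \
            --cacert /etc/elasticsearch/secret/admin-ca "$@"
}

# Check the active shards percentage the upgrade playbook is waiting on.
es_curl "https://localhost:9200/_cluster/health?pretty" | grep active_shards_percent

# Confirm the node version mismatch seen in the logs.
es_curl "https://localhost:9200/_cat/nodes?h=name,version"

# List indices that are still yellow (replicas not assigned).
es_curl "https://localhost:9200/_cat/indices?h=health,index" | grep ^yellow

# Workaround: drop the replica count of a yellow index to 1 so the
# cluster can go green; <index_name> is a placeholder.
es_curl -XPUT "https://localhost:9200/<index_name>/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 1}}'
~~~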
*** Bug 1816965 has been marked as a duplicate of this bug. ***
Hi,

can you please check the unassigned.reason code? See the following command example:

$ oc exec $es_pod -n openshift-logging -c elasticsearch -- curl -s --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert --cacert /etc/elasticsearch/secret/admin-ca https://localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason | grep -v STARTED

It should give you output like this:

.orphaned.2020.03.19 0 r UNASSIGNED INDEX_CREATED
.orphaned.2020.03.19 0 r UNASSIGNED INDEX_CREATED
project.pep-zoll-uat.5b8b65e2-81ea-11e9-ab56-00505698129b.2020.03.19 0 r UNASSIGNED INDEX_CREATED
project.pep-zoll-uat.5b8b65e2-81ea-11e9-ab56-00505698129b.2020.03.19 0 r UNASSIGNED INDEX_CREATED

What I am really interested in is the last column. Is there any code other than "INDEX_CREATED"? If yes, what is the code?

Regards,
Lukáš
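If the unassigned.reason column alone is not conclusive, the cluster allocation explain API (available in ES 5.x) reports why the first unassigned shard it finds is not being allocated; a minimal sketch reusing the same admin certificates, with $es_pod as a placeholder:

~~~
# With no request body, this explains the first unassigned shard.
$ oc exec $es_pod -n openshift-logging -c elasticsearch -- \
    curl -s --key /etc/elasticsearch/secret/admin-key \
            --cert /etc/elasticsearch/secret/admin-cert \
            --cacert /etc/elasticsearch/secret/admin-ca \
    "https://localhost:9200/_cluster/allocation/explain?pretty"
~~~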
Verified using ose-ansible:v3.11.232-2
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2477