Bug 1816966 - Cluster state does not manage to reach 100% during an upgrade from 3.11.117 to 3.11.157
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.11.z
Assignee: ewolinet
QA Contact: Anping Li
URL:
Whiteboard:
Duplicates: 1816965
Depends On:
Blocks:
 
Reported: 2020-03-25 09:37 UTC by Saurabh Sadhale
Modified: 2023-10-06 19:29 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-06-17 20:21:27 UTC
Target Upstream Version:
Embargoed:
Flags: jcantril: needinfo-


Attachments


Links
Github openshift/openshift-ansible pull 12165 (closed): Bug 1816966: Updating ES restart policy. Last updated 2021-02-17 17:53:31 UTC
Red Hat Knowledge Base (Solution) 4927201. Last updated 2020-03-25 10:42:38 UTC
Red Hat Product Errata RHBA-2020:2477. Last updated 2020-06-17 20:21:37 UTC

Description Saurabh Sadhale 2020-03-25 09:37:11 UTC
Description of problem:

After upgrading from 3.11.117 to 3.11.157, the logging stack upgrade stalls because the Elasticsearch cluster fails to reach 100% active shards.

Memory per ES pod: 55 GB
CPU per ES pod: 4 cores
PRIMARY_SHARDS: 1
REPLICA_SHARDS: 2

The workaround applied to reach 100% cluster state was to reduce every index with shards in yellow state to one replica. With no shards left to sync, the active shards percentage returns to 100% and the cluster turns green, so the playbook can redeploy the next member (ES node). There are no shards in RED state.
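For reference, a minimal sketch of that workaround, assuming the yellow indices match the index pattern project.* and reusing the admin certificates mounted in the Elasticsearch pod ($es_pod and the index pattern are placeholders, not taken from this report):

~~~
$ oc exec $es_pod -n openshift-logging -c elasticsearch -- \
    curl -s -XPUT \
    --key /etc/elasticsearch/secret/admin-key \
    --cert /etc/elasticsearch/secret/admin-cert \
    --cacert /etc/elasticsearch/secret/admin-ca \
    -H 'Content-Type: application/json' \
    -d '{"index": {"number_of_replicas": 1}}' \
    "https://localhost:9200/project.*/_settings"
~~~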

Version-Release number of selected component (if applicable):


How reproducible:
Upgrade the EFK stack from 3.11.117 to 3.11.157

Steps to Reproduce:
1. Upgrade the EFK stack from 3.11.117 to 3.11.157 by running the logging playbook (see the sketch below).
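In 3.11 the logging stack is upgraded with openshift-ansible; a sketch of the invocation, assuming the standard RPM install layout (the inventory path is a placeholder):

~~~
$ ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
~~~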

Actual results:
- The active shards percentage shows 99.92128197919595
- Some shards remain in yellow state

Expected results:
- The active shards percentage should be 100%
- No shards remain in yellow or RED state.
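The percentage above matches the active_shards_percent_as_number field returned by the cluster health API; a sketch of how to read it directly, using the same admin certificates as in the sketch above:

~~~
$ oc exec $es_pod -n openshift-logging -c elasticsearch -- \
    curl -s \
    --key /etc/elasticsearch/secret/admin-key \
    --cert /etc/elasticsearch/secret/admin-cert \
    --cacert /etc/elasticsearch/secret/admin-ca \
    "https://localhost:9200/_cluster/health?pretty"
~~~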

Logs show a version mismatch between the Elasticsearch nodes:

~~~
"target node version [5.6.13] is older than the source node version [5.6.16]"
~~~
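A sketch of how to list the Elasticsearch version running on each node, again using the admin certificates mounted in the pod:

~~~
$ oc exec $es_pod -n openshift-logging -c elasticsearch -- \
    curl -s \
    --key /etc/elasticsearch/secret/admin-key \
    --cert /etc/elasticsearch/secret/admin-cert \
    --cacert /etc/elasticsearch/secret/admin-ca \
    "https://localhost:9200/_cat/nodes?h=name,version"
~~~

The quoted message is consistent with Elasticsearch's rule that shards are not relocated from a newer node to an older one, so some replicas can stay unassigned until every node in the cluster has been redeployed to the same version.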


Additional info:

Comment 3 Jeff Cantrill 2020-03-25 12:48:30 UTC
*** Bug 1816965 has been marked as a duplicate of this bug. ***

Comment 6 Lukas Vlcek 2020-05-13 17:26:05 UTC
Hi,

Can you please check the unassigned.reason code?
See the following command example:

$ oc exec $es_pod -n openshift-logging -c elasticsearch -- \
    curl -s \
    --key /etc/elasticsearch/secret/admin-key \
    --cert /etc/elasticsearch/secret/admin-cert \
    --cacert /etc/elasticsearch/secret/admin-ca \
    "https://localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason" \
  | grep -v STARTED


It should give you output like this:

.orphaned.2020.03.19                                                                           0 r UNASSIGNED INDEX_CREATED
.orphaned.2020.03.19                                                                           0 r UNASSIGNED INDEX_CREATED
project.pep-zoll-uat.5b8b65e2-81ea-11e9-ab56-00505698129b.2020.03.19                           0 r UNASSIGNED INDEX_CREATED
project.pep-zoll-uat.5b8b65e2-81ea-11e9-ab56-00505698129b.2020.03.19                           0 r UNASSIGNED INDEX_CREATED


What I am really interested in is the last column. Is there any code other than "INDEX_CREATED"? If so, what is it?
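If a less obvious reason code shows up, the allocation explain API (available since Elasticsearch 5.0) reports why the first unassigned shard it finds cannot be allocated; a sketch using the same certificates:

~~~
$ oc exec $es_pod -n openshift-logging -c elasticsearch -- \
    curl -s \
    --key /etc/elasticsearch/secret/admin-key \
    --cert /etc/elasticsearch/secret/admin-cert \
    --cacert /etc/elasticsearch/secret/admin-ca \
    "https://localhost:9200/_cluster/allocation/explain?pretty"
~~~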

Regards,
Lukáš

Comment 13 Anping Li 2020-06-10 16:21:24 UTC
Verified using ose-ansible:v3.11.232-2

Comment 17 errata-xmlrpc 2020-06-17 20:21:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2477

