Bug 1655675 - Define Elasticsearch DC recreate timeout to avoid premature rollbacks
Summary: Define Elasticsearch DC recreate timeout to avoid premature rollbacks
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.11.z
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-03 16:37 UTC by Jeff Cantrill
Modified: 2019-01-10 09:05 UTC (History)
CC: 4 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: Define the recreate strategy timeout for Elasticsearch.
Reason: There are examples on AWS OpenShift clusters where the rollout of new Elasticsearch pods fails because the cluster has trouble attaching storage. Defining a long recreate timeout gives the cluster more time to attach storage to the new pod.
Result: Elasticsearch pods have more time to restart and experience fewer rollbacks.
Clone Of:
Environment:
Last Closed: 2019-01-10 09:04:12 UTC
Target Upstream Version:


Attachments (Terms of Use)


Links
System ID Priority Status Summary Last Updated
Github openshift openshift-ansible pull 10866 None None None 2018-12-12 04:08:31 UTC
Red Hat Product Errata RHBA-2019:0024 None None None 2019-01-10 09:05:49 UTC

Description Jeff Cantrill 2018-12-03 16:37:23 UTC
Description of problem:

Elasticsearch nodes are rolled back to their previous version if they cannot reach a ready state within the default timeout of 600s. The timeout needs to be raised to 1800s to avoid rollbacks.



Additional info: We have been advised it can take upwards of 30 to 40 minutes for AWS-backed clusters to attach storage.
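For illustration, the change amounts to setting `recreateParams.timeoutSeconds` on the Elasticsearch DeploymentConfig. This is a minimal sketch of the resulting strategy stanza; the DC name below is hypothetical, and the actual change is applied by the openshift-ansible logging role rather than edited by hand:

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: logging-es-data-master-example   # hypothetical ES node DC name
spec:
  strategy:
    type: Recreate
    recreateParams:
      # Default is 600s; 1800s gives slow storage attachment (e.g. AWS EBS)
      # time to complete before the rollout is considered failed and rolled back.
      timeoutSeconds: 1800
```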

Comment 1 Rich Megginson 2018-12-03 17:31:41 UTC
Is there a policy that says "never, ever rollback unless I explicitly tell you to rollback"?

Comment 3 Anping Li 2018-12-21 09:21:40 UTC
The fix is in ose-ansible:v3.11.59.

Comment 5 errata-xmlrpc 2019-01-10 09:04:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0024

