Bug 1741350
Summary: | Failed to upgrade ES pods because cpu limit is set to zero in the deployment | |
---|---|---|---|
Product: | OpenShift Container Platform | Reporter: | Jeff Cantrill <jcantril> |
Component: | Logging | Assignee: | Jeff Cantrill <jcantril> |
Status: | CLOSED ERRATA | QA Contact: | Anping Li <anli> |
Severity: | high | Docs Contact: | |
Priority: | unspecified | ||
Version: | 4.1.z | CC: | anli, aos-bugs, bparees, qitang, rmeggins |
Target Milestone: | --- | Keywords: | Regression |
Target Release: | 4.1.z | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | If docs needed, set a value | |
Doc Text: | Story Points: | --- | |
Clone Of: | 1740447 | Environment: | |
Last Closed: | 2019-08-28 19:55:01 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1740447 | ||
Bug Blocks: | 1740957 |
Description

Jeff Cantrill, 2019-08-14 20:55:33 UTC
*** Bug 1740957 has been marked as a duplicate of this bug. ***

The ES pods were not upgraded, but Elasticsearch was still working at this stage.

```
[anli@preserve-anli-slave 41b]$ oc logs elasticsearch-operator-b75f66bdc-8t5qz
time="2019-08-19T12:02:04Z" level=info msg="Go Version: go1.10.8"
time="2019-08-19T12:02:04Z" level=info msg="Go OS/Arch: linux/amd64"
time="2019-08-19T12:02:04Z" level=info msg="operator-sdk Version: 0.0.7"
time="2019-08-19T12:02:04Z" level=info msg="Watching logging.openshift.io/v1, Elasticsearch, , 5000000000"
time="2019-08-19T12:02:43Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 7 shards in preparation for cluster restart"
time="2019-08-19T12:02:43Z" level=warning msg="Error occurred while updating node elasticsearch-cdm-sjnug6wx-1: Deployment.apps \"elasticsearch-cdm-sjnug6wx-1\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"200m\": must be less than or equal to cpu limit"
time="2019-08-19T12:02:54Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 7 shards in preparation for cluster restart"
time="2019-08-19T12:02:54Z" level=warning msg="Error occurred while updating node elasticsearch-cdm-sjnug6wx-2: Deployment.apps \"elasticsearch-cdm-sjnug6wx-2\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"200m\": must be less than or equal to cpu limit"
time="2019-08-19T12:03:05Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:03:05Z" level=warning msg="Error occurred while updating node elasticsearch-cdm-sjnug6wx-3: Deployment.apps \"elasticsearch-cdm-sjnug6wx-3\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"200m\": must be less than or equal to cpu limit"
time="2019-08-19T12:03:17Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:03:33Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:03:46Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 5 shards in preparation for cluster restart"
time="2019-08-19T12:04:06Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:04:19Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:04:39Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:04:54Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:05:10Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:05:23Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:05:43Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 5 shards in preparation for cluster restart"
time="2019-08-19T12:06:07Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 5 shards in preparation for cluster restart"
time="2019-08-19T12:06:22Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 7 shards in preparation for cluster restart"
time="2019-08-19T12:06:37Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:06:50Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 8 shards in preparation for cluster restart"
time="2019-08-19T12:07:06Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:07:18Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 5 shards in preparation for cluster restart"
time="2019-08-19T12:07:33Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:07:48Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:08:01Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 4 shards in preparation for cluster restart"
time="2019-08-19T12:08:17Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:08:31Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
time="2019-08-19T12:08:43Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 5 shards in preparation for cluster restart"
time="2019-08-19T12:09:07Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 4 shards in preparation for cluster restart"
time="2019-08-19T12:09:18Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 6 shards in preparation for cluster restart"
```

Can someone explain how this bug got introduced into 4.1.11 in the first place? Nothing should have gone into 4.1.11 that was not first verified as working in 4.2. Can you link to the 4.1.z PR that introduced it so we can see how it got merged?

Verified: cluster logging could be upgraded from 4.1.4 to 4.1.13.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2019:2547
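For context, the `Deployment.apps ... is invalid` errors above come from standard Kubernetes API validation: a container's CPU request must not exceed its CPU limit, so a limit rendered as zero makes the 200m request invalid. The following is a minimal sketch of the kind of resources stanza that would trigger this rejection; only the `cpu: 200m` request is taken from the log, and the container name and memory values are assumed for illustration, not copied from the actual deployment:

```yaml
# Hypothetical fragment of a generated Deployment spec.
# A CPU limit of "0" is lower than the 200m request, so the API server
# rejects the update with:
#   spec.template.spec.containers[0].resources.requests:
#   Invalid value: "200m": must be less than or equal to cpu limit
spec:
  template:
    spec:
      containers:
      - name: elasticsearch      # assumed container name
        resources:
          limits:
            cpu: "0"             # the bug: CPU limit set to zero
            memory: 16Gi         # assumed value for illustration
          requests:
            cpu: 200m            # from the error message
            memory: 16Gi         # assumed value for illustration
```

One way to inspect what the operator actually generated is to dump the container resources directly, e.g. `oc get deployment elasticsearch-cdm-sjnug6wx-1 -o jsonpath='{.spec.template.spec.containers[0].resources}'`.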