Bug 1887357
Summary: ES pods can't recover from `Pending` status.

| Field | Value |
| --- | --- |
| Product | OpenShift Container Platform |
| Component | Logging |
| Version | 4.6 |
| Target Release | 4.7.0 |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | unspecified |
| Reporter | Qiaoling Tang <qitang> |
| Assignee | ewolinet |
| QA Contact | Qiaoling Tang <qitang> |
| Docs Contact | Rolfe Dlugy-Hegwer <rdlugyhe> |
| CC | aos-bugs, ewolinet, periklis, rdlugyhe |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | logging-exploration |
| Doc Type | Bug Fix |
| Type | Bug |
| Clones | 1887943 (view as bug list) |
| Bug Blocks | 1880926, 1887943 |
| Last Closed | 2021-02-24 11:21:19 UTC |

Doc Text:

* Previously, nodes did not recover from "Pending" status because a software bug did not correctly update their status in the Elasticsearch custom resource (CR). The current release fixes this issue, so the nodes can recover when their status is "Pending."
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1887357[*BZ#1887357*])
Description
Qiaoling Tang, 2020-10-12 08:40:57 UTC

---

**ewolinet (comment #1):**

@qitang Was the reason the pods were pending initially due to the memory request being too large?

**Qiaoling Tang (in reply to ewolinet from comment #1):**

Yes, when I described the ES pods, the output was:

```
Events:
  Type     Reason            Age  From  Message
  ----     ------            ---  ----  -------
  Warning  FailedScheduling  26s        0/6 nodes are available: 6 Insufficient memory.
  Warning  FailedScheduling  26s        0/6 nodes are available: 6 Insufficient memory.
```

Tested with `quay.io/openshift/origin-elasticsearch-operator@sha256:c59349755eeefe446a5c39a2caf9dce1320a462530e8ac7b9f73fa38bc10a468`; the status could be updated and the ES pod could be redeployed:

```yaml
nodes:
- conditions:
  - lastTransitionTime: "2020-10-14T00:41:23Z"
    message: '0/6 nodes are available: 6 Insufficient memory.'
    reason: Unschedulable
    status: "True"
    type: Unschedulable
  deploymentName: elasticsearch-cdm-n4txturr-1
  upgradeStatus:
    scheduledUpgrade: "True"
    underUpgrade: "True"
    upgradePhase: nodeRestarting
```

**Errata resolution:**

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Errata Advisory for Openshift Logging 5.0.0), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0652
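The verified behavior is that a scheduling failure now lands in the Elasticsearch CR status as an `Unschedulable` condition while the node still gets queued for redeployment. A minimal sketch of that status-update idea follows; this is an illustration only, not the operator's actual Go code, and the function and field names beyond those visible in the status snippet above are assumptions:

```python
# Illustrative sketch (NOT the elasticsearch-operator's real implementation):
# when a node's pod reports FailedScheduling, record the condition in the CR
# status AND mark the node for upgrade, so it is retried instead of staying
# Pending forever (the pre-fix behavior reported in this bug).

def update_node_status(node_status, pod_events):
    """Reflect scheduling failures in the CR status rather than dropping them."""
    for event in pod_events:
        if event.get("reason") == "FailedScheduling":
            node_status["conditions"] = [{
                "type": "Unschedulable",
                "status": "True",
                "reason": "Unschedulable",
                "message": event["message"],
            }]
            # The fix also keeps the upgrade machinery engaged, matching the
            # upgradeStatus block shown in the verification output above.
            node_status["upgradeStatus"] = {
                "scheduledUpgrade": "True",
                "underUpgrade": "True",
                "upgradePhase": "nodeRestarting",
            }
    return node_status

status = update_node_status(
    {"deploymentName": "elasticsearch-cdm-n4txturr-1"},
    [{"reason": "FailedScheduling",
      "message": "0/6 nodes are available: 6 Insufficient memory."}],
)
```

With a status shaped like this, a later reconcile pass can redeploy the node once cluster memory frees up, which is the recovery path verified above.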