Bug 1889573
Summary: | The EO enters CrashLoopBackOff after updating the kibana resource configuration in the clusterlogging instance. | | |
---|---|---|---|
Product: | OpenShift Container Platform | Reporter: | Qiaoling Tang <qitang> |
Component: | Logging | Assignee: | Hui Kang <hkang> |
Status: | CLOSED ERRATA | QA Contact: | Qiaoling Tang <qitang> |
Severity: | high | Docs Contact: | Rolfe Dlugy-Hegwer <rdlugyhe> |
Priority: | unspecified | ||
Version: | 4.5 | CC: | aos-bugs, hkang, periklis, rdlugyhe |
Target Milestone: | --- | ||
Target Release: | 4.7.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | logging-exploration | ||
Fixed In Version: | | Doc Type: | Bug Fix |
Doc Text: |
* Previously, if you updated the Kibana resource configuration in the clusterlogging instance to `resources: {}`, the resulting nil map caused a panic and changed the status of the Elasticsearch Operator to `CrashLoopBackOff`. The current release fixes this issue by initializing the map.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1889573[*BZ#1889573*])
|
Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2021-02-24 11:21:19 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Qiaoling Tang
2020-10-20 03:11:29 UTC
elasticsearch-operator.4.6.0-202010140833.p0 has the same issue.

> 1. deploy logging 4.5
> 2. create clusterlogging instance with:
> apiVersion: "logging.openshift.io/v1"
> kind: "ClusterLogging"
> metadata:
>   name: "instance"
>   namespace: "openshift-logging"
> spec:
>   managementState: "Managed"
>   logStore:
>     type: "elasticsearch"
>     retentionPolicy:
>       application:
>         maxAge: 1d
>       infra:
>         maxAge: 3h
>       audit:
>         maxAge: 2w
>     elasticsearch:
>       nodeCount: 3
>       redundancyPolicy: "SingleRedundancy"
>       resources:
>         requests:
>           memory: "2Gi"
>       storage:
>         storageClassName: "standard"
>         size: "20Gi"
>   visualization:
>     type: "kibana"
>     kibana:
>       resources: {}
>       replicas: 1
>   collection:
>     logs:
>       type: "fluentd"
>       fluentd: {}

I found that the issue only happens when there is `spec.visualization.kibana.resources: {}` in the clusterlogging instance; if you create the clusterlogging instance with the YAML below and follow the same steps, there is no such issue.

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 1d
      infra:
        maxAge: 3h
      audit:
        maxAge: 2w
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: "SingleRedundancy"
      resources:
        requests:
          memory: "2Gi"
      storage:
        storageClassName: "standard"
        size: "20Gi"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  collection:
    logs:
      type: "fluentd"
      fluentd: {}

> 3. wait until all the EFK pods become Running, update the kibana resource
>    configurations to:
>   managementState: Managed
>   visualization:
>     kibana:
>       proxy:
>         resources:
>           limits:
>             memory: 1Gi
>           requests:
>             cpu: 100m
>             memory: 1Gi
>       replicas: 1
>       resources:
>         limits:
>           cpu: 1000m
>           memory: 4Gi
>         requests:
>           cpu: 800m
>           memory: 2Gi
>       type: kibana
> 4. check the EO status

Doc Text: Previously, the operator failed when the current Kibana resource configuration was `resources: {}`
Doc type: Bug fix

Verified with quay.io/openshift/origin-elasticsearch-operator@sha256:1a1446fab00689c1e1eb256ad57be20ef0b2215236841564254862d888efd007

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Errata Advisory for Openshift Logging 5.0.0), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0652
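The crash described above is the classic Go "assignment to entry in nil map" panic: an empty `resources: {}` stanza decodes to a struct whose inner maps are nil, and writing into a nil map panics. A minimal sketch of the failure mode and the fix ("initializing the map"); this is not the operator's actual code, and the function names are hypothetical:

```go
package main

import "fmt"

// mergeLimitsBuggy mimics the failing pattern: when the incoming spec is
// `resources: {}`, the decoded limits map is nil, and writing into a nil
// map panics with "assignment to entry in nil map".
func mergeLimitsBuggy(limits map[string]string) {
	limits["memory"] = "1Gi" // panics when limits == nil
}

// mergeLimitsFixed initializes the map before writing, matching the fix
// described in the bug report.
func mergeLimitsFixed(limits map[string]string) map[string]string {
	if limits == nil {
		limits = map[string]string{}
	}
	limits["memory"] = "1Gi"
	return limits
}

func main() {
	var empty map[string]string // what an empty `resources: {}` stanza decodes to
	fmt.Println(mergeLimitsFixed(empty)["memory"]) // prints "1Gi"
}
```

Note that reading from a nil map is safe in Go (it returns the zero value); only writes panic, which is why the bug surfaced on update rather than on initial creation.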