Summary: | The EO goes into CrashLoopBackOff after updating the kibana resource configurations in the clusterlogging instance. | ||
---|---|---|---|
Product: | OpenShift Container Platform | Reporter: | Qiaoling Tang <qitang> |
Component: | Logging | Assignee: | Hui Kang <hkang> |
Status: | VERIFIED | QA Contact: | Qiaoling Tang <qitang>
Severity: | high | Docs Contact: | |
Priority: | unspecified | ||
Version: | 4.5 | CC: | aos-bugs, hkang, periklis |
Target Milestone: | --- | ||
Target Release: | 4.7.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | logging-exploration | ||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: |
Cause: When the current kibana resource is set to "resources: {}", its resource map is nil.
Consequence: The operator panics when it accesses the nil map.
Fix: Initialize the map before writing to it (see the Go sketch after this table).
Result: The EO no longer crashes when the kibana resource configurations are updated.
|
Story Points: | --- |
Clone Of: | | Environment: |
Last Closed: | | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: |
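
The Doc Text above describes a panic from assigning into a nil map when kibana is created with `resources: {}`. Below is a minimal Go sketch of that failure mode and the fix, assuming the operator copies desired resource requests into the current kibana `corev1.ResourceRequirements`; the helper `updateRequests` and its fields are illustrative, not the elasticsearch-operator's actual code.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// updateRequests is a hypothetical helper, not the operator's actual code.
// When the kibana CR was created with `resources: {}`, current.Requests is a
// nil map, and assigning into it panics with
// "assignment to entry in nil map".
func updateRequests(current *corev1.ResourceRequirements, desired corev1.ResourceList) {
	// Fix from the Doc Text: initialize the map before writing to it.
	if current.Requests == nil {
		current.Requests = corev1.ResourceList{}
	}
	for name, qty := range desired {
		current.Requests[name] = qty
	}
}

func main() {
	// Simulates the current state after creating the CR with `resources: {}`:
	// both Limits and Requests are nil maps.
	current := corev1.ResourceRequirements{}

	// Simulates the resource update from step 3 of the reproduction steps.
	desired := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("800m"),
		corev1.ResourceMemory: resource.MustParse("2Gi"),
	}

	// Without the nil check in updateRequests, this call would panic and the
	// operator pod would keep restarting (CrashLoopBackOff).
	updateRequests(&current, desired)
}
```

Writing into a nil Go map always panics, so initializing `Requests` (and `Limits`, if written the same way) before assignment is enough to avoid the crash.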
Description
Qiaoling Tang
2020-10-20 03:11:29 UTC
elasticsearch-operator.4.6.0-202010140833.p0 has the same issue.

> 1. deploy logging 4.5
> 2. create clusterlogging instance with:
>
>     apiVersion: "logging.openshift.io/v1"
>     kind: "ClusterLogging"
>     metadata:
>       name: "instance"
>       namespace: "openshift-logging"
>     spec:
>       managementState: "Managed"
>       logStore:
>         type: "elasticsearch"
>         retentionPolicy:
>           application:
>             maxAge: 1d
>           infra:
>             maxAge: 3h
>           audit:
>             maxAge: 2w
>         elasticsearch:
>           nodeCount: 3
>           redundancyPolicy: "SingleRedundancy"
>           resources:
>             requests:
>               memory: "2Gi"
>           storage:
>             storageClassName: "standard"
>             size: "20Gi"
>       visualization:
>         type: "kibana"
>         kibana:
>           resources: {}
>           replicas: 1
>       collection:
>         logs:
>           type: "fluentd"
>           fluentd: {}

I found the issue only happens when `spec.visualization.kibana.resources: {}` is present in the clusterlogging instance. If I create the clusterlogging instance with the yaml below and follow the same steps, there is no such issue:

    apiVersion: "logging.openshift.io/v1"
    kind: "ClusterLogging"
    metadata:
      name: "instance"
      namespace: "openshift-logging"
    spec:
      managementState: "Managed"
      logStore:
        type: "elasticsearch"
        retentionPolicy:
          application:
            maxAge: 1d
          infra:
            maxAge: 3h
          audit:
            maxAge: 2w
        elasticsearch:
          nodeCount: 3
          redundancyPolicy: "SingleRedundancy"
          resources:
            requests:
              memory: "2Gi"
          storage:
            storageClassName: "standard"
            size: "20Gi"
      visualization:
        type: "kibana"
        kibana:
          replicas: 1
      collection:
        logs:
          type: "fluentd"
          fluentd: {}

> 3. wait until all the EFK pods become Running, then update the kibana resource
>    configurations to:
>
>     managementState: Managed
>     visualization:
>       kibana:
>         proxy:
>           resources:
>             limits:
>               memory: 1Gi
>             requests:
>               cpu: 100m
>               memory: 1Gi
>         replicas: 1
>         resources:
>           limits:
>             cpu: 1000m
>             memory: 4Gi
>           requests:
>             cpu: 800m
>             memory: 2Gi
>       type: kibana
> 4. check the EO status (see the oc commands below)

Doc Text: Previously, the operator failed when the current kibana resource was `resources: {}`.
Doc type: Bug fix

Verified with quay.io/openshift/origin-elasticsearch-operator@sha256:1a1446fab00689c1e1eb256ad57be20ef0b2215236841564254862d888efd007
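
For step 4 of the reproduction steps above, one way to check the EO status and capture the panic is with `oc`; the namespace `openshift-operators-redhat` is where the elasticsearch-operator is usually deployed and is an assumption here, and the pod name is a placeholder:

```
# List the operator pods and look for CrashLoopBackOff (assumed namespace).
oc get pods -n openshift-operators-redhat

# Show the logs of the previously crashed container to see the panic.
oc logs <elasticsearch-operator-pod-name> -n openshift-operators-redhat --previous
```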