Bug 1836866
| Summary: | The kibana couldn't be scaled up | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Anping Li <anli> |
| Component: | Logging | Assignee: | IgorKarpukhin <ikarpukh> |
| Status: | CLOSED ERRATA | QA Contact: | Anping Li <anli> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.5 | CC: | aos-bugs, ikarpukh |
| Target Milestone: | --- | | |
| Target Release: | 4.5.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | Cause: The elasticsearch-operator (EO) wasn't handling the `kibana.spec.replicas` field properly. Consequence: Kibana couldn't be scaled. Fix: The EO now handles the replicas field correctly. Result: Kibana can be scaled up and down. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-07-13 17:39:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
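For context, a minimal sketch of the scaling path this fix restores, using the CR name `instance` and namespace `openshift-logging` from the reproduction steps below. The Kibana CR name `kibana` is an assumption (the default created by the cluster-logging-operator), not something stated in this report:

```sh
# Bump the kibana replica count on the ClusterLogging CR
# (CR name "instance", as in the reproduction steps below).
oc -n openshift-logging patch clusterlogging instance --type merge \
  -p '{"spec":{"visualization":{"kibana":{"replicas":2}}}}'

# The value propagates to the Kibana CR (CR name "kibana" assumed)...
oc -n openshift-logging get kibana kibana -o jsonpath='{.spec.replicas}'

# ...and, with the fix, the elasticsearch-operator reconciles the
# deployment to match. Before the fix this stayed at 1.
oc -n openshift-logging get deployment kibana -o jsonpath='{.spec.replicas}'
```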
@Anping, due to the quay outage today I couldn't create a cluster to reproduce this bug. Could you please post the logs of the elasticsearch operator here?

@Anping, this PR solves the bug: https://github.com/openshift/elasticsearch-operator/pull/358

@Anping, the PR is merged. Please test.

Verified on elasticsearch-operator.4.5.0-202005290037 and clusterlogging.4.5.0-202005280857.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409
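As a hedged aside, one way to confirm which operator builds are installed when re-testing. That the elasticsearch-operator CSV lives in `openshift-operators-redhat` is an assumption about the install layout, not stated in this report:

```sh
# List installed operator versions (CSVs) in the logging namespaces;
# the verified builds quoted above should show up here.
oc get csv -n openshift-logging
oc get csv -n openshift-operators-redhat
```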
Description of problem:
Kibana wasn't scaled up after the replicas number was changed.

Version-Release number of selected component (if applicable):
4.5 origin

How reproducible:
Always

Steps to Reproduce:

1. Deploy clusterlogging with the kibana replicas number set to 1 in the ClusterLogging CR:

```yaml
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: openshift-logging
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 1
      resources:
        limits:
          memory: 2Gi
        requests:
          cpu: 200m
          memory: 2Gi
      storage: {}
      redundancyPolicy: "ZeroRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "*/10 * * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
```

2. Change the kibana replicas number to 2:

```yaml
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: openshift-logging
spec:
  <--- snip --->
  visualization:
    type: "kibana"
    kibana:
      replicas: 2
  <--- snip --->
```

Actual results:
The replicas number is 2 in the kibana CR, but the replicas number is still 1 in the kibana deployment.

Expected results:
There are two kibana pods.

Additional info:
Workaround: delete the kibana deployment; after it is recreated you get two kibana pods.
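A minimal sketch of the workaround above, assuming the deployment is named `kibana` in the `openshift-logging` namespace as in the reproduction steps:

```sh
# Workaround: delete the kibana deployment; the operator recreates it
# using the replica count from the CR.
oc -n openshift-logging delete deployment kibana

# After recreation, two kibana pods should be running.
oc -n openshift-logging get pods | grep kibana
```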