Bug 1836866 - Kibana could not be scaled up
Summary: Kibana could not be scaled up
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.5.0
Assignee: IgorKarpukhin
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-05-18 11:35 UTC by Anping Li
Modified: 2020-07-13 17:39 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The elasticsearch-operator (EO) did not handle the `kibana.spec.replicas` field properly.
Consequence: Kibana could not be scaled.
Fix: The operator now handles the replicas field correctly and triggers a deployment update when it changes.
Result: Kibana can be scaled up and down.
Clone Of:
Environment:
Last Closed: 2020-07-13 17:39:36 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift elasticsearch-operator pull 358 0 None closed Bug 1836866: Fixed deployment trigger when replicas changed 2020-06-24 03:03:25 UTC
Red Hat Product Errata RHBA-2020:2409 0 None None None 2020-07-13 17:39:51 UTC

Description Anping Li 2020-05-18 11:35:09 UTC
Description of problem:
Kibana was not scaled up after the replicas number was changed.

Version-Release number of selected component (if applicable):
4.5 origin

How reproducible:
Always

Steps to Reproduce:
1. Deploy cluster logging with the Kibana replicas number set to 1 in the ClusterLogging CR:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: openshift-logging
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 1
      resources:
        limits:
          memory: 2Gi
        requests:
          cpu: 200m
          memory: 2Gi
      storage: {}
      redundancyPolicy: "ZeroRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "*/10 * * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}

2. Modify the Kibana replicas number to 2:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: openshift-logging
spec:
  <---snip --- >
  visualization:
    type: "kibana"
    kibana:
      replicas: 2
  <---snip---->

Actual results:
The replicas number is 2 in the Kibana CR, but the replicas number in the kibana deployment is still 1.


Expected results:
There are two kibana pods.

Additional info:
Workaround: delete the kibana deployment; two Kibana pods then come up.

Comment 1 IgorKarpukhin 2020-05-19 14:26:31 UTC
@Anping, due to the quay outage today I could not create a cluster to reproduce this bug. Could you please post the elasticsearch-operator logs here?

Comment 2 IgorKarpukhin 2020-05-20 14:18:31 UTC
@Anping, this PR solves that bug: https://github.com/openshift/elasticsearch-operator/pull/358
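
For context (this is not the code from that PR): the change is conceptually about comparing the desired `kibana.spec.replicas` value against the replica count of the managed kibana deployment and triggering a deployment update when they differ. A minimal Go sketch of that comparison, using hypothetical type and function names:

package main

import "fmt"

// Hypothetical, simplified stand-ins for the CR spec and the managed
// deployment; the real operator works with the Kibana CRD and a Deployment.
type kibanaSpec struct {
	replicas int32
}

type deploymentSpec struct {
	replicas int32
}

// needsUpdate reports whether the kibana deployment should be updated
// because kibana.spec.replicas no longer matches the deployed count.
func needsUpdate(desired kibanaSpec, current deploymentSpec) bool {
	return desired.replicas != current.replicas
}

func main() {
	cr := kibanaSpec{replicas: 2}       // value after editing the CR
	dep := deploymentSpec{replicas: 1}  // deployment still at the old value
	if needsUpdate(cr, dep) {
		fmt.Println("replica count changed: trigger a kibana deployment update")
	}
}

In the operator, a check like this would sit in the reconcile loop that manages the kibana deployment, so the missing piece in the bug was that a change to the replicas field never caused the deployment to be updated.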

Comment 3 IgorKarpukhin 2020-05-27 14:12:09 UTC
@Anping, PR merged. Please test.

Comment 4 Anping Li 2020-05-29 06:09:29 UTC
Verified on elasticsearch-operator.4.5.0-202005290037 and clusterlogging.4.5.0-202005280857

Comment 6 errata-xmlrpc 2020-07-13 17:39:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

