Description of problem:
Kibana is not scaled up after the replicas number is changed in the ClusterLogging CR.

Version-Release number of selected component (if applicable):
4.5 origin

How reproducible:
Always

Steps to Reproduce:
1. Deploy clusterlogging with the kibana replicas number set to 1 in the ClusterLogging CR:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: openshift-logging
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 1
      resources:
        limits:
          memory: 2Gi
        requests:
          cpu: 200m
          memory: 2Gi
      storage: {}
      redundancyPolicy: "ZeroRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "*/10 * * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}

2. Modify the kibana replicas number to 2:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: openshift-logging
spec:
  <--- snip --->
  visualization:
    type: "kibana"
    kibana:
      replicas: 2
  <--- snip --->

Actual results:
The replicas number is 2 in the Kibana CR, but the replicas number is still 1 in the kibana deployment.

Expected results:
There are two kibana pods.

Additional info:
Workaround: delete the kibana deployment; once it is recreated you get two kibana pods.
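For convenience, step 2 and the resulting mismatch can also be driven from the CLI. This is a minimal sketch, assuming the default "instance" CR in openshift-logging and the usual component=kibana pod label:

# Bump the Kibana replicas in the ClusterLogging CR
oc -n openshift-logging patch clusterlogging/instance --type merge \
  -p '{"spec":{"visualization":{"kibana":{"replicas":2}}}}'

# The kibana deployment keeps its old replica count (the bug)
oc -n openshift-logging get deployment kibana -o jsonpath='{.spec.replicas}'

# Workaround: delete the deployment; it is recreated with 2 replicas
oc -n openshift-logging delete deployment kibana
oc -n openshift-logging get pods -l component=kibana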
@Anping, due to the quay outage today I couldn't create a cluster to reproduce this bug. Could you please post the logs of the elasticsearch operator here?
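For reference, the operator logs can usually be pulled with something like the following (assuming a default install, where the elasticsearch operator runs in the openshift-operators-redhat namespace):

oc -n openshift-operators-redhat logs deployment/elasticsearch-operator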
@Anping, this PR solves that bug: https://github.com/openshift/elasticsearch-operator/pull/358
@Anping, PR merged. Please test.
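Once the fixed operator is installed, the replicas change should propagate without the delete workaround. A quick check, assuming the Kibana CR and the deployment are both named "kibana" as in a default install:

# Both should report 2 after the operator reconciles
oc -n openshift-logging get kibana kibana -o jsonpath='{.spec.replicas}'
oc -n openshift-logging get deployment kibana -o jsonpath='{.spec.replicas}'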
Verified on elasticsearch-operator.4.5.0-202005290037 and clusterlogging.4.5.0-202005280857
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409