Bug 1732698

Summary: The es node count in clusterlogging instance status is not correct when es nodeCount > 3.
Product: OpenShift Container Platform
Component: Logging
Version: 4.2.0
Reporter: Qiaoling Tang <qitang>
Assignee: Periklis Tsirakidis <periklis>
QA Contact: Anping Li <anli>
CC: aos-bugs, jcantril, periklis, rmeggins
Status: CLOSED ERRATA
Severity: low
Priority: medium
Target Release: 4.5.0
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Doc Text:
Cause: The operator read the wrong status fields of the Elasticsearch CR when determining the node count.
Consequence: An incorrect node count was propagated from the Elasticsearch CR status to the ClusterLogging CR status.
Fix: Read the correct field.
Result: The correct node count is propagated from the Elasticsearch CR status to the ClusterLogging CR status.
Last Closed: 2020-07-13 17:11:03 UTC
Type: Bug

Description Qiaoling Tang 2019-07-24 07:28:15 UTC
Description of problem:
Deploy logging with ES nodeCount > 3, then check the status in the clusterlogging instance: the reported nodeCount is always 3, even though more nodes are actually running (five in the status below).

    logStore:
      elasticsearchStatus:
      - ShardAllocationEnabled: all
        cluster:
          activePrimaryShards: 6
          activeShards: 30
          initializingShards: 0
          numDataNodes: 5
          numNodes: 5
          pendingTasks: 0
          relocatingShards: 0
          status: green
          unassignedShards: 0
        clusterName: elasticsearch
        nodeConditions:
          elasticsearch-cd-qw0rlndh-1: []
          elasticsearch-cd-qw0rlndh-2: []
          elasticsearch-cdm-93f80lxs-1: []
          elasticsearch-cdm-93f80lxs-2: []
          elasticsearch-cdm-93f80lxs-3: []
        nodeCount: 3
        pods:
          client:
            failed: []
            notReady: []
            ready:
            - elasticsearch-cd-qw0rlndh-1-7d4855f9fb-9cgdf
            - elasticsearch-cd-qw0rlndh-2-6458b5b44-677rv
            - elasticsearch-cdm-93f80lxs-1-7dcc88c567-5hctf
            - elasticsearch-cdm-93f80lxs-2-c465cd4db-6wzbq
            - elasticsearch-cdm-93f80lxs-3-5f7f8dd6f8-jm8rn
          data:
            failed: []
            notReady: []
            ready:
            - elasticsearch-cd-qw0rlndh-1-7d4855f9fb-9cgdf
            - elasticsearch-cd-qw0rlndh-2-6458b5b44-677rv
            - elasticsearch-cdm-93f80lxs-1-7dcc88c567-5hctf
            - elasticsearch-cdm-93f80lxs-2-c465cd4db-6wzbq
            - elasticsearch-cdm-93f80lxs-3-5f7f8dd6f8-jm8rn
          master:
            failed: []
            notReady: []
            ready:
            - elasticsearch-cdm-93f80lxs-1-7dcc88c567-5hctf
            - elasticsearch-cdm-93f80lxs-2-c465cd4db-6wzbq
            - elasticsearch-cdm-93f80lxs-3-5f7f8dd6f8-jm8rn
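The status above shows five Elasticsearch nodes (three in the cdm group, two in the cd group) and numNodes: 5, yet nodeCount reports 3. A minimal sketch of the likely failure mode, using hypothetical data structures (not the actual operator code, which is written in Go): the total must be summed across all node groups in the Elasticsearch CR, not taken from a single group.

```python
# Hypothetical representation of the Elasticsearch CR node groups from
# the status dump above: one cdm (client/data/master) group of 3 nodes
# and one cd (client/data) group of 2 nodes.
es_spec_nodes = [
    {"roles": ["client", "data", "master"], "nodeCount": 3},  # cdm group
    {"roles": ["client", "data"], "nodeCount": 2},            # cd group
]

# Buggy behavior: only one group's count is propagated.
buggy_node_count = es_spec_nodes[0]["nodeCount"]   # -> 3

# Fixed behavior: sum nodeCount over every node group.
fixed_node_count = sum(n["nodeCount"] for n in es_spec_nodes)  # -> 5
```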


Version-Release number of selected component (if applicable):
ose-cluster-logging-operator-v4.2.0-201907232219


How reproducible:
Always

Steps to Reproduce:
1. Deploy logging with ES nodeCount > 3.
2. Check the status in the clusterlogging instance.

Actual results:
The nodeCount in the clusterlogging instance status is always 3, regardless of how many Elasticsearch nodes are configured.

Expected results:
The nodeCount matches the actual number of Elasticsearch nodes (5 in this case).

Additional info:

Comment 4 Anping Li 2020-05-06 13:20:19 UTC
Verified in clusterlogging.v4.5.0; with four Elasticsearch nodes, the status now reports the correct nodeCount:
 logStore:
    elasticsearchStatus:
    - cluster:
        activePrimaryShards: 64
        activeShards: 64
        initializingShards: 1
        numDataNodes: 4
        numNodes: 4
        pendingTasks: 0
        relocatingShards: 0
        status: yellow
        unassignedShards: 0
      clusterName: elasticsearch
      nodeConditions:
        elasticsearch-cd-27k3wtz7-1: []
        elasticsearch-cdm-ya1nz4gf-1: []
        elasticsearch-cdm-ya1nz4gf-2: []
        elasticsearch-cdm-ya1nz4gf-3: []
      nodeCount: 4
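The verified status above is now internally consistent. A small sketch of that consistency check, using the data from the verification (field names mirror the YAML above; this is illustrative, not operator code):

```python
# Status fragment from the verified clusterlogging.v4.5.0 run above:
# four nodes (one cd group node, three cdm group nodes).
status = {
    "cluster": {"numNodes": 4, "numDataNodes": 4},
    "nodeConditions": {
        "elasticsearch-cd-27k3wtz7-1": [],
        "elasticsearch-cdm-ya1nz4gf-1": [],
        "elasticsearch-cdm-ya1nz4gf-2": [],
        "elasticsearch-cdm-ya1nz4gf-3": [],
    },
    "nodeCount": 4,
}

# The propagated nodeCount should match both the number of node
# conditions and the node count the cluster itself reports.
assert status["nodeCount"] == len(status["nodeConditions"])
assert status["nodeCount"] == status["cluster"]["numNodes"]
```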

Comment 6 errata-xmlrpc 2020-07-13 17:11:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409