Bug 1840888

Summary: Multiple Kibana and ES statuses showing as Invalid
Product: OpenShift Container Platform
Reporter: Greg Rodriguez II <grodrigu>
Component: Logging
Assignee: Jeff Cantrill <jcantril>
Status: CLOSED ERRATA
QA Contact: Anping Li <anli>
Severity: low
Priority: unspecified
Version: 4.4
CC: aos-bugs, jcantril
Target Milestone: ---
Flags: jcantril: needinfo-
Target Release: 4.5.0
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Doc Text:
Cause: The CSV did not properly define the status fields.
Consequence: The status could not be properly evaluated, and the console displayed an error.
Story Points: ---
Last Closed: 2020-07-13 17:42:22 UTC
Type: Bug

Description Greg Rodriguez II 2020-05-27 19:37:11 UTC
Description of problem:
Customer states that after installing clusterlogging.4.4.0-202005121717 and attempting to access ClusterLogging Details, they see the following:

Kibana Status
  The field status.visualization.kibanaStatus.pods is invalid
Elasticsearch Client Pod Status
  The field status.logStore.elasticsearchStatus.pods.client is invalid
Elasticsearch Data Pod Status
  The field status.logStore.elasticsearchStatus.pods.data is invalid
Elasticsearch Master Pod Status
  The field status.logStore.elasticsearchStatus.pods.master is invalid
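
These errors indicate the console could not render the corresponding paths under the ClusterLogging resource's .status. As a sketch (assuming the default "instance" resource in the openshift-logging namespace that the install docs create), the raw stanzas behind the panels above can be dumped for comparison:

  oc get clusterlogging instance -n openshift-logging \
    -o jsonpath='{.status.visualization.kibanaStatus}{"\n"}{.status.logStore.elasticsearchStatus}'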

Version-Release number of selected component (if applicable):
OCP 4.4

How reproducible:
Customer reproducible

Steps to Reproduce:
Confirmed the customer followed the "Installing cluster logging using the CLI" instructions (https://docs.openshift.com/container-platform/4.4/logging/cluster-logging-deploying.html#cluster-logging-deploy-cli_cluster-logging-deploying) and made no modifications

Actual results:
Several Elasticsearch and Kibana status fields show as invalid, which is blocking functionality on the customer's end

Expected results:
The customer is able to use the EFK stack without status errors

Additional info:
- Attaching logging dump as a private attachment

Comment 3 Jeff Cantrill 2020-05-28 12:56:19 UTC
Should be fixed by https://github.com/openshift/cluster-logging-operator/pull/440. Lowering the severity, as this is not a blocker.
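
For context, the console builds these status panels from the statusDescriptors declared in the operator's CSV, and it flags a field as invalid when a descriptor's path does not line up with the shape of the CR's actual .status. A sketch of how to list the declared display names and paths, using the CSV name from the report:

  oc get csv clusterlogging.4.4.0-202005121717 -n openshift-logging \
    -o jsonpath='{range .spec.customresourcedefinitions.owned[*].statusDescriptors[*]}{.displayName}{"\t"}{.path}{"\n"}{end}'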

Comment 6 Greg Rodriguez II 2020-05-29 15:26:28 UTC
Is there any chance this can be backported to 4.4 at all?  I know that 4.5 is using ES6, so I'm not sure what limitations that may cause.

Comment 7 Anping Li 2020-06-01 14:32:56 UTC
The Kibana and Elasticsearch statuses work as expected; moving to VERIFIED.

        "logStore": {
            "elasticsearchStatus": [
                {
                    "cluster": {
                        "activePrimaryShards": 189,
                        "activeShards": 378,
                        "initializingShards": 0,
                        "numDataNodes": 3,
                        "numNodes": 3,
                        "pendingTasks": 0,
                        "relocatingShards": 0,
                        "status": "green",
                        "unassignedShards": 0
                    },
                    "clusterName": "elasticsearch",
                    "nodeConditions": {
                        "elasticsearch-cdm-l31caawp-1": [],
                        "elasticsearch-cdm-l31caawp-2": [],
                        "elasticsearch-cdm-l31caawp-3": []
                    },
                    "nodeCount": 3,
                    "pods": {
                        "client": {
                            "failed": [],
                            "notReady": [],
                            "ready": [
                                "elasticsearch-cdm-l31caawp-1-7f9bc6c77-zg8xf",
                                "elasticsearch-cdm-l31caawp-2-857dd656cb-dft46",
                                "elasticsearch-cdm-l31caawp-3-544b987d99-fjf8k"
                            ]
                        },
                        "data": {
                            "failed": [],
                            "notReady": [],
                            "ready": [
                                "elasticsearch-cdm-l31caawp-1-7f9bc6c77-zg8xf",
                                "elasticsearch-cdm-l31caawp-2-857dd656cb-dft46",
                                "elasticsearch-cdm-l31caawp-3-544b987d99-fjf8k"
                            ]
                        },
                        "master": {
                            "failed": [],
                            "notReady": [],
                            "ready": [
                                "elasticsearch-cdm-l31caawp-1-7f9bc6c77-zg8xf",
                                "elasticsearch-cdm-l31caawp-2-857dd656cb-dft46",
                                "elasticsearch-cdm-l31caawp-3-544b987d99-fjf8k"
                            ]
                        }
                    },
                    "shardAllocationEnabled": "all"
                }
            ]
        },
        "visualization": {
            "kibanaStatus": [
                {
                    "deployment": "kibana",
                    "pods": {
                        "failed": [],
                        "notReady": [],
                        "ready": [
                            "kibana-f55d7f5c5-6r65n",
                            "kibana-f55d7f5c5-nrnnf"
                        ]
                    },
                    "replicaSets": [
                        "kibana-f55d7f5c5"
                    ],
                    "replicas": 2
                }
            ]
        }
    }
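
For completeness, the green cluster health shown above can be cross-checked from inside one of the Elasticsearch pods (a sketch; the pod name is taken from the ready list in the dump):

  oc exec -n openshift-logging -c elasticsearch \
    elasticsearch-cdm-l31caawp-1-7f9bc6c77-zg8xf -- es_util --query=_cluster/health?pretty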

Comment 8 errata-xmlrpc 2020-07-13 17:42:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409