Bug 1840888 - Multiple Kibana and ES statuses showing as Invalid [NEEDINFO]
Summary: Multiple Kibana and ES statuses showing as Invalid
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 4.5.0
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-05-27 19:37 UTC by Greg Rodriguez II
Modified: 2020-07-13 17:42 UTC
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The CSV did not properly define the status fields.
Consequence: The status could not be properly evaluated, and the console displayed an error.
Fix:
Result:
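The cause above refers to the operator's ClusterServiceVersion (CSV) not properly declaring the status fields the console renders. As a rough illustration only (the field path is taken from the errors reported in this bug; the descriptor values are assumptions, not the actual change in the linked PR), a CSV declares console-renderable status fields through `statusDescriptors`:

```yaml
# Hypothetical CSV excerpt -- illustrates the statusDescriptors mechanism,
# not the actual fix merged in cluster-logging-operator.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: clusterlogging.example   # placeholder name
spec:
  customresourcedefinitions:
    owned:
      - name: clusterloggings.logging.openshift.io
        kind: ClusterLogging
        version: v1
        statusDescriptors:
          - displayName: Kibana Status
            description: Status of each Kibana pod
            path: visualization.kibanaStatus[0].pods   # path shape is an assumption
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:podStatuses'
```

When a descriptor's path does not match the shape of the data the operator actually writes to `.status`, the console can fall back to the "The field ... is invalid" message seen above.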
Clone Of:
Environment:
Last Closed: 2020-07-13 17:42:22 UTC
Target Upstream Version:
grodrigu: needinfo? (jcantril)


Attachments


Links
System ID Priority Status Summary Last Updated
Github openshift cluster-logging-operator pull 440 None closed fixes the issue #439 where the status is not shown and gives a warning message that the field is invalid 2020-08-17 11:30:06 UTC
Red Hat Product Errata RHBA-2020:2409 None None None 2020-07-13 17:42:37 UTC

Description Greg Rodriguez II 2020-05-27 19:37:11 UTC
Description of problem:
Customer states that after installing clusterlogging.4.4.0-202005121717 and attempting to access ClusterLogging Details, they see the following:

Kibana Status
  The field status.visualization.kibanaStatus.pods is invalid
Elasticsearch Client Pod Status
  The field status.logStore.elasticsearchStatus.pods.client is invalid
Elasticsearch Data Pod Status
  The field status.logStore.elasticsearchStatus.pods.data is invalid
Elasticsearch Master Pod Status
  The field status.logStore.elasticsearchStatus.pods.master is invalid

Version-Release number of selected component (if applicable):
OCP 4.4

How reproducible:
Reproducible in the customer's environment

Steps to Reproduce:
Confirmed the customer followed the "Installing cluster logging using the CLI" instructions (https://docs.openshift.com/container-platform/4.4/logging/cluster-logging-deploying.html#cluster-logging-deploy-cli_cluster-logging-deploying) with no modifications.
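For context, a sketch of the ClusterLogging custom resource from the linked instructions (storage and resource settings abbreviated; values here are the documented defaults, not necessarily the customer's exact CR):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
      storage: {}          # emptyDir; real deployments set a storage class and size
  visualization:
    type: kibana
    kibana:
      replicas: 1
  curation:
    type: curator
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: fluentd
      fluentd: {}
```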

Actual results:
Several status fields for Elasticsearch and Kibana show as invalid, which prevents the customer from using the console status views.

Expected results:
The EFK stack statuses display correctly and the stack is usable.

Additional info:
- Attaching logging dump as a private attachment

Comment 3 Jeff Cantrill 2020-05-28 12:56:19 UTC
Should be fixed by https://github.com/openshift/cluster-logging-operator/pull/440. Lowering the severity as this is not a blocker.

Comment 6 Greg Rodriguez II 2020-05-29 15:26:28 UTC
Is there any chance this can be backported to 4.4 at all?  I know that 4.5 is using ES6, so I'm not sure what limitations that may cause.

Comment 7 Anping Li 2020-06-01 14:32:56 UTC
The Kibana status works as expected. Moving to verified.

        "logStore": {
            "elasticsearchStatus": [
                {
                    "cluster": {
                        "activePrimaryShards": 189,
                        "activeShards": 378,
                        "initializingShards": 0,
                        "numDataNodes": 3,
                        "numNodes": 3,
                        "pendingTasks": 0,
                        "relocatingShards": 0,
                        "status": "green",
                        "unassignedShards": 0
                    },
                    "clusterName": "elasticsearch",
                    "nodeConditions": {
                        "elasticsearch-cdm-l31caawp-1": [],
                        "elasticsearch-cdm-l31caawp-2": [],
                        "elasticsearch-cdm-l31caawp-3": []
                    },
                    "nodeCount": 3,
                    "pods": {
                        "client": {
                            "failed": [],
                            "notReady": [],
                            "ready": [
                                "elasticsearch-cdm-l31caawp-1-7f9bc6c77-zg8xf",
                                "elasticsearch-cdm-l31caawp-2-857dd656cb-dft46",
                                "elasticsearch-cdm-l31caawp-3-544b987d99-fjf8k"
                            ]
                        },
                        "data": {
                            "failed": [],
                            "notReady": [],
                            "ready": [
                                "elasticsearch-cdm-l31caawp-1-7f9bc6c77-zg8xf",
                                "elasticsearch-cdm-l31caawp-2-857dd656cb-dft46",
                                "elasticsearch-cdm-l31caawp-3-544b987d99-fjf8k"
                            ]
                        },
                        "master": {
                            "failed": [],
                            "notReady": [],
                            "ready": [
                                "elasticsearch-cdm-l31caawp-1-7f9bc6c77-zg8xf",
                                "elasticsearch-cdm-l31caawp-2-857dd656cb-dft46",
                                "elasticsearch-cdm-l31caawp-3-544b987d99-fjf8k"
                            ]
                        }
                    },
                    "shardAllocationEnabled": "all"
                }
            ]
        },
        "visualization": {
            "kibanaStatus": [
                {
                    "deployment": "kibana",
                    "pods": {
                        "failed": [],
                        "notReady": [],
                        "ready": [
                            "kibana-f55d7f5c5-6r65n",
                            "kibana-f55d7f5c5-nrnnf"
                        ]
                    },
                    "replicaSets": [
                        "kibana-f55d7f5c5"
                    ],
                    "replicas": 2
                }
            ]
        }
    }
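The status JSON above can also be checked outside the console. A minimal sketch in plain Python (field paths taken from the output above) that flags any failed or not-ready pods:

```python
def unhealthy_pods(status):
    """Collect (component, state, pod) tuples for failed or notReady pods.

    `status` has the shape shown above: logStore.elasticsearchStatus is a
    list of clusters whose pods are grouped by role (client/data/master);
    visualization.kibanaStatus is a list of deployments with a flat pods map.
    """
    bad = []
    for es in status.get("logStore", {}).get("elasticsearchStatus", []):
        for role, pods in es.get("pods", {}).items():
            for state in ("failed", "notReady"):
                bad += [(f"elasticsearch/{role}", state, p)
                        for p in pods.get(state, [])]
    for kb in status.get("visualization", {}).get("kibanaStatus", []):
        for state in ("failed", "notReady"):
            bad += [(f"kibana/{kb.get('deployment')}", state, p)
                    for p in kb.get("pods", {}).get(state, [])]
    return bad
```

Run against the status pasted above, this returns an empty list, consistent with the verified state.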

Comment 8 errata-xmlrpc 2020-07-13 17:42:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

