Description of problem:

Customer states that after installing clusterlogging.4.4.0-202005121717 and attempting to access ClusterLogging Details, they see the following:

Kibana Status
  The field status.visualization.kibanaStatus.pods is invalid
Elasticsearch Client Pod Status
  The field status.logStore.elasticsearchStatus.pods.client is invalid
Elasticsearch Data Pod Status
  The field status.logStore.elasticsearchStatus.pods.data is invalid
Elasticsearch Master Pod Status
  The field status.logStore.elasticsearchStatus.pods.master is invalid

Version-Release number of selected component (if applicable):

OCP 4.4

How reproducible:

Customer reproducible

Steps to Reproduce:

Confirmed the customer followed the "Installing cluster logging using the CLI" instructions (https://docs.openshift.com/container-platform/4.4/logging/cluster-logging-deploying.html#cluster-logging-deploy-cli_cluster-logging-deploying) and made no modifications (an illustrative ClusterLogging resource from those instructions is sketched after this comment).

Actual results:

Several status fields for Elasticsearch and Kibana are reported as invalid, which blocks functionality on the customer end.

Expected results:

To be able to use the EFK stack.

Additional info:

- Attaching logging dump as a private attachment
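For reference, the CLI deployment in those instructions creates a ClusterLogging instance roughly like the sketch below. This is an illustrative example based on the linked 4.4 documentation, not the customer's actual resource; the spec values (node count, storage, redundancy policy, replica counts, curation schedule) are assumptions:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      storage: {}                        # ephemeral storage, for illustration only
      redundancyPolicy: SingleRedundancy
  visualization:
    type: kibana
    kibana:
      replicas: 1
  curation:
    type: curator
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: fluentd
      fluentd: {}

The status.visualization.kibanaStatus and status.logStore.elasticsearchStatus fields flagged by the console live on this resource.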
Should be fixed by https://github.com/openshift/cluster-logging-operator/pull/440. Lowering the severity as this is not a blocker.
Is there any chance this can be backported to 4.4 at all? I know that 4.5 is using ES6, so I'm not sure what limitations that may cause.
The Kibana status works as expected. Moving to VERIFIED.

"logStore": {
    "elasticsearchStatus": [
        {
            "cluster": {
                "activePrimaryShards": 189,
                "activeShards": 378,
                "initializingShards": 0,
                "numDataNodes": 3,
                "numNodes": 3,
                "pendingTasks": 0,
                "relocatingShards": 0,
                "status": "green",
                "unassignedShards": 0
            },
            "clusterName": "elasticsearch",
            "nodeConditions": {
                "elasticsearch-cdm-l31caawp-1": [],
                "elasticsearch-cdm-l31caawp-2": [],
                "elasticsearch-cdm-l31caawp-3": []
            },
            "nodeCount": 3,
            "pods": {
                "client": {
                    "failed": [],
                    "notReady": [],
                    "ready": [
                        "elasticsearch-cdm-l31caawp-1-7f9bc6c77-zg8xf",
                        "elasticsearch-cdm-l31caawp-2-857dd656cb-dft46",
                        "elasticsearch-cdm-l31caawp-3-544b987d99-fjf8k"
                    ]
                },
                "data": {
                    "failed": [],
                    "notReady": [],
                    "ready": [
                        "elasticsearch-cdm-l31caawp-1-7f9bc6c77-zg8xf",
                        "elasticsearch-cdm-l31caawp-2-857dd656cb-dft46",
                        "elasticsearch-cdm-l31caawp-3-544b987d99-fjf8k"
                    ]
                },
                "master": {
                    "failed": [],
                    "notReady": [],
                    "ready": [
                        "elasticsearch-cdm-l31caawp-1-7f9bc6c77-zg8xf",
                        "elasticsearch-cdm-l31caawp-2-857dd656cb-dft46",
                        "elasticsearch-cdm-l31caawp-3-544b987d99-fjf8k"
                    ]
                }
            },
            "shardAllocationEnabled": "all"
        }
    ]
},
"visualization": {
    "kibanaStatus": [
        {
            "deployment": "kibana",
            "pods": {
                "failed": [],
                "notReady": [],
                "ready": [
                    "kibana-f55d7f5c5-6r65n",
                    "kibana-f55d7f5c5-nrnnf"
                ]
            },
            "replicaSets": [
                "kibana-f55d7f5c5"
            ],
            "replicas": 2
        }
    ]
}
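The block above corresponds to the .status stanza of the ClusterLogging custom resource. Assuming the default instance name and namespace from the deployment docs, it can be retrieved with, for example:

oc get clusterlogging instance -n openshift-logging -o jsonpath='{.status}'

or, for the full resource, oc get clusterlogging instance -n openshift-logging -o yaml.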
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409