Bug 1832201 - Cluster Logging Overview page shows invalid status
Summary: Cluster Logging Overview page shows invalid status
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Target Release: 4.5.0
Assignee: Jeff Cantrill
QA Contact: Yadan Pei
URL:
Whiteboard:
Depends On:
Blocks: 1823870
Reported: 2020-05-06 10:25 UTC by Yadan Pei
Modified: 2023-05-28 23:36 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: Improperly defined status fields in the CSV.
Consequence: The console reports several status fields as invalid on the Cluster Logging Overview page.
Fix: Correct the CSV.
Result: The status fields render correctly.
Clone Of:
Environment:
Last Closed: 2020-07-13 17:35:22 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2020:2409 (Last Updated: 2020-07-13 17:35:43 UTC)

Description Yadan Pei 2020-05-06 10:25:15 UTC
Description of problem:
After creating a ClusterLogging instance using the default YAML, the Cluster Logging Overview page reports several status fields as invalid.

Version-Release number of selected component (if applicable):
4.5.0-0.nightly-2020-05-04-113741

How reproducible:
Always

Steps to Reproduce:
1. Subscribe to the Cluster Logging Operator from the console.
2. After the Cluster Logging Operator is installed successfully, create a ClusterLogging CR from the console or using YAML:
$ cat cluster-logging.yaml 
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  namespace: openshift-logging
  name: instance
  labels: {}
spec:
  collection:
    logs:
      type: fluentd
  curation:
    curator:
      schedule: 30 3 * * *
    type: curator
  logStore:
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
      storage:
        size: 20G
        storageClassName: gp2
      resources:
        requests:
          memory: "4Gi"
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      replicas: 1
    type: kibana
3. Check the Cluster Logging instance details: Installed Operators -> choose the 'Cluster Logging' operator -> click the 'Cluster Logging' tab

Actual results:
3. The Cluster Logging Details page shows:
Kibana Status
The field status.visualization.kibanaStatus.pods is invalid

Elasticsearch Client Pod Status
The field status.logStore.elasticsearchStatus.pods.client is invalid

Elasticsearch Data Pod Status
The field status.logStore.elasticsearchStatus.pods.data is invalid

Elasticsearch Master Pod Status
The field status.logStore.elasticsearchStatus.pods.master is invalid

Fluentd status
6 collection.logs.fluentdStatus.pods

# oc get clusterlogging instance -n openshift-logging -o json
{
    "apiVersion": "logging.openshift.io/v1",
    "kind": "ClusterLogging",
    "metadata": {
        "creationTimestamp": "2020-05-06T08:52:08Z",
        "generation": 1,
        "name": "instance",
        "namespace": "openshift-logging",
        "resourceVersion": "398116",
        "selfLink": "/apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance",
        "uid": "9e735bd5-0a21-4807-8012-423401204d1a"
    },
    "spec": {
        "collection": {
            "logs": {
                "type": "fluentd"
            }
        },
        "curation": {
            "curator": {
                "schedule": "30 3 * * *"
            },
            "type": "curator"
        },
        "logStore": {
            "elasticsearch": {
                "nodeCount": 3,
                "redundancyPolicy": "SingleRedundancy",
                "resources": {
                    "requests": {
                        "memory": "2Gi"
                    }
                },
                "storage": {
                    "size": "5G",
                    "storageClassName": "gp2"
                }
            },
            "type": "elasticsearch"
        },
        "managementState": "Managed",
        "visualization": {
            "kibana": {
                "replicas": 1
            },
            "type": "kibana"
        }
    },
    "status": {
        "collection": {
            "logs": {
                "fluentdStatus": {
                    "daemonSet": "fluentd",
                    "nodes": {
                        "fluentd-8pptr": "ip-10-0-174-222.ap-southeast-1.compute.internal",
                        "fluentd-9mdvz": "ip-10-0-151-74.ap-southeast-1.compute.internal",
                        "fluentd-f2tm9": "ip-10-0-172-225.ap-southeast-1.compute.internal",
                        "fluentd-jpmdd": "ip-10-0-143-204.ap-southeast-1.compute.internal",
                        "fluentd-wp8t9": "ip-10-0-145-171.ap-southeast-1.compute.internal",
                        "fluentd-xgkr8": "ip-10-0-140-96.ap-southeast-1.compute.internal"
                    },
                    "pods": {
                        "failed": [],
                        "notReady": [],
                        "ready": [
                            "fluentd-8pptr",
                            "fluentd-9mdvz",
                            "fluentd-f2tm9",
                            "fluentd-jpmdd",
                            "fluentd-wp8t9",
                            "fluentd-xgkr8"
                        ]
                    }
                }
            }
        },
        "curation": {
            "curatorStatus": [
                {
                    "cronJobs": "curator",
                    "schedules": "30 3 * * *",
                    "suspended": false
                }
            ]
        },
        "logStore": {
            "elasticsearchStatus": [
                {
                    "cluster": {
                        "activePrimaryShards": 27,
                        "activeShards": 50,
                        "initializingShards": 1,
                        "numDataNodes": 3,
                        "numNodes": 3,
                        "pendingTasks": 0,
                        "relocatingShards": 0,
                        "status": "red",
                        "unassignedShards": 5
                    },
                    "clusterName": "elasticsearch",
                    "nodeConditions": {
                        "elasticsearch-cdm-ogr31kyl-1": [],
                        "elasticsearch-cdm-ogr31kyl-2": [
                            {
                                "lastTransitionTime": "2020-05-06T10:06:16Z",
                                "message": "Disk storage usage for node is 4.70Gb (96.65894674035926%). Shards will be relocated from this node.",
                                "reason": "Disk Watermark High",
                                "status": "True",
                                "type": "NodeStorage"
                            }
                        ],
                        "elasticsearch-cdm-ogr31kyl-3": [
                            {
                                "lastTransitionTime": "2020-05-06T10:06:16Z",
                                "message": "Disk storage usage for node is 4.83Gb (99.33386195201608%). Shards will be relocated from this node.",
                                "reason": "Disk Watermark High",
                                "status": "True",
                                "type": "NodeStorage"
                            }
                        ]
                    },
                    "nodeCount": 3,
                    "pods": {
                        "client": {
                            "failed": [],
                            "notReady": [],
                            "ready": [
                                "elasticsearch-cdm-ogr31kyl-1-5c94885974-4qr4p",
                                "elasticsearch-cdm-ogr31kyl-2-75d4f7b4f4-kltmv",
                                "elasticsearch-cdm-ogr31kyl-3-cbcdcfcbf-lc9wg"
                            ]
                        },
                        "data": {
                            "failed": [],
                            "notReady": [],
                            "ready": [
                                "elasticsearch-cdm-ogr31kyl-1-5c94885974-4qr4p",
                                "elasticsearch-cdm-ogr31kyl-2-75d4f7b4f4-kltmv",
                                "elasticsearch-cdm-ogr31kyl-3-cbcdcfcbf-lc9wg"
                            ]
                        },
                        "master": {
                            "failed": [],
                            "notReady": [],
                            "ready": [
                                "elasticsearch-cdm-ogr31kyl-1-5c94885974-4qr4p",
                                "elasticsearch-cdm-ogr31kyl-2-75d4f7b4f4-kltmv",
                                "elasticsearch-cdm-ogr31kyl-3-cbcdcfcbf-lc9wg"
                            ]
                        }
                    },
                    "shardAllocationEnabled": "all"
                }
            ]
        },
        "visualization": {
            "kibanaStatus": [
                {
                    "deployment": "kibana",
                    "pods": {
                        "failed": [],
                        "notReady": [],
                        "ready": [
                            "kibana-74484f9dbf-n25ql"
                        ]
                    },
                    "replicaSets": [
                        "kibana-74484f9dbf"
                    ],
                    "replicas": 1
                }
            ]
        }
    }
}
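The paths the console flags are arrays in the status above: `kibanaStatus` and `elasticsearchStatus` are lists, so a plain dotted path such as `status.visualization.kibanaStatus.pods` cannot resolve without an array index. A minimal sketch of the mismatch (the resolver below is hypothetical, not the console's actual code):

```python
# Trimmed-down copy of the status from the JSON dump above.
status = {
    "visualization": {
        "kibanaStatus": [  # note: a list, not an object
            {
                "pods": {
                    "failed": [],
                    "notReady": [],
                    "ready": ["kibana-74484f9dbf-n25ql"],
                }
            }
        ]
    }
}


def resolve(obj, dotted_path):
    """Walk a dotted path through nested dicts; return None if it hits a list."""
    for key in dotted_path.split("."):
        if not isinstance(obj, dict) or key not in obj:
            return None  # the path is invalid for this object shape
        obj = obj[key]
    return obj


# The path from the console's error message does not resolve...
assert resolve(status, "visualization.kibanaStatus.pods") is None
# ...but indexing into the array first does.
pods = resolve(status["visualization"]["kibanaStatus"][0], "pods")
assert pods["ready"] == ["kibana-74484f9dbf-n25ql"]
```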

# oc get pods -n openshift-logging
NAME                                            READY   STATUS    RESTARTS   AGE
cluster-logging-operator-6f8575d57d-xbgrp       1/1     Running   0          98m
elasticsearch-cdm-ogr31kyl-1-5c94885974-4qr4p   2/2     Running   0          78m
elasticsearch-cdm-ogr31kyl-2-75d4f7b4f4-kltmv   2/2     Running   0          77m
elasticsearch-cdm-ogr31kyl-3-cbcdcfcbf-lc9wg    2/2     Running   0          76m
fluentd-8pptr                                   1/1     Running   0          78m
fluentd-9mdvz                                   1/1     Running   0          78m
fluentd-f2tm9                                   1/1     Running   0          78m
fluentd-jpmdd                                   1/1     Running   0          78m
fluentd-wp8t9                                   1/1     Running   0          78m
fluentd-xgkr8                                   1/1     Running   0          78m
kibana-74484f9dbf-n25ql                         2/2     Running   0          78m

Expected results:
3. The console should parse the status correctly and show the correct pod counts in the donut charts.

Additional info:

Comment 1 Robb Hamilton 2020-05-06 14:02:10 UTC
This bug is the result of invalid values in the operator.  See https://access.redhat.com/support/cases/#/case/02612114?commentId=a0a2K00000U6DtBQAV.  Changing component to logging.

Comment 2 Jeff Cantrill 2020-05-12 00:00:23 UTC
Please verify this is still an issue in 4.5, as https://github.com/openshift/cluster-logging-operator/pull/440 resolved a number of these issues before this bug was logged.
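For context, the console builds these donut charts from `statusDescriptors` entries in the operator's ClusterServiceVersion; each descriptor maps a path under `.status` to a UI widget via an `x-descriptors` value such as OLM's `urn:alm:descriptor:com.tectonic.ui:podStatuses`. A rough sketch of what such a descriptor looks like (illustrative only; the `path` value, `displayName`, and surrounding fields here are assumptions, not the actual CSV change in the PR):

```yaml
# Illustrative CSV fragment, not the actual fix. The path must match the
# real shape of .status (here, kibanaStatus is an array), or the console
# reports the field as invalid.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
spec:
  customresourcedefinitions:
    owned:
      - name: clusterloggings.logging.openshift.io
        kind: ClusterLogging
        version: v1
        statusDescriptors:
          - displayName: Kibana Status                # assumed wording
            description: Status of the Kibana pods    # assumed wording
            path: visualization.kibanaStatus[0].pods  # assumed path shape
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:podStatuses'
```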

Comment 3 Yadan Pei 2020-05-18 10:31:17 UTC
Please see https://bugzilla.redhat.com/show_bug.cgi?id=1823870#c6

The issue has been resolved.

Comment 5 errata-xmlrpc 2020-07-13 17:35:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

