Bug 1874746 - The `status` field in the clusterlogging/instance is empty
Summary: The `status` field in the clusterlogging/instance is empty
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: Vimal Kumar
QA Contact: Qiaoling Tang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-02 07:03 UTC by Qiaoling Tang
Modified: 2020-10-27 15:12 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 15:10:28 UTC
Target Upstream Version:
Embargoed:




Links
GitHub: openshift/cluster-logging-operator pull 691 (closed) - Bug 1874746: Fixed ClusterLogging CR status - last updated 2021-01-29 01:21:27 UTC
Red Hat Product Errata: RHBA-2020:4198 - last updated 2020-10-27 15:12:43 UTC

Description Qiaoling Tang 2020-09-02 07:03:31 UTC
Description of problem:

The status field in clusterlogging/instance is empty:
$ oc get cl instance -oyaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  creationTimestamp: "2020-09-02T06:54:08Z"
  generation: 1
  managedFields:
  - apiVersion: logging.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        .: {}
        f:collection:
          .: {}
          f:logs:
            .: {}
            f:fluentd: {}
            f:type: {}
        f:logStore:
          .: {}
          f:elasticsearch:
            .: {}
            f:nodeCount: {}
            f:redundancyPolicy: {}
            f:resources:
              .: {}
              f:requests:
                .: {}
                f:cpu: {}
                f:memory: {}
            f:storage: {}
          f:retentionPolicy:
            .: {}
            f:application:
              .: {}
              f:maxAge: {}
            f:audit:
              .: {}
              f:maxAge: {}
            f:infra:
              .: {}
              f:maxAge: {}
          f:type: {}
        f:managementState: {}
        f:visualization:
          .: {}
          f:kibana:
            .: {}
            f:replicas: {}
          f:type: {}
    manager: oc
    operation: Update
    time: "2020-09-02T06:54:08Z"
  - apiVersion: logging.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:collection:
          f:logs:
            f:fluentd:
              f:resources: {}
        f:logStore:
          f:elasticsearch:
            f:proxy:
              .: {}
              f:resources: {}
        f:visualization:
          f:kibana:
            f:proxy:
              .: {}
              f:resources: {}
            f:resources: {}
      f:status: {}
    manager: cluster-logging-operator
    operation: Update
    time: "2020-09-02T06:56:17Z"
  name: instance
  namespace: openshift-logging
  resourceVersion: "394747"
  selfLink: /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance
  uid: 3c2ab722-d862-431c-a207-eceaf93eae11
spec:
  collection:
    logs:
      fluentd: {}
      type: fluentd
  logStore:
    elasticsearch:
      nodeCount: 1
      redundancyPolicy: ZeroRedundancy
      resources:
        requests:
          cpu: 100m
          memory: 1Gi
      storage: {}
    retentionPolicy:
      application:
        maxAge: 1d
      audit:
        maxAge: 1w
      infra:
        maxAge: 7d
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      replicas: 1
    type: kibana
status: {}
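
One way to confirm that only the status stanza is missing is to query it directly; a minimal check, assuming the instance name and openshift-logging namespace shown above, which returns an empty object here:
$ oc get clusterlogging instance -n openshift-logging -o jsonpath='{.status}'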


Version-Release number of selected component (if applicable):
clusterlogging.4.6.0-202009011832.p0
elasticsearch-operator.4.6.0-202008312113.p0

How reproducible:
Always

Steps to Reproduce:
1. Deploy the cluster-logging-operator (CLO) and the elasticsearch-operator (EO)
2. Create a clusterlogging instance
3. Wait until all the EFK pods start, then check the status in clusterlogging/instance (see the sketch after this list)
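
A minimal sketch of the commands behind step 3, assuming the default openshift-logging namespace and the instance name used above:
$ oc get pods -n openshift-logging     # wait until the fluentd, elasticsearch and kibana pods are all Running/Ready
$ oc get cl instance -n openshift-logging -o yaml     # the status stanza appears at the end of the output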

Actual results:
The status field in clusterlogging/instance stays empty (status: {}).

Expected results:
The status should be populated with something like:
status:
  collection:
    logs:
      fluentdStatus:
        daemonSet: fluentd
        nodes:
          fluentd-69rvj: ip-10-0-60-20.us-east-2.compute.internal
          fluentd-gzgp2: ip-10-0-50-193.us-east-2.compute.internal
          fluentd-tkd99: ip-10-0-55-30.us-east-2.compute.internal
          fluentd-vgnw2: ip-10-0-60-250.us-east-2.compute.internal
          fluentd-x7s2w: ip-10-0-52-236.us-east-2.compute.internal
          fluentd-zdh62: ip-10-0-65-226.us-east-2.compute.internal
        pods:
          failed: []
          notReady: []
          ready:
          - fluentd-69rvj
          - fluentd-gzgp2
          - fluentd-tkd99
          - fluentd-vgnw2
          - fluentd-x7s2w
          - fluentd-zdh62
  curation: {}
  logStore:
    elasticsearchStatus:
    - cluster:
        activePrimaryShards: 5
        activeShards: 5
        initializingShards: 0
        numDataNodes: 1
        numNodes: 1
        pendingTasks: 0
        relocatingShards: 0
        status: green
        unassignedShards: 0
      clusterName: elasticsearch
      nodeConditions:
        elasticsearch-cdm-c6hxpap9-1: []
      nodeCount: 1
      pods:
        client:
          failed: []
          notReady: []
          ready:
          - elasticsearch-cdm-c6hxpap9-1-5fdcbb7d6b-bxd2p
        data:
          failed: []
          notReady: []
          ready:
          - elasticsearch-cdm-c6hxpap9-1-5fdcbb7d6b-bxd2p
        master:
          failed: []
          notReady: []
          ready:
          - elasticsearch-cdm-c6hxpap9-1-5fdcbb7d6b-bxd2p
      shardAllocationEnabled: all
  visualization:
    kibanaStatus:
    - clusterCondition:
        kibana-64b8bdd68-2kw5f:
        - lastTransitionTime: "2020-09-05T08:17:21Z"
          reason: ContainerCreating
          status: "True"
          type: ""
        - lastTransitionTime: "2020-09-05T08:17:21Z"
          reason: ContainerCreating
          status: "True"
          type: ""
      deployment: kibana
      pods:
        failed: []
        notReady:
        - kibana-64b8bdd68-2kw5f
        ready: []
      replicaSets:
      - kibana-64b8bdd68
      replicas: 1


Additional info:

Comment 1 Jeff Cantrill 2020-09-03 14:44:12 UTC
Ref similar fix in EO https://github.com/openshift/elasticsearch-operator/pull/468

Comment 5 Qiaoling Tang 2020-09-16 02:44:37 UTC
Verified with clusterlogging.4.6.0-202009152100.p0
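
A minimal sketch of one way to spot-check the populated status after the fix, assuming the same instance name and namespace as in the original report (the exact verification steps are not recorded here); this should now print the Elasticsearch cluster health (e.g. green) instead of nothing:
$ oc get clusterlogging instance -n openshift-logging -o jsonpath='{.status.logStore.elasticsearchStatus[0].cluster.status}'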

Comment 7 errata-xmlrpc 2020-10-27 15:10:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6.1 extras update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4198

