
Bug 1867448

Summary: Kibana status is not updated in clusterlogging/instance.
Product: OpenShift Container Platform
Component: Logging
Version: 4.6
Target Release: 4.6.0
Hardware: Unspecified
OS: Unspecified
Severity: low
Priority: low
Status: CLOSED DUPLICATE
Reporter: Qiaoling Tang <qitang>
Assignee: Periklis Tsirakidis <periklis>
QA Contact: Anping Li <anli>
CC: aos-bugs, periklis
Whiteboard: logging-exploration
Type: Bug
Last Closed: 2020-09-16 15:20:48 UTC
Bug Depends On: 1849564

Description Qiaoling Tang 2020-08-10 03:32:34 UTC
Description of problem:
The Kibana status is not updated in clusterlogging/instance: even though the Kibana pod is Pending, status.visualization stays empty ({}) while the Elasticsearch status is populated. Status excerpt from clusterlogging/instance:

  logStore:
    elasticsearchStatus:
    - cluster:
        activePrimaryShards: 0
        activeShards: 0
        initializingShards: 0
        numDataNodes: 0
        numNodes: 0
        pendingTasks: 0
        relocatingShards: 0
        status: cluster health unknown
        unassignedShards: 0
      clusterName: elasticsearch
      nodeConditions:
        elasticsearch-cdm-oqo6oo7f-1:
        - lastTransitionTime: "2020-08-10T03:03:08Z"
          message: '0/6 nodes are available: 6 node(s) didn''t match node selector.'
          reason: Unschedulable
          status: "True"
          type: Unschedulable
      nodeCount: 0
      pods:
        client:
          failed: []
          notReady:
          - elasticsearch-cdm-oqo6oo7f-1-fb555679f-f6z2x
          ready: []
        data:
          failed: []
          notReady:
          - elasticsearch-cdm-oqo6oo7f-1-fb555679f-f6z2x
          ready: []
        master:
          failed: []
          notReady:
          - elasticsearch-cdm-oqo6oo7f-1-fb555679f-f6z2x
          ready: []
      shardAllocationEnabled: shard allocation unknown
  visualization: {}

$ oc get po
NAME                                            READY   STATUS    RESTARTS   AGE
cluster-logging-operator-755f6955ff-n22p9       1/1     Running   0          167m
elasticsearch-cdm-oqo6oo7f-1-fb555679f-f6z2x    0/2     Pending   0          23m
elasticsearch-delete-app-1597029300-m2glr       0/1     Pending   0          11m
elasticsearch-delete-audit-1597029300-rxr4v     0/1     Pending   0          11m
elasticsearch-delete-infra-1597029300-98kcx     0/1     Pending   0          11m
elasticsearch-rollover-app-1597029300-7vw5q     0/1     Pending   0          11m
elasticsearch-rollover-audit-1597029300-lfbng   0/1     Pending   0          11m
elasticsearch-rollover-infra-1597029300-8t8vf   0/1     Pending   0          11m
kibana-76bf6859dc-hzd2k                         0/2     Pending   0          22m



Version-Release number of selected component (if applicable):
clusterlogging.4.6.0-202008080127.p0 
elasticsearch-operator.4.6.0-202008080127.p0

How reproducible:
100%

Steps to Reproduce:
1. deploy logging operators
2. create a clusterlogging CR with nodeSelector values that no node satisfies, so all pods stay unschedulable:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: openshift-logging
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeSelector:
        es: deploy
      nodeCount: 1
      resources:
        requests:
          cpu: 100m
          memory: 1Gi
      storage: {}
      redundancyPolicy: "ZeroRedundancy"
  visualization:
    type: "kibana"
    kibana:
      nodeSelector:
        kibana: deploy
      replicas: 1
  collection:
    logs:
      type: "fluentd"
      fluentd:
        nodeSelector:
          fluentd: deploy

3. check the status in clusterlogging/instance, for example:
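
One way to inspect it (illustrative oc invocations; the jsonpath expression simply selects the visualization portion of the status shown in the description):

$ oc -n openshift-logging get clusterlogging instance -o yaml
$ oc -n openshift-logging get clusterlogging instance -o jsonpath='{.status.visualization}'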

Actual results:
status.visualization in clusterlogging/instance stays empty ({}); the Kibana pod's Pending/Unschedulable state is not reflected there.

Expected results:
The Kibana status is reported under status.visualization in clusterlogging/instance, the same way the Elasticsearch status is reported under logStore.elasticsearchStatus.

Additional info:

Comment 1 Qiaoling Tang 2020-08-12 07:56:55 UTC
Found some error logs in the elasticsearch-operator (EO):

time="2020-08-12T07:53:28Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-08-12T07:53:28Z" level=info msg="Updating status of Kibana"
{"level":"error","ts":1597218808.570181,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"kibana-controller","request":"openshift-logging/kibana","error":"Failed to update Kibana status for \"kibana\": kibanas.logging.openshift.io \"kibana\" not found","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"}

The kibana/kibana resource exists, and the Kibana pod is Running:
$ oc get kibana
NAME     MANAGEMENT STATE   REPLICAS
kibana   Managed            1
$ oc get pod -l component=kibana
NAME                     READY   STATUS    RESTARTS   AGE
kibana-75f788c7c-twg95   2/2     Running   0          5m29s
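
Since the CR and the pod exist, the "not found" error suggests the failure is in the status update path rather than the object lookup. A way to check whether the kibanas.logging.openshift.io CRD declares a status subresource (illustrative commands; the jsonpath depends on whether the CRD uses apiextensions.k8s.io/v1, where subresources are declared per version, or v1beta1, where they sit at .spec.subresources):

$ oc get crd kibanas.logging.openshift.io -o jsonpath='{.spec.versions[*].subresources}'
$ oc get crd kibanas.logging.openshift.io -o jsonpath='{.spec.subresources}'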

Comment 2 Jeff Cantrill 2020-08-21 14:11:00 UTC
Moving to UpcomingSprint for future evaluation

Comment 3 Periklis Tsirakidis 2020-09-09 14:28:20 UTC
This is fixed as part of [1], namely the missing status subresource in the CRD.
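
For context: a controller can only update a custom resource through the /status endpoint when its CRD declares the status subresource; without it, controller-runtime status updates can fail with a "not found" error like the one in comment 1. A minimal sketch of what that declaration looks like for this CRD (illustrative, assuming the apiextensions.k8s.io/v1 layout; the actual CRD shipped by the operator may differ):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: kibanas.logging.openshift.io
spec:
  group: logging.openshift.io
  names:
    kind: Kibana
    listKind: KibanaList
    plural: kibanas
    singular: kibana
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      status: {}   # the missing piece: enables updates through the /status subresource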

Comment 4 Jeff Cantrill 2020-09-12 01:58:17 UTC
Moving to UpcomingSprint as unlikely to be addressed by EOD

Comment 5 Periklis Tsirakidis 2020-09-16 15:20:48 UTC
Closing this because it is a duplicate issue of https://bugzilla.redhat.com/show_bug.cgi?id=1849564

*** This bug has been marked as a duplicate of bug 1849564 ***