Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1849564

Summary: The status of the Kibana pod is always notReady in the Kibana CR.
Product: OpenShift Container Platform
Component: Logging
Version: 4.5
Target Release: 4.6.0
Reporter: Qiaoling Tang <qitang>
Assignee: Periklis Tsirakidis <periklis>
QA Contact: Qiaoling Tang <qitang>
CC: aos-bugs, periklis
Status: CLOSED ERRATA
Severity: low
Priority: low
Hardware: Unspecified
OS: Unspecified
Whiteboard: logging-exploration
Doc Type: No Doc Update
Type: Bug
Last Closed: 2020-10-27 15:09:31 UTC
Bug Blocks: 1867448

Description Qiaoling Tang 2020-06-22 09:09:48 UTC
Description of problem:
After deploying logging and waiting for all pods to reach the Running state, the status in the Kibana CR still lists the Kibana pod as notReady.

oc get clusterlogging -oyaml

          master:
            failed: []
            notReady: []
            ready:
            - elasticsearch-cdm-xw210w86-1-5db75ffd59-nmshc
            - elasticsearch-cdm-xw210w86-2-58cdb965c4-sd4xw
            - elasticsearch-cdm-xw210w86-3-7f4c96dd6f-4vq4w
        shardAllocationEnabled: all
    visualization:
      kibanaStatus:
      - deployment: kibana
        pods:
          failed: []
          notReady:
          - kibana-6df7489589-rr8kc
          ready: []
        replicaSets:
        - kibana-6df7489589
        replicas: 1

$ oc get kibana -oyaml

apiVersion: v1
items:
- apiVersion: logging.openshift.io/v1
  kind: Kibana
  metadata:
    creationTimestamp: "2020-06-22T02:33:36Z"
    generation: 1
    managedFields:
    - apiVersion: logging.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:ownerReferences:
            .: {}
            k:{"uid":"de92cae7-aaaa-4250-b188-026fce1dd4f3"}:
              .: {}
              f:apiVersion: {}
              f:controller: {}
              f:kind: {}
              f:name: {}
              f:uid: {}
        f:spec:
          .: {}
          f:managementState: {}
          f:proxy:
            .: {}
            f:resources:
              .: {}
              f:limits:
                .: {}
                f:memory: {}
              f:requests:
                .: {}
                f:cpu: {}
                f:memory: {}
          f:replicas: {}
          f:resources:
            .: {}
            f:limits:
              .: {}
              f:cpu: {}
              f:memory: {}
            f:requests:
              .: {}
              f:cpu: {}
              f:memory: {}
      manager: cluster-logging-operator
      operation: Update
      time: "2020-06-22T02:33:36Z"
    - apiVersion: logging.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:status: {}
      manager: elasticsearch-operator
      operation: Update
      time: "2020-06-22T02:34:17Z"
    name: kibana
    namespace: openshift-logging
    ownerReferences:
    - apiVersion: logging.openshift.io/v1
      controller: true
      kind: ClusterLogging
      name: instance
      uid: de92cae7-aaaa-4250-b188-026fce1dd4f3
    resourceVersion: "71526"
    selfLink: /apis/logging.openshift.io/v1/namespaces/openshift-logging/kibanas/kibana
    uid: 26695576-4909-4a58-8477-7b0acbf574c6
  spec:
    managementState: Managed
    proxy:
      resources:
        limits:
          memory: 1Gi
        requests:
          cpu: 100m
          memory: 1Gi
    replicas: 1
    resources:
      limits:
        cpu: "1"
        memory: 4Gi
      requests:
        cpu: 800m
        memory: 2Gi
  status:
  - deployment: kibana
    pods:
      failed: []
      notReady:
      - kibana-6df7489589-rr8kc
      ready: []
    replicaSets:
    - kibana-6df7489589
    replicas: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""


$ oc get pod
NAME                                            READY   STATUS              RESTARTS   AGE
cluster-logging-operator-8c8ffd6c4-nsg7m        1/1     Running             0          7h4m
elasticsearch-cdm-xw210w86-1-5db75ffd59-nmshc   2/2     Running             0          5h56m
elasticsearch-cdm-xw210w86-2-58cdb965c4-sd4xw   2/2     Running             0          5h56m
elasticsearch-cdm-xw210w86-3-7f4c96dd6f-4vq4w   2/2     Running             0          5h56m
elasticsearch-delete-app-1592813700-pcvcx       0/1     Completed           0          14m
elasticsearch-delete-app-1592814600-fpt5h       0/1     ContainerCreating   0          0s
elasticsearch-delete-audit-1592813700-2zmfb     0/1     Completed           0          14m
elasticsearch-delete-audit-1592814600-r4llf     0/1     ContainerCreating   0          0s
elasticsearch-delete-infra-1592813700-5lmp2     0/1     Completed           0          14m
elasticsearch-delete-infra-1592814600-vx4t7     0/1     ContainerCreating   0          0s
elasticsearch-rollover-app-1592813700-dktk2     0/1     Completed           0          14m
elasticsearch-rollover-audit-1592813700-h4xmz   0/1     Completed           0          14m
elasticsearch-rollover-infra-1592813700-l7qft   0/1     Completed           0          14m
fluentd-66lmm                                   1/1     Running             0          5h56m
fluentd-brfpf                                   1/1     Running             0          5h56m
fluentd-cqjwl                                   1/1     Running             0          5h56m
fluentd-gbs8h                                   1/1     Running             0          5h56m
fluentd-mlvcv                                   1/1     Running             0          5h56m
fluentd-nlmk9                                   1/1     Running             0          5h56m
kibana-6df7489589-rr8kc                         2/2     Running             0          5h55m
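Note the contradiction: the `kibana-6df7489589-rr8kc` pod above is Running with 2/2 containers ready, yet the CR status buckets it under notReady. A minimal sketch (a hypothetical helper, not the elasticsearch-operator's actual code) of how a status reconciler could classify pods into the failed/notReady/ready lists seen in the Kibana CR, using standard Kubernetes PodStatus fields:

```python
# Hypothetical sketch: bucket pods into the failed/notReady/ready lists
# seen in the Kibana CR status. Field names follow the Kubernetes
# PodStatus API; the classification rules are assumptions, not the
# operator's actual logic.

def classify_pods(pods):
    status = {"failed": [], "notReady": [], "ready": []}
    for pod in pods:
        name = pod["metadata"]["name"]
        if pod["status"].get("phase") == "Failed":
            status["failed"].append(name)
            continue
        # A pod counts as ready when its PodReady condition is True,
        # i.e. all containers (kibana and its proxy sidecar) are ready.
        conditions = pod["status"].get("conditions", [])
        is_ready = any(c["type"] == "Ready" and c["status"] == "True"
                       for c in conditions)
        status["ready" if is_ready else "notReady"].append(name)
    return status

# The Kibana pod from the listing above: Running, 2/2 containers ready.
kibana_pod = {
    "metadata": {"name": "kibana-6df7489589-rr8kc"},
    "status": {
        "phase": "Running",
        "conditions": [{"type": "Ready", "status": "True"}],
    },
}
print(classify_pods([kibana_pod]))
# → {'failed': [], 'notReady': [], 'ready': ['kibana-6df7489589-rr8kc']}
```

Under this (assumed) rule the pod lands in the ready list, which is what the CR status above should have shown.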

Version-Release number of selected component (if applicable):
$ oc get csv
NAME                                        DISPLAY                  VERSION              REPLACES   PHASE
clusterlogging.4.5.0-202006180838           Cluster Logging          4.5.0-202006180838              Succeeded
elasticsearch-operator.4.5.0-202006180838   Elasticsearch Operator   4.5.0-202006180838              Succeeded


How reproducible:
Always

Steps to Reproduce:
1. create clusterlogging with:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy: 
      application:
        maxAge: 1d
      infra:
        maxAge: 3h
      audit:
        maxAge: 2w
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: "SingleRedundancy"
      resources:
        requests:
          memory: "2Gi"
      storage:
        storageClassName: "standard"
        size: "20Gi"
  visualization:
    type: "kibana"
    kibana:
      proxy:
        resources:
          limits:
            memory: "1Gi"
          requests:
            cpu: "100m"
            memory: "1Gi"
      resources:
        limits:
          cpu: "1000m"
          memory: "4Gi"
        requests:
          cpu: "800m"
          memory: "2Gi"
      replicas: 1
  collection:
    logs:
      type: "fluentd"
      fluentd: {}

2. Wait until all pods are running, then check the status in the Kibana CR.

Actual results:
The Kibana CR status lists the Kibana pod under notReady, even though the pod is Running with 2/2 containers ready.

Expected results:
The Kibana pod should be listed under ready once all of its containers are ready.

Additional info:

Comment 1 Periklis Tsirakidis 2020-07-10 14:22:32 UTC
Moving to UpcomingSprint as unlikely to be resolved by EOS

Comment 4 Qiaoling Tang 2020-09-16 02:47:15 UTC
Verified with elasticsearch-operator.4.6.0-202009152100.p0

Comment 5 Periklis Tsirakidis 2020-09-16 15:20:48 UTC
*** Bug 1867448 has been marked as a duplicate of this bug. ***

Comment 8 errata-xmlrpc 2020-10-27 15:09:31 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6.1 extras update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4198