Bug 1832662 - The kibana pod doesn't use the resource limits and requests configurations
Summary: The kibana pod doesn't use the resource limits and requests configurations
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.5.0
Assignee: Periklis Tsirakidis
QA Contact: Qiaoling Tang
URL:
Whiteboard:
Depends On:
Blocks: 1832652
 
Reported: 2020-05-07 03:18 UTC by Qiaoling Tang
Modified: 2020-07-13 17:35 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: In 4.5 the Kibana CRD is a resource managed by the elasticsearch-operator; the cluster-logging-operator only creates a Kibana custom resource and no longer creates the kibana pods itself. Consequence: The resources and proxy resources from the ClusterLogging CR were not passed on to the Kibana CR. Fix: Pass the resources and proxy resources through to the Kibana CR. Result: A Kibana CR carrying the custom resources and proxy resources is reconciled by the elasticsearch-operator into a pod spec with the customized resources on both the kibana container and the kibana proxy container.
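Once the fix is in place, the propagated values can be spot-checked by reading them back from the Kibana CR, for example (a minimal sketch, assuming the Kibana CR is named kibana as in the later builds shown below):

$ oc get kibana kibana -o jsonpath='{.spec.resources}{"\n"}{.spec.proxy.resources}{"\n"}'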
Clone Of:
Environment:
Last Closed: 2020-07-13 17:35:44 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-logging-operator pull 512 0 None closed Bug 1832662: Add resources, nodeselector and tolerations to KibanaCR 2020-06-29 11:19:57 UTC
Github openshift cluster-logging-operator pull 519 0 None closed Bug 1832662: Add proxy resources to KibanaCR 2020-06-29 11:19:57 UTC
Red Hat Product Errata RHBA-2020:2409 0 None None None 2020-07-13 17:35:57 UTC

Description Qiaoling Tang 2020-05-07 03:18:38 UTC
Description of problem:
Deploy clusterlogging with resource requests and limits set for Kibana, then check the Kibana pod; it still uses the default values.

$ oc get clusterlogging instance -oyaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  creationTimestamp: "2020-05-07T02:54:23Z"
  generation: 1
  name: instance
  namespace: openshift-logging
  resourceVersion: "95941"
  selfLink: /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance
  uid: 43c50fa1-9d30-4819-a002-2653a0cc9c88
spec:
  collection:
    logs:
      fluentd: {}
      type: fluentd
  logStore:
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
      resources:
        requests:
          memory: 2Gi
      storage:
        size: 20Gi
        storageClassName: standard
    retentionPolicy:
      application:
        maxAge: 1d
      audit:
        maxAge: 1w
      infra:
        maxAge: 7d
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      replicas: 1
      resources:
        limits:
          cpu: 1000m
          memory: 4Gi
        requests:
          cpu: 800m
          memory: 2Gi
    type: kibana

$ oc get kibana instance -oyaml
apiVersion: logging.openshift.io/v1
kind: Kibana
metadata:
  creationTimestamp: "2020-05-07T02:54:32Z"
  generation: 1
  name: instance
  namespace: openshift-logging
  ownerReferences:
  - apiVersion: logging.openshift.io/v1
    controller: true
    kind: ClusterLogging
    name: instance
    uid: 43c50fa1-9d30-4819-a002-2653a0cc9c88
  resourceVersion: "90983"
  selfLink: /apis/logging.openshift.io/v1/namespaces/openshift-logging/kibanas/instance
  uid: 4da11f15-020d-405b-b963-d29728016efb
spec:
  image: ""
  managementState: Managed
  proxy:
    image: ""
    resources: null
  replicas: 1
  resources:
    limits:
      memory: 736Mi
    requests:
      cpu: 100m
      memory: 736Mi
status:
- deployment: kibana
  pods:
    failed: []
    notReady: []
    ready:
    - kibana-7786c97bd4-smbwc
  replicaSets:
  - kibana-7786c97bd4
  replicas: 1


Version-Release number of selected component (if applicable):
Logging images are from 4.5.0-0.ci-2020-05-06-225918	
Manifests are copied from the master branch
Cluster version: 4.5.0-0.nightly-2020-05-06-003431


How reproducible:
Always

Steps to Reproduce:
1. deploy clusterlogging with:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy: 
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 1w
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: "SingleRedundancy"
      resources:
        requests:
          memory: "2Gi"
      storage:
        storageClassName: "standard"
        size: "20Gi"
  visualization:
    type: "kibana"
    kibana:
      resources:
        limits:
          cpu: "1000m"
          memory: "4Gi"
        requests:
          cpu: "800m"
          memory: "2Gi"
      replicas: 1
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
2. Check the resource configurations in the kibana instance, for example with the command below.
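One way to perform the check in step 2, assuming the Kibana CR created by the cluster-logging-operator is named instance as in the output above:

$ oc get kibana instance -o jsonpath='{.spec.resources}{"\n"}'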

Actual results:
The kibana pod doesn't use the resource configurations set in the clusterlogging instance.

Expected results:
The kibana pod uses the resource limits and requests configured in the clusterlogging instance.

Additional info:

Comment 4 Qiaoling Tang 2020-05-12 07:31:55 UTC
Tested with images from 4.5.0-0.ci-2020-05-12-030109. The kibana container's resources now match those set in the clusterlogging instance, but in the kibana/kibana CR the proxy resources field is always `null`.

$ oc get clusterlogging -oyaml
......
    managementState: Managed
    visualization:
      kibana:
        proxy:
          resources:
            limits:
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 1Gi
        replicas: 1
        resources:
          limits:
            cpu: 1000m
            memory: 4Gi
          requests:
            cpu: 800m
            memory: 2Gi
      type: kibana
......

$ oc get kibana -oyaml
apiVersion: v1
items:
- apiVersion: logging.openshift.io/v1
  kind: Kibana
  metadata:
    creationTimestamp: "2020-05-12T07:23:06Z"
    generation: 1
    managedFields:
    - apiVersion: logging.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:ownerReferences:
            .: {}
            k:{"uid":"b225e557-6652-4dbc-9bd7-38def282d94a"}:
              .: {}
              f:apiVersion: {}
              f:controller: {}
              f:kind: {}
              f:name: {}
              f:uid: {}
        f:spec:
          .: {}
          f:managementState: {}
          f:proxy:
            .: {}
            f:resources: {}
          f:replicas: {}
          f:resources:
            .: {}
            f:limits:
              .: {}
              f:cpu: {}
              f:memory: {}
            f:requests:
              .: {}
              f:cpu: {}
              f:memory: {}
      manager: cluster-logging-operator
      operation: Update
      time: "2020-05-12T07:23:06Z"
    - apiVersion: logging.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:status: {}
      manager: elasticsearch-operator
      operation: Update
      time: "2020-05-12T07:23:39Z"
    name: kibana
    namespace: openshift-logging
    ownerReferences:
    - apiVersion: logging.openshift.io/v1
      controller: true
      kind: ClusterLogging
      name: instance
      uid: b225e557-6652-4dbc-9bd7-38def282d94a
    resourceVersion: "265230"
    selfLink: /apis/logging.openshift.io/v1/namespaces/openshift-logging/kibanas/kibana
    uid: acdf4c2b-c62a-4286-846a-31f0c913aab3
  spec:
    image: ""
    managementState: Managed
    proxy:
      image: ""
      resources: null
    replicas: 1
    resources:
      limits:
        cpu: "1"
        memory: 4Gi
      requests:
        cpu: 800m
        memory: 2Gi
  status:
  - deployment: kibana
    pods:
      failed: []
      notReady: []
      ready:
      - kibana-cb66bcf65-8bdl4
    replicaSets:
    - kibana-cb66bcf65
    replicas: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""


$ oc get deploy kibana -oyaml |grep -A 6 resources
                f:resources:
                  .: {}
                  f:limits:
                    .: {}
                    f:cpu: {}
                    f:memory: {}
                  f:requests:
--
                f:resources:
                  .: {}
                  f:limits:
                    .: {}
                    f:memory: {}
                  f:requests:
                    .: {}
--
        resources:
          limits:
            cpu: "1"
            memory: 4Gi
          requests:
            cpu: 800m
            memory: 2Gi
--
        resources:
          limits:
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 256Mi
        terminationMessagePath: /dev/termination-log
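
The second resources block above (256Mi) belongs to the kibana proxy container, which is still running with defaults instead of the 1Gi requested in the clusterlogging instance. A more targeted way to read just that container's resources (assuming the proxy container is named kibana-proxy) is:

$ oc get deploy kibana -o jsonpath='{.spec.template.spec.containers[?(@.name=="kibana-proxy")].resources}{"\n"}'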

Comment 6 Qiaoling Tang 2020-05-13 06:07:32 UTC
Verified with images from 4.5.0-0.ci-2020-05-12-205117

In the clusterlogging/instance

    managementState: Managed
    visualization:
      kibana:
        proxy:
          resources:
            limits:
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 1Gi
        replicas: 1
        resources:
          limits:
            cpu: 1000m
            memory: 4Gi
          requests:
            cpu: 800m
            memory: 2Gi
      type: kibana


$ oc get kibana -oyaml |grep -A 6 resources
            f:resources:
              .: {}
              f:limits:
                .: {}
                f:memory: {}
              f:requests:
                .: {}
--
          f:resources:
            .: {}
            f:limits:
              .: {}
              f:cpu: {}
              f:memory: {}
            f:requests:
--
      resources:
        limits:
          memory: 1Gi
        requests:
          cpu: 100m
          memory: 1Gi
    replicas: 1
    resources:
      limits:
        cpu: "1"
        memory: 4Gi
      requests:
        cpu: 800m
        memory: 2Gi


$ oc get deploy kibana -oyaml |grep -A 6 resources
                f:resources:
                  .: {}
                  f:limits:
                    .: {}
                    f:cpu: {}
                    f:memory: {}
                  f:requests:
--
                f:resources:
                  .: {}
                  f:limits:
                    .: {}
                    f:memory: {}
                  f:requests:
                    .: {}
--
        resources:
          limits:
            cpu: "1"
            memory: 4Gi
          requests:
            cpu: 800m
            memory: 2Gi
--
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 1Gi
        terminationMessagePath: /dev/termination-log

Comment 7 errata-xmlrpc 2020-07-13 17:35:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

