Bug 1832662
| Summary: | The kibana pod doesn't use the resource limits and requests configurations | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Qiaoling Tang <qitang> |
| Component: | Logging | Assignee: | Periklis Tsirakidis <periklis> |
| Status: | CLOSED ERRATA | QA Contact: | Qiaoling Tang <qitang> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.5 | CC: | aos-bugs, periklis |
| Target Milestone: | --- | | |
| Target Release: | 4.5.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | (see below) | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-07-13 17:35:44 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1832652 | | |

Doc Text

Cause:
In 4.5, the Kibana custom resource is managed by the elasticsearch-operator. The cluster-logging-operator now only creates a Kibana custom resource; it no longer creates the kibana pods itself.

Consequence:
The resources and proxy resources from the ClusterLogging CR were not passed on to the Kibana CR.

Fix:
Pass the resources and proxy resources from the ClusterLogging CR through to the Kibana CR.

Result:
A Kibana CR with custom resources and proxy resources is reconciled by the elasticsearch-operator into a pod spec with customized resources for the kibana container and for the kibana proxy container.
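For reference, here is a minimal sketch of the relevant ClusterLogging CR stanza, with values taken from the report below; the comments mark where the fixed operator is expected to copy each block:

```yaml
# Minimal sketch, not a complete CR; values are taken from this report.
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  visualization:
    type: kibana
    kibana:
      replicas: 1
      resources:          # should end up in the Kibana CR as .spec.resources
        limits:
          cpu: 1000m
          memory: 4Gi
        requests:
          cpu: 800m
          memory: 2Gi
      proxy:
        resources:        # should end up in the Kibana CR as .spec.proxy.resources
          limits:
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 1Gi
```

Before the fix, the proxy block was dropped on the way to the Kibana CR, which is why `.spec.proxy.resources` showed up as `null` there.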
Description
Qiaoling Tang 2020-05-07 03:18:38 UTC
Tested with images from 4.5.0-0.ci-2020-05-12-030109: the kibana container's resources match those in the clusterlogging instance, but in the kibana/kibana CR the proxy container's resources field is always `null`.

```
$ oc get clusterlogging -oyaml
......
  managementState: Managed
  visualization:
    kibana:
      proxy:
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 1Gi
      replicas: 1
      resources:
        limits:
          cpu: 1000m
          memory: 4Gi
        requests:
          cpu: 800m
          memory: 2Gi
    type: kibana
......
```

```
$ oc get kibana -oyaml
apiVersion: v1
items:
- apiVersion: logging.openshift.io/v1
  kind: Kibana
  metadata:
    creationTimestamp: "2020-05-12T07:23:06Z"
    generation: 1
    managedFields:
    - apiVersion: logging.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:ownerReferences:
            .: {}
            k:{"uid":"b225e557-6652-4dbc-9bd7-38def282d94a"}:
              .: {}
              f:apiVersion: {}
              f:controller: {}
              f:kind: {}
              f:name: {}
              f:uid: {}
        f:spec:
          .: {}
          f:managementState: {}
          f:proxy:
            .: {}
            f:resources: {}
          f:replicas: {}
          f:resources:
            .: {}
            f:limits:
              .: {}
              f:cpu: {}
              f:memory: {}
            f:requests:
              .: {}
              f:cpu: {}
              f:memory: {}
      manager: cluster-logging-operator
      operation: Update
      time: "2020-05-12T07:23:06Z"
    - apiVersion: logging.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:status: {}
      manager: elasticsearch-operator
      operation: Update
      time: "2020-05-12T07:23:39Z"
    name: kibana
    namespace: openshift-logging
    ownerReferences:
    - apiVersion: logging.openshift.io/v1
      controller: true
      kind: ClusterLogging
      name: instance
      uid: b225e557-6652-4dbc-9bd7-38def282d94a
    resourceVersion: "265230"
    selfLink: /apis/logging.openshift.io/v1/namespaces/openshift-logging/kibanas/kibana
    uid: acdf4c2b-c62a-4286-846a-31f0c913aab3
  spec:
    image: ""
    managementState: Managed
    proxy:
      image: ""
      resources: null
    replicas: 1
    resources:
      limits:
        cpu: "1"
        memory: 4Gi
      requests:
        cpu: 800m
        memory: 2Gi
  status:
  - deployment: kibana
    pods:
      failed: []
      notReady: []
      ready:
      - kibana-cb66bcf65-8bdl4
    replicaSets:
    - kibana-cb66bcf65
    replicas: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```

```
$ oc get deploy kibana -oyaml |grep -A 6 resources
        f:resources:
          .: {}
          f:limits:
            .: {}
            f:cpu: {}
            f:memory: {}
          f:requests:
--
        f:resources:
          .: {}
          f:limits:
            .: {}
            f:memory: {}
          f:requests:
            .: {}
--
        resources:
          limits:
            cpu: "1"
            memory: 4Gi
          requests:
            cpu: 800m
            memory: 2Gi
--
        resources:
          limits:
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 256Mi
        terminationMessagePath: /dev/termination-log
```

Verified with images from 4.5.0-0.ci-2020-05-12-205117.

In the clusterlogging/instance:

```
  managementState: Managed
  visualization:
    kibana:
      proxy:
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 1Gi
      replicas: 1
      resources:
        limits:
          cpu: 1000m
          memory: 4Gi
        requests:
          cpu: 800m
          memory: 2Gi
    type: kibana
```

```
$ oc get kibana -oyaml |grep -A 6 resources
          f:resources:
            .: {}
            f:limits:
              .: {}
              f:memory: {}
            f:requests:
              .: {}
--
          f:resources:
            .: {}
            f:limits:
              .: {}
              f:cpu: {}
              f:memory: {}
            f:requests:
--
      resources:
        limits:
          memory: 1Gi
        requests:
          cpu: 100m
          memory: 1Gi
    replicas: 1
    resources:
      limits:
        cpu: "1"
        memory: 4Gi
      requests:
        cpu: 800m
        memory: 2Gi
```

```
$ oc get deploy kibana -oyaml |grep -A 6 resources
        f:resources:
          .: {}
          f:limits:
            .: {}
            f:cpu: {}
            f:memory: {}
          f:requests:
--
        f:resources:
          .: {}
          f:limits:
            .: {}
            f:memory: {}
          f:requests:
            .: {}
--
        resources:
          limits:
            cpu: "1"
            memory: 4Gi
          requests:
            cpu: 800m
            memory: 2Gi
--
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 1Gi
        terminationMessagePath: /dev/termination-log
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409
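As a side note, a jsonpath query gives a quicker spot check than grepping full YAML. The commands below are a sketch that assumes the default `openshift-logging` namespace and that the proxy container is named `kibana-proxy` (that name is an assumption, not taken from this report):

```
# Proxy resources on the Kibana CR: empty/null before the fix, populated after.
$ oc get kibana kibana -n openshift-logging -o jsonpath='{.spec.proxy.resources}'

# Resources on the proxy container in the rendered deployment
# (the container name "kibana-proxy" is an assumption).
$ oc get deploy kibana -n openshift-logging \
    -o jsonpath='{.spec.template.spec.containers[?(@.name=="kibana-proxy")].resources}'
```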