Bug 1732936 - resources spec doesn't get updated by the cluster logging operator
Summary: resources spec doesn't get updated by the cluster logging operator
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.2.0
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks: 1746542
 
Reported: 2019-07-24 18:38 UTC by raffaele spazzoli
Modified: 2019-10-16 06:31 UTC
CC: 2 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1746542
Environment:
Last Closed: 2019-10-16 06:31:08 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:2922 None None None 2019-10-16 06:31:18 UTC

Description raffaele spazzoli 2019-07-24 18:38:20 UTC
Description of problem:
Changing the resources spec in the ClusterLogging CR for the Elasticsearch pods does not seem to have any effect if the pods are already running.

Fragment from the ClusterLogging object:

      redundancyPolicy: SingleRedundancy
      resources:
        limits:
          cpu: '1'
          memory: 8Gi
        requests:
          cpu: '1'
          memory: 8Gi

Fragment from one of the Elasticsearch deployments:

      containers:
        - resources:
            limits:
              cpu: '1'
              memory: 8Gi
            requests:
              cpu: '1'
              memory: 8Gi

Fragment from the ReplicaSet:

      containers:
        - resources:
            limits:
              cpu: '1'
              memory: 16Gi
            requests:
              cpu: '1'
              memory: 16Gi

The pods also have 16Gi, so in my case they could not be scheduled.

Along the same lines, I changed the node selector in the ClusterLogging resource, but the change was never propagated to the deployment and dependent resources:

cluster logging:
      nodeSelector:
        node-role.kubernetes.io/es: ''

deployment:
      nodeSelector:
        machine.openshift.io/cluster-api-machine-role: es
        node-role.kubernetes.io/es: ''
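
For reference, a minimal ClusterLogging CR sketch showing where the fields quoted above live. The instance name, namespace, management state, and node count are assumptions based on the standard openshift-logging layout, not taken from this report:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance                # assumed; the operator expects this name
  namespace: openshift-logging  # assumed standard namespace
spec:
  managementState: Managed      # assumed; must be Managed for the operator to reconcile
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3              # illustrative assumption
      redundancyPolicy: SingleRedundancy
      nodeSelector:
        node-role.kubernetes.io/es: ''
      resources:
        limits:
          cpu: '1'
          memory: 8Gi
        requests:
          cpu: '1'
          memory: 8Gi
```

The bug is that edits to `spec.logStore.elasticsearch.resources` and `nodeSelector` in this CR were not reconciled down into the already-running Elasticsearch deployments.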

Comment 2 Jeff Cantrill 2019-08-26 19:44:37 UTC
Verified success in 4.2 by:

* Created CL instance with request.memory only
* Verified Elasticsearch Instance change
* Verified Pods change
* Modified CL instance to add limit.memory
* Verified Elasticsearch Instance change
* Verified Pods change

Note: I did find that the EO is defaulting values in limits/requests that it should not, which is a separate issue, but I confirmed that the EO is reacting to changes, unlike what is reported in this BZ.
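
The two edits in the verification steps above can be sketched as ClusterLogging fragments (field paths per the standard `spec.logStore.elasticsearch` layout; the memory values are illustrative assumptions, not taken from the verification run):

```yaml
# Step 1: create the CL instance with request.memory only
spec:
  logStore:
    elasticsearch:
      resources:
        requests:
          memory: 8Gi      # illustrative value

# Step 2: modify the CL instance to add limit.memory
spec:
  logStore:
    elasticsearch:
      resources:
        limits:
          memory: 8Gi      # newly added; operator should roll this out
        requests:
          memory: 8Gi
```

After each step, the Elasticsearch CR and the pods were checked to confirm the change had propagated.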

Comment 4 Anping Li 2019-09-03 15:21:06 UTC
The resource changes are reflected in the deployment. Moving to VERIFIED.

Comment 5 errata-xmlrpc 2019-10-16 06:31:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

