Bug 1692796 - Changing ES resource limits in clusterlogging CR does not trigger a new ES deployment
Summary: Changing ES resource limits in clusterlogging CR does not trigger a new ES deployment
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.1.0
Assignee: ewolinet
QA Contact: Mike Fiedler
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-26 13:20 UTC by Mike Fiedler
Modified: 2019-06-04 10:46 UTC
CC: 3 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:46:25 UTC
Target Upstream Version:
Embargoed:


Attachments
clusterlogging and elasticsearch operator logs (942 bytes, application/gzip)
2019-03-26 13:20 UTC, Mike Fiedler


Links
Red Hat Product Errata RHBA-2019:0758 (last updated 2019-06-04 10:46:33 UTC)

Description Mike Fiedler 2019-03-26 13:20:21 UTC
Created attachment 1548084
clusterlogging and elasticsearch operator logs

Description of problem:

Updating resource limits for ES in the clusterlogging CR does not trigger a new ES deployment.   The change is successfully propagated to the elasticsearch CR but the ES operator does not seem to notice the change and does not trigger a new deployment.
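
For illustration, the mismatch can be seen by comparing the resources recorded in the elasticsearch CR with what the ES deployments are actually running. The object name and label selector below are assumptions based on a default deployment (elasticsearch CR named "elasticsearch", ES deployments labeled component=elasticsearch), not values taken from the attached logs:

     # resources as recorded in the elasticsearch CR (picks up the edit)
     oc get elasticsearch elasticsearch -n openshift-logging -o yaml | grep -A 6 'resources:'

     # resources the ES deployments are actually running (still the old values when the bug hits)
     oc get deployments -n openshift-logging -l component=elasticsearch -o yaml | grep -A 6 'resources:'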

Version-Release number of selected component (if applicable): 4.0.0-0.nightly-2019-03-25-180911


How reproducible: Always (?).   I saw a new deployment trigger at least once, but on current builds I can reliably reproduce the issue.


Steps to Reproduce:
1. Create a default clusterlogging deployment. The clusterlogging operator and CRs are in the openshift-logging namespace and the elasticsearch operator is in the openshift-operators namespace.
2. oc edit clusterlogging instance and add the following resources block (a sketch of where it sits in the CR follows these steps):
     resources:
        limits:
          cpu: "4"
          memory: 24Gi
        requests:
          cpu: "1"
          memory: 24Gi


3. Verify that the elasticsearch CR is updated with the new requests/limits
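
As referenced in step 2, a sketch of the edit and the verification, assuming the default CR layout (the ES resources block sits under spec.logStore.elasticsearch in the clusterlogging CR, and the resulting elasticsearch CR is named "elasticsearch"); the paths and names are assumptions, not copied from the report:

     oc edit clusterlogging instance -n openshift-logging

     # step 2: the resources block goes under the logStore section (assumed layout)
     spec:
       logStore:
         elasticsearch:
           resources:
             limits:
               cpu: "4"
               memory: 24Gi
             requests:
               cpu: "1"
               memory: 24Gi

     # step 3: confirm the change propagated to the elasticsearch CR
     oc get elasticsearch elasticsearch -n openshift-logging -o yaml | grep -A 6 'resources:'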

Actual results:

No new ES deployment is triggered


Expected results:

ES operator notices the change to the ES CR and triggers a new deployment


Additional info:

clusterlogging and ES operator pod logs attached
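
For reference, the operator logs can be pulled directly from the two namespaces; the deployment names below are the usual defaults and are an assumption, not taken from the attachment:

     # cluster-logging-operator runs in openshift-logging
     oc logs deployment/cluster-logging-operator -n openshift-logging

     # elasticsearch-operator runs in openshift-operators in this setup (see step 1)
     oc logs deployment/elasticsearch-operator -n openshift-operators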

Comment 2 Mike Fiedler 2019-03-28 19:57:14 UTC
Verified on the elasticsearch operator pushed to quay.io today:

Name:        quay.io/openshift/origin-elasticsearch-operator:latest
Digest:      sha256:d7246cfee429b08e98d8b357e185e18956e075339882e3c403758c57ada0bc4b
Media Type:  application/vnd.docker.distribution.manifest.v1+prettyjws
Created:     5h ago
Image Size:  5 layers (size unavailable)
Layers:      -- sha256:68cddb23acfeddaee12b95b560b510d2ce2643a3c6a892d9df10da63a3089e78
             -- sha256:b1ae8487cc2f3db5714707986ee6537551c8a2d9bea919e7d6e07b67461e3292
             -- sha256:e4b71d26d12a7cbf0c66150703733177a20b17e70d9c7d1e3b5f639dbdb97a4d
             -- sha256:0eddba817d9243f8830665b0091d776899eb42c4c986a1f4ad7d9af86e5a2999
             -- sha256:03bfdf25e03ebe75abdd508999c511fdd9dfeb3fa4def18a154aeb6b6b290f85

Changed CPU and memory requests/limits and the ES cluster rolled out again, node-by-node.
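
An illustrative way to watch that node-by-node rollout (the label selector is an assumption, as above):

     oc get pods -n openshift-logging -l component=elasticsearch -w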

Comment 4 errata-xmlrpc 2019-06-04 10:46:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

