Created attachment 1548084 [details]
clusterlogging and elasticsearch operator logs

Description of problem:
Updating resource limits for ES in the clusterlogging CR does not trigger a new ES deployment. The change is successfully propagated to the elasticsearch CR, but the ES operator does not seem to notice the change and does not trigger a new deployment.

Version-Release number of selected component (if applicable):
4.0.0-0.nightly-2019-03-25-180911

How reproducible:
Always (?). I saw a new deployment trigger at least once, but on current builds I can reliably reproduce the issue.

Steps to Reproduce:
1. Create a default clusterlogging deployment. The clusterlogging operator and CRs are in the openshift-logging namespace and the elasticsearch operator is in the openshift-operators namespace.
2. Run `oc edit clusterlogging instance` and add the following resources block:

   resources:
     limits:
       cpu: "4"
       memory: 24Gi
     requests:
       cpu: "1"
       memory: 24Gi

3. Verify that the elasticsearch CR is updated with the new requests/limits.

Actual results:
No new ES deployment is triggered.

Expected results:
The ES operator notices the change to the ES CR and triggers a new deployment.

Additional info:
clusterlogging and ES operator pod logs attached
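For reference, the resources block from step 2 is nested under the log store section of the ClusterLogging CR. A minimal sketch of the edited CR (field layout assumed from the 4.x ClusterLogging API; nodeCount and managementState values are illustrative):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      # Updated limits/requests from step 2. The clusterlogging operator
      # copies these into the elasticsearch CR; the ES operator is then
      # expected to roll the cluster when they change.
      resources:
        limits:
          cpu: "4"
          memory: 24Gi
        requests:
          cpu: "1"
          memory: 24Gi
```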
https://github.com/openshift/elasticsearch-operator/pull/110
Verified on the elasticsearch operator pushed to quay.io today:

Name:        quay.io/openshift/origin-elasticsearch-operator:latest
Digest:      sha256:d7246cfee429b08e98d8b357e185e18956e075339882e3c403758c57ada0bc4b
Media Type:  application/vnd.docker.distribution.manifest.v1+prettyjws
Created:     5h ago
Image Size:  5 layers (size unavailable)
Layers:      -- sha256:68cddb23acfeddaee12b95b560b510d2ce2643a3c6a892d9df10da63a3089e78
             -- sha256:b1ae8487cc2f3db5714707986ee6537551c8a2d9bea919e7d6e07b67461e3292
             -- sha256:e4b71d26d12a7cbf0c66150703733177a20b17e70d9c7d1e3b5f639dbdb97a4d
             -- sha256:0eddba817d9243f8830665b0091d776899eb42c4c986a1f4ad7d9af86e5a2999
             -- sha256:03bfdf25e03ebe75abdd508999c511fdd9dfeb3fa4def18a154aeb6b6b290f85

Changed CPU and memory requests/limits, and the ES cluster rolled out again, node by node.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758