Description of problem:
The resource configuration of the kibana-proxy container in the kibana CR is not updated after updating it in the clusterlogging instance.

Steps to Reproduce:
1. Deploy logging with:

  visualization:
    type: "kibana"
    kibana:
      replicas: 1

2. After all pods have started, update the resource requests and limits of kibana in the clusterlogging instance to:

  managementState: Managed
  visualization:
    kibana:
      proxy:
        resources:
          requests:
            cpu: 50m
            memory: 128Mi
      replicas: 1
      resources:
        requests:
          memory: 500Mi
    type: kibana

3. Check the resources in the kibana CR. The configuration of the kibana container is updated, but that of the proxy container is not:

  spec:
    managementState: Managed
    proxy:
      resources:
        limits:
          memory: 256Mi
        requests:
          cpu: 100m
          memory: 256Mi
    replicas: 1
    resources:
      requests:
        memory: 500Mi

How reproducible:
Always

Actual results:
The proxy resources in the kibana CR keep their old values.

Expected results:
The proxy resources in the kibana CR match the values set in the clusterlogging instance.

Additional info:
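For reference, a minimal way to apply the update and inspect the result with standard oc commands — assuming the default resource names ("instance" for the clusterlogging CR and "kibana" for the kibana CR) in the openshift-logging namespace:

$ oc -n openshift-logging edit clusterlogging instance
$ oc -n openshift-logging get kibana kibana -o yaml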
@Qiaoling Tang please provide EO logs
$ oc logs -n openshift-operators-redhat elasticsearch-operator-799579d5cf-v48sk
{"level":"info","ts":1590475368.6210077,"logger":"cmd","msg":"Go Version: go1.13.8"}
{"level":"info","ts":1590475368.6213279,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1590475368.621334,"logger":"cmd","msg":"Version of operator-sdk: v0.8.2"}
{"level":"info","ts":1590475368.6218479,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1590475368.8251948,"logger":"leader","msg":"No pre-existing lock was found."}
{"level":"info","ts":1590475368.834537,"logger":"leader","msg":"Became the leader."}
{"level":"info","ts":1590475368.9861891,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1590475368.9867206,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibana-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1590475368.986937,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"elasticsearch-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1590475368.9871032,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"proxyconfig-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1590475368.9872077,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibanasecret-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1590475368.9873521,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"trustedcabundle-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1590475369.1682222,"logger":"metrics","msg":"Metrics Service object created","Service.Name":"elasticsearch-operator","Service.Namespace":"openshift-operators-redhat"}
{"level":"info","ts":1590475369.1682885,"logger":"cmd","msg":"This operator no longer honors the image specified by the custom resources so that it is able to properly coordinate the configuration with the image."}
{"level":"info","ts":1590475369.1683016,"logger":"cmd","msg":"Starting the Cmd."}
{"level":"info","ts":1590475369.9686494,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"trustedcabundle-controller"}
{"level":"info","ts":1590475369.9687443,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"kibanasecret-controller"}
{"level":"info","ts":1590475369.968725,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"elasticsearch-controller"}
{"level":"info","ts":1590475369.9687617,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"kibana-controller"}
{"level":"info","ts":1590475369.9687643,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"proxyconfig-controller"}
{"level":"info","ts":1590475370.0688834,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"kibana-controller","worker count":1}
{"level":"info","ts":1590475370.0689306,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"elasticsearch-controller","worker count":1}
{"level":"info","ts":1590475370.0689335,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"proxyconfig-controller","worker count":1}
{"level":"info","ts":1590475370.0689275,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"kibanasecret-controller","worker count":1}
{"level":"info","ts":1590475370.0689063,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"trustedcabundle-controller","worker count":1}
time="2020-05-26T06:43:15Z" level=error msg="Operator unable to read local file to get contents: open /tmp/ocp-eo/ca.crt: no such file or directory"
time="2020-05-26T06:43:15Z" level=error msg="Operator unable to read local file to get contents: open /tmp/ocp-eo/ca.crt: no such file or directory"
{"level":"error","ts":1590475395.7800205,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"kibana-controller","request":"openshift-logging/kibana","error":" does not yet contain expected key ca-bundle.crt","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-05-26T06:43:16Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:43:16Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:43:17Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:43:17Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:43:17Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:43:17Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:43:17Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:43:17Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:43:18Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:43:18Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:43:18Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:43:18Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:43:37Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:43:37Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:43:37Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:43:37Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:43:38Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:43:38Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:43:38Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:43:38Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:43:47Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:43:47Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:43:47Z" level=warning msg="unable to get cluster node count. E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: dial tcp 172.30.97.178:9200: i/o timeout\r\n"
time="2020-05-26T06:43:48Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:43:48Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:43:48Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:43:48Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:44:18Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:44:18Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:44:18Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:44:48Z" level=info msg="Updating status of Kibana"
time="2020-05-26T06:44:48Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:44:48Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:45:18Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:45:49Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:46:19Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:46:50Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:47:20Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:47:50Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:48:21Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:48:51Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:49:21Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:49:52Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:50:23Z" level=info msg="Kibana status successfully updated"
time="2020-05-26T06:50:53Z" level=info msg="Kibana status successfully updated"
@Qiaoling Tang the fix is merged: https://github.com/openshift/cluster-logging-operator/pull/541. Please test.
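For context, a minimal Go sketch of the class of bug involved here — this is not the PR 541 diff, and the types below are simplified stand-ins for the real operator API structs. It shows how a spec comparison that omits the proxy resources makes the reconciler ignore changes to visualization.kibana.proxy.resources:

// Minimal sketch, assuming simplified stand-in types; not the actual operator code.
package main

import (
	"fmt"
	"reflect"
)

// ResourceRequirements is a simplified stand-in for corev1.ResourceRequirements.
type ResourceRequirements struct {
	Limits   map[string]string
	Requests map[string]string
}

// KibanaSpec mirrors the relevant fields of the Kibana CR spec.
type KibanaSpec struct {
	Replicas       int32
	Resources      *ResourceRequirements // kibana container
	ProxyResources *ResourceRequirements // kibana-proxy container
}

// isDifferent reports whether the current Kibana spec has drifted from the
// desired one built from the clusterlogging instance. The reported behavior is
// consistent with comparing only Replicas and Resources here and omitting
// ProxyResources, so proxy changes never triggered an update of the Kibana CR.
func isDifferent(current, desired KibanaSpec) bool {
	if current.Replicas != desired.Replicas {
		return true
	}
	if !reflect.DeepEqual(current.Resources, desired.Resources) {
		return true
	}
	// The comparison that must not be omitted: without it, edits to
	// visualization.kibana.proxy.resources are silently ignored.
	if !reflect.DeepEqual(current.ProxyResources, desired.ProxyResources) {
		return true
	}
	return false
}

func main() {
	current := KibanaSpec{
		Replicas:       1,
		ProxyResources: &ResourceRequirements{Requests: map[string]string{"cpu": "100m", "memory": "256Mi"}},
	}
	desired := current
	desired.ProxyResources = &ResourceRequirements{Requests: map[string]string{"cpu": "50m", "memory": "128Mi"}}

	// Prints "update needed: true" once the proxy comparison is in place.
	fmt.Println("update needed:", isDifferent(current, desired))
}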
Verified on:
clusterserviceversion.operators.coreos.com/clusterlogging.4.5.0-202005280857
clusterserviceversion.operators.coreos.com/elasticsearch-operator.4.5.0-202005290037
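If useful for re-verification, the installed CSV versions can be listed with (namespaces assumed to be the defaults):

$ oc get csv -n openshift-logging
$ oc get csv -n openshift-operators-redhat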
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409