Description of problem:
Deploy logging without setting CPU and memory for fluentd; the default requested memory is 736Mi. Then set the fluentd memory request to 500Mi in the clusterlogging instance and check the fluentd pods: the pods are not updated, and the CLO pod log contains these errors:

  time="2020-03-16T07:01:54Z" level=error msg="Error updating &TypeMeta{Kind:DaemonSet,APIVersion:apps/v1,}: DaemonSet.apps \"fluentd\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"500Mi\": must be less than or equal to memory limit"

  {"level":"error","ts":1584342114.8858852,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"clusterlogging-controller","request":"openshift-logging/instance","error":"Unable to create or update collection for \"instance\": DaemonSet.apps \"fluentd\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"500Mi\": must be less than or equal to memory limit","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

Version-Release number of selected component (if applicable):
ose-cluster-logging-operator-v4.4.0-202003130257

How reproducible:
Always

Steps to Reproduce:
1. Deploy CLO and EO.
2. Create a clusterlogging instance with https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/logging/clusterlogging/example.yaml. The resulting defaults on the fluentd container are:

  Containers:
    fluentd:
      Image:      image-registry.openshift-image-registry.svc:5000/openshift/ose-logging-fluentd:v4.4.0
      Port:       24231/TCP
      Host Port:  0/TCP
      Limits:
        memory:  736Mi
      Requests:
        cpu:     100m
        memory:  736Mi

3. Add the fluentd memory request to the clusterlogging instance:

  spec:
    collection:
      logs:
        fluentd:
          resources:
            requests:
              memory: 500Mi
        type: fluentd

4. Check the fluentd pod status.

Actual results:
The fluentd pods are not updated.

Expected results:
The fluentd pods should be updated to use the smaller memory request.

Additional info:
If the clusterlogging instance is created with resources settings up front and the resource requests are changed afterwards, the fluentd pods are updated (see the sketch below).
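Based on the "Additional info" note above, here is a minimal sketch of a clusterlogging instance that specifies both limits and requests for fluentd from the start, so that a later change to the requests is applied cleanly. The resource values are illustrative only, not recommendations:

  apiVersion: logging.openshift.io/v1
  kind: ClusterLogging
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    managementState: Managed
    collection:
      logs:
        type: fluentd
        fluentd:
          resources:
            # Per the Additional info above, supplying an explicit limit
            # alongside the request avoids the "must be less than or equal
            # to memory limit" error seen when only a request is set.
            limits:
              memory: 736Mi
            requests:
              cpu: 100m
              memory: 500Mi

After applying the change, `oc -n openshift-logging describe ds/fluentd` should show the updated requests on the fluentd container.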
Verified using quay.io/openshift/origin-cluster-logging-operator@sha256:9057825a57c65b098132257add099cbca2e5f2e5032f3a370c9329025f60462b
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409