Bug 1813810 - Decreasing the requested memory for fluentd gives error `must be less than or equal to memory limit`
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.5.0
Assignee: ewolinet
QA Contact: Qiaoling Tang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-03-16 07:40 UTC by Qiaoling Tang
Modified: 2020-07-13 17:20 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-13 17:20:18 UTC
Target Upstream Version:
Embargoed:




Links
Github: openshift/cluster-logging-operator pull 421 (closed), "Bug 1813810: Updating resource update policy and adding unit test to verify", last updated 2020-03-24 14:14:05 UTC
Red Hat Product Errata: RHBA-2020:2409, last updated 2020-07-13 17:20:41 UTC

Description Qiaoling Tang 2020-03-16 07:40:03 UTC
Description of problem:
Deploy logging without setting CPU or memory for fluentd; the default requested memory is 736Mi. Then set the fluentd requested memory to `500Mi` in the clusterlogging instance and check the fluentd pods: the pods are not updated, and the CLO pod log shows errors such as:
time="2020-03-16T07:01:54Z" level=error msg="Error updating &TypeMeta{Kind:DaemonSet,APIVersion:apps/v1,}: DaemonSet.apps \"fluentd\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"500Mi\": must be less than or equal to memory limit"
{"level":"error","ts":1584342114.8858852,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"clusterlogging-controller","request":"openshift-logging/instance","error":"Unable to create or update collection for \"instance\": DaemonSet.apps \"fluentd\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"500Mi\": must be less than or equal to memory limit","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

Version-Release number of selected component (if applicable):
ose-cluster-logging-operator-v4.4.0-202003130257


How reproducible:
Always

Steps to Reproduce:
1. deploy CLO and EO
2. create a clusterlogging instance with https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/logging/clusterlogging/example.yaml; the default values are:
Containers:
  fluentd:
    Image:      image-registry.openshift-image-registry.svc:5000/openshift/ose-logging-fluentd:v4.4.0
    Port:       24231/TCP
    Host Port:  0/TCP
    Limits:
      memory:  736Mi
    Requests:
      cpu:     100m
      memory:  736Mi

3. set the fluentd requested memory in the clusterlogging instance:

  spec:
    collection:
      logs:
        fluentd:
          resources:
            requests:
              memory: 500Mi
        type: fluentd
4. check the fluentd pod status

Actual results:
The fluentd pods aren't updated

Expected results:
The fluentd pods should be updated to use the smaller memory request.
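
For illustration, assuming the default 736Mi limit from step 2 is retained after the change, a successfully reconciled fluentd container would be expected to show resources along these lines in the pod description:

    Limits:
      memory:  736Mi
    Requests:
      cpu:     100m
      memory:  500Mi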

Additional info:
If the clusterlogging instance is created with resource settings and the resource requests are changed afterwards, the fluentd pods are updated as expected. A minimal sketch of that workaround follows, assuming the logging.openshift.io/v1 API used by this operator version (the values mirror the defaults above and are only illustrative):
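
  apiVersion: logging.openshift.io/v1
  kind: ClusterLogging
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    managementState: Managed
    collection:
      logs:
        type: fluentd
        fluentd:
          resources:
            limits:
              memory: 736Mi      # explicit limit set at creation time
            requests:
              cpu: 100m
              memory: 500Mi      # lowering this later reconciles cleanly

With both limits and requests present from the start, later edits that lower only the request do not hit the validation error described above.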

Comment 3 Anping Li 2020-03-26 13:20:11 UTC
Verified using quay.io/openshift/origin-cluster-logging-operator@sha256:9057825a57c65b098132257add099cbca2e5f2e5032f3a370c9329025f60462b

Comment 5 errata-xmlrpc 2020-07-13 17:20:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

