Bug 2126626 - ocs-operator pods getting OOMKilled failing the ocs-consumer installations on the respective cluster
Summary: ocs-operator pods getting OOMKilled failing the ocs-consumer installations on the respective cluster
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.12
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.12.0
Assignee: Malay Kumar Parida
QA Contact: Elena Bondarenko
URL:
Whiteboard:
Duplicates: 2131662 (view as bug list)
Depends On:
Blocks: 2121329 2131662 2161650
 
Reported: 2022-09-14 07:41 UTC by Gobinda Das
Modified: 2023-08-09 17:00 UTC
13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: ocs-operator relied on controller-runtime's default "AllNamespaces" cache, which syncs all watched Kubernetes resources cluster-wide when the operator starts for the first time.
Consequence: The initial informer cache sync causes a sudden, large spike in the operator's memory usage, and the spike is directly proportional to the number of resources in the underlying Kubernetes/OpenShift cluster. The memory limits configured for ocs-operator usually absorb the spike by a narrow margin, but in some situations the spike exceeds the limits, which causes OOMKilled failures for the ocs-operator pods.
Fix: Instead of the default "AllNamespaces" cache, the operator now uses a cache that only syncs resources and custom resources in its own namespace.
Result: The operator's start-up memory spike is greatly reduced, which helps avoid OOMKilled situations for the ocs-operator pod.
Clone Of: 2121329
Clones: 2161650 (view as bug list)
Environment:
Last Closed: 2023-02-08 14:06:28 UTC
Embargoed:




Links
GitHub red-hat-storage/ocs-operator pull 1817 (Merged): Limit OCS operator leaderElection & cache to operator namespace (last updated 2022-09-21 17:27:40 UTC)

Description Gobinda Das 2022-09-14 07:41:31 UTC
Issue 2

Description of the problem:

Currently, a number of ocs-consumer installations are failing. The trigger is the ocs-operator pod getting OOMKilled because it demands more memory than the limits allocated to it.
The root cause is that ocs-operator relies on the default "AllNamespaces" cache in controller-runtime, which syncs all the Kubernetes resources in the cluster when the operator starts running for the first time.

The problem is that the initial informer cache sync is large enough to cause a sudden, massive spike in the operator's memory usage. Worse, this spike is directly proportional to the number of resources present in the underlying Kubernetes/OpenShift cluster.

The memory limits configured for ocs-operator usually absorb the spike by a narrow margin, but in some situations the spike exceeds the set limits, which causes OOMKilled failures for the ocs-operator pods.
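
For illustration, here is a minimal sketch of a manager created with default options (this is not ocs-operator's actual main.go; it assumes a 2022-era controller-runtime, roughly v0.12/v0.13). Because no namespace is restricted, the shared informer cache lists and syncs watched resources across all namespaces on start-up:

package main

import (
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	scheme := runtime.NewScheme()
	_ = clientgoscheme.AddToScheme(scheme)

	// Leaving Namespace unset (the default) gives the manager an
	// "AllNamespaces" cache: on Start(), its informers list and cache every
	// watched resource cluster-wide, which is what drives the start-up
	// memory spike described above.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme: scheme,
	})
	if err != nil {
		os.Exit(1)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}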

Reproducing the issue:

Currently, there is no reliable way to reproduce this issue other than making the ocs-operator pod consume more memory. You can start by creating a large number of resources on the cluster and then installing ocs-operator on it.

Component controlling this:

ocs-operator 4.10.4

Proposed Mitigation:

This is not a solution but just a mitigation for the existing problems (solution in the next section):

Bump the memory limits of ocs-operator from 200Mi to 800Mi. We arrived at 800Mi based on our experience of manually fixing clusters facing the same issue.

Specifically, this line (https://github.com/red-hat-storage/ocs-osd-deployer/blob/02ebe3916210326d00fae53bf55cbfef53ac1edb/utils/resources.go#L90) has to be modified to resource.MustParse("800Mi").
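
As a rough sketch of that mitigation (illustrative only; the variable name and the request value below are made up and are not the exact contents of ocs-osd-deployer's utils/resources.go), the resource requirements for the ocs-operator deployment would carry an 800Mi memory limit:

package resources

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// ocsOperatorResources is a hypothetical stand-in for the resource
// requirements applied to the ocs-operator deployment.
var ocsOperatorResources = corev1.ResourceRequirements{
	Requests: corev1.ResourceList{
		// Illustrative request value, not the deployer's actual number.
		corev1.ResourceMemory: resource.MustParse("200Mi"),
	},
	Limits: corev1.ResourceList{
		// Bumped from 200Mi to 800Mi to absorb the initial cache-sync spike.
		corev1.ResourceMemory: resource.MustParse("800Mi"),
	},
}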

Proposed solution:

The ideal solution would be to set up a dedicated informer caching mechanism, rather than the default "AllNamespaces" one, which would only cache-sync the resources/custom resources that ocs-operator cares about.

ocs-operator's code would have to use controller-runtime's cache package to build a custom cache builder for itself. A sketch follows after the example link below.

For example: https://github.com/openshift/addon-operator/blob/main/cmd/addon-operator-manager/main.go#L179-L188
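
To make the idea concrete, here is a hedged sketch of a namespace-scoped manager. It is not the actual change from the linked PR; it assumes a controller-runtime release where manager.Options still exposes the Namespace and LeaderElectionNamespace fields, and a POD_NAMESPACE environment variable injected via the Downward API. The leader-election ID is also illustrative.

package main

import (
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	scheme := runtime.NewScheme()
	_ = clientgoscheme.AddToScheme(scheme)

	// Assumed to be injected into the pod via the Downward API.
	operatorNamespace := os.Getenv("POD_NAMESPACE")

	// Restricting the manager to the operator namespace means the shared
	// informer cache only lists/watches objects in that namespace, so the
	// initial cache sync no longer scales with the size of the whole cluster.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme:                  scheme,
		Namespace:               operatorNamespace,
		LeaderElection:          true,
		LeaderElectionID:        "ocs-operator-lock", // illustrative ID
		LeaderElectionNamespace: operatorNamespace,
	})
	if err != nil {
		os.Exit(1)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}

With this restriction in place, the initial cache sync only covers objects in the operator namespace, which keeps the start-up memory footprint roughly constant regardless of cluster size.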

Expected Results:
ocs-operator pods running without facing any OOMKill failures and without needing any restarts.

Actual Results:
❯ oc get pods -n openshift-storage ocs-operator-5bf7c58cc9-pbjtj
NAME                            READY   STATUS             RESTARTS         AGE
ocs-operator-5bf7c58cc9-pbjtj   0/1     CrashLoopBackOff   9520 (74s ago)   37d
The ocs-operator pod is stuck in CrashLoopBackOff because it keeps getting OOMKilled and restarting in the hope of coming up properly.

--- Additional comment from Leela Venkaiah Gangavarapu on 2022-09-12 10:49:47 UTC ---

- Interim fix will be delivered via https://bugzilla.redhat.com/show_bug.cgi?id=2125254

--- Additional comment from Yashvardhan Kukreja on 2022-09-13 07:26:38 UTC ---

Comment 2 Nitin Goyal 2022-10-11 14:57:02 UTC
*** Bug 2131662 has been marked as a duplicate of this bug. ***

