Description of problem (please be as detailed as possible and provide log snippets):
odf-operator-controller-manager is in CrashLoopBackOff (CLBO). The describe output shows the pod was OOM-killed, so the default memory limit appears to be lower than required. Only two occurrences of this issue are known so far, and it is still an open question why it was not seen in the QE environment.

Version of all relevant components (if applicable):
Upgrade from OCS 4.8 to ODF 4.9

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes.

Is there any workaround available to the best of your knowledge?
Yes, increasing the memory limit helps.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?
Two instances have been reported: one from a customer case and one by an internal associate.

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Upgrade from OCS 4.8 to ODF 4.9

Actual results:
Pod is in CrashLoopBackOff.

Expected results:
Pod should be in the Running state.
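For reference, the workaround above can be applied persistently through the OLM Subscription rather than by editing the deployment directly (a direct edit is reverted by OLM). This is a sketch only: the subscription name, namespace, and memory value are assumptions, not values confirmed in this report; verify the actual subscription with `oc get subscription -n openshift-storage`.

```yaml
# Sketch: override the operator's resource limits via the OLM SubscriptionConfig
# so the change survives CSV reinstalls and upgrades.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator          # assumed subscription name; confirm in your cluster
  namespace: openshift-storage
spec:
  config:
    resources:
      limits:
        memory: 512Mi         # illustrative value raised above the default; tune as needed
      requests:
        memory: 512Mi
```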
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:1372
Hello, my customer is still seeing this same behavior on 4.10.3, even after running the fixed version (i.e., memory limit = 300Mi). Does the limit need to be increased even further? Is this a legitimate increase in required memory, or is it indicative of a memory leak?