Bug 1846402
| Summary: | Pod noobaa-core-0 memory consumption is increasing 0.41 MiB per hour |
|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation |
| Component: | Multi-Cloud Object Gateway |
| Version: | 4.2 |
| Status: | CLOSED WONTFIX |
| Severity: | medium |
| Priority: | unspecified |
| Reporter: | Martin Bukatovic <mbukatov> |
| Assignee: | Igor Pick <ipick> |
| QA Contact: | Raz Tamir <ratamir> |
| CC: | ebenahar, etamir, kramdoss, muagarwa, nbecker, ocs-bugs, odf-bz-bot, rcyriac |
| Keywords: | AutomationBackLog |
| Target Milestone: | --- |
| Target Release: | --- |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Doc Type: | No Doc Update |
| Type: | Bug |
| Last Closed: | 2021-08-31 09:19:17 UTC |
Description (Martin Bukatovic, 2020-06-11 13:53:05 UTC)

4.2 still contains the DB inside the core (it was split in 4.3). We currently think this is expected, since Mongo keeps things in memory as long as it can, and even on an idle cluster we keep metrics and statistics which are collected regularly. We will check and verify whether this is the case.

Pushing out of 4.5 for now, not a blocker. Need to test with a new noobaa image which contains more metrics for the different services. Martin, can you sync with Elad and Ohad?

Talking with Elad, this is not a blocker at this point. We will try to reproduce with a custom image (see comment #4) and then decide whether this needs to be moved back to 4.6. For now we agreed it would be moved to 4.7.

I'm rechecking this on the following cluster:

- OCP 4.6.0-0.nightly-2020-11-05-215543
- OCS 4.6.0-154.ci
- baremetal platform

The cluster has been running for about 9 days. Querying Prometheus for the memory consumption of the noobaa pods (see query below) shows that while some pods (such as noobaa-core-0) gradually allocate memory, every now and then a pod frees some memory, so the unbounded growth toward the numbers originally reported in this bug is not happening. See attached screenshot #2.

Created attachment 1733872 [details]
screenshot #2: memory consumption of every noobaa pod on a 4.6 BM cluster over a 9-day period

Screenshot with a chart of the following Prometheus query:

`pod:container_memory_usage_bytes:sum{namespace='openshift-storage', pod=~'noobaa-.*'}`
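For reference, a rough way to express the growth rate from the bug summary (MiB per hour) directly in PromQL is sketched below. This is only a sketch: it assumes the same `pod:container_memory_usage_bytes:sum` recording rule used in the query above is available, and the 6h window is an arbitrary choice.

```promql
# Approximate per-pod memory growth rate in MiB per hour.
# deriv() gives a per-second slope over the window; scale to per-hour and MiB.
deriv(
  pod:container_memory_usage_bytes:sum{namespace='openshift-storage', pod=~'noobaa-.*'}[6h]
) * 3600 / 1024 / 1024
```

A steadily positive value in the range of the originally reported 0.41 MiB/hour would point to the same pattern, while values that periodically drop or go negative match the behavior described in the recheck above.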
Talking to Elad, we have assigned someone to look into this and verify whether it's an issue or just a usage pattern. Looking at comment #6, it doesn't seem like we risk OOM. Not a blocker for 4.8.
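On the OOM question, a hedged PromQL sketch for comparing working-set memory against configured memory limits is shown below. The metric names (`container_memory_working_set_bytes` from cAdvisor, `kube_pod_container_resource_limits` from kube-state-metrics) are assumptions that may vary by version, and the query returns nothing for containers that have no memory limit set.

```promql
# Fraction of the memory limit used per noobaa container (1.0 = at the limit).
# Empty result for a container means no memory limit is configured, so the
# kernel OOM killer would only act under node-level memory pressure.
sum by (pod, container) (
  container_memory_working_set_bytes{namespace="openshift-storage", pod=~"noobaa-.*", container!="", container!="POD"}
)
/
sum by (pod, container) (
  kube_pod_container_resource_limits{namespace="openshift-storage", pod=~"noobaa-.*", resource="memory"}
)
```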