Bug 2049288 - podman increases percpu memory that can't be freed
Summary: podman increases percpu memory that can't be freed
Keywords:
Status: CLOSED DUPLICATE of bug 2049289
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Containers
Version: 4.6
Hardware: All
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Tom Sweeney
QA Contact: pmali
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-02-01 20:56 UTC by Pamela Escorza
Modified: 2022-02-02 16:44 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-02-02 16:44:53 UTC
Target Upstream Version:
Embargoed:



Description Pamela Escorza 2022-02-01 20:56:24 UTC
Description of problem:
As per the investigation in bug https://bugzilla.redhat.com/show_bug.cgi?id=2004037, from the kernel side the user-level application podman has been identified as unable to free percpu memory, which leaves the node out of resources.


Version-Release number of selected component (if applicable):
OCP version 4.6.25
podman-1.9.3-3.rhaos4.6.el8.x86_64   
kernel-4.18.0-193.47.1.el8_2.x86_64 

How reproducible: 100%


Steps to Reproduce:
1. Install RHEL 8.2 

2. Install container tools.

$ dnf install -y @container-tools  

3. Run the podman command below in a loop; you may run multiple loops with different container names to get a quicker spike in the Percpu counter value in the /proc/meminfo output (see the sketch after the command).

$ while :; do podman run --name=test --replace centos /bin/echo 'running'; done
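
A minimal sketch for running several of these loops in parallel with distinct container names (the names and loop count are illustrative, not from the original report):

$ for i in 1 2 3 4; do
    ( while :; do podman run --name="test$i" --replace centos /bin/echo 'running'; done ) &
  done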

Actual results:  
Percpu usage keeps increasing gradually.
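
The growth can be observed while the loops run, for example:

$ watch -n 5 "grep Percpu /proc/meminfo"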

Expected results: 
The Percpu memory should be released after the containers exit.


Additional info:
This bug has been opened to verify whether user-level applications like podman can change the way some of the inter-process communication works, and whether it is possible to work around this percpu memory growth.

Comment 1 Daniel Walsh 2022-02-02 01:34:45 UTC
Can you try a somewhat more recent version of Podman? Also, is Podman using the events logger? That could be taking up space on the /run file system.
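
One quick way to check both points (assuming the default file-backed events logger and its usual path under /run):

$ podman info | grep -i event
$ ls -lh /run/libpod/events/events.log
$ df -h /run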

Comment 2 Giuseppe Scrivano 2022-02-02 16:35:09 UTC
Could you try removing the file /run/libpod/events/events.log once the memory usage has grown that much?

Can you please also show the content of /proc/cgroups before and after you remove the file?
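
For example, one possible capture sequence (the /tmp file names are just illustrative):

$ cat /proc/cgroups > /tmp/cgroups-before.txt
$ rm /run/libpod/events/events.log
$ cat /proc/cgroups > /tmp/cgroups-after.txt
$ diff /tmp/cgroups-before.txt /tmp/cgroups-after.txt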

Comment 3 Pamela Escorza 2022-02-02 16:44:01 UTC
Hi! 
It seems this bug is a duplicate of the correct one, https://bugzilla.redhat.com/show_bug.cgi?id=2049289. I will proceed to close this one and provide the information on the correct bug. Thank you.

Comment 4 Pamela Escorza 2022-02-02 16:44:53 UTC

*** This bug has been marked as a duplicate of bug 2049289 ***

