Created attachment 1736442 [details] Build logs with a high log level

We are running a Build that effectively does a `git clone` of a repository. During the run, container_memory_usage_rss peaks at about 350 MiB and container_memory_max_usage_bytes peaks at ~4.2 GiB.

Per the logs, our user-provided Dockerfile finishes executing at 2020-12-03T22:18:44.405032836Z. Between that point and 22:20:28, container_memory_max_usage_bytes spikes to 10.5 GiB with no increase in RSS. Based on the logging, this happens during the COMMIT stage of the build.

Our build clones a repository that is ~1.5 GB on disk and ~2.3 GB uncompressed as a layer, so even if *all* of that were held in memory at once, it still would not explain the ~6 GiB the container spikes by. In any case, it is frustrating to have the build OOMKilled at memory limits of 5, 6, 7, and 8 GiB when the user-provided Dockerfile does nothing at that scale.
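To make the "~6 GiB unexplained" figure concrete, here is the rough arithmetic from the numbers above (a sketch; treating the layer size as decimal GB is an assumption on my part):

```python
GIB = 2 ** 30  # bytes per GiB

spike = 10.5 * GIB             # container_memory_max_usage_bytes peak during COMMIT
pre_commit_max = 4.2 * GIB     # peak while the Dockerfile was still executing
layer_uncompressed = 2.3e9     # ~2.3 GB uncompressed layer (assumed decimal GB)

# Growth attributable to the COMMIT stage alone.
commit_growth = spike - pre_commit_max

# Even if the entire uncompressed layer were resident in memory at once,
# this much of the spike would still be unaccounted for.
unexplained = commit_growth - layer_uncompressed

print(f"COMMIT-stage growth: {commit_growth / GIB:.1f} GiB")
print(f"Unexplained even with the whole layer in memory: {unexplained / GIB:.1f} GiB")
```

So the COMMIT stage adds roughly 6.3 GiB on top of the build's prior peak, of which about 4 GiB is unexplained even under the worst-case assumption that the whole layer is buffered in memory.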
Created attachment 1736443 [details] container_memory_rss during the run
Created attachment 1736445 [details] container_memory_max_usage_bytes during the run
Created attachment 1736462 [details] Various container memory metrics for the duration of the build

Here is a screenshot with all of the memory metrics overlaid, with a vertical line indicating when the user-provided Dockerfile stopped executing.
Moving to MODIFIED, as the change has likely propagated into the kernel by now.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438
The needinfo request[s] on this closed bug have been removed, as they had been unresolved for 500 days.