Created attachment 1773820 [details]
First memory climb spotted, excluding most recent jump

Description of problem:
Created 1000 unique cluster deployments and install envs using assisted-service. After the ISOs were generated successfully, I left the cluster untouched over the weekend (2021-04-17 and 2021-04-18). When I viewed the cluster again on the following Monday (2021-04-19), memory usage had climbed sharply during the idle period, reaching 4 GiB and holding there through Monday. There was a second jump, again while the cluster was idle, overnight between 2021-04-19 and 2021-04-20; usage has held steady at 5 GiB since then (as of 2021-04-20 16:08:00).

How reproducible:
Enable observability, deploy assisted-service on a cluster, and generate 1000 cluster deployments and install envs (verify that 1000 ISOs were generated). Then leave the cluster alone for a couple of days and observe memory usage.

Additional info:
container image and sha:
image: quay.io/djzager/assisted-service:use-storage
imageID: quay.io/djzager/assisted-service@sha256:c6468ded6971cec152c23c3881429e183f06b9168586e0f7a54d3ec3f8dd336c
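The load step in "How reproducible" can be sketched as a script that emits one ClusterDeployment/InstallEnv manifest pair per cluster, which would then be applied against the test cluster (e.g. with `kubectl apply -f`). This is a minimal, hypothetical sketch: the resource names, the `assisted-load-test` namespace, and the API versions shown are illustrative assumptions, not taken from the original report, and real manifests need many more spec fields.

```shell
#!/bin/sh
# Hypothetical load generator mirroring the reproduction steps:
# write N ClusterDeployment + InstallEnv manifest pairs to a directory.
# Namespace, names, and apiVersions below are assumptions for illustration.
outdir="${1:-manifests}"
count="${2:-1000}"
mkdir -p "$outdir"
i=1
while [ "$i" -le "$count" ]; do
  cat > "$outdir/cluster-$i.yaml" <<EOF
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: test-cluster-$i
  namespace: assisted-load-test
---
apiVersion: agent-install.openshift.io/v1beta1
kind: InstallEnv
metadata:
  name: test-installenv-$i
  namespace: assisted-load-test
EOF
  i=$((i + 1))
done
echo "generated $count manifest files in $outdir"
```

After generating the manifests, one would apply them in bulk and then leave the cluster idle while watching the assisted-service container's memory in the observability dashboards, as described above.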
Created attachment 1773821 [details]
Second memory climb between 2021-04-19 and 2021-04-20
Need to focus on cloud use case.
Verified in v1.0.19.3 of the assisted installer that the memory leak is no longer observed; monitored over 16 hours.
Moving to VERIFIED since Hao verified it in comment 3. https://bugzilla.redhat.com/show_bug.cgi?id=1951646#c3
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438