Description of problem:
Invoking tasks in a loop causes the memory consumption of the wsgi:pulp processes to grow slowly over time. One particular example where the increase was observed: creating a repo, uploading an RPM unit, and deleting the repo, done in a loop. Since the wsgi script does not depend on the particular task type, the leak appears to be generic and can be triggered by repeatedly invoking any kind of pulp task.
Version-Release number of selected component (if applicable):
Sat 6.2.4
pulp-server-2.8.7.3-1.el7sat.noarch
How reproducible:
100% (given a reasonably long observation time)
Steps to Reproduce:
(Satellite reproducer)
1. the reproducer itself:
hammer product create --name=custom_product --organization-id=1 --label=custom_product
while true; do
date
sleep 2
hammer repository create --content-type=yum --download-policy=immediate --label=custom_repo --name=custom_repo --organization-id=1 --product=custom_product
hammer repository delete --name=custom_repo --organization-id=1 --product=custom_product
done
2. monitor the RSS of the wsgi:pulp processes (there are multiple; only a few of them actually do the work, and only those will show a growing RSS). Example of how I collected it:
i=0; while true; do date; ps aux | grep pulp | grep wsgi > ps.pulp.${i}.txt; sleep 300; i=$((i+1)); done
(and then inspect each process's RSS over time via:
for i in $(grep -v grep ps.pulp.0.txt | awk '{ print $2 }'); do echo $i; cat $(ls ps.pulp.*.txt -tr) | grep "^apache[ ]*$i "; echo; done
)
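As a simpler alternative to diffing the per-process ps snapshots above, the per-sample totals can be collapsed into a single number that is easy to compare between samples. This is a minimal sketch, not from the original report; the "wsgi:pulp" pattern assumes the workers appear under that name in ps output, as on the reproducer host:

```shell
# Sum the RSS (in kB, as reported by ps) of all wsgi:pulp workers into
# one total; successive samples of this number can then be diffed to
# estimate the leak rate. The !/awk/ guard excludes this pipeline's own
# awk process from the match.
ps -eo rss=,cmd= | awk '/wsgi:pulp/ && !/awk/ { total += $1 } END { print total+0 " kB" }'
```

Running this every few minutes (e.g. in the same `sleep 300` loop as above) gives a time series of total worker RSS.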
---
standalone pulp reproducer:
name="some-very-long-repo-name-some-very-long-repo-name-some-very-long-repo-name-test"
i=1
while true; do
pulp-admin rpm repo create --repo-id=${name}-${i} --display-name=${name}-${i}
pulp-admin rpm repo delete --repo-id=${name}-${i}
i=$((i+1))
sleep 2
done
(plus some ps monitoring)
Actual results:
RSS grows by approx. 110 bytes per task, i.e. 220 bytes per iteration of the above cycle (summed RSS increase across the individual processes)
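To put that rate in perspective, a quick shell-arithmetic projection (the iteration count here is a hypothetical figure, not from the report; the 220 bytes/iteration is the measurement above):

```shell
# Rough projection of total leaked RSS after a given number of
# create/delete iterations, at ~220 bytes leaked per iteration.
iterations=100000   # hypothetical long-running workload
per_iter=220        # bytes per iteration, from the measurement above
echo "$(( iterations * per_iter / 1024 / 1024 )) MiB"   # → 20 MiB
```

So while the per-task leak is tiny, a busy Satellite running tasks continuously accumulates measurable growth over weeks.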
Expected results:
no RSS growth
Additional info:
Thank you for your interest in Satellite 6. We have evaluated this request, and we do not expect this to be implemented in the product in the foreseeable future. We are therefore closing this out as WONTFIX. If you have any concerns about this, please feel free to contact Rich Jerrido or Bryan Kearney. Thank you.
Comment 11, pulp-infra@redhat.com, 2020-08-31 15:06:57 UTC
The Pulp upstream bug status is at CLOSED - WONTFIX. Updating the external tracker on this bug.