Bug 1401331

Summary: Small memory leak in wsgi process/script when invoking any pulp task
Product: Red Hat Satellite
Reporter: Pavel Moravec <pmoravec>
Component: Pulp
Assignee: satellite6-bugs <satellite6-bugs>
Status: CLOSED WONTFIX
QA Contact: Katello QA List <katello-qa-list>
Severity: medium
Priority: medium
Version: 6.2.5
CC: bmbouter, daniele, daviddavis, dkliban, ggainey, ipanova, jcallaha, mhrivnak, oshtaier, pcreech, rchan, ttereshc
Target Milestone: Unspecified
Keywords: Triaged
Target Release: Unused
Hardware: x86_64
OS: Linux
Last Closed: 2018-09-04 18:03:03 UTC
Type: Bug

Description Pavel Moravec 2016-12-04 20:51:09 UTC
Description of problem:
Invoking a task in a loop causes the memory consumption of the wsgi:pulp processes to grow slowly over time. A particular example where the increase was seen: creating a repo, uploading an RPM unit, and deleting the repo, all in a loop. Since the wsgi script should not depend on the particular task type, I deduce the leak is generic and can be triggered by repeatedly invoking any kind of pulp task.
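
For reference, the affected daemon processes can be picked out by their wsgi:pulp process title, e.g.:

# list PID, RSS (KiB) and command line of the pulp wsgi daemon processes;
# the [w] bracket trick keeps the grep process itself out of the output
ps -eo pid,rss,cmd | grep '[w]sgi:pulp'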


Version-Release number of selected component (if applicable):
Sat 6.2.4
pulp-server-2.8.7.3-1.el7sat.noarch


How reproducible:
100% (over a reasonably long period of time)


Steps to Reproduce:
(Satellite reproducer)
1. Run the reproducer itself:

hammer product create --name=custom_product --organization-id=1 --label=custom_product

while true; do
	date
	sleep 2
	hammer repository create --content-type=yum --download-policy=immediate --label=custom_repo --name=custom_repo --organization-id=1 --product=custom_product
	hammer repository delete --name=custom_repo --organization-id=1 --product=custom_product
done

2. Monitor the RSS of the wsgi:pulp processes (there are multiple; only a few of them really do the work, and only those will show a growing RSS). Example of how I collected it:

i=0; while true; do date; ps aux | grep pulp | grep wsgi > ps.pulp.${i}.txt; sleep 300; i=$((i+1)); done

(and summarize the per-PID RSS history across the collected samples via:

for i in $(grep -v grep ps.pulp.0.txt | awk '{ print $2 }'); do echo $i; cat $(ls ps.pulp.*.txt -tr) | grep "^apache[ ]*$i "; echo; done

)
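
As a more compact alternative (a sketch, not what was used above), one can log just the summed RSS of all wsgi:pulp processes, one line per sample, which makes the growth trend easy to diff or plot:

while true; do
    # one sample per line: epoch timestamp, then total RSS (KiB) of all
    # wsgi:pulp processes (same [w] trick as above, to exclude awk itself)
    printf '%s %s\n' "$(date +%s)" "$(ps -eo rss=,cmd= | awk '/[w]sgi:pulp/ { sum += $1 } END { print sum }')"
    sleep 300
done >> pulp-wsgi-rss.log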

---
standalone pulp reproducer:

name="some-very-long-repo-name-some-very-long-repo-name-some-very-long-repo-name-test"

i=1;
while true; do
  pulp-admin rpm repo create --repo-id=${name}-${i} --display-name=${name}-${i}
  pulp-admin rpm repo delete --repo-id=${name}-${i} 
  sleep 2
done

(plus the same ps monitoring as above)
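
To take the CLI clients out of the picture entirely, the same cycle can also be driven against the REST API directly. A minimal sketch, assuming Pulp 2's v2 API on localhost and the default admin credentials (endpoint URL and credentials are assumptions; adjust to the deployment):

i=0
while true; do
    # create and then delete a repository; the client side is just curl,
    # so any remaining RSS growth is attributable to the server processes
    curl -sk -u admin:admin -H 'Content-Type: application/json' \
        -d "{\"id\": \"leaktest-${i}\"}" \
        https://localhost/pulp/api/v2/repositories/ > /dev/null
    curl -sk -u admin:admin -X DELETE \
        https://localhost/pulp/api/v2/repositories/leaktest-${i}/ > /dev/null
    i=$((i+1))
    sleep 2
done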


Actual results:
RSS grows by approximately 110 bytes per task, i.e. about 220 bytes per iteration of the above cycle (summing the RSS increases across the individual processes)
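
For scale, a back-of-the-envelope extrapolation (assuming the measured rate of ~220 bytes per iteration holds and one iteration takes roughly 10 seconds, including the 2-second sleep):

# bytes of RSS growth per day at ~220 B per ~10 s iteration
echo $(( 220 * 86400 / 10 ))    # 1900800, i.e. roughly 1.9 MB per day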


Expected results:
no RSS growth


Additional info:

Comment 3 Michael Hrivnak 2016-12-13 21:52:30 UTC
Brian, does this look familiar at all?

Comment 5 pulp-infra@redhat.com 2016-12-16 14:01:01 UTC
The Pulp upstream bug status is at NEW. Updating the external tracker on this bug.

Comment 6 pulp-infra@redhat.com 2016-12-16 14:01:05 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.

Comment 7 pulp-infra@redhat.com 2016-12-19 19:06:11 UTC
The Pulp upstream bug status is at ASSIGNED. Updating the external tracker on this bug.

Comment 9 pulp-infra@redhat.com 2018-01-29 11:33:59 UTC
The Pulp upstream bug status is at NEW. Updating the external tracker on this bug.

Comment 10 Bryan Kearney 2018-09-04 18:03:03 UTC
Thank you for your interest in Satellite 6. We have evaluated this request, and we do not expect this to be implemented in the product in the foreseeable future. We are therefore closing this out as WONTFIX. If you have any concerns about this, please feel free to contact Rich Jerrido or Bryan Kearney. Thank you.

Comment 11 pulp-infra@redhat.com 2020-08-31 15:06:57 UTC
The Pulp upstream bug status is at CLOSED - WONTFIX. Updating the external tracker on this bug.