Bug 976357 - Running repository sync for long periods of time causes httpd to consume large amounts of memory
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Content Management
Version: 6.0.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: Unspecified
Assignee: David Davis
QA Contact: Og Maciel
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-06-20 12:39 UTC by David Davis
Modified: 2019-09-26 15:47 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Embargoed:



Description David Davis 2013-06-20 12:39:52 UTC
Description of problem:

Syncing seems to cause httpd's memory usage to grow by about 1 GB every hour. I've been running a provider sync for 2.5 hours; progress is still at 0.0% and memory usage is already over 4 GB:

2234 apache    20   0 6880m 4.2g 4704 S 118.4 18.0 162:40.44 httpd

This was originally reported by a user who ran a sync for over 13 hours and saw 20 GB being used:

10492 apache    20   0 20.2g  11g 5372 S 22.4 25.0 757:06.59 (wsgi:pulp)

Linear growth seems plausible: the runtime ratio 2.5/13 ≈ 0.19 roughly matches the memory ratio 4.2/20.2 ≈ 0.21.
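For reference, here is a minimal sketch of how this kind of measurement can be automated instead of eyeballing top. It samples the resident set size of a PID from /proc and prints the observed growth rate; the PID argument, sampling interval, and sample count are placeholders for illustration, not anything shipped with katello:

#!/usr/bin/env python
# Sample VmRSS for a PID from /proc at a fixed interval and print the
# observed growth rate, to check the roughly linear ~1 GB/hour claim above.
import sys
import time

def rss_kib(pid):
    # /proc/<pid>/status reports VmRSS in kB.
    with open('/proc/%d/status' % pid) as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])
    raise RuntimeError('no VmRSS for pid %d' % pid)

def watch(pid, interval=300, samples=36):
    start, t0 = rss_kib(pid), time.time()
    for _ in range(samples):
        time.sleep(interval)
        now, hours = rss_kib(pid), (time.time() - t0) / 3600.0
        print('%.2f h: RSS %.2f GiB (+%.2f GiB/h)' % (
            hours, now / 1048576.0, (now - start) / 1048576.0 / hours))

if __name__ == '__main__':
    watch(int(sys.argv[1]))   # e.g. the httpd PID from top, 2234 above

Run against the httpd PID, a steady positive GiB/h figure over a few hours would point to a leak rather than a transient spike.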


Version-Release number of selected component (if applicable):

candlepin-0.8.9-1.el6_4.noarch
candlepin-scl-1-5.el6_4.noarch
candlepin-scl-quartz-2.1.5-5.el6_4.noarch
candlepin-scl-rhino-1.7R3-1.el6_4.noarch
candlepin-scl-runtime-1-5.el6_4.noarch
candlepin-selinux-0.8.9-1.el6_4.noarch
candlepin-tomcat6-0.8.9-1.el6_4.noarch
elasticsearch-0.19.9-8.el6sat.noarch
katello-1.4.2-14.el6sat.noarch
katello-all-1.4.2-14.el6sat.noarch
katello-candlepin-cert-key-pair-1.0-1.noarch
katello-certs-tools-1.4.2-2.el6sat.noarch
katello-cli-1.4.2-7.el6sat.noarch
katello-cli-common-1.4.2-7.el6sat.noarch
katello-common-1.4.2-14.el6sat.noarch
katello-configure-1.4.3-16.el6sat.noarch
katello-configure-foreman-1.4.3-16.el6sat.noarch
katello-foreman-all-1.4.2-14.el6sat.noarch
katello-glue-candlepin-1.4.2-14.el6sat.noarch
katello-glue-elasticsearch-1.4.2-14.el6sat.noarch
katello-glue-pulp-1.4.2-14.el6sat.noarch
katello-qpid-broker-key-pair-1.0-1.noarch
katello-qpid-client-key-pair-1.0-1.noarch
katello-selinux-1.4.3-3.el6sat.noarch
m2crypto-0.21.1.pulp-8.el6sat.x86_64
mod_wsgi-3.4-1.pulp.el6sat.x86_64
pulp-rpm-plugins-2.1.2-0.3.beta.el6sat.noarch
pulp-selinux-2.1.2-0.3.beta.el6sat.noarch
pulp-server-2.1.2-0.3.beta.el6sat.noarch
python-isodate-0.5.0-1.pulp.el6sat.noarch
python-oauth2-1.5.170-3.pulp.el6sat.noarch
python-pulp-common-2.1.2-0.3.beta.el6sat.noarch
python-pulp-rpm-common-2.1.2-0.3.beta.el6sat.noarch
python-qpid-0.18-5.el6_4.noarch
python-rhsm-1.8.0-1.pulp.el6sat.x86_64
qpid-cpp-client-0.14-22.el6_3.x86_64
qpid-cpp-client-ssl-0.14-22.el6_3.x86_64
qpid-cpp-server-0.14-22.el6_3.x86_64
qpid-cpp-server-ssl-0.14-22.el6_3.x86_64
ruby193-rubygem-foreman-katello-engine-0.0.8-6.el6sat.noarch
ruby193-rubygem-katello-foreman-engine-0.0.3-5.el6sat.noarch
ruby193-rubygem-katello_api-0.0.3-2.el6_4.noarch
ruby193-rubygem-ldap_fluff-0.2.2-1.el6sat.noarch
signo-katello-0.0.18-1.el6sat.noarch


How reproducible:

Pretty consistently.


Steps to Reproduce:
1. Create a manifest with a bunch of subscriptions (RHEL, RHEV, OpenShift, etc.). You'll probably need a few hundred repos.
2. Import the manifest into your katello instance.
3. Enable a few hundred of the repos. This is probably easiest with the CLI (see the sketch after these steps).
4. After all repos are enabled, kick off a provider sync in the CLI.
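For steps 3 and 4, here is a rough sketch of what scripting the CLI might look like. Caveat: the katello-cli subcommands and flags below ("repo enable", "provider synchronize"), the org name, and the repo_ids.txt input file are all assumptions for illustration, not verified against this exact build; check `katello --help` on your instance.

#!/usr/bin/env python
# Hypothetical driver for steps 3-4: enable a few hundred repos, then
# kick off a provider-wide sync. Subcommand and flag names are assumed,
# not taken from katello-cli docs; adjust to whatever your version expects.
import subprocess

ORG = 'ACME_Corporation'   # placeholder organization name

def enable_repos(repo_ids):
    for repo_id in repo_ids:
        subprocess.check_call(['katello', 'repo', 'enable',
                               '--org', ORG, '--id', str(repo_id)])

def sync_provider(name='Red Hat'):
    subprocess.check_call(['katello', 'provider', 'synchronize',
                           '--org', ORG, '--name', name])

if __name__ == '__main__':
    # repo_ids.txt: hypothetical file with one repo ID per line.
    enable_repos(open('repo_ids.txt').read().split())
    sync_provider()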


Actual results:

Memory for the katello processes grows and grows.


Expected results:

Memory remains stable.


Additional info:

Comment 1 David Davis 2013-06-20 12:50:29 UTC
At three hours, it's over 6 GB:

2234 apache    20   0 8716m 6.1g 5336 S 113.8 26.1 184:29.46 httpd

Comment 2 RHEL Program Management 2013-06-20 13:06:25 UTC
Since this issue was entered in Red Hat Bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.

Comment 4 Michael Hrivnak 2013-06-20 17:50:00 UTC
Sync memory consumption is much better in Pulp 2.2, so I suggest we consider this fixed. Definitely report back if you find any memory issues with 2.2.

Comment 5 David Davis 2013-06-22 00:22:44 UTC
Over the same timeframe of about 3 hours, the highest amount of memory consumption I am seeing is 330 MB. I agree that we could close this out as fixed.

Comment 6 David Davis 2013-06-22 00:23:51 UTC
To clarify, I am seeing 330 MB of RAM being consumed with Pulp v2.2, which Michael mentioned. This compares to 6-7 GB with Pulp v2.1 (which ships with MDP1).

Comment 8 Og Maciel 2013-10-21 20:10:28 UTC
Verified:

* apr-util-ldap-1.3.9-3.el6_0.1.x86_64
* candlepin-0.8.25-1.el6sam.noarch
* candlepin-scl-1-5.el6_4.noarch
* candlepin-scl-quartz-2.1.5-5.el6_4.noarch
* candlepin-scl-rhino-1.7R3-1.el6_4.noarch
* candlepin-scl-runtime-1-5.el6_4.noarch
* candlepin-selinux-0.8.25-1.el6sam.noarch
* candlepin-tomcat6-0.8.25-1.el6sam.noarch
* elasticsearch-0.19.9-8.el6sat.noarch
* foreman-1.3.0-18.el6sat.noarch
* foreman-compute-1.3.0-18.el6sat.noarch
* foreman-libvirt-1.3.0-18.el6sat.noarch
* foreman-postgresql-1.3.0-18.el6sat.noarch
* foreman-proxy-1.3.0-3.el6sat.noarch
* katello-1.4.6-39.el6sat.noarch
* katello-all-1.4.6-39.el6sat.noarch
* katello-candlepin-cert-key-pair-1.0-1.noarch
* katello-certs-tools-1.4.4-1.el6sat.noarch
* katello-cli-1.4.3-24.el6sat.noarch
* katello-cli-common-1.4.3-24.el6sat.noarch
* katello-common-1.4.6-39.el6sat.noarch
* katello-configure-1.4.7-5.el6sat.noarch
* katello-configure-foreman-1.4.7-5.el6sat.noarch
* katello-configure-foreman-proxy-1.4.7-5.el6sat.noarch
* katello-foreman-all-1.4.6-39.el6sat.noarch
* katello-glue-candlepin-1.4.6-39.el6sat.noarch
* katello-glue-elasticsearch-1.4.6-39.el6sat.noarch
* katello-glue-pulp-1.4.6-39.el6sat.noarch
* katello-qpid-broker-key-pair-1.0-1.noarch
* katello-qpid-client-key-pair-1.0-1.noarch
* katello-selinux-1.4.4-4.el6sat.noarch
* openldap-2.4.23-31.el6.x86_64
* pulp-katello-plugins-0.2-1.el6sat.noarch
* pulp-nodes-common-2.3.0-0.22.beta.el6sat.noarch
* pulp-nodes-parent-2.3.0-0.22.beta.el6sat.noarch
* pulp-puppet-plugins-2.3.0-0.22.beta.el6sat.noarch
* pulp-rpm-plugins-2.3.0-0.22.beta.el6sat.noarch
* pulp-selinux-2.3.0-0.22.beta.el6sat.noarch
* pulp-server-2.3.0-0.22.beta.el6sat.noarch
* python-ldap-2.3.10-1.el6.x86_64
* ruby193-rubygem-ldap_fluff-0.2.2-2.el6sat.noarch
* ruby193-rubygem-net-ldap-0.3.1-3.el6sat.noarch
* ruby193-rubygem-runcible-1.0.7-1.el6sat.noarch
* signo-0.0.22-2.el6sat.noarch
* signo-katello-0.0.22-2.el6sat.noarch

Comment 9 Bryan Kearney 2014-04-24 17:11:18 UTC
This was verified and delivered with MDP2. Closing it out.

Comment 10 Bryan Kearney 2014-04-24 17:12:14 UTC
This was delivered and verified with MDP2. Closing the bug.

