Description of problem:
When publishing a repository with a large metadata file (such as the other.xml.gz file in rhel-7-server-rpms), the Pulp worker can consume more than 3 GB of RAM for a few minutes. After that, the memory is freed and usage returns to normal, which is fine.
When calculating the open-size of a metadata file, Pulp opens the gzip file and reads the entire decompressed content into memory at once.
plugins/distributors/yum/metadata/repomd.py
---------------------------------------------------------------------
if file_path.endswith('.gz'):
    open_size_element = ElementTree.SubElement(data_element, 'open-size')
    open_checksum_attributes = {'type': self.checksum_type}
    open_checksum_element = ElementTree.SubElement(data_element, 'open-checksum',
                                                   open_checksum_attributes)
    try:
        file_handle = gzip.open(file_path, 'r')  # <============= Here
    except:
        # cannot have an else clause to the try without an except clause
        raise
    else:
        try:
            content = file_handle.read()
            open_size_element.text = str(len(content))
            open_checksum_element.text = self.checksum_constructor(content).hexdigest()
        finally:
            file_handle.close()
---------------------------------------------------------------------
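For reference, the same open-size and open-checksum values can be computed in bounded memory by streaming the decompressed data in fixed-size chunks and updating the checksum incrementally. The following is a minimal standalone sketch, not the actual upstream fix; the function name and chunk size are illustrative:
---------------------------------------------------------------------
import gzip
import hashlib

def open_size_and_checksum(file_path, checksum_type='sha256', chunk_size=1024 * 1024):
    """Compute the decompressed size and checksum of a .gz file in
    bounded memory by reading the stream one chunk at a time."""
    digest = hashlib.new(checksum_type)
    total_size = 0
    with gzip.open(file_path, 'rb') as file_handle:
        while True:
            chunk = file_handle.read(chunk_size)
            if not chunk:
                break
            total_size += len(chunk)
            digest.update(chunk)
    return total_size, digest.hexdigest()
---------------------------------------------------------------------
With this approach the worker only ever holds one chunk (here 1 MB) in memory, no matter how large the decompressed metadata file is.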
This is not much of an issue if the user is syncing only a few repos. In the case of Satellite, the user may sync several large repositories at the same time, for example during an Optimized Capsule sync. If one Capsule has 8 workers and each worker consumes 4 GB+ of memory (32 GB in total), the Capsule will run out of memory.
Steps to Reproduce:
1. Set Pulp to use only 1 worker so that we can monitor the progress easily.
2. Force a full publish of the rhel-7-server-rpms repository.
3. Use the following command to monitor the memory usage.
watch 'ps -aux | grep reserved_resource_worker-0'
4. The high memory consumption happens while Pulp is finalizing the other.xml.gz file. You can use the following commands to monitor the Pulp working directory (a standalone sketch that reproduces the full-read behavior follows these steps).
cd /var/cache/pulp/reserved_resource_worker-0@<satellite fqdn>/<pulp task id>/
watch 'ls -alrth'
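To confirm the full-read behavior in isolation, outside of a Pulp task, a short script like the following can be used. This is a sketch: the file path is a placeholder, and resource.getrusage reports the peak RSS of the current process (in kilobytes on Linux):
---------------------------------------------------------------------
import gzip
import resource

# Placeholder path: point this at a large metadata file such as other.xml.gz
FILE_PATH = '/tmp/other.xml.gz'

with gzip.open(FILE_PATH, 'rb') as fh:
    content = fh.read()  # the entire decompressed payload is held in memory at once

print('decompressed size: %d bytes' % len(content))
# ru_maxrss is reported in kilobytes on Linux
print('peak RSS: %d kB' % resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
---------------------------------------------------------------------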
Comment 3 pulp-infra@redhat.com
2020-09-10 16:08:23 UTC
The Pulp upstream bug status is at POST. Updating the external tracker on this bug.
Comment 4 pulp-infra@redhat.com
2020-09-10 16:08:24 UTC
The Pulp upstream bug priority is at Normal. Updating the external tracker on this bug.
Comment 7 pulp-infra@redhat.com
2020-09-14 13:05:45 UTC
The Pulp upstream bug status is at MODIFIED. Updating the external tracker on this bug.
Comment 8 pulp-infra@redhat.com
2020-09-14 14:06:05 UTC
All upstream Pulp bugs are at MODIFIED+. Moving this bug to POST.
HOTFIX RPM is available for Satellite 6.7.4
INSTALLATION INSTRUCTIONS:
1. Download the attached hotfix RPM to each affected Satellite and Capsule server
2. # yum install ./pulp-rpm-plugins-2.21.0.6-2.HOTFIXRHBZ1876782.el7sat.noarch.rpm --disableplugin=foreman-protector
3. # satellite-maintain service restart
Comment 12 pulp-infra@redhat.com
2020-11-02 16:08:27 UTC
The Pulp upstream bug status is at CLOSED - CURRENTRELEASE. Updating the external tracker on this bug.