Bug 2238914 - Pulp workers taking days to import large content-export
Summary: Pulp workers taking days to import large content-export
Keywords:
Status: CLOSED DUPLICATE of bug 2226950
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Pulp
Version: 6.11.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: Unspecified
Assignee: satellite6-bugs
QA Contact: Satellite QE Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-09-14 09:54 UTC by Keith Williams
Modified: 2023-10-12 00:05 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-10-12 00:02:57 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 2238915 0 unspecified CLOSED Very large content import fails with subtask error 2024-05-28 10:05:58 UTC
Red Hat Issue Tracker SAT-20084 0 None None None 2023-09-14 09:56:02 UTC

Internal Links: 2238915

Description Keith Williams 2023-09-14 09:54:00 UTC
Description of problem:
During very large (2.1TB) content import, pulp worker threads have spent almost 12 days reading the tar.gz export file.

This BZ is raised in relation to support case #03609607.


Version-Release number of selected component (if applicable):
6.11.5.4

How reproducible:
Every time

Steps to Reproduce:
1. Place the 2.1TB tar.gz export file into /var/lib/pulp/imports
2. Run: time hammer content-import version --organization-id=1 --path=/var/lib/pulp/imports/z1d-export/


Actual results:
Import process ran for just shy of 12 days and resulted in failure.

Expected results:
Import process to take a substantially shorter amount of time.

Additional info:
Please see the support case #03609607 which has a lot more contextual information.
High IOWait is observed during the import process which lasts for over 11 days. The systems this has been tested on can sustain 1000MB/s disk throughput. The main testing system also has 24cpu and 192GB of memory.
Multiple tests of importing a 2.1TB content export that contains:
Repositories: 151 (both Red Hat and 3rd party repos)
RPM Packages: >200,000
Size of export: 2.1TB
During this import, the Dynflow Console showed the following line: `2931: Actions::Pulp3::ContentViewVersion::CreateImport (skipped) [ 1003399.78s / 6400.08s ]`. This equals 11 days, 14 hours, 43 minutes out of the total import run time of 11 days, 14 hours, 59 minutes, 23 seconds. While the import was sitting at this task, there was still IOWait, and looking at the pulp workers, they were all reading the tar.gz of the export.
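The stated duration can be cross-checked with a quick conversion; this is a minimal sketch using only the 1003399.78s figure from the Dynflow console line above:

```python
# Convert the CreateImport step's wall time (1003399.78 s, taken from
# the Dynflow console line above) into days, hours, and minutes.
seconds = 1003399.78
days, rem = divmod(seconds, 86400)
hours, rem = divmod(rem, 3600)
minutes, _ = divmod(rem, 60)
print(f"{int(days)}d {int(hours)}h {int(minutes)}m")  # → 11d 14h 43m
```

This matches the 11 days, 14 hours, 43 minutes quoted in the report.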

The import failed with a message of "7 subtask(s) failed for task group /pulp/api/v3/task-groups/.....". I will raise a separate BZ for the failure itself. This BZ is for the import speed.

Comment 2 Daniel Alley 2023-10-12 00:02:57 UTC

*** This bug has been marked as a duplicate of bug 2226950 ***

Comment 3 Daniel Alley 2023-10-12 00:05:29 UTC
After upgrading to a build with a fix for 2226950 present, the 2.1TB export shrank to 1.2TB and the 11+ days of import time dropped to 7 hours. As the 0.9TB would have been almost exclusively serialized database metadata, it had a disproportionately large impact on import times, and deduplicating it has disproportionate benefits.
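For scale, the improvement implied by comment 3 can be put in rough numbers; this is a back-of-the-envelope sketch using only the two durations reported in this bug:

```python
# Approximate speedup: total import time before the fix
# (11d 14h 59m 23s, from the description) versus ~7 hours after.
before_s = 11 * 86400 + 14 * 3600 + 59 * 60 + 23  # 1004363 s
after_s = 7 * 3600                                # 25200 s
print(round(before_s / after_s, 1))  # → 39.9
```

Roughly a 40x reduction in wall-clock time, from a metadata-deduplication fix alone.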

