Description by Waldirio M Pinheiro, 2022-01-14 18:32:39 UTC
Description of problem:
Once the customer starts the Pulp data migration, if it becomes necessary to reset the whole thing for any reason, I believe the backend essentially does two things: cleaning the filesystem and cleaning the database.
Working with a large customer dataset, nearly 800 GiB of data in the local filesystem, this process took more than 6 hours.
Version-Release number of selected component (if applicable):
6.9
How reproducible:
100%
Steps to Reproduce:
1. Enable and sync the repos with the "immediate" download policy, accumulating roughly 800 GiB in the filesystem
2. Start the data migration
3. After some days, try to reset
Actual results:
Cleaning up everything takes a long time: at least 6 hours in the test environment with ~800 GiB
Expected results:
The cleanup/wipe process should be very quick, so that the customer can restart the migration promptly
Additional info:
[1]. https://access.redhat.com/documentation/en-us/red_hat_satellite/6.10/html-single/upgrading_and_updating_red_hat_satellite/index#preparing_to_migrate_pulp_content
Reset takes a long time when there's a lot of data because, on the Pulp3 side, reset can be selective - e.g., you can reset "just pulp-rpm" or "just pulp-container". That means the service can't simply truncate/drop/recreate database tables; it has to remove entities and related content (which may not be foreign-key-related, so Pulp can't even rely on CASCADE deletes).
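A minimal sketch of the tradeoff described above, using SQLite and a hypothetical `rpm_content` table (not Pulp's actual schema): per-row deletion has to visit every row, while dropping and recreating a table is effectively a metadata-only operation.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE rpm_content (id INTEGER PRIMARY KEY, artifact TEXT)")
cur.executemany("INSERT INTO rpm_content (artifact) VALUES (?)",
                [(f"pkg-{i}.rpm",) for i in range(100_000)])
conn.commit()

# Selective reset: delete matching entities; the database scans every
# row (in Pulp's case it must also chase related content that is not
# foreign-key linked, so CASCADE cannot help).
t0 = time.perf_counter()
cur.execute("DELETE FROM rpm_content WHERE artifact LIKE 'pkg-%'")
conn.commit()
selective = time.perf_counter() - t0

# Wholesale reset: drop and recreate the table.
cur.executemany("INSERT INTO rpm_content (artifact) VALUES (?)",
                [(f"pkg-{i}.rpm",) for i in range(100_000)])
conn.commit()
t0 = time.perf_counter()
cur.execute("DROP TABLE rpm_content")
cur.execute("CREATE TABLE rpm_content (id INTEGER PRIMARY KEY, artifact TEXT)")
conn.commit()
wholesale = time.perf_counter() - t0

print(f"row-by-row delete: {selective:.4f}s, drop/recreate: {wholesale:.4f}s")
```

The gap only widens at Pulp's real scale, where the deletes run against PostgreSQL across many interrelated tables.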
Comment 2 by Waldirio M Pinheiro, 2022-02-22 17:52:04 UTC
Hello Grant,
Thank you for the heads up. Can you think of anything that could speed up this clean-up process? If not, I believe we could at least share an estimated completion time with the customer when the process starts.
Thank you again, my friend.
Waldirio
(In reply to Waldirio M Pinheiro from comment #2)
Pulp allows for selective cleanup. If what we want is a "just do it" reset, I could envision a foreman-maintain task that stops the Pulp3 services and resets Pulp3's database to its empty, post-install state. It's not as easy as just truncating all the tables, because there is DDL that runs at install time (setting up constant tables, the admin user, that sort of thing).
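The envisioned "reset to post-install" could look roughly like this. It is a sketch only: the schema, seed rows, and function name below are invented for illustration (Pulp3's real schema is managed by Django migrations against PostgreSQL), but it shows why a reset must replay install-time DDL and seed data rather than merely truncate.

```python
import sqlite3

# Hypothetical stand-ins for the DDL and seed data that run at install
# time; a plain TRUNCATE would lose the seeded admin user.
INSTALL_DDL = [
    "CREATE TABLE auth_user (id INTEGER PRIMARY KEY, username TEXT)",
    "CREATE TABLE core_content (id INTEGER PRIMARY KEY, type TEXT)",
]
INSTALL_SEED = [
    ("INSERT INTO auth_user (username) VALUES (?)", ("admin",)),
]

def reset_to_post_install(conn):
    """Drop every user table, then replay install-time DDL and seed
    rows, leaving the database as if freshly installed."""
    cur = conn.cursor()
    cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
    for (name,) in cur.fetchall():
        cur.execute(f"DROP TABLE {name}")
    for ddl in INSTALL_DDL:
        cur.execute(ddl)
    for sql, params in INSTALL_SEED:
        cur.execute(sql, params)
    conn.commit()

conn = sqlite3.connect(":memory:")
reset_to_post_install(conn)  # initial "install"
conn.execute("INSERT INTO core_content (type) VALUES ('rpm')")  # migrated data
reset_to_post_install(conn)  # the fast wipe
```

In a real foreman-maintain task this would also stop the Pulp services before touching the database, as suggested above.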
Closing this BZ as WONTFIX: 6.9 is EOL and this is not an upgrade-blocking issue. I agree with Grant's assessment in comment 3, which makes this more of an RFE anyway.