TL;DR: during a Pulp backup, save only the RPM files; once Satellite is restored, launch a job that creates a task to regenerate the metadata of each CV, so the work can be parallelized.

Description of problem:

Currently a user can choose between backing up the complete Satellite environment or skipping Pulp content.

If the user decides to back up Pulp, pulp_data.tar will contain all the RPMs and all the symbolic links that have been created for Content Views. In a large environment, with multiple CVs and multiple versions of each CV, pulp_data.tar can contain millions of symbolic links that have to be recreated during the restore. Reading a tar file is a linear, single-threaded process, and writing millions of symbolic links can take days, as it uses just one CPU core.

If the user decides not to back up Pulp, pulp_data.tar will be empty. The restore will be much faster, but Satellite needs to sync all the files again. Any custom RPMs that were uploaded through the UI or hammer have to be uploaded again. Once all content is available, metadata regeneration is required for each CV version.

One way to improve this would be to back up Pulp, but only the RPM data, with no symbolic links. With this method, restoring pulp_data.tar won't take as long as when the tar file contains the symbolic links. Once Satellite is running again, an automated process should launch metadata regeneration for each CV, prioritizing the ones that are promoted. This would create a task per regeneration and allow multiple CPUs to be used, parallelizing the work and giving a much faster restore than recreating the symbolic links from the tar file.

How reproducible:
Always

Steps to Reproduce:
1. Sync multiple RHEL releases
2. Create different CVs
3. Back up with full Pulp content
4. Restore with Pulp content

Actual results:
The restore starts by creating millions of symbolic links from the tar file, using a single CPU with no parallelism. This process can take days.
Expected results:
A running Satellite is available sooner, even in a degraded mode, until the restore fully completes.
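The parallel-regeneration idea proposed above can be sketched as follows. This is only an illustration: the hammer subcommand shown in the comment is an assumption and should be verified against the installed Satellite version, and the demonstration at the end uses fake version IDs with echo standing in for hammer.

```shell
# Sketch of the proposed post-restore step: fan out one metadata-regeneration
# task per content view version, N at a time, instead of replaying millions of
# symlinks serially. The hammer subcommand mentioned below is an assumption --
# check it against your Satellite version before relying on it.

# Helper: run the given command once per input line, up to JOBS in parallel.
parallel_each() {
  jobs="$1"; shift
  xargs -r -n1 -P "$jobs" "$@"
}

# Intended (assumed) usage on a restored Satellite:
#   hammer --no-headers content-view version list --fields Id |
#     parallel_each "$(nproc)" hammer content-view version republish-repositories --id

# Stand-in demonstration with fake CV version IDs and echo instead of hammer:
printf '%s\n' 101 102 103 | parallel_each 2 echo "regenerating metadata for CV version"
```

Because each regeneration becomes an independent process, the restore can use as many CPU cores as there are parallel jobs, which is the point of the proposal.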
via an internal email thread: """ From: Hao Chang Yu <hyu> To: Mike McCune <mmccune> This may be related to the massive number of symlinks in the tar file, which causes high memory consumption. It seems that the "-P" option can be used when extracting the tar file to prevent the delayed link creation. This worked for the customer in Case No. 02461599. More details on why extracting symlinks is slow: https://bugzilla.redhat.com/show_bug.cgi?id=1759140 If this really works, then we need to add tar's "-P" option to the foreman_maintain restore script. """
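As a rough illustration of the "-P" behavior mentioned in the thread: with GNU tar, -P (--absolute-names) keeps the leading "/" in member names and, per the linked BZ 1759140, avoids the deferred symlink handling that slows extraction down when millions of links are present. The paths below are made up for the demonstration; the real archive is pulp_data.tar.

```shell
# Demonstrate extracting an archive containing absolute-path symlinks with -P.
# All paths under /tmp/pulp-demo are hypothetical stand-ins for Pulp content.

demo=/tmp/pulp-demo
rm -rf "$demo" /tmp/pulp-demo.tar
mkdir -p "$demo/content"
echo "rpm payload" > "$demo/content/pkg.rpm"
# Absolute-path symlink, similar to the links Pulp creates for Content Views:
ln -s "$demo/content/pkg.rpm" "$demo/pkg-link.rpm"

# Create the archive with -P so absolute names are stored as-is:
tar -cPf /tmp/pulp-demo.tar "$demo"

# Simulate a restore: wipe the tree, then extract with -P:
rm -rf "$demo"
tar -xPf /tmp/pulp-demo.tar

ls -l "$demo/pkg-link.rpm"   # the symlink is back and resolves
```

In foreman_maintain's restore step, the change suggested in the email would amount to adding -P to the existing tar extraction command.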
Upstream bug assigned to mmccune
Moving this bug to POST for triage into Satellite 6 since the upstream issue https://projects.theforeman.org/issues/28881 has been resolved.
*** Bug 1725409 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Satellite 6.8 Satellite Maintenance Release), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4365