Description of problem:
Satellite backup creates a *.tar.gz file with the usual content like:

config_files.tar.gz
.config.snar
metadata.yml
mongo_data.tar.gz
.mongo.snar
pgsql_data.tar.gz
.postgres.snar
.pulp.snar

Since the three by far biggest files are already gzipped and the rest are only small files, the backup runs gzip on gzipped data. That is redundant and costs time (to gzip, and also to gunzip during restore). We should either create just satellite-backup-${timestamp}.tar, or optionally (and maybe better) keep config_files + mongo_data + pgsql_data in tar format "only".

Version-Release number of selected component (if applicable):
Sat 6.11 or any older.

How reproducible:
100%

Steps to Reproduce:
1. Create a (full) foreman-maintain backup.
2. Check that the tar.gz backup contains tar.gz files inside.

Actual results:
tar.gz contains tar.gz inside

Expected results:
Change either one of the tar.gz -> tar

Additional info:
I do understand the change must be backward compatible, i.e. a new f-m must be able to unpack tar.gz-in-tar.gz (but nothing else wrt. backward compatibility..?). That makes the request a bit complicated. But I still think the overall gain in compressing and extracting files is worth it.
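Step 2 of the reproducer can be checked without extracting the outer archive. A minimal sketch; the archive and member names below are illustrative stand-ins, not the exact Satellite paths:

```shell
#!/bin/sh
# Build a stand-in for the nested layout described above:
# an outer .tar.gz whose members include another .tar.gz.
set -e
tmp=$(mktemp -d)
cd "$tmp"

mkdir backup
echo "dummy config" > backup/config
tar -czf backup/config_files.tar.gz -C backup config
tar -czf satellite-backup.tar.gz backup

# List the outer archive and flag nested gzip members
# without extracting anything.
tar -tzf satellite-backup.tar.gz | grep '\.tar\.gz$'
# -> backup/config_files.tar.gz
```

On a real backup the same `tar -tzf ... | grep '\.tar\.gz$'` listing shows the nested archives directly.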
How much time does the current gzip step add? That is, how much time would we be saving by implementing this change?
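Part of the answer can be estimated offline: gzipping already-gzipped data shrinks it by essentially nothing, so the whole outer compression pass is pure overhead. A rough sketch, with ~1 MiB of random data standing in for the real (multi-GB) dump files:

```shell
#!/bin/sh
# Rough illustration of what the outer gzip buys when the bulk of the
# input is already compressed. Random data stands in for the big
# mongo/pgsql members; real backups are much larger, so the time cost
# of the redundant pass scales accordingly.
set -e
tmp=$(mktemp -d)
cd "$tmp"

head -c 1048576 /dev/urandom > data
tar -czf inner.tar.gz data          # already-compressed payload

tar -cf  outer.tar    inner.tar.gz  # proposed: plain tar
tar -czf outer.tar.gz inner.tar.gz  # current: gzip of gzipped data

# The second gzip pass shrinks almost nothing; the two sizes come out
# within roughly a percent of each other.
wc -c outer.tar outer.tar.gz
```

Wrapping the two `tar -c` invocations in `time` on a real backup directory would give the concrete savings figure for that dataset.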
I have to take my request back, since no single tar.gz of the whole backup is created:

Sat 6.13 offline backup: /backup/satellite-backup-2023-08-07-15-37-18:
config_files.tar.gz
metadata.yml
mongo_data.tar.gz
pgsql_data.tar.gz
pulp_data.tar

Sat 6.13 online backup: /backup/satellite-backup-2023-08-07-15-49-56:
candlepin.dump
config_files.tar.gz
foreman.dump
metadata.yml
mongo_dump
pg_globals.dump
pulpcore.dump
pulp_data.tar

(similar content for e.g. 6.9 backups)

The satellite-backup.tar.gz had been received from a customer as an extra manual activity, and that final gzip was the redundant one. So this is not a Satellite product issue.