Description of problem:

With bug 1188132, engine-backup supports the following new options:

--archive-compressor=COMPRESSOR
    Use COMPRESSOR to compress the backup file; can be one of:
    gzip, bzip2, xz, None
--files-compressor=COMPRESSOR
    For the files, same options as --archive-compressor
--db-compressor=COMPRESSOR
    For the Engine DB, same options as --archive-compressor
--db-dump-format=FORMAT
    Engine DB dump format; see pg_dump(1) for details. Can be one of:
    plain, custom
--db-restore-jobs=JOBS
    Number of restore jobs for the Engine DB, when using the custom dump
    format and compressor None. Passed to pg_restore -j. Defaults to 2.
--dwh-db-compressor=COMPRESSOR
    For DWH, same options as --archive-compressor
--dwh-db-dump-format=FORMAT
    For DWH, same options as --db-dump-format
--dwh-db-restore-jobs=JOBS
    For DWH, same as --db-restore-jobs
--reports-db-compressor=COMPRESSOR
    For Reports, same options as --archive-compressor
--reports-db-dump-format=FORMAT
    For Reports, same options as --db-dump-format
--reports-db-restore-jobs=JOBS
    For Reports, same as --db-restore-jobs

engine-backup should be tested on a large setup with various relevant combinations of these options, in order to gather data on their effect on the size of the resulting backup file and the speed of backup and restore.
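For reference, a sketch of how these options might be combined for two illustrative profiles (a smallest backup and a fast restore). This only builds and prints the command lines, it does not run engine-backup; the file paths, log paths, and job count are hypothetical, not taken from the bug:

```shell
#!/bin/sh
# Sketch only: compose engine-backup command lines for two illustrative
# profiles. Nothing is executed; the commands are printed for inspection.
# Paths under /tmp are placeholders.

# Smallest backup: xz for the archive, the files, and the Engine DB dump,
# with the custom dump format.
SMALLEST="engine-backup --mode=backup \
 --archive-compressor=xz --files-compressor=xz \
 --db-compressor=xz --db-dump-format=custom \
 --file=/tmp/backup-smallest --log=/tmp/backup-smallest.log"

# Fast restore: a backup taken with the custom dump format and compressor
# None can be restored in parallel via pg_restore -j; here we ask for 4
# jobs instead of the default 2.
FASTRESTORE="engine-backup --mode=restore \
 --db-restore-jobs=4 \
 --file=/tmp/backup-fast --log=/tmp/restore-fast.log"

echo "$SMALLEST"
echo "$FASTRESTORE"
```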
(In reply to Yedidyah Bar David from comment #0)
> engine-backup should be tested on a large setup with various relevant
> combinations of these options, in order to gather data on their effect on
> the size of the resulting backup file and the speed of backup and restore.

Was this done? Can we get the results?

Did we indeed choose the details well for:
--smallest
--fast-backup
--fast-restore
?

If not, what should be used?
Any requirements on where to run these tests? Memory, processor, etc.?
(In reply to Gonza from comment #2)
> Any requirements on where to run these tests?
> Memory, processor, etc.?

I'd say use the recommended specs: 16 GB RAM, 4 cores. Use a large database of at least, say, a few tens of hosts and a few hundred VMs, with DWH/Reports.
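One way such a test matrix could be enumerated, as a sketch: this just prints the backup command line for each Engine DB compressor/format combination so they can be timed one by one (e.g. under time(1)); it does not run engine-backup, and the file paths are hypothetical:

```shell
#!/bin/sh
# Sketch: enumerate engine-backup combinations worth timing on the large
# setup described above. Commands are printed, not executed.
# 4 compressors x 2 dump formats = 8 combinations for the Engine DB alone.
MATRIX=$(
for comp in None gzip bzip2 xz; do
    for fmt in plain custom; do
        echo "engine-backup --mode=backup" \
             "--db-compressor=$comp --db-dump-format=$fmt" \
             "--file=/tmp/backup-$comp-$fmt --log=/tmp/backup-$comp-$fmt.log"
    done
done
)
echo "$MATRIX"
```

The same loop could be extended with the --dwh-db-* and --reports-db-* variants, at the cost of a much larger matrix.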