Red Hat Bugzilla – Bug 1445989
Incremental backup script is, in effect, performing a full backup.
Last modified: 2017-08-10 18:46:27 EDT
Description of problem:

Incremental backup script is, in effect, performing a full backup.

An error similar to the following is seen:

cp: cannot stat ‘*.snar’: No such file or directory

It seems that the files below are used to provide the diff. Because they were not being read, the customer was getting a full backup instead of an incremental backup.

.config.snar
.mongo.snar
.postgres.snar
.pulp.snar

Adding the '.' to the command on line 131 fixed the problem:

`cp #{@options[:incremental]}/.*.snar .` if @options[:incremental]

Below is the difference between the two files.
=================================================================
# diff -u katello-backup katello-backup.new
--- katello-backup      2017-02-03 10:01:31.000000000 -0500
+++ katello-backup.new  2017-04-25 09:56:54.162505948 -0400
@@ -128,7 +128,7 @@
   `tar --selinux --create --gzip --file=config_files.tar.gz --listed-incremental=.config.snar #{CONFIGS.join(' ')}`
   puts "Done."

-  `cp #{@options[:incremental]}/*.snar .` if @options[:incremental]
+  `cp #{@options[:incremental]}/.*.snar .` if @options[:incremental]

   if @options[:online]
     backup_db_online
=================================================================

Version-Release number of selected component (if applicable):

How reproducible:
Reproducible at the customer's environment.

Steps to Reproduce:
Take an incremental backup using the command below.
# katello-backup /path/to/dir --incremental [PREVIOUS_BACKUP_DIR]

Actual results:
The incremental backup script performs a full backup.

Expected results:
It should perform an incremental backup.

Additional info:
The customer provided the above solution, which they expect to be included in a future release to fix the issue.
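For reference, the same copy can also be done without a shell glob at all. The sketch below is illustrative only and is not the shipped code; the :incremental option name is taken from the diff above, everything else is assumed. It globs explicitly for the hidden .snar files and warns when none are found, since a silent miss is exactly what turned the incremental run into a full one.

require 'fileutils'

# Copy the hidden tar snapshot files from the previous backup directory.
# A plain '*.snar' glob skips dotfiles such as .config.snar, which is the bug;
# a pattern with an explicit leading dot matches them.
def copy_snapshot_files(previous_backup_dir)
  snar_files = Dir.glob(File.join(previous_backup_dir, '.*.snar'))

  if snar_files.empty?
    warn "No .snar snapshot files found in #{previous_backup_dir}; " \
         "tar will fall back to a full (level-0) backup."
  else
    FileUtils.cp(snar_files, Dir.pwd)
  end
end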
Confirming that the issue exists in Sat 6.2.8 after following the documentation here:
https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html-single/server_administration_guide/#sect-Red_Hat_Satellite-Server_Administration_Guide-Backup_and_Disaster_Recovery-Backing_up_Red_Hat_Satellite_Server

However, every time we perform an incremental backup, it appears to do a full backup. The size of the backup is the same as the full backup. The incremental and full backups complete successfully, but they take a long time and consume a large amount of disk because the incrementals are full backups.

Backup commands used are:

# katello-backup /var/backup/
# katello-backup /var/backup/test --incremental /var/backup
or
# katello-backup /var/backup/test --incremental /var/backup/katello-backup-2017-04-25T15\:26\:32+08\:00

Reproducing results from local Satellite server:

--FULL BACKUP--

# pwd
/var/backup
# katello-backup /var/backup/
# ls -lrt
drwxrwx---. 2 root postgres 183 Apr 25 15:10 katello-backup-2017-04-25T15:09:42+08:00
# du -sh katello-backup-2017-04-25T15:09:42+08:00
4.9G katello-backup-2017-04-25T15:09:42+08:00
# ls -lrt katello-backup-2017-04-25T15:09:42+08:00
total 5097380
-rw-r--r--. 1 root root     628681 Apr 25 15:10 config_files.tar.gz
-rw-r--r--. 1 root root  239749120 Apr 25 15:10 pgsql_data.tar.gz
-rw-r--r--. 1 root root 4327495680 Apr 25 15:10 mongo_data.tar.gz
-rw-r--r--. 1 root root  651837440 Apr 25 15:10 pulp_data.tar
# cd katello-backup-2017-04-25T15\:09\:42+08\:00/
# du -sh *
616K config_files.tar.gz
4.1G mongo_data.tar.gz
229M pgsql_data.tar.gz
622M pulp_data.tar

--PROMOTED ONE CV--

--INCREMENTAL BACKUP--

# pwd
/var/backup/test
# katello-backup /var/backup/test --incremental /var/backup
# ls -lrt
drwxrwx---. 2 root postgres 183 Apr 25 15:27 katello-backup-2017-04-25T15:26:32+08:00
# du -sh katello-backup-2017-04-25T15\:26\:32+08\:00/
5.0G katello-backup-2017-04-25T15:26:32+08:00/
# ls -lrt katello-backup-2017-04-25T15\:26\:32+08\:00/
total 5177452
-rw-r--r--. 1 root root     627702 Apr 25 15:26 config_files.tar.gz
-rw-r--r--. 1 root root  241244160 Apr 25 15:26 pgsql_data.tar.gz
-rw-r--r--. 1 root root 4327495680 Apr 25 15:27 mongo_data.tar.gz
-rw-r--r--. 1 root root  732334080 Apr 25 15:27 pulp_data.tar
# du -sh *
616K config_files.tar.gz
4.1G mongo_data.tar.gz
231M pgsql_data.tar.gz
699M pulp_data.tar
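The sizes above are consistent with how GNU tar's --listed-incremental option behaves: if the snapshot (.snar) file named on the command line does not exist, tar creates it and writes a full (level-0) archive; only when the snapshot from the previous run is present does it archive just the files that changed since then. A minimal sketch, reusing the backtick invocation style from the diff above (the path being archived is only an example standing in for CONFIGS):

snapshot = '.config.snar'

if File.exist?(snapshot)
  puts 'Snapshot present: tar archives only files changed since the last run.'
else
  puts 'Snapshot missing: tar creates it and writes a full (level-0) archive.'
end

# Same invocation style as katello-backup; '/etc/foreman' is illustrative.
`tar --selinux --create --gzip --file=config_files.tar.gz --listed-incremental=#{snapshot} /etc/foreman`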
*** Bug 1446683 has been marked as a duplicate of this bug. ***
Created redmine issue http://projects.theforeman.org/issues/19446 from this bug
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/19446 has been resolved.
Verified in Satellite 6.2.9 Async. The backup script is now properly limiting incremental backups.

-bash-4.2# katello-backup . --incremental katello-backup-2017-05-15T18\:05\:40+02\:00/
Starting backup: 2017-05-15 21:55:26 +0200
Creating backup folder ./katello-backup-2017-05-15T21:55:26+02:00
Generating metadata ...
/opt/theforeman/tfm/root/usr/share/gems/gems/foreman_theme_satellite-0.1.42/app/models/concerns/satellite_packages.rb:4: warning: already initialized constant Katello::Ping::PACKAGES
/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.0.125/app/models/katello/ping.rb:7: warning: previous definition of PACKAGES was here
Backing up config files... Done.
Redirecting to /bin/systemctl stop foreman-tasks.service
Redirecting to /bin/systemctl stop httpd.service
Redirecting to /bin/systemctl stop pulp_workers.service
Redirecting to /bin/systemctl stop foreman-proxy.service
Redirecting to /bin/systemctl stop pulp_streamer.service
Redirecting to /bin/systemctl stop pulp_resource_manager.service
Redirecting to /bin/systemctl stop pulp_celerybeat.service
Redirecting to /bin/systemctl stop smart_proxy_dynflow_core.service
Redirecting to /bin/systemctl stop tomcat.service
Redirecting to /bin/systemctl stop squid.service
Redirecting to /bin/systemctl stop qdrouterd.service
Redirecting to /bin/systemctl stop qpidd.service
Redirecting to /bin/systemctl stop postgresql.service
Redirecting to /bin/systemctl stop mongod.service
Backing up postgres db...
tar: Removing leading `/' from member names
Done.
Backing up mongo db...
tar: Removing leading `/' from member names
Done.
Backing up Pulp data...
tar: Removing leading `/' from member names
Done.
Redirecting to /bin/systemctl start mongod.service
Redirecting to /bin/systemctl start postgresql.service
Redirecting to /bin/systemctl start qpidd.service
Redirecting to /bin/systemctl start qdrouterd.service
Redirecting to /bin/systemctl start squid.service
Redirecting to /bin/systemctl start tomcat.service
Redirecting to /bin/systemctl start smart_proxy_dynflow_core.service
Redirecting to /bin/systemctl start pulp_celerybeat.service
Redirecting to /bin/systemctl start pulp_resource_manager.service
Redirecting to /bin/systemctl start pulp_streamer.service
Redirecting to /bin/systemctl start foreman-proxy.service
Redirecting to /bin/systemctl start pulp_workers.service
Redirecting to /bin/systemctl start httpd.service
Redirecting to /bin/systemctl start foreman-tasks.service
Done with backup: 2017-05-15 21:57:45 +0200

**** BACKUP Complete, contents can be found in: ./katello-backup-2017-05-15T21:55:26+02:00 ****

-bash-4.2# ll katello-backup-2017-05-15T
ls: cannot access katello-backup-2017-05-15T: No such file or directory
-bash-4.2# ll
total 0
drwxrwx---. 2 root postgres 199 May 15 18:06 katello-backup-2017-05-15T18:05:40+02:00
drwxrwx---. 2 root postgres 199 May 15 21:57 katello-backup-2017-05-15T21:55:26+02:00
-bash-4.2# ll katello-backup-2017-05-15T21\:55\:26+02\:00/
total 656M
-rw-r--r--. 1 root root 605K May 15 21:55 config_files.tar.gz
-rw-r--r--. 1 root root  346 May 15 21:57 metadata
-rw-r--r--. 1 root root 181M May 15 21:56 mongo_data.tar.gz
-rw-r--r--. 1 root root  27M May 15 21:56 pgsql_data.tar.gz
-rw-r--r--. 1 root root 448M May 15 21:56 pulp_data.tar
-bash-4.2#
-bash-4.2#
-bash-4.2# space
1.3G    ./katello-backup-2017-05-15T18:05:40+02:00
659M    ./katello-backup-2017-05-15T21:55:26+02:00
1.9G    .
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1234