Bug 2212996 - snapshot backup/restore broken
Summary: snapshot backup/restore broken
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Satellite Maintain
Version: 6.13.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: 6.14.0
Assignee: Evgeni Golov
QA Contact: Lukas Pramuk
URL:
Whiteboard:
Duplicates: 2171797 (view as bug list)
Depends On:
Blocks:
 
Reported: 2023-06-06 20:09 UTC by Arzoo Singh
Modified: 2023-11-15 16:30 UTC
CC List: 10 users

Fixed In Version: rubygem-foreman_maintain-1.3.5
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-08 14:19:28 UTC
Target Upstream Version:
Embargoed:




Links
System                 | ID             | Private | Priority | Status | Summary                        | Last Updated
-----------------------|----------------|---------|----------|--------|--------------------------------|------------------------
Foreman Issue Tracker  | 36480          | 0       | Normal   | Closed | snapshot backup/restore broken | 2023-11-08 09:44:48 UTC
Red Hat Issue Tracker  | SAT-18232      | 0       | None     | None   | None                           | 2023-06-08 12:43:04 UTC
Red Hat Product Errata | RHSA-2023:6818 | 0       | None     | None   | None                           | 2023-11-08 14:19:41 UTC

Description Arzoo Singh 2023-06-06 20:09:19 UTC
Description of problem: 

During restore tests with Satellite 6.12 and Satellite 6.13 I noticed that foreman-maintain backup snapshot creates broken pgsql_data.tar.gz archives that cannot be restored with foreman-maintain restore. This situation can also be reproduced with the latest nightly version of Foreman and Katello.

Basically there are two problems when creating a snapshot backup with foreman-maintain backup snapshot:

1.) An archive with broken paths (the '/' after 'data' is missing) will be created:

[root@foreman ~]# tar tfv /var/backup/katello-backup-2023-06-06-13-17-36/pgsql_data.tar.gz | head -n 20
-rw------- postgres/postgres 3 2023-04-05 21:04 var/snap/pgsql/var/lib/pgsql/dataPG_VERSION
drwx------ postgres/postgres 0 2023-06-03 22:19 var/snap/pgsql/var/lib/pgsql/database/
drwx------ postgres/postgres 0 2023-06-06 13:10 var/snap/pgsql/var/lib/pgsql/database/1/
drwx------ postgres/postgres 0 2023-04-05 21:04 var/snap/pgsql/var/lib/pgsql/database/13448/
drwx------ postgres/postgres 0 2023-06-06 13:10 var/snap/pgsql/var/lib/pgsql/database/13449/
drwx------ postgres/postgres 0 2023-06-06 13:10 var/snap/pgsql/var/lib/pgsql/database/159342/
drwx------ postgres/postgres 0 2023-06-06 13:10 var/snap/pgsql/var/lib/pgsql/database/161204/
drwx------ postgres/postgres 0 2023-06-06 13:09 var/snap/pgsql/var/lib/pgsql/database/166949/
drwx------ postgres/postgres 0 2023-04-13 10:54 var/snap/pgsql/var/lib/pgsql/database/pgsql_tmp/
-rw------- postgres/postgres 30 2023-06-06 13:09 var/snap/pgsql/var/lib/pgsql/datacurrent_logfiles
drwx------ postgres/postgres 0 2023-06-06 13:10 var/snap/pgsql/var/lib/pgsql/dataglobal/
drwx------ postgres/postgres 0 2023-06-06 13:07 var/snap/pgsql/var/lib/pgsql/datalog/
drwx------ postgres/postgres 0 2023-04-05 21:04 var/snap/pgsql/var/lib/pgsql/datapg_commit_ts/
drwx------ postgres/postgres 0 2023-04-05 21:04 var/snap/pgsql/var/lib/pgsql/datapg_dynshmem/
-rw-r----- postgres/postgres 697 2023-06-02 20:44 var/snap/pgsql/var/lib/pgsql/datapg_hba.conf
-rw-r----- postgres/postgres 47 2023-04-05 21:05 var/snap/pgsql/var/lib/pgsql/datapg_ident.conf
drwx------ postgres/postgres 0 2023-06-06 13:19 var/snap/pgsql/var/lib/pgsql/datapg_logical/
drwx------ postgres/postgres 0 2023-04-05 21:04 var/snap/pgsql/var/lib/pgsql/datapg_logical/mappings/
drwx------ postgres/postgres 0 2023-04-05 21:04 var/snap/pgsql/var/lib/pgsql/datapg_logical/snapshots/
drwx------ postgres/postgres 0 2023-04-05 21:04 var/snap/pgsql/var/lib/pgsql/datapg_multixact/
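
For illustration only (the /tmp paths below are hypothetical): this matches how GNU tar's --transform option behaves with the s,^,<prefix>,S expression that backup_local uses (see the diff further down) - the prefix is prepended verbatim, so a prefix without a trailing '/' is glued directly onto each top-level member name:

# mkdir -p /tmp/pgdemo && touch /tmp/pgdemo/PG_VERSION
# cd /tmp/pgdemo && tar cf /tmp/pgdemo.tar --transform 's,^,var/snap/pgsql/var/lib/pgsql/data,S' *
# tar tf /tmp/pgdemo.tar
var/snap/pgsql/var/lib/pgsql/dataPG_VERSION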

2.) Even if you fix 1.), the contents of the archive are unpacked to the wrong place during the restore, which leaves the database completely empty afterwards - as if Foreman/Satellite had been freshly installed.

# create backup
foreman-maintain backup snapshot /var/backup

# Fix first bug (missing '/')
cd /var/backup/katello-backup-2023-06-06-13-17-36/
tar xfz pgsql_data.tar.gz 
cd var/snap/pgsql/var/lib/pgsql
for file in data*; do mv "$file" "${file#data}"; done
mkdir -p data && chown postgres: data && chmod 700 data
mv * data
cd /var/backup/katello-backup-2023-06-06-13-17-36/
tar cfz pgsql_data-fixed.tar.gz var/snap/pgsql/var/lib/pgsql/data/
rm -f pgsql_data.tar.gz 
mv pgsql_data-fixed.tar.gz pgsql_data.tar.gz
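
# double-check the rebuilt archive (output illustrative, entry order may differ):
# the '/' after 'data' is back
tar tf pgsql_data.tar.gz | head -n 2
var/snap/pgsql/var/lib/pgsql/data/
var/snap/pgsql/var/lib/pgsql/data/PG_VERSION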

# size of /var/lib/pgsql/data before restore
[root@foreman ~]# du -hs /var/lib/pgsql/data
3.9G /var/lib/pgsql/data

# restore 
foreman-maintain restore /var/backup/katello-backup-2023-06-06-14-06-10/

# size of /var/lib/pgsql/data after restore
[root@foreman ~]# du -hs /var/lib/pgsql/data
598M /var/lib/pgsql/data

# satellite/foreman is now completely empty

# actual data is in the wrong directory: 
du -hs /var/snap/pgsql/var/lib/pgsql/data/
3.9G /var/snap/pgsql/var/lib/pgsql/data/

The cause, if I understand it correctly, is that the backup_local method makes no distinction between the source and the destination directory.
For an offline backup this works, because the backup is created from the real PostgreSQL data directory (feature(:foreman_database).data_dir) and is unpacked there again on restore.
For a snapshot backup, however, the source directory is the snapshot directory (@mount_dir) while the destination should be the PostgreSQL data directory (feature(:foreman_database).data_dir) - yet on restore the archive is unpacked to @mount_dir instead.

A possible solution would be to have the backup_local method accept an additional restore_dir option, which must be set to the postgres data directory during a backup:

diff --git a/definitions/procedures/backup/offline/candlepin_db.rb b/definitions/procedures/backup/offline/candlepin_db.rb
index f00a641..5d2c943 100644
--- a/definitions/procedures/backup/offline/candlepin_db.rb
+++ b/definitions/procedures/backup/offline/candlepin_db.rb
@@ -36,7 +36,8 @@ module Procedures::Backup
             pg_backup_file,
             :listed_incremental => File.join(@backup_dir, '.postgres.snar'),
             :volume_size => @tar_volume_size,
-            :data_dir => pg_data_dir
+            :data_dir => pg_data_dir,
+            :restore_dir => feature(:candlepin_database).data_dir
           )
         end
       end
diff --git a/definitions/procedures/backup/offline/foreman_db.rb b/definitions/procedures/backup/offline/foreman_db.rb
index cd73c64..4285bb2 100644
--- a/definitions/procedures/backup/offline/foreman_db.rb
+++ b/definitions/procedures/backup/offline/foreman_db.rb
@@ -44,6 +44,7 @@ module Procedures::Backup
           :listed_incremental => File.join(@backup_dir, '.postgres.snar'),
           :volume_size => @tar_volume_size,
           :data_dir => pg_dir,
+          :restore_dir => feature(:foreman_database).data_dir,
           :command => cmd
         )
       end
diff --git a/definitions/procedures/backup/offline/pulpcore_db.rb b/definitions/procedures/backup/offline/pulpcore_db.rb
index 5aba2ef..94a79a7 100644
--- a/definitions/procedures/backup/offline/pulpcore_db.rb
+++ b/definitions/procedures/backup/offline/pulpcore_db.rb
@@ -36,7 +36,8 @@ module Procedures::Backup
             pg_backup_file,
             :listed_incremental => File.join(@backup_dir, '.postgres.snar'),
             :volume_size => @tar_volume_size,
-            :data_dir => pg_data_dir
+            :data_dir => pg_data_dir,
+            :restore_dir => feature(:pulpcore_database).data_dir
           )
         end
       end
diff --git a/lib/foreman_maintain/concerns/base_database.rb b/lib/foreman_maintain/concerns/base_database.rb
index 1fea0d5..33c4446 100644
--- a/lib/foreman_maintain/concerns/base_database.rb
+++ b/lib/foreman_maintain/concerns/base_database.rb
@@ -101,13 +101,14 @@ module ForemanMaintain

       def backup_local(backup_file, extra_tar_options = {})
         dir = extra_tar_options.fetch(:data_dir, data_dir)
+        restore_dir = extra_tar_options.fetch(:restore_dir, data_dir)
         command = extra_tar_options.fetch(:command, 'create')

         FileUtils.cd(dir) do
           tar_options = {
             :archive => backup_file,
             :command => command,
-            :transform => "s,^,#{dir[1..]},S",
+            :transform => "s,^,#{restore_dir[1..]},S",
             :files => '*',
           }.merge(extra_tar_options)
           feature(:tar).run(tar_options)
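
(If this approach works out, a snapshot backup should record its members under the real data directory path, so that a restore unpacks them straight into place. Illustrative listing only, assuming feature(:foreman_database).data_dir resolves to /var/lib/pgsql/data/ with a trailing '/', which would also explain why offline backups do not lose the '/':)

# tar tf pgsql_data.tar.gz | head -n 3
var/lib/pgsql/data/PG_VERSION
var/lib/pgsql/data/base/
var/lib/pgsql/data/global/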




Version-Release number of selected component (if applicable):


How reproducible: Easily


Steps to Reproduce:
1. Create a snapshot backup on Satellite 6.12 or 6.13
2. Inspect the resulting pgsql_data.tar.gz
3. The tar archive is broken and cannot be restored with "satellite-maintain restore" (see the commands below)
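
A condensed reproducer (directory names are illustrative; the backup directory name contains a timestamp):

# satellite-maintain backup snapshot /var/backup
# tar tf /var/backup/satellite-backup-*/pgsql_data.tar.gz | grep PG_VERSION
var/snap/pgsql/var/lib/pgsql/dataPG_VERSION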

Actual results: The snapshot backup produces a pgsql_data.tar.gz with broken paths, and restoring it leaves the database empty.


Expected results: Snapshot backup and restore should complete without issues and restore all data.


Additional info: Upstream bug - https://projects.theforeman.org/issues/36480
Pull request : https://github.com/theforeman/foreman_maintain/pull/738

Comment 2 Bryan Kearney 2023-08-16 12:02:56 UTC
Moving this bug to POST for triage into Satellite since the upstream issue https://projects.theforeman.org/issues/36480 has been resolved.

Comment 3 Lukas Pramuk 2023-09-05 16:39:23 UTC
VERIFIED.

@Satellite 6.14.0 Snap13
rubygem-foreman_maintain-1.3.5-1.el8sat.noarch

by the following manual reproducer:

1) Have a Satellite with /var/lib/pulp and /var/lib/pgsql/data mounted from LVs

2) Enable and sync RHEL8 repositories

3) Run snapshot backup
# satellite-maintain backup snapshot /var/backup

4) Enable and sync some other repos - RHEL8 Kickstart repositories

5) Run restore from the backup
# satellite-maintain restore -y /var/backup/satellite-backup-*

6) Check data validity after restore (only RHEL8 repos should be present)

REPRO:

# hammer repository list --organization-id 1
---|------|---------|--------------|---------------|----
ID | NAME | PRODUCT | CONTENT TYPE | CONTENT LABEL | URL
---|------|---------|--------------|---------------|----


vs.

FIX:

# hammer repository list --organization-id 1
---|---------------------------------------------------------------|----------------------
ID | NAME                                                          | PRODUCT              
---|---------------------------------------------------------------|----------------------
2  | Red Hat Enterprise Linux 8 for x86_64 - AppStream RPMs 8      | Red Hat Enterprise...
1  | Red Hat Enterprise Linux 8 for x86_64 - BaseOS RPMs 8         | Red Hat Enterprise...
---|---------------------------------------------------------------|----------------------

Comment 5 Eric Helms 2023-10-24 15:56:05 UTC
*** Bug 2171797 has been marked as a duplicate of this bug. ***

Comment 14 errata-xmlrpc 2023-11-08 14:19:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Satellite 6.14 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6818

