Bug 2159839 - when creating a backup on rhel7 and restoring on rhel8, the restore process will fail with permission issues
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Satellite Maintain
Version: 6.11.0
Hardware: All
OS: All
Priority: high
Severity: high
Target Milestone: 6.14.0
Assignee: Evgeni Golov
QA Contact: Lukas Pramuk
URL:
Whiteboard:
Duplicates: 2158896 (view as bug list)
Depends On:
Blocks:
 
Reported: 2023-01-10 22:30 UTC by Waldirio M Pinheiro
Modified: 2024-08-28 16:42 UTC (History)
CC: 14 users

Fixed In Version: rubygem-foreman_maintain-1.3.3
Doc Type:
Doc Text:
Clone Of:
Cloned As: 2238348 (view as bug list)
Environment:
Last Closed: 2023-11-08 14:18:13 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Foreman Issue Tracker 36578 0 High New restore sometimes fails when puppet or katello-agent features are enabled 2023-07-11 08:00:14 UTC
Red Hat Issue Tracker SAT-18664 0 None None None 2023-06-27 19:36:47 UTC
Red Hat Product Errata RHSA-2023:6818 0 None None None 2023-11-08 14:19:08 UTC

Description Waldirio M Pinheiro 2023-01-10 22:30:55 UTC
Description of problem:
Currently, many customers create a backup of Satellite 6.11 on rhel7 and restore it on a fresh server on rhel8. The issue is that some system users have different UIDs on rhel7 and rhel8, causing permission issues during the restore process.

Version-Release number of selected component (if applicable):
6.11, 6.12

How reproducible:
100%

Steps to Reproduce:
1. Create a satellite on 6.11@rhel7
2. Create a backup using "foreman-maintain backup ..."
3. Install a fresh rhel8
4. Install a fresh 6.11 satellite over rhel8
   Note: some users will have a different uid, for example qpidd and puppet
5. Restore the backup using "foreman-maintain restore ..."

Actual results:
The process fails because some restored files/directories keep the original ownership and permissions from the rhel7 environment

Expected results:
Restore finishes correctly with no issues; additionally, all permissions are fixed up where necessary.


Additional info:

Comment 10 Eric Helms 2023-02-02 14:00:52 UTC
*** Bug 2158896 has been marked as a duplicate of this bug. ***

Comment 15 Evgeni Golov 2023-07-10 19:29:20 UTC
I was finally able to reproduce this bug.

It only happens:
- for backups that have either Puppet or Katello-Agent/Qpidd features enabled
- when restoring directly with foreman-maintain (and not satellite-clone) (yes, this is absolutely supported, just limits the impact)
- when restoring to a system that does not yet have the same features enabled (this is *technically* unsupported, as we document in [1] that the system to restore to needs to have "the same configuration", but do not elaborate exactly which bits need to be "same")

The issue is that when the puppetserver or qpid-cpp-server packages are not installed while we unpack the backup, the files that should be owned by `puppet` or `qpidd` are extracted with their numeric ownership instead.
If the packages (and thus the users) were already present, tar would be able to look up the correct UID/GID combination (as the tarball *contains* the names!) and the restore would work.
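This tar behavior can be seen without a Satellite at all. A minimal sketch (any Linux user, scratch directory only, not from the actual backup tooling) showing that each archive header stores the owner both as a name and as numeric IDs:

```shell
# Create a tiny archive and list it twice: once with name-based ownership,
# once with --numeric-owner. Both views come from the same header.
tmp=$(mktemp -d)
echo test > "$tmp/lock"
tar -C "$tmp" -cf "$tmp/backup.tar" lock
tar -tvf "$tmp/backup.tar"                  # owner shown as name/name
tar --numeric-owner -tvf "$tmp/backup.tar"  # same entry shown as uid/gid
rm -rf "$tmp"
```

When extracting as root, GNU tar resolves the stored names first and falls back to the numeric IDs only when no such user or group exists on the system, which is exactly what bites here.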

satellite-clone avoids this issue, as it checks whether the backup has puppet/qpidd and pre-installs the packages [2].

A viable workaround for users who do not wish to use satellite-clone is to install puppetserver/qpid-cpp-server on the system before running the restore, or to follow the documentation and enable those features on the target system before doing the restore (but really, installing the packages is enough).

[1] https://access.redhat.com/documentation/en-us/red_hat_satellite/6.11/html/administering_red_hat_satellite/restoring_server_or_smart_proxy_from_a_backup_admin
[2] https://github.com/RedHatSatellite/satellite-clone/commit/8b70ae2b66b7f1cb125cf5868b3b4397618a5990
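The workaround boils down to two commands. A hedged sketch, assuming the backup was taken with the puppet and/or katello-agent features enabled; the backup directory is a placeholder, not from this bug:

```shell
# Pre-install the packages that own the affected files, so tar can map the
# puppet/qpidd names in the archive to this system's UIDs/GIDs.
yum install -y puppetserver qpid-cpp-server
# Then run the restore as usual:
satellite-maintain restore -y /var/backup/<backup-dir>
```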

Comment 16 Evgeni Golov 2023-07-11 08:00:12 UTC
Created redmine issue https://projects.theforeman.org/issues/36578 from this bug

Comment 17 Evgeni Golov 2023-07-11 12:56:18 UTC
I've written how to reproduce this without el7/el8 on a 6.14 in https://github.com/theforeman/foreman_maintain/pull/744#issuecomment-1630638289

Comment 24 Lukas Pramuk 2023-08-29 16:35:31 UTC
VERIFIED.

@Satellite 6.14.0 Snap 13
rubygem-foreman_maintain-1.3.5-1.el8sat.noarch

by the following manual reproducer:

1) Enable katello-agent feature 
# satellite-installer --foreman-proxy-content-enable-katello-agent true

2) Create a backup
# satellite-maintain backup offline -y /var/backup

3) On another machine with the same hostname install satellite with the defaults
(=katello-agent is disabled and not installed)

4) Restore from the backup made on 1st machine
# satellite-maintain restore -y /var/backup/satellite-backup-2023-08-29-08-04-05

REPRO:

--------------------------------------------------------------------------------
Restore configs from backup: 
- Restoring configs                                                   [OK]      
--------------------------------------------------------------------------------
Run installer reset: 
| Installer reset                                                     [FAIL]    
Failed executing yes | satellite-installer -v --reset-data , exit status 6:
...
2023-08-29 12:11:39 [ERROR ] [configure] Systemd start for qpidd failed!
2023-08-29 12:11:39 [ERROR ] [configure] journalctl log for qpidd:
2023-08-29 12:11:39 [ERROR ] [configure] -- Logs begin at Tue 2023-08-29 11:19:02 EDT, end at Tue 2023-08-29 12:11:39 EDT. --
2023-08-29 12:11:39 [ERROR ] [configure] Aug 29 12:10:09 satellite.example.com systemd[1]: Starting An AMQP message broker daemon....
2023-08-29 12:11:39 [ERROR ] [configure] Aug 29 12:10:09 satellite.example.com qpidd[39796]: 2023-08-29 12:10:09 [Broker] critical Unexpected error: Cannot open lock file /var/lib/qpidd/lock: Permission denied
2023-08-29 12:11:39 [ERROR ] [configure] Aug 29 12:10:09 satellite.example.com qpidd[39796]: 2023-08-29 12:10:09 [Broker] critical Unexpected error: Cannot open lock file /var/lib/qpidd/lock: Permission denied
2023-08-29 12:11:39 [ERROR ] [configure] Aug 29 12:10:09 satellite.example.com systemd[1]: qpidd.service: Main process exited, code=exited, status=1/FAILURE
2023-08-29 12:11:39 [ERROR ] [configure] Aug 29 12:11:39 satellite.example.com systemd[1]: qpidd.service: Start-post operation timed out. Stopping.
2023-08-29 12:11:39 [ERROR ] [configure] Aug 29 12:11:39 satellite.example.com systemd[1]: qpidd.service: Failed with result 'exit-code'.
2023-08-29 12:11:39 [ERROR ] [configure] Aug 29 12:11:39 satellite.example.com systemd[1]: Failed to start An AMQP message broker daemon..
2023-08-29 12:11:39 [ERROR ] [configure] /Stage[main]/Qpid::Service/Service[qpidd]/ensure: change from 'stopped' to 'running' failed: Systemd start for qpidd failed!

vs.

FIX:

--------------------------------------------------------------------------------
Ensure required packages are installed before restore: 
/ Installing required packages                                        [OK]      
--------------------------------------------------------------------------------
Restore configs from backup: 
/ Restoring configs                                                   [OK]      
--------------------------------------------------------------------------------
Run installer reset: 
| Installer reset                                                     [OK]      
--------------------------------------------------------------------------------

>>> restore now ensures required users/groups exist prior to restoring configs (so that users/groups in the archive can be mapped to those on the system)
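The idea behind this check can be approximated in a few lines of shell. A sketch only, not the shipped foreman_maintain code, and "config_files.tar.gz" is an assumed archive name:

```shell
# List the owner names recorded in the backup archive and flag any that do
# not yet exist on this system, before anything is extracted.
archive=config_files.tar.gz
tar -tvzf "$archive" \
  | awk '{ split($2, owner, "/"); print owner[1] }' \
  | sort -u \
  | while read -r user; do
      id "$user" >/dev/null 2>&1 || echo "missing user: $user"
    done
```

In the real fix the missing users/groups are brought in by installing the corresponding packages, rather than merely reported.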

Comment 31 errata-xmlrpc 2023-11-08 14:18:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Satellite 6.14 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6818

