Bug 2054758

Summary: Satellite 6.10 clone is failing with user pulp doesn't exist
Product: Red Hat Satellite
Component: Satellite Clone
Version: 6.10.0
Status: CLOSED ERRATA
Severity: high
Priority: high
Reporter: Shravan Kumar Tiwari <shtiwari>
Assignee: Evgeni Golov <egolov>
QA Contact: Lukas Pramuk <lpramuk>
CC: ahumbe, alex.bron, egolov, ehelms, fandrieu, mkalyat, mmccune, pcreech, sussen
Target Milestone: 6.11.0
Target Release: Unused
Keywords: Triaged
Hardware: Unspecified
OS: Unspecified
Fixed In Version: satellite-clone-3.0.0
Clones: 2093413 (view as bug list)
Last Closed: 2022-07-05 14:33:19 UTC
Type: Bug

Description Shravan Kumar Tiwari 2022-02-15 16:26:45 UTC
Description of problem:

The customer is performing a clone of their Satellite and ran into a MongoDB issue (hot-fix provided in BZ https://bugzilla.redhat.com/show_bug.cgi?id=2048927).

After applying the hot-fix, the cloning process proceeded further but failed with the following:

"Detected incorrect permissions on /var/lib/pulp/assets but user pulp doesn't exist"

Version-Release number of selected component (if applicable):
Satellite 6.10


Additional info:

- On the source server the user pulp is defined, but on the target server that user does not exist.

- A comparison between the source and target servers shows a difference in installed packages.


Source server:
==============

# rpm -qa | grep -i pulp | sort
libmodulemd-1.7.0-1.pulp.el7sat.x86_64
pulp-client-1.0-3.noarch
pulpcore-selinux-1.2.7-1.el7pc.x86_64
python3-pulp-ansible-0.9.0-1.el7pc.noarch
python3-pulp-certguard-1.4.0-1.el7pc.noarch
python3-pulp-container-2.8.1-0.3.HOTFIXRHBZ2026277.el7pc.noarch
python3-pulpcore-3.14.9-1.el7pc.noarch
python3-pulp-file-1.8.2-1.el7pc.noarch
python3-pulp-rpm-3.14.7-1.el7pc.noarch
tfm-rubygem-pulp_ansible_client-0.8.0-1.el7sat.noarch
tfm-rubygem-pulp_certguard_client-1.4.0-1.el7sat.noarch
tfm-rubygem-pulp_container_client-2.7.0-1.el7sat.noarch
tfm-rubygem-pulpcore_client-3.14.1-1.el7sat.noarch
tfm-rubygem-pulp_deb_client-2.13.0-1.el7sat.noarch
tfm-rubygem-pulp_file_client-1.8.2-1.el7sat.noarch
tfm-rubygem-pulp_rpm_client-3.13.3-1.el7sat.noarch
tfm-rubygem-smart_proxy_pulp-3.0.0-1.el7sat.noarch

Target server:
==============

# rpm -qa  | grep -i pulp | sort
pulpcore-selinux-1.2.7-1.el7pc.x86_64
python3-pulpcore-3.14.9-1.el7pc.noarch
tfm-rubygem-pulp_ansible_client-0.8.0-1.el7sat.noarch
tfm-rubygem-pulp_certguard_client-1.4.0-1.el7sat.noarch
tfm-rubygem-pulp_container_client-2.7.0-1.el7sat.noarch
tfm-rubygem-pulpcore_client-3.14.1-1.el7sat.noarch
tfm-rubygem-pulp_deb_client-2.13.0-1.el7sat.noarch
tfm-rubygem-pulp_file_client-1.8.2-1.el7sat.noarch
tfm-rubygem-pulp_rpm_client-3.13.3-1.el7sat.noarch
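
The two listings above can be compared mechanically. A sketch (the package names below are a small sample standing in for the real `rpm -qa | grep -i pulp | sort` output on each server):

```shell
# Sketch: diff two sorted package lists, one per server, with comm.
SRC=$(mktemp) TGT=$(mktemp)
printf '%s\n' pulp-client python3-pulp-rpm python3-pulpcore | sort > "$SRC"
printf '%s\n' python3-pulpcore | sort > "$TGT"
# comm -23 prints lines only in the first file, i.e. packages that are
# installed on the source but missing on the target:
comm -23 "$SRC" "$TGT"
```

With the sample data this prints pulp-client and python3-pulp-rpm, the packages the target lacks.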

Comment 4 Evgeni Golov 2022-02-17 12:49:25 UTC
and to close the circle from an out-of-BZ mail thread we had:

Just to be sure, the customer is using the procedure described in https://access.redhat.com/documentation/en-us/red_hat_satellite/6.10/html/upgrading_and_updating_red_hat_satellite/cloning_satellite_server and there using the alternative "rsync" way to get the pulp data over to the new system?

If so, they can work around the issue by pre-creating the pulp user like this:
useradd --home-dir /var/lib/pulp --no-create-home --system --shell /sbin/nologin pulp
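
Before re-running the clone, the customer can confirm the workaround took effect; a minimal check (a sketch, assuming the standard glibc `getent` tool is available):

```shell
# Sketch: verify the pulp account exists before re-running satellite-clone.
if getent passwd pulp >/dev/null; then
    echo "pulp user present"
else
    echo "pulp user missing - run the useradd workaround first"
fi
```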

We'll work on a proper fix, but at least the customer can be unblocked in the meantime.

Comment 6 Evgeni Golov 2022-03-16 06:53:55 UTC
huh, seems I never answered the needinfos here:

- yes, we can assume the cloned system is healthy (as long as there are no external capsules involved, which is the only step that was omitted)
- yes, doc bug please, yum should be used in that case
- if the certificate has both the old and new names, satellite-change-hostname should be able to rename the machine without re-registering the clients. Emphasis on *should* as that's not my primary knowledge domain.

Comment 9 Lukas Pramuk 2022-05-18 08:56:21 UTC
VERIFIED.

@Satellite 6.11.0 Snap20
satellite-clone-3.1.0-1.el8sat.noarch


1) Create/Have a 6.11.0 backup

2) Copy the backup to the target machine and remove pulp_data.tar to perform the clone without pulp data
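
Step 2 can be sketched as follows (the directory and filenames here are stand-ins mimicking a typical Satellite backup layout, not the real paths):

```shell
# Sketch: stage the copied backup and drop pulp_data.tar so the clone
# runs without pulp data.
BACKUP_DIR=$(mktemp -d)   # stand-in for the backup directory on the target
touch "$BACKUP_DIR/config_files.tar.gz" \
      "$BACKUP_DIR/pgsql_data.tar.gz" \
      "$BACKUP_DIR/pulp_data.tar"      # dummy files mimicking a backup
rm -f "$BACKUP_DIR/pulp_data.tar"      # satellite-clone then skips pulp data
ls -1 "$BACKUP_DIR"
```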

# satellite-clone -y
...

TASK [satellite-clone : include_tasks] ***********************************************************************************************************************
Wednesday 18 May 2022  04:47:38 -0400 (0:02:04.423)       0:08:00.065 ********* 
included: /usr/share/satellite-clone/roles/satellite-clone/tasks/ensure_pulp_data_permissions.yml for sat.example.com

TASK [satellite-clone : Ensure pulp group exists] ************************************************************************************************************
Wednesday 18 May 2022  04:47:38 -0400 (0:00:00.079)       0:08:00.145 ********* 
changed: [sat.example.com]

TASK [satellite-clone : Ensure pulp user exists] *************************************************************************************************************
Wednesday 18 May 2022  04:47:39 -0400 (0:00:00.560)       0:08:00.705 ********* 
changed: [sat.example.com]

TASK [satellite-clone : Check /var/lib/pulp ownership] *******************************************************************************************************
Wednesday 18 May 2022  04:47:40 -0400 (0:00:00.614)       0:08:01.320 ********* 
ok: [sat.example.com]

TASK [satellite-clone : Correct ownership of /var/lib/pulp] **************************************************************************************************
Wednesday 18 May 2022  04:47:40 -0400 (0:00:00.379)       0:08:01.699 ********* 
skipping: [sat.example.com]

...

>>> satellite clone successfully ensures pulp user/group exist

Comment 18 errata-xmlrpc 2022-07-05 14:33:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Satellite 6.11 Release), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5498