This bug has been migrated to another issue tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry. The email creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 1940869 - leapp fails when doing a "yum clean all"
Summary: leapp fails when doing a "yum clean all"
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: leapp-repository
Version: 7.9
Hardware: All
OS: Linux
Priority: low
Severity: high
Target Milestone: rc
: ---
Assignee: Leapp Notifications Bot
QA Contact: upgrades-and-conversions
URL:
Whiteboard:
Duplicates: 2050153 2186874 (view as bug list)
Depends On:
Blocks: 1818088
 
Reported: 2021-03-19 12:34 UTC by Christophe Besson
Modified: 2023-09-12 11:09 UTC (History)
11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-12 11:09:28 UTC
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OAMG-4590 0 None None None 2023-05-11 07:54:11 UTC
Red Hat Issue Tracker   RHEL-3278 0 None Migrated None 2023-09-12 11:08:29 UTC
Red Hat Knowledge Base (Solution) 5233701 0 None None None 2023-05-05 09:51:52 UTC

Description Christophe Besson 2021-03-19 12:34:27 UTC
Description of problem:
Leapp fails during the step where "yum clean all" is executed in the container through systemd-nspawn.
Usually this happens when the repositories are not properly configured, in particular with a Satellite infrastructure.
But that is not the case here: other machines within the same infrastructure have been upgraded successfully.
During a test, the customer replaced the command "yum clean all" with "yum repolist all", and the upgrade then worked. The change was made directly in get_available_repo_ids() in /usr/share/leapp-repository/repositories/system_upgrade/el7toel8/libraries/rhsm.py.
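For context, the workaround simply swaps the command Leapp runs inside the container; `yum repolist all` lists repositories without touching the cache. A minimal sketch of parsing its output (illustrative only; this is not the actual leapp code, and it assumes the classic three-column yum layout):

```python
import re

def parse_repo_ids(repolist_output):
    """Extract (repo_id, status) pairs from `yum repolist all` output.

    Assumes the classic yum layout: one repo per line, the ID in the
    first column and 'enabled'/'disabled' in the status column.
    """
    repos = []
    for line in repolist_output.splitlines():
        m = re.match(r'^(\S+)\s.*\b(enabled|disabled)\b', line)
        if m:
            # yum may append "/$releasever/$basearch" to the ID; keep the bare ID
            repos.append((m.group(1).split('/')[0], m.group(2)))
    return repos
```

Header and "repolist:" footer lines contain neither status word, so they are skipped automatically.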

Version-Release number of selected component (if applicable):
leapp-0.12.0-1.el7_9.noarch
leapp-repository-0.13.0-2.el7_9.noarch

with the current leapp-data13.tar.gz.

How reproducible:
100% for the customer, but unable to reproduce internally.

Actual results:
~~~
2021-03-17 21:37:00.746 DEBUG    PID: 42336 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: Loaded plugins: enabled_repos_upload, product-id, search-disabled-repos,
2021-03-17 21:37:00.757 DEBUG    PID: 42336 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:               : subscription-manager
2021-03-17 21:37:00.925 DEBUG    PID: 42336 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: 
2021-03-17 21:37:00.935 DEBUG    PID: 42336 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: This system is not receiving updates. You can use subscription-manager on the host to register and assign subscriptions.
2021-03-17 21:37:00.947 DEBUG    PID: 42336 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: 
2021-03-17 21:37:00.960 DEBUG    PID: 42336 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: Uploading Enabled Repositories Report
2021-03-17 21:37:00.968 DEBUG    PID: 42336 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: There are no enabled repos.
2021-03-17 21:37:00.977 DEBUG    PID: 42336 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:  Run \"yum repolist all\" to see the repos you have.
2021-03-17 21:37:00.984 DEBUG    PID: 42336 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:  To enable Red Hat Subscription Management repositories:
2021-03-17 21:37:00.992 DEBUG    PID: 42336 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:      subscription-manager repos --enable <repo>
2021-03-17 21:37:01.0   DEBUG    PID: 42336 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:  To enable custom repositories:
2021-03-17 21:37:01.8   DEBUG    PID: 42336 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:      yum-config-manager --enable <repo>
2021-03-17 21:37:04.977 DEBUG    PID: 42336 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: Command ['systemd-nspawn', '--register=no', '--quiet', '-D', '/var/lib/leapp/scratch/mounts/root_/system_overlay', '--bind=/etc/hosts:/etc/hosts', '--setenv=LEAPP_NO_RHSM=0', '--setenv=LEAPP_EXPERIMENTAL=0', '--setenv=LEAPP_COMMON_TOOLS=:/etc/leapp/repos.d/system_upgrade/el7toel8/tools', '--setenv=LEAPP_COMMON_FILES=:/etc/leapp/repos.d/system_upgrade/el7toel8/files', '--setenv=LEAPP_UNSUPPORTED=0', '--setenv=LEAPP_EXECUTION_ID=c5498300-0716-4e78-ae4b-59bad5a9a998', '--setenv=LEAPP_HOSTNAME=XXXXXX', 'yum', 'clean', 'all'] failed with exit code 1.
~~~

Additional info:
- We suspect a side effect of overlayfs, but so far we are unable to find any evidence.
- From the strace of another machine that fails, we noticed it has at least 20 ext4 filesystems (including 15 that are not required during the upgrade).
- During the "rollback" triggered by the failure, leapp is unable to remove some directories used to mount overlays (e.g. /var/lib/leapp/scratch/mounts/root_/system_overlay/...); there are some EBUSY (Device or resource busy) errors.
- During the "yum clean all", we can see some operations done by the subscription-manager plugin: it fills /var/cache/rhsm/profile.json and then exits without any obvious reason. Extracting rhsm.log, we can see a strange warning (even though the file exists):
~~~
2021-03-17 21:26:56,281 [WARNING] subscription-manager:28483:MainThread @cert_sorter.py:194 - Installed product 69 not present in response from server.
~~~
=> but this does not appear to be the same issue as rhbz#1911802
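When triaging rhsm.log excerpts like the one above, filtering for warning and error lines narrows things down quickly. A trivial sketch (assumes the standard `[LEVEL]` marker rhsm.log uses; the function name is illustrative):

```python
def rhsm_problems(log_text):
    """Return only the [WARNING] and [ERROR] lines from an rhsm.log excerpt."""
    return [line for line in log_text.splitlines()
            if '[WARNING]' in line or '[ERROR]' in line]
```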

Comment 2 Pavel Odvody 2021-03-19 14:29:30 UTC
Hi Christophe,

Can you share a sosreport with us? Seeing the number of filesystems, I wonder whether they are backed by a physical device, virtual/iSCSI storage, or something else.

Comment 3 Christophe Besson 2021-03-23 16:02:34 UTC
Hi Pavel,

Here is a sosreport from a system having the issue within the same infrastructure.
-> It is not the same host, as that one has since been upgraded thanks to the workaround.
-> In this new attempt, some filesystems that are not required during the upgrade have been unmounted.

Comment 8 Christophe Besson 2021-03-24 12:57:48 UTC
I reviewed both straces; they behave the same way until the error is encountered:
~~~
42732 21:37:00.937565 write(2<pipe:[1315988]>, "There are no enabled repos.\n Run \"yum repolist all\" to see the repos you have.\n To enable Red Hat Subscription Management repositories:\n     subscription-manager repos --enable <repo>\n To enable custom repositories:\n     yum-config-manager --enable <repo>\n", 256 <unfinished ...>
42732 21:37:00.937615 <... write resumed>) = 256 <0.000028>
~~~

But after that, the behaviour of the 2nd machine is different: it ends quickly, whereas the 1st machine connects again to a remote server.
That does not change the final outcome; both exit with return code 1.

Just before the error message above, we can see the el8 repositories are written to /etc/yum.repos.d/redhat.repo:
~~~
#
# Certificate-Based Repositories
# Managed by (rhsm) subscription-manager
#
# *** This file is auto-generated.  Changes made here will be over-written. ***
# *** Use "subscription-manager repo-override --help" if you wish to make changes. ***
#
# If this file is empty and this system is subscribed consider
# a "yum repolist" to refresh available repos
#

[rhel-8-for-x86_64-baseos-rpms]
metadata_expire = 1
enabled_metadata = 1
sslclientcert = /etc/pki/entitlement/3841914744505786282.pem
baseurl = https://XXX/pulp/repos/XXX/Production/cv-vf-redhat-rhel7-to-rhel8/content/dist/rhel8/$releasever/x86_64/baseos/os
ui_repoid_vars = releasever
sslverify = 1
name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)
sslclientkey = /etc/pki/entitlement/3841914744505786282-key.pem
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled = 0                                                 <=== disabled!
sslcacert = /etc/rhsm-host/ca/katello-server-ca.pem
gpgcheck = 1

[rhel-8-for-x86_64-appstream-rpms]
metadata_expire = 1
enabled_metadata = 1
sslclientcert = /etc/pki/entitlement/3841914744505786282.pem
baseurl = https://XXX/pulp/repos/XXX/Production/cv-vf-redhat-rhel7-to-rhel8/content/dist/rhel8/$releasever/x86_64/appstream/os
ui_repoid_vars = releasever
sslverify = 1
name = Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
sslclientkey = /etc/pki/entitlement/3841914744505786282-key.pem
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
enabled = 0                                                 <=== disabled!
sslcacert = /etc/rhsm-host/ca/katello-server-ca.pem
gpgcheck = 1
~~~

That explains why yum complains, but not why repos are written with enabled=0.
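A quick way to confirm what yum actually sees is to parse the generated redhat.repo and list the sections with enabled=1. A minimal sketch using Python 3's configparser (yum's INI dialect is close enough for this check; the function name and sample content are illustrative):

```python
from configparser import ConfigParser

def enabled_sections(repo_file_text):
    """Return the section names of a yum .repo file that have enabled = 1."""
    cp = ConfigParser()
    cp.read_string(repo_file_text)
    return [s for s in cp.sections()
            if cp.get(s, 'enabled', fallback='0').strip() == '1']
```

Feeding it the contents of /etc/yum.repos.d/redhat.repo here would return an empty list, matching yum's "There are no enabled repos." complaint.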

Comparing with my test VM (which goes through the *CDN*, as I don't know Satellite), I noticed a few differences in behaviour:
* On my VM, the link to /etc/rhsm-host does NOT exist, whereas it is present in the shared straces.
* On my VM, the product cert 69.pem (el7) is present and 479.pem (el8) is not yet there; in the shared straces it is the CONTRARY.
* On my VM, I see some env vars including PATH while executing yum; I can't see them in the shared strace (not sure there is an impact, but it is odd).

Could it be a side effect of failed attempts?

Comment 9 Christophe Besson 2021-03-24 14:28:40 UTC
Hmm, please disregard my point about the rhsm-host link not existing in my test VM; that observation was made during the first "yum clean all", before the link is created...

Comment 10 Christophe Besson 2021-03-24 14:43:01 UTC
And in my test VM, repos are also written with enabled=0, but yum does not behave the same way after that.

Comment 11 Christophe Besson 2021-03-24 14:58:48 UTC
I can upload the excerpt of the strace taken during the "yum clean all" run by systemd-nspawn; it is about 9M and corresponds to the sosreport attached to this BZ.
I am not sure it will be helpful; on my side I don't know what to inspect next.

Comparing with my test VM, which is not connected to a Satellite server, the behaviour diverges once the yum plugin "enabled_repos_upload" is loaded.

Comment 12 Pavel Odvody 2021-03-24 15:17:43 UTC
Yeah, please upload the trace, I can take a look as well and see if anything stands out. Thanks Christophe!

Comment 21 Christophe Besson 2022-03-03 12:44:48 UTC
Adding another case with similar symptoms; we did not check whether the claimed workaround changes the behavior.
The Satellite side has been checked by the sysmgmt team, and the curl outputs used to check whether the machine can download the rhel8 metadata look good (HTTP 200).

2022-02-24 12:56:50.620 DEBUG    PID: 13540 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: This system is not receiving updates. You can use subscription-manager on the host to register and assign subscriptions.
2022-02-24 12:56:50.677 DEBUG    PID: 13540 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: 
2022-02-24 12:56:50.706 DEBUG    PID: 13540 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: There are no enabled repos.
2022-02-24 12:56:50.726 DEBUG    PID: 13540 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:  Run "yum repolist all" to see the repos you have.
2022-02-24 12:56:50.745 DEBUG    PID: 13540 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:  To enable Red Hat Subscription Management repositories:
2022-02-24 12:56:50.761 DEBUG    PID: 13540 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:      subscription-manager repos --enable <repo>
2022-02-24 12:56:50.779 DEBUG    PID: 13540 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:  To enable custom repositories:
2022-02-24 12:56:50.807 DEBUG    PID: 13540 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:      yum-config-manager --enable <repo>
2022-02-24 12:56:50.834 DEBUG    PID: 13540 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: Command ['systemd-nspawn', '--register=no', '--quiet', '-D', '/var/lib/leapp/scratch/mounts/root_/system_overlay', '--setenv=LEAPP_NO_RHSM=0', '--setenv=LEAPP_EXPERIMENTAL=0', '--setenv=LEAPP_COMMON_TOOLS=:/etc/leapp/repos.d/system_upgrade/el7toel8/tools', '--setenv=LEAPP_COMMON_FILES=:/etc/leapp/repos.d/system_upgrade/common/files:/etc/leapp/repos.d/system_upgrade/el7toel8/files', '--setenv=LEAPP_UNSUPPORTED=0', '--setenv=LEAPP_EXECUTION_ID=26aef12c-94c6-47f8-86b1-6afac6a8bf88', '--setenv=LEAPP_HOSTNAME=XXXXXXXXXXXXXXX', 'yum', 'clean', 'all'] failed with exit code 1.

Comment 24 Christophe Besson 2022-03-29 14:01:42 UTC
Another case with Satellite repos for which we have no workaround to offer.
Attaching privately the leapp.db corresponding to this case.

2022-03-28 14:15:30.422 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: External command has started: ['systemd-nspawn', '--register=no', '--quiet', '-D', '/var/lib/leapp/scratch/mounts/root_/system_overlay', '--setenv=LEAPP_NO_RHSM=0', '--setenv=LEAPP_EXPERIMENTAL=0', '--setenv=LEAPP_COMMON_TOOLS=:/etc/leapp/repos.d/system_upgrade/el7toel8/tools', '--setenv=LEAPP_COMMON_FILES=:/etc/leapp/repos.d/system_upgrade/common/files:/etc/leapp/repos.d/system_upgrade/el7toel8/files', '--setenv=LEAPP_UNSUPPORTED=0', '--setenv=LEAPP_EXECUTION_ID=c6695234-c12f-4841-8647-7f0e0a2af306', '--setenv=LEAPP_HOSTNAME=XXX', 'yum', 'clean', 'all']
2022-03-28 14:15:30.474 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: Host and machine ids are equal (49f3bb9bf74b4248864630e4923a58c4): refusing to link journals
2022-03-28 14:15:30.848 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: Loaded plugins: enabled_repos_upload, package_upload, product-id, search-
2022-03-28 14:15:30.868 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:               : disabled-repos, subscription-manager, tracer_upload
2022-03-28 14:15:31.129 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: 
2022-03-28 14:15:31.161 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: This system is not receiving updates. You can use subscription-manager on the host to register and assign subscriptions.
2022-03-28 14:15:31.184 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: 
2022-03-28 14:15:31.199 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: There are no enabled repos.
2022-03-28 14:15:31.214 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:  Run "yum repolist all" to see the repos you have.
2022-03-28 14:15:31.229 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: Uploading Enabled Repositories Report
2022-03-28 14:15:31.254 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:  To enable Red Hat Subscription Management repositories:
2022-03-28 14:15:31.275 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:      subscription-manager repos --enable <repo>
2022-03-28 14:15:31.295 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:  To enable custom repositories:
2022-03-28 14:15:31.314 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator:      yum-config-manager --enable <repo>
2022-03-28 14:15:32.212 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: Command ['systemd-nspawn', '--register=no', '--quiet', '-D', '/var/lib/leapp/scratch/mounts/root_/system_overlay', '--setenv=LEAPP_NO_RHSM=0', '--setenv=LEAPP_EXPERIMENTAL=0', '--setenv=LEAPP_COMMON_TOOLS=:/etc/leapp/repos.d/system_upgrade/el7toel8/tools', '--setenv=LEAPP_COMMON_FILES=:/etc/leapp/repos.d/system_upgrade/common/files:/etc/leapp/repos.d/system_upgrade/el7toel8/files', '--setenv=LEAPP_UNSUPPORTED=0', '--setenv=LEAPP_EXECUTION_ID=c6695234-c12f-4841-8647-7f0e0a2af306', '--setenv=LEAPP_HOSTNAME=olegsapbd8ci', 'yum', 'clean', 'all'] failed with exit code 1.
2022-03-28 14:15:32.237 DEBUG    PID: 16303 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: External command has finished: ['systemd-nspawn', '--register=no', '--quiet', '-D', '/var/lib/leapp/scratch/mounts/root_/system_overlay', '--setenv=LEAPP_NO_RHSM=0', '--setenv=LEAPP_EXPERIMENTAL=0', '--setenv=LEAPP_COMMON_TOOLS=:/etc/leapp/repos.d/system_upgrade/el7toel8/tools', '--setenv=LEAPP_COMMON_FILES=:/etc/leapp/repos.d/system_upgrade/common/files:/etc/leapp/repos.d/system_upgrade/el7toel8/files', '--setenv=LEAPP_UNSUPPORTED=0', '--setenv=LEAPP_EXECUTION_ID=c6695234-c12f-4841-8647-7f0e0a2af306', '--setenv=LEAPP_HOSTNAME=XXX', 'yum', 'clean', 'all']

We checked with curl the three required target repositories (e4s channel), and all of them worked (HTTP 200 return code, with repomd.xml downloaded):

curl -v --key /etc/pki/entitlement/*-key.pem --cert /etc/pki/entitlement/*[!key].pem --cacert /etc/rhsm/ca/katello-server-ca.pem https://XXX/pulp/repos/YYY/Library/CV_SAP_HANA_UPGRADE/content/e4s/rhel8/8.2/x86_64/baseos/os/repodata/repomd.xml

curl -v --key /etc/pki/entitlement/*-key.pem --cert /etc/pki/entitlement/*[!key].pem --cacert /etc/rhsm/ca/katello-server-ca.pem https://XXX/pulp/repos/YYY/Library/CV_SAP_HANA_UPGRADE/content/e4s/rhel8/8.2/x86_64/appstream/os/repodata/repomd.xml

curl -v --key /etc/pki/entitlement/*-key.pem --cert /etc/pki/entitlement/*[!key].pem --cacert /etc/rhsm/ca/katello-server-ca.pem https://XXX/pulp/repos/YYY/Library/CV_SAP_HANA_UPGRADE/content/e4s/rhel8/8.2/x86_64/sap-solutions/os/repodata/repomd.xml
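The three checks above differ only in the content path segment. For completeness, a small helper that generates the same repomd.xml URLs (the function and parameter names are illustrative; the path layout just mirrors the Pulp paths seen in this case):

```python
def repomd_urls(sat_host, org, content_view, channel, release, arch, repos):
    """Build repomd.xml probe URLs for a Satellite/Pulp Library content view."""
    tmpl = ("https://{h}/pulp/repos/{o}/Library/{cv}/content/{ch}/rhel8/"
            "{rel}/{arch}/{repo}/os/repodata/repomd.xml")
    return [tmpl.format(h=sat_host, o=org, cv=content_view, ch=channel,
                        rel=release, arch=arch, repo=r) for r in repos]
```

Each generated URL can then be probed with the same curl invocation (entitlement cert/key plus the Katello CA).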

Comment 35 Christophe Besson 2022-04-11 13:17:47 UTC
The customer shared the tarball created with the instrumented code.

The attached strace corresponds to the 2nd `yum clean all` executed during the IPU, when Leapp uses it to refresh `redhat.repo` with the "target repositories", i.e. those of RHEL *8*.

The data we have confirms the following points.

1/ Leapp is used with --channel=e4s [EXPECTED]

"LEAPP_TARGET_PRODUCT_CHANNEL=e4s" defined as an env var.

2/ At this step, 69.pem (RHEL7 product cert) has been removed and 479.pem (RHEL8 product cert) is there [EXPECTED]

3/ All the required certs, keys and cacert are present [EXPECTED]
~~~
$ find 0100-leapp-debug-1649442035.tgz/var/lib/leapp/scratch/mounts/root_/system_overlay/etc/pki/{product,product-default,entitlement} 0100-leapp-debug-1649442035.tgz/var/lib/leapp/scratch/mounts/root_/system_overlay/etc/rhsm-host/ca/katello-server-ca.pem 
0100-leapp-debug-1649442035.tgz/var/lib/leapp/scratch/mounts/root_/system_overlay/etc/pki/product
0100-leapp-debug-1649442035.tgz/var/lib/leapp/scratch/mounts/root_/system_overlay/etc/pki/product/479.pem                            <=== product cert
0100-leapp-debug-1649442035.tgz/var/lib/leapp/scratch/mounts/root_/system_overlay/etc/pki/product-default
0100-leapp-debug-1649442035.tgz/var/lib/leapp/scratch/mounts/root_/system_overlay/etc/pki/product-default/479.pem                    <=== product cert
0100-leapp-debug-1649442035.tgz/var/lib/leapp/scratch/mounts/root_/system_overlay/etc/pki/entitlement
0100-leapp-debug-1649442035.tgz/var/lib/leapp/scratch/mounts/root_/system_overlay/etc/pki/entitlement/1511877002625190611-key.pem    <=== customer key for auth
0100-leapp-debug-1649442035.tgz/var/lib/leapp/scratch/mounts/root_/system_overlay/etc/pki/entitlement/1511877002625190611.pem        <=== customer cert for auth
0100-leapp-debug-1649442035.tgz/var/lib/leapp/scratch/mounts/root_/system_overlay/etc/rhsm-host/ca/katello-server-ca.pem             <=== cacert for the Sat server
~~~

4/ The TCP connection to the Satellite server on port 443 is established, and the TLS handshake looks good. This appears to be confirmed by the container's rhsm.log:
~~~
8691  14:20:23.373173 write(179</var/lib/leapp/scratch/mounts/root_/system_overlay/var/log/rhsm/rhsm.log>, "2022-04-08 14:20:23,372 [INFO] yum:8691:MainThread @connection.py:909 - Connection built: host=SAT-HOST port=443 handler=/rhsm auth=identity_cert ca_dir=/etc/rhsm-host/ca/ insecure=False\n", 205) = 205 <0.000030>
~~~

5/ During the execution of the subscription-manager plugin, redhat.repo is updated with RHEL 8 repositories, but all of them are defined with "enabled = 0".
~~~
8691  14:20:23.648700 write(180</var/lib/leapp/scratch/mounts/root_/system_overlay/etc/yum.repos.d/redhat.repo>, 
	#
	# Certificate-Based Repositories
	# Managed by (rhsm) subscription-manager
	#
	# *** This file is auto-generated.  Changes made here will be over-written. ***
	# *** Use "subscription-manager repo-override --help" if you wish to make changes. ***
	#
	# If this file is empty and this system is subscribed consider
	# a "yum repolist" to refresh available repos
	#

	[XXXXX_Splunk_splunk]
	metadata_expire = 1
	enabled_metadata = 0
	sslclientcert = /etc/pki/entitlement/1511877002625190611.pem
	baseurl = https://SAT-HOST/pulp/repos/XXXXX/Library/CV_SAP_HANA_UPGRADE/custom/Splunk/splunk
	sslverify = 1
	name = splunk
	sslclientkey = /etc/pki/entitlement/1511877002625190611-key.pem
	enabled = 0
	sslcacert = /etc/rhsm-host/ca/katello-server-ca.pem
	gpgcheck = 0

	[rhel-8-for-x86_64-sap-netweaver-e4s-rpms]
	metadata_expire = 1
	enabled_metadata = 0
	sslclientcert = /etc/pki/entitlement/1511877002625190611.pem
	baseurl = https://SAT-HOST/pulp/repos/XXXXX/Library/CV_SAP_HANA_UPGRADE/content/e4s/rhel8/$releasever/x86_64/sap/os
	ui_repoid_vars = releasever
	sslverify = 1
	name = Red Hat Enterprise Linux 8 for x86_64 - SAP NetWeaver - Update Services for SAP Solutions (RPMs)
	sslclientkey = /etc/pki/entitlement/1511877002625190611-key.pem
	gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
	enabled = 0
	sslcacert = /etc/rhsm-host/ca/katello-server-ca.pem
	gpgcheck = 1

	[rhel-8-for-x86_64-sap-solutions-eus-rpms]
	metadata_expire = 1
	enabled_metadata = 0
	sslclientcert = /etc/pki/entitlement/1511877002625190611.pem
	baseurl = https://SAT-HOST/pulp/repos/XXXXX/Library/CV_SAP_HANA_UPGRADE/content/eus/rhel8/$releasever/x86_64/sap-solutions/os
	ui_repoid_vars = releasever
	sslverify = 1
	name = Red Hat Enterprise Linux 8 for x86_64 - SAP Solutions - Extended Update Support (RPMs)
	sslclientkey = /etc/pki/entitlement/1511877002625190611-key.pem
	gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
	enabled = 0
	sslcacert = /etc/rhsm-host/ca/katello-server-ca.pem
	gpgcheck = 1

	[rhel-8-for-x86_64-appstream-e4s-rpms]
	metadata_expire = 1
	enabled_metadata = 0
	sslclientcert = /etc/pki/entitlement/1511877002625190611.pem
	baseurl = https://SAT-HOST/pulp/repos/XXXXX/Library/CV_SAP_HANA_UPGRADE/content/e4s/rhel8/$releasever/x86_64/appstream/os
	ui_repoid_vars = releasever
	sslverify = 1
	name = Red Hat Enterprise Linux 8 for x86_64 - AppStream - Update Services for SAP Solutions (RPMs)
	sslclientkey = /etc/pki/entitlement/1511877002625190611-key.pem
	gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
	enabled = 0
	sslcacert = /etc/rhsm-host/ca/katello-server-ca.pem
	gpgcheck = 1

	[rhel-8-for-x86_64-sap-solutions-e4s-rpms]
	metadata_expire = 1
	enabled_metadata = 0
	sslclientcert = /etc/pki/entitlement/1511877002625190611.pem
	baseurl = https://SAT-HOST/pulp/repos/XXXXX/Library/CV_SAP_HANA_UPGRADE/content/e4s/rhel8/$releasever/x86_64/sap-solutions/os
	ui_repoid_vars = releasever
	sslverify = 1
	name = Red Hat Enterprise Linux 8 for x86_64 - SAP Solutions - Update Services for SAP Solutions (RPMs)
	sslclientkey = /etc/pki/entitlement/1511877002625190611-key.pem
	gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
	enabled = 0
	sslcacert = /etc/rhsm-host/ca/katello-server-ca.pem
	gpgcheck = 1

	[rhel-8-for-x86_64-baseos-e4s-rpms]
	metadata_expire = 1
	enabled_metadata = 0
	sslclientcert = /etc/pki/entitlement/1511877002625190611.pem
	baseurl = https://SAT-HOST/pulp/repos/XXXXX/Library/CV_SAP_HANA_UPGRADE/content/e4s/rhel8/$releasever/x86_64/baseos/os
	ui_repoid_vars = releasever
	sslverify = 1
	name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Update Services for SAP Solutions (RPMs)
	sslclientkey = /etc/pki/entitlement/1511877002625190611-key.pem
	gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
	enabled = 0
	sslcacert = /etc/rhsm-host/ca/katello-server-ca.pem
	gpgcheck = 1

	[rhel-8-for-x86_64-highavailability-e4s-rpms]
	metadata_expire = 1
	enabled_metadata = 0
	sslclientcert = /etc/pki/entitlement/1511877002625190611.pem
	baseurl = https://SAT-HOST
, 4096) = 4096 <0.000031>
 :
8691  14:20:23.648800 write(180</var/lib/leapp/scratch/mounts/root_/system_overlay/etc/yum.repos.d/redhat.repo>, 
	/pulp/repos/XXXXX/Library/CV_SAP_HANA_UPGRADE/content/e4s/rhel8/$releasever/x86_64/highavailability/os
	ui_repoid_vars = releasever
	sslverify = 1
	name = Red Hat Enterprise Linux 8 for x86_64 - High Availability - Update Services for SAP Solutions (RPMs)
	sslclientkey = /etc/pki/entitlement/1511877002625190611-key.pem
	gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
	enabled = 0
	sslcacert = /etc/rhsm-host/ca/katello-server-ca.pem
	gpgcheck = 1
, 470) = 470 <0.000024>
~~~

6/ Yum ends with "no enabled repos"...

8691  14:20:23.724622 read(180</var/lib/leapp/scratch/mounts/root_/system_overlay/var/lib/rhsm/cache/content_access_mode.json>, "{\"c25cb81c-0031-4578-95f4-1fdbb4910377\": \"org_environment\"}", 4096) = 59 <0.000023>
8691  14:20:23.724688 read(180</var/lib/leapp/scratch/mounts/root_/system_overlay/var/lib/rhsm/cache/content_access_mode.json>, "", 4096) = 0 <0.000017>
8691  14:20:23.724777 close(180</var/lib/leapp/scratch/mounts/root_/system_overlay/var/lib/rhsm/cache/content_access_mode.json>) = 0 <0.000019>
8691  14:20:23.724837 munmap(0x7f92149b1000, 4096) = 0 <0.000025>
8691  14:20:23.724920 stat("/etc/rhsm-host/", {st_dev=makedev(0, 40), st_ino=84170797, st_mode=S_IFDIR|0755, st_nlink=6, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=8, st_size=4096, st_atime=1649370435 /* 2022-04-07T18:27:15.247013352-0400 */, st_atime_nsec=247013352, st_mtime=1647031387 /* 2022-03-11T15:43:07.247198611-0500 */, st_mtime_nsec=247198611, st_ctime=1647031387 /* 2022-03-11T15:43:07.247198611-0500 */, st_ctime_nsec=247198611}) = 0 <0.000028>
8691  14:20:23.725178 write(1</dev/pts/0<char 136:0>>, "\nThis system is not receiving updates. You can use subscription-manager on the host to register and assign subscriptions.\n\n", 123) = 123 <0.000029>


===========================================================================

Note about the full yum output.
The I/O error occurs because I did not mount /proc into the chroot; it is not the reason the subscription-manager plugin fails.

8691  14:20:21.938218 execve("/usr/sbin/chroot", ["chroot", "/var/lib/leapp/scratch/mounts/root_/system_overlay", "yum", "-d10", "clean", "all"], ["XDG_SESSION_ID=14810", "HOSTNAME=XXX", "SELINUX_ROLE_REQUESTED=", "LEAPP_DEBUG=1", "SHELL=/bin/bash", "TERM=xterm-256color", "HISTSIZE=1000", "SSH_CLIENT=XXX 63597 22", "LEAPP_NO_RHSM=0", "SELINUX_USE_CURRENT_RANGE=", "SSH_TTY=/dev/pts/0", "LC_ALL=en_US.UTF-8", "USER=root", "LEAPP_COMMON_FILES=:/etc/leapp/repos.d/system_upgrade/common/files:/etc/leapp/repos.d/system_upgrade/el7toel8/files", ..., "LEAPP_COMMON_TOOLS=:/etc/leapp/repos.d/system_upgrade/el7toel8/tools", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/etc/leapp/repos.d/system_upgrade/el7toel8/tools", "MAIL=/var/spool/mail/root", "_=/usr/bin/strace", "LEAPP_VERBOSE=1", "PWD=/usr/share/leapp-repository/repositories/system_upgrade/common/actors/targetuserspacecreator", "LEAPP_HOSTNAME=XXX", "LANG=en_US.UTF-8", "LEAPP_UNSUPPORTED=0", "SELINUX_LEVEL_REQUESTED=", "LEAPP_EXECUTION_ID=c8b22a7c-7db6-4e8a-bb76-4edd8bd09404", "HISTCONTROL=ignoredups", "HOME=/root", "SHLVL=2", "LOGNAME=root", "SSH_CONNECTION=XXX 63597 YYY 22", "LEAPP_EXPERIMENTAL=0", "LESSOPEN=||/usr/bin/lesspipe.sh %s", "LEAPP_CURRENT_ACTOR=target_userspace_creator", "LEAPP_TARGET_PRODUCT_CHANNEL=e4s", "XDG_RUNTIME_DIR=/run/user/0", "LEAPP_CURRENT_PHASE=TargetTransactionFactsCollection"]) = 0 <0.000971>
 :
8691  14:20:22.523503 write(1</dev/pts/0<char 136:0>>, "Not loading \"rhnplugin\" plugin, as it is disabled\n", 50) = 50 <0.000030>
8691  14:20:23.178802 write(1</dev/pts/0<char 136:0>>, "Loading \"enabled_repos_upload\" plugin\n", 38) = 38 <0.000030>
8691  14:20:23.182088 write(1</dev/pts/0<char 136:0>>, "Loading \"package_upload\" plugin\n", 32) = 32 <0.000026>
8691  14:20:23.187325 write(1</dev/pts/0<char 136:0>>, "Loading \"product-id\" plugin\n", 28) = 28 <0.000031>
8691  14:20:23.192469 write(1</dev/pts/0<char 136:0>>, "Loading \"search-disabled-repos\" plugin\n", 39) = 39 <0.000031>
8691  14:20:23.195100 write(1</dev/pts/0<char 136:0>>, "Loading \"subscription-manager\" plugin\n", 38) = 38 <0.000027>
8691  14:20:23.235110 write(2</dev/pts/0<char 136:0>>, "Traceback (most recent call last):\n", 35) = 35 <0.000033>
8691  14:20:23.235203 write(2</dev/pts/0<char 136:0>>, "  File \"/usr/lib64/python2.7/site-packages/psutil/_pslinux.py\", line 309, in <module>\n", 86) = 86 <0.000021>
8691  14:20:23.237784 write(2</dev/pts/0<char 136:0>>, "    set_scputimes_ntuple(\"/proc\")\n", 34) = 34 <0.000025>
8691  14:20:23.237859 write(2</dev/pts/0<char 136:0>>, "  File \"/usr/lib64/python2.7/site-packages/psutil/_common.py\", line 407, in wrapper\n", 84) = 84 <0.000021>
8691  14:20:23.239077 write(2</dev/pts/0<char 136:0>>, "    ret = cache[key] = fun(*args, **kwargs)\n", 44) = 44 <0.000023>
8691  14:20:23.239145 write(2</dev/pts/0<char 136:0>>, "  File \"/usr/lib64/python2.7/site-packages/psutil/_pslinux.py\", line 276, in set_scputimes_ntuple\n", 98) = 98 <0.000021>
8691  14:20:23.239291 write(2</dev/pts/0<char 136:0>>, "    with open_binary('%s/stat' % procfs_path) as f:\n", 52) = 52 <0.000024>
8691  14:20:23.239357 write(2</dev/pts/0<char 136:0>>, "  File \"/usr/lib64/python2.7/site-packages/psutil/_common.py\", line 713, in open_binary\n", 88) = 88 <0.000032>
8691  14:20:23.239514 write(2</dev/pts/0<char 136:0>>, "    return open(fname, \"rb\", **kwargs)\n", 39) = 39 <0.000021>
8691  14:20:23.239589 write(2</dev/pts/0<char 136:0>>, "IOError: [Errno 2] No such file or directory: '/proc/stat'\n", 59) = 59 <0.000025>
8691  14:20:23.341699 write(1</dev/pts/0<char 136:0>>, "Loading \"tracer_upload\" plugin\n", 31) = 31 <0.000042>
8691  14:20:23.355634 write(1</dev/pts/0<char 136:0>>, "Updating Subscription Management repositories.\n", 47) = 47 <0.000033>
8691  14:20:23.358748 write(1</dev/pts/0<char 136:0>>, "Subscription Manager is operating in container mode.\n", 53) = 53 <0.000028>
8691  14:20:23.725178 write(1</dev/pts/0<char 136:0>>, "\nThis system is not receiving updates. You can use subscription-manager on the host to register and assign subscriptions.\n\n", 123) = 123 <0.000029>
8691  14:20:23.732591 write(1</dev/pts/0<char 136:0>>, "Config time: 1.226\n", 19) = 19 <0.000032>
8691  14:20:23.733729 write(1</dev/pts/0<char 136:0>>, "Yum version: 3.4.3\n", 19) = 19 <0.000029>
8691  14:20:23.747477 write(1</dev/pts/0<char 136:0>>, "Setting up Package Sacks\n", 25) = 25 <0.000038>
8691  14:20:23.748234 write(2</dev/pts/0<char 136:0>>, "There are no enabled repos.\n Run \"yum repolist all\" to see the repos you have.\n To enable Red Hat Subscription Management repositories:\n     subscription-manager repos --enable <repo>\n To enable custom repositories:\n     yum-config-manager --enable <repo>\n", 256) = 256 <0.000030>
8691  14:20:23.748620 write(1</dev/pts/0<char 136:0>>, "Uploading Enabled Repositories Report\n", 38) = 38 <0.000028>

Comment 37 Petr Stodulka 2022-04-11 14:17:41 UTC
Hi Chris! Thank you for the investigation. The customer must have at least one RHEL 8 repository enabled on the Satellite server. If the customer enables the BaseOS and AppStream repositories (or at least one of them), it should work as expected. From the official documentation:
~~~
Enable and synchronize all required RHEL 7 and RHEL 8 repositories with the latest updates for RHEL 7.9 and RHEL 8.4.
~~~

If I am right (I am not so familiar with Satellite), the default setting for an added repository is enabled (but it can be manually changed to 'disabled').

Comment 39 Petr Stodulka 2022-05-02 15:29:31 UTC
*** Bug 2050153 has been marked as a duplicate of this bug. ***

Comment 45 Petr Stodulka 2023-05-05 09:51:53 UTC
*** Bug 2186874 has been marked as a duplicate of this bug. ***

Comment 47 RHEL Program Management 2023-09-12 11:04:28 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 48 RHEL Program Management 2023-09-12 11:09:28 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.

