
Bug 1746956

Summary: leapp upgrade keeps failing on Please ensure you have a valid RHEL subscription and your network is up
Product: Red Hat Enterprise Linux 7
Reporter: Kenny Tordeurs <ktordeur>
Component: leapp-repository
Assignee: Leapp team <leapp-notifications>
Status: CLOSED ERRATA
QA Contact: Alois Mahdal <amahdal>
Severity: urgent
Priority: urgent
Docs Contact:
Version: 7.6
CC: aromito, bugproxy, cbesson, hannsj_uhl, jcastran, jomiller, leapp-notifications, mbocek, mkielian, mreznik, mschena, msekleta, pstodulk, rmullett, snejoshi, vfeenstr, vmeghana, vsokol
Target Milestone: rc
Keywords: Upgrades
Target Release: 7.8
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: leapp-repository-0.9.0-3.el7
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-11-05 06:59:32 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1689150, 1751958

Description Kenny Tordeurs 2019-08-29 15:21:02 UTC
Description of problem:
leapp upgrade fails with:

~~~
============================================================
                           ERRORS
============================================================

2019-08-29 15:48:28.114824 [ERROR] Actor: target_userspace_creator Message: A subscription-manager command failed to execute
Detail: {u'hint': u'Please ensure you have a valid RHEL subscription and your network is up.'}

============================================================
                       END OF ERRORS
============================================================

============================================================
                           REPORT
============================================================

A report has been generated at /var/log/leapp/leapp-report.txt

A report has been generated at /var/log/leapp/leapp-report.json

============================================================
                       END OF REPORT
============================================================
~~~

Version-Release number of selected component (if applicable):
# rpm -qa | grep leapp
leapp-deps-0.8.1-1.el7_6.noarch
leapp-0.8.1-1.el7_6.noarch
python2-leapp-0.8.1-1.el7_6.noarch
leapp-repository-0.8.1-2.el7_6.noarch
leapp-repository-sos-plugin-0.8.1-2.el7_6.noarch
leapp-repository-deps-0.8.1-2.el7_6.noarch



Actual results:
Please ensure you have a valid RHEL subscription and your network is up.

Expected results:
The preupgrade should produce a more descriptive error message.

I believe the issue in this example could be related to missing repositories; repositories like the following should be present (see also the helper sketched after the log):
~~~
2019-08-29 12:32:38.484 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator: +----------------------------------------------------------+
2019-08-29 12:32:38.541 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator:     Available Repositories in /etc/yum.repos.d/redhat.repo
2019-08-29 12:32:38.548 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator: +----------------------------------------------------------+
2019-08-29 12:32:38.554 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator: Repo ID:   rhel-8-for-x86_64-baseos-rpms
2019-08-29 12:32:38.560 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator: Repo Name: Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)
2019-08-29 12:32:38.566 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator: Repo URL:  https://ktordeur-sat65.sysmgmt.lan/pulp/repos/Default_Organization/Library/content/dist/rhel8/8.0/x86_64/baseos/os
2019-08-29 12:32:38.572 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator: Enabled:   1
2019-08-29 12:32:38.577 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator:
2019-08-29 12:32:38.583 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator: Repo ID:   satellite-tools-6.5-for-rhel-8-x86_64-rpms
2019-08-29 12:32:38.589 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator: Repo Name: Red Hat Satellite Tools 6.5 for RHEL 8 x86_64 (RPMs)
2019-08-29 12:32:38.595 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator: Repo URL:  https://ktordeur-sat65.sysmgmt.lan/pulp/repos/Default_Organization/Library/content/dist/layered/rhel8/x86_64/sat-tools/6.5/os
2019-08-29 12:32:38.601 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator: Enabled:   0
2019-08-29 12:32:38.607 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator:
2019-08-29 12:32:38.617 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator: Repo ID:   rhel-8-for-x86_64-appstream-rpms
2019-08-29 12:32:38.623 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator: Repo Name: Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
2019-08-29 12:32:38.630 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator: Repo URL:  https://ktordeur-sat65.sysmgmt.lan/pulp/repos/Default_Organization/Library/content/dist/rhel8/8.0/x86_64/appstream/os
2019-08-29 12:32:38.646 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator: Enabled:   1
2019-08-29 12:32:38.661 DEBUG    PID: 30985 leapp.workflow.FactsCollection.target_userspace_creator:

~~~
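For reference, a small illustrative helper (hypothetical, not part of leapp) that lists the repository IDs and their enabled flag from /etc/yum.repos.d/redhat.repo, to quickly confirm whether the RHEL 8 BaseOS/AppStream repositories are present:
~~~
try:
    from configparser import ConfigParser       # Python 3
except ImportError:
    from ConfigParser import ConfigParser       # Python 2 (default on RHEL 7)

def list_repos(path='/etc/yum.repos.d/redhat.repo'):
    """Return {repo_id: enabled_flag} parsed from a yum .repo file."""
    parser = ConfigParser()
    parser.read(path)
    return {repo_id: parser.get(repo_id, 'enabled')
            for repo_id in parser.sections()
            if parser.has_option(repo_id, 'enabled')}

if __name__ == '__main__':
    for repo_id, enabled in sorted(list_repos().items()):
        print('{0}: enabled={1}'.format(repo_id, enabled))
~~~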

Additional info:

Comment 4 Kenny Tordeurs 2019-08-29 15:36:47 UTC
Also note the following error:

~~~
2019-08-29 15:48:05.441 DEBUG    PID: 24599 leapp.workflow.FactsCollection.target_userspace_creator: System certificates corrupted. Please reregister.
~~~

This system was re-registered at least 5 times and the same error still occurs.

Comment 5 Michal Bocek 2019-08-30 11:05:05 UTC
The upgrade seems to fail on command 'subscription-manager release --unset'. Could you please try running these commands before running leapp?
$ subscription-manager release --show
$ subscription-manager release --unset
and send us their output?

I agree we should print a more descriptive error that gives more detail about what went wrong.
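For illustration only, a minimal sketch (an assumed approach, not the actual leapp actor code) of how the command could be retried and subscription-manager's stderr surfaced in the error detail instead of the generic hint:
~~~
import subprocess
import time

def run_with_retries(cmd, retries=5, delay=5):
    """Run cmd, retrying on failure; raise with the captured stderr on final failure."""
    for attempt in range(1, retries + 1):
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        if proc.returncode == 0:
            return out
        print('Attempt {0} of {1} failed (exit code {2})'.format(
            attempt, retries, proc.returncode))
        time.sleep(delay)
    # Surfacing stderr here tells the user *why* the command failed, instead of
    # only "Please ensure you have a valid RHEL subscription and your network is up."
    raise RuntimeError('{0} failed: {1}'.format(
        ' '.join(cmd), err.decode('utf-8', 'replace').strip()))

# e.g. run_with_retries(['subscription-manager', 'release', '--unset'])
~~~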

Vinzenz, there are also these log messages before the release unset failure, which I suppose are not related or critical?
1. 
  DEBUG    External command is started: [rm -rf /var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf]
  DEBUG    rm: cannot remove '/var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf/epel-926bba585bff1b83/repodata': Directory not empty
  DEBUG    rm: cannot remove '/var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf/rhel-7-server-satellite-maintenance-6-rpms-12218a13fa14d435/repodata': Directory not empty
  DEBUG    rm: cannot remove '/var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf/rhel-7-server-rh-common-rpms-c57e239adc6116e0/repodata': Directory not empty
  DEBUG    rm: cannot remove '/var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf/rhel-ha-for-rhel-7-server-rpms-0855d3a0c19b3277/repodata': Directory not empty
  DEBUG    rm: cannot remove '/var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf/rhel-7-server-rpms-30937a8d9a640100/repodata': Directory not empty
  DEBUG    rm: cannot remove '/var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf/rhel-7-server-extras-rpms-dfdb02f8928687cc/repodata': Directory not empty
  DEBUG    rm: cannot remove '/var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf/rhel-server-rhscl-7-rpms-a9b7670bb90339fa/repodata': Directory not empty
  DEBUG    rm: cannot remove '/var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf/rhel-7-server-eus-optional-rpms-67a05299668d21a9/repodata': Directory not empty
  DEBUG    rm: cannot remove '/var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf/rhel-rs-for-rhel-7-server-rpms-a5fdc9fe90052efa/repodata': Directory not empty
  DEBUG    rm: cannot remove '/var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf/rhel-7-server-satellite-tools-6.5-rpms-8ca5d56508f28cc3/repodata': Directory not empty
  DEBUG    rm: cannot remove '/var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf/rhel-7-server-ansible-2.6-rpms-5a05bb82de619af8/repodata': Directory not empty
  DEBUG    rm: cannot remove '/var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf/rhel-7-server-rhn-tools-rpms-c70f6bf86c2b7e7b/repodata': Directory not empty
  DEBUG    External command is finished: [rm -rf /var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf]
  WARNING  Removing mount directory /var/lib/leapp/scratch/mounts/source_overlay/var/cache/dnf failed with: A Leapp Command Error occurred.

2. Every call to systemd-nspawn prints a failure message:
  DEBUG    External command is started: [systemd-nspawn --register=no --quiet -D /var/lib/leapp/scratch/mounts/source_overlay rm -rf /etc/pki.bak]
  DEBUG    Failed to create directory /var/lib/leapp/scratch/mounts/source_overlay//sys/fs/selinux: No such file or directory
  DEBUG    Failed to create directory /var/lib/leapp/scratch/mounts/source_overlay//sys/fs/selinux: No such file or directory
  DEBUG    External command is finished: [systemd-nspawn --register=no --quiet -D /var/lib/leapp/scratch/mounts/source_overlay rm -rf /etc/pki.bak]

From what I found here https://lists.freedesktop.org/archives/systemd-devel/2018-June/040892.html:
"this suggests nspawn tries to mount selinuxfs into the container even though the kernel doesn't actually support that."

Michal Sekletar may have more info about the systemd one.
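As a quick illustrative check (my own sketch, not something leapp does), one can verify whether the running kernel actually exposes selinuxfs, which is what systemd-nspawn tries to mount at /sys/fs/selinux:
~~~
def kernel_supports_selinuxfs(proc_filesystems='/proc/filesystems'):
    """Return True if the running kernel lists selinuxfs as a supported filesystem."""
    with open(proc_filesystems) as f:
        return any(line.split()[-1] == 'selinuxfs' for line in f if line.strip())

print('selinuxfs supported by kernel: {0}'.format(kernel_supports_selinuxfs()))
~~~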

Comment 8 Michal Bocek 2019-08-30 17:53:11 UTC
Improvement to the leapp error message: https://github.com/oamg/leapp-repository/pull/325

Comment 9 Michal Bocek 2019-08-30 17:58:49 UTC
*** Bug 1747444 has been marked as a duplicate of this bug. ***

Comment 10 Michal Bocek 2019-08-30 18:03:33 UTC
From: https://bugzilla.redhat.com/show_bug.cgi?id=1747444

(In reply to Christophe Besson from comment #0)
> Description of problem:
> 
> The customer got these messages while trying an upgrade from RHEL 7.6 to
> RHEL 8.0 using leapp.
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 2019-08-26 22:00:30.994 INFO     PID: 14537
> leapp.workflow.FactsCollection.target_userspace_creator: Attempt 1 of 5 to
> perform unset_release failed - Retrying after 5 seconds
> 2019-08-26 22:00:37.91  INFO     PID: 14537
> leapp.workflow.FactsCollection.target_userspace_creator: Attempt 2 of 5 to
> perform unset_release failed - Retrying after 5 seconds
> 2019-08-26 22:00:43.212 INFO     PID: 14537
> leapp.workflow.FactsCollection.target_userspace_creator: Attempt 3 of 5 to
> perform unset_release failed - Retrying after 5 seconds
> 2019-08-26 22:00:49.314 INFO     PID: 14537
> leapp.workflow.FactsCollection.target_userspace_creator: Attempt 4 of 5 to
> perform unset_release failed - Retrying after 5 seconds
> 2019-08-26 22:00:55.408 INFO     PID: 14537
> leapp.workflow.FactsCollection.target_userspace_creator: Attempt 5 of 5 to
> perform unset_release failed - Retrying after 5 seconds
> 2019-08-26 22:01:01.487 WARNING  PID: 14537
> leapp.workflow.FactsCollection.target_userspace_creator: Attempt 6 of 5 to
> perform unset_release failed. Maximum number of retries have been reached.
> ...
> 2019-08-26 14:22:52.539214 [ERROR] Actor: target_userspace_creator Message:
> A subscription-manager command failed to execute
> Detail: {u'hint': u'Please ensure you have a valid RHEL subscription and
> your network is up.'}
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> Version-Release number of selected component (if applicable):
> leapp-repository-0.8.1-2.el7_6
> 
> How reproducible:
> Can't reproduce.
> 
> Steps to Reproduce:
> 1.
> 2.
> 3.
> 
> Actual results:
> 2019-08-26 14:22:52.539214 [ERROR] Actor: target_userspace_creator Message:
> A subscription-manager command failed to execute
> Detail: {u'hint': u'Please ensure you have a valid RHEL subscription and
> your network is up.'}
> 
> Expected results:
> No issue. The downgrade to the previous version (0.7) works.
> 
> Additional info:
> From the leapp.db, we can see that the command "subscription-manager release
> --unset" returns an exit code 70.
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 15134 22:06:21.083344 read(4</var/lib/leapp/leapp.db>,
> "\r\0\0\0\1\1\1\0\1\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
> \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
> 0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
> \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
> 0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
> \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
> 0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\205z\205\343v\t\0)CU\2\
> 0\212Qprocess-result2019-08-26T10:02:11.153760Z5a4c0958-e4b7-4ea0-ae45-
> e12cbd6d6907\0\314{\"env\": null, \"id\":
> \"84de09c4-6dee-4a31-9f62-ed266ba51274\", \"parameters\":
> [\"systemd-nspawn\", \"--register=no\", \"--quiet\", \"-D\",
> \"/var/lib/leapp/scratch/mounts/source_overlay\", \"subscription-manager\",
> \"release\", \"--unset\"], \"result\": {\"signal\": 0, \"pid\": 11749,
> \"exit_code\": 70, \"stderr\": \"Failed to create directory
> /var/lib/leapp/scratch/mounts/source_overlay//sys/fs/selinux: Read-only file
> system\\nFailed to create directory
> /var/lib/leapp/scratch/mounts/source_overlay//sys/fs/selinux: Read-only file
> system\\nHost and machine ids are equal (9fe4afedfc8b4579b6d407df097233be):
> refusing to link journals\\nSystem certificates corrupted. Please
> reregister.\\n\", \"stdout\": \"\"}}"..., 16384) = 16384 <0.000025>
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> After a look in the subscription-manager code, it seems to come from there:
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> /usr/lib64/python2.7/site-packages/subscription_manager/managercli.py:
>  499         try:
>  500 
>  501             return_code = self._do_command()
>  502 
>  503             # Only persist the config changes if there was no exception
>  504             if config_changed and self.persist_server_options():
>  505                 conf.persist()
>  506 
>  507             if return_code is not None:
>  508                 return return_code
>  509         except (CertificateException, ssl.SSLError) as e:
>  510             log.error(e)
>  511             system_exit(os.EX_SOFTWARE, _('System certificates
> corrupted. Please reregister.'))
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> 
> The strace shows information on what was logged into rhsm.log:
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 16749 22:09:30.323544 write(4</var/log/rhsm/rhsm.log>, "2019-08-26
> 22:09:30,323 [INFO] subscription-manager:1:MainThread @managercli.py:346 -
> Client Versions: {'subscription-manager': '1.21.10-3.el7_6'}\n", 147) = 147
> <0.000010>
> 16749 22:09:30.324245 write(4</var/log/rhsm/rhsm.log>, "2019-08-26
> 22:09:30,324 [INFO] subscription-manager:1:MainThread @connection.py:871 -
> Connection built: http_proxy=205.235.99.79:3128
> host=subscription.rhsm.redhat.com port=443 handler=/subscription
> auth=identity_cert ca_dir=/etc/rhsm/ca/ insecure=False\n", 254) = 254
> <0.000014>
> 16749 22:09:30.324741 write(4</var/log/rhsm/rhsm.log>, "2019-08-26
> 22:09:30,324 [INFO] subscription-manager:1:MainThread @connection.py:871 -
> Connection built: http_proxy=205.235.99.79:3128
> host=subscription.rhsm.redhat.com port=443 handler=/subscription
> auth=none\n", 209) = 209 <0.000014>
> 16749 22:09:30.325935 write(4</var/log/rhsm/rhsm.log>, "2019-08-26
> 22:09:30,325 [INFO] subscription-manager:1:MainThread @managercli.py:322 -
> Consumer Identity name=sgccav011
> uuid=702c6051-cdd3-4df8-8509-7e40f1140e3d\n", 161) = 161 <0.000014>
> 16749 22:09:31.191314 write(4</var/log/rhsm/rhsm.log>, "2019-08-26
> 22:09:31,191 [INFO] subscription-manager:1:MainThread @connection.py:588 -
> Response: status=204, requestUuid=d65730dd-7bdf-4e11-a08d-e2336d2cd007,
> request=\"PUT
> /subscription/consumers/702c6051-cdd3-4df8-8509-7e40f1140e3d\"\n", 233) =
> 233 <0.000015>
> 16749 22:09:31.196361 write(4</var/log/rhsm/rhsm.log>, "2019-08-26
> 22:09:31,196 [ERROR] subscription-manager:1:MainThread @managercli.py:510 -
> Error loading certificate: [Errno 2] No such file or directory:
> '/etc/pki/product-default/69.pem'\n", 185) = 185 <0.000014>
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> => The error being: Error loading certificate: [Errno 2] No such file or
> directory: '/etc/pki/product-default/69.pem'
> 
> => The customer has this file on its RHEL 7.6 rootfs, but not in the
> "el8target" used for the upgrade.
> 
> => I noticed that the behavior was different with the previous leapp release
> (0.7.x), so I asked for the customer to downgrade, and then it works for him.
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> # yum remove 'leapp*'
> # yum install leapp-0.7.0-2.el7_6 leapp-repository-0.7.0-5.el7_6
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> Unfortunately, I didn't manage to reproduce the issue. If I remove this
> file from my test VM, the upgrade still succeeds without failure.
> If I copy the file during the leapp process, before the release --unset, it
> doesn't work either; I get the following error message:
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> No releases match '8.0'.  Consult 'release --list' for a full listing.
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



(In reply to Christophe Besson from comment #2)
> Additional note:
> 
> The following error message can be safely ignored, this is not the root
> cause.
> 
> Failed to create directory
> /var/lib/leapp/scratch/mounts/source_overlay//sys/fs/selinux: Read-only file
> system\\nFailed to create directory
> /var/lib/leapp/scratch/mounts/source_overlay//sys/fs/selinux: Read-only file
> system

Comment 11 Michal Bocek 2019-08-30 18:21:10 UTC
From https://bugzilla.redhat.com/show_bug.cgi?id=1747444:
(In reply to Christophe Besson from comment #0)
> => The error being: Error loading certificate: [Errno 2] No such file or
> directory: '/etc/pki/product-default/69.pem'
> 
> => The customer has this file on its RHEL 7.6 rootfs, but not in the
> "el8target" used for the upgrade.

The RHEL 7 cert 69.pem is not expected to be in the el8target. We remove it and copy the RHEL 8 cert there:
https://github.com/oamg/leapp-repository/blob/v0.8.1/repos/system_upgrade/el7toel8/libraries/rhsm.py#L243

I'm not sure right now why submgr still looks for the 69.pem.
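For readers unfamiliar with that code, here is a simplified sketch of the idea (a hypothetical helper, not the actual rhsm.py implementation): drop the RHEL 7 product certificate from the target userspace and install the RHEL 8 one in its place:
~~~
import os
import shutil

def swap_product_cert(target_root, rhel8_cert,
                      cert_dir='etc/pki/product-default', rhel7_cert='69.pem'):
    """Remove the RHEL 7 product cert from the target tree and copy in the RHEL 8 one."""
    dst_dir = os.path.join(target_root, cert_dir)
    old_cert = os.path.join(dst_dir, rhel7_cert)
    if os.path.exists(old_cert):
        os.remove(old_cert)              # drop the RHEL 7 product certificate
    if not os.path.isdir(dst_dir):
        os.makedirs(dst_dir)
    shutil.copy(rhel8_cert, dst_dir)     # install the RHEL 8 product certificate

# e.g. swap_product_cert('/var/lib/leapp/scratch/mounts/source_overlay',
#                        '/path/to/rhel8-product-cert.pem')   # paths are illustrative
~~~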

Comment 12 Michal Bocek 2019-08-30 18:24:17 UTC
Returning the priority/severity jcastran set before.

Comment 15 Petr Stodulka 2019-09-19 10:45:17 UTC
Joachim, keep in mind that if you set the group to ibmconf only, you will deny access to most Red Hatters as well, even to the people who should resolve the bug ;-) If you set any group in the BZ, please ensure that the redhat or devel group is set as well. Not sure whether this is expected behaviour.

Comment 16 Petr Stodulka 2019-09-19 10:47:27 UTC
Regarding the point of the issue: I believe you do not even want to set groups (restricting who can see the bug) here, as this was originally reported by someone else and was public. I do not see any specific information here that requires a restriction. Removing the groups.

Comment 17 Petr Stodulka 2019-09-19 10:49:00 UTC
Ha, I can see that I cannot remove the group you set up. Please remove ibmconf from the groups, then we can remove redhat. Thanks.

Sorry for the noise. I will use emails next time.

Comment 18 Hanns-Joachim Uhl 2019-09-19 10:55:07 UTC
(In reply to pstodulk from comment #17)
> Ha, I can see that I cannot remove the group you setup. Please, remove the
> ibmconf from the groups, then we can remove redhat. Thanks.
> 
.
... done for ibmconf ... sorry for creating the confusion ...

Comment 25 Petr Stodulka 2019-10-02 08:56:21 UTC
Hi. We looked at it and it's possible that at least some cases are affected by the issue with XFS with ftype=0. In some cases I see output that indicates this is the case. We realized a month ago that we are affected by the XFS ftype=0 issue "again" (this can be checked using the xfs_info utility). So at least the cases affected by that should be resolved with the next release (currently the PRs fixing it are in review).

If a case is caused by a different issue (excluding troubles with proxy, network, or unsubscribed systems), we will probably fix it after the next release. But I believe most cases are affected by the XFS issue, usually machines where RHEL 7.0, 7.1, or 7.2 was originally installed with the default XFS filesystem.
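To check for the ftype=0 condition, the xfs_info utility mentioned above can be used; the following is an illustrative wrapper (the mount point '/' is just an example):
~~~
import subprocess

def xfs_ftype(mountpoint='/'):
    """Return the ftype value reported by xfs_info for the given mount point, or None."""
    out = subprocess.check_output(['xfs_info', mountpoint]).decode('utf-8', 'replace')
    for token in out.replace(',', ' ').split():
        if token.startswith('ftype='):
            return int(token.split('=', 1)[1])
    return None

if __name__ == '__main__':
    # ftype=0 means the filesystem does not store file types in directory
    # entries, which is the problematic configuration discussed above.
    print('ftype on /: {0}'.format(xfs_ftype('/')))
~~~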

Comment 37 Hanns-Joachim Uhl 2019-10-29 14:51:07 UTC
Hello Red Hat / Leapp team,
... are there any plans to make the fix for this bugzilla available as a RHEL 7.6.z z-stream update at any time ...?
Please advise ...
Thanks for your support.

Comment 38 Michal Bocek 2019-10-29 16:21:38 UTC
RHEL 7 Extras (in which Leapp is being released) has no z-stream. Extras can receive updated packages at any time - it is independent of y-stream. The same RHEL 7 Extras content is available on all RHEL 7.x minor versions. The next release of Leapp (v0.9.x) is going to be pushed to the RHEL 7 Extras at the time of release of RHEL 8.1 GA.

Comment 39 Alois Mahdal 2019-11-05 05:17:42 UTC
Verified with leapp-repository-0.9.0-4.el7 as part of regression testing:

https://projects.engineering.redhat.com/browse/OAMG-667

Comment 41 errata-xmlrpc 2019-11-05 06:59:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3306