Description of problem:

In RHCS 5.1, for the disconnected environment, the code snippet below was added by the fix https://github.com/ceph/ceph/commit/1f7c13a11e3597edb88b50a9f23a352e83fb24a5, due to which the "docker.io" prefix gets added automatically if the container image name is not in the format <registry>/ceph/ceph.

~~~
bits = digest.split('/')
if '.' not in bits[0] or len(bits) < 3:
    digest = 'docker.io/' + digest
return digest
~~~

However, there was another fix, https://github.com/ceph/ceph/commit/9271e721e671025b340b0a9867842bb6d3b531b1, where the above code snippet was reworked. Can we have a timeline for when that fix will be available downstream, and in which version?

Version-Release number of selected component (if applicable):
RHCS 5.1

How reproducible:
If the container image passed to the command `ceph orch upgrade <image>` is not in the format <registry>/ceph/ceph, the "docker.io" prefix gets automatically added to the target image name.

Steps to Reproduce:
1. Deploy the RHCS 5.1 cluster in the disconnected environment.
2. Execute the `orch upgrade` command with one of the following image name formats:
~~~
bastionceph5.ceph.com:5000/rhceph-5-rhel8
dummy.registry.com/rhceph-5-rhel8
abc.io/ceph
~~~

Actual results:
The orch upgrade command automatically adds the prefix 'docker.io' to the image name.

Expected results:
It should not add the prefix 'docker.io' automatically.

Additional info:
There is an upstream tracker for this issue: https://tracker.ceph.com/issues/53594
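To illustrate why the prefix is added, here is a minimal, self-contained sketch of the quoted logic (the function name `normalize_image_digest` is taken from the upstream cephadm source; the wrapper itself is reconstructed here for demonstration, not copied verbatim). A two-part name like `<registry>/<image>` splits into only two components, so the `len(bits) < 3` clause triggers even when the first component is clearly a registry hostname.

```python
def normalize_image_digest(digest: str) -> str:
    """Sketch of the cephadm image-name normalization quoted above."""
    bits = digest.split('/')
    # Bug: for "<registry>/<image>" (two components), len(bits) == 2,
    # so the "or len(bits) < 3" branch fires even though bits[0]
    # contains a dot and is therefore a registry hostname.
    if '.' not in bits[0] or len(bits) < 3:
        digest = 'docker.io/' + digest
    return digest

# A disconnected-registry image from the reproduction steps gets
# the unwanted docker.io prefix:
print(normalize_image_digest('bastionceph5.ceph.com:5000/rhceph-5-rhel8'))
# A three-part <registry>/ceph/ceph name passes through unchanged:
print(normalize_image_digest('registry.example.com/ceph/ceph'))
```

This is why only images not matching the three-component `<registry>/ceph/ceph` shape are affected, exactly as described in "How reproducible" above.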
Cloned this BZ for Ceph 5.3 - https://bugzilla.redhat.com/show_bug.cgi?id=2100553
@akraj "Upgrading from 5.0 or 5.1 to 5.2 is still affected by this issue." In my RHCS 5.0 cluster I am able to upgrade, only the clusters that I have already upgraded to 5.1 are stuck on that version because of this bug. Upgrading from 5.0 to 5.2 should hopefully work?
(In reply to Christoph Dwertmann from comment #16)
> @akraj "Upgrading from 5.0 or 5.1 to 5.2 is still affected by this issue."
> In my RHCS 5.0 cluster I am able to upgrade, only the clusters that I have
> already upgraded to 5.1 are stuck on that version because of this bug.
> Upgrading from 5.0 to 5.2 should hopefully work?

Yes, you're correct about this. Just looked back, and it seems the issue was introduced with https://github.com/ceph/ceph/pull/40577, which did not make its way into the 5.0 release. So it's only upgrades started from a 5.1 build that will face the issue.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5997