Bug 2077843
Summary: | [cee/sd][Cephadm] 5.1 `ceph orch upgrade` adds the prefix "docker.io" to image in the disconnected environment
---|---|---|---
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Kritik Sachdeva <ksachdev>
Component: | Cephadm | Assignee: | Adam King <adking>
Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini>
Severity: | medium | Docs Contact: | Akash Raj <akraj>
Priority: | unspecified | Version: | 5.1
Target Release: | 5.2 | Keywords: | Rebase
Hardware: | x86_64 | OS: | Linux
Fixed In Version: | ceph-16.2.8-2.el8cp | Doc Type: | Bug Fix
Last Closed: | 2022-08-09 17:38:20 UTC | Type: | Bug
Clones: | 2100553 (view as bug list) | Bug Blocks: | 2100553, 2102272
CC: | aakashraj, adking, akraj, asriram, cdwertma, gjose, kdreyer, lithomas, mmuench, msaini, vereddy, wpinheir | |

Doc Text:

.`cephadm` no longer adds `docker.io` to the image name provided to the `ceph orch upgrade start` command

Previously, `cephadm` added `docker.io` to any image from an unqualified registry, making it impossible to pass an image from an unqualified registry, such as a local registry, to an upgrade: `cephadm` would fail to pull that image.

Starting with {storage-product} 5.2, `docker.io` is no longer added to the image name unless the name matches an upstream Ceph image such as `ceph/ceph:v17`. When running the `ceph orch upgrade` command, users can pass images from local registries, and `cephadm` upgrades to that image (illustrated in the sketch below).

NOTE: This applies only to upgrades started from 5.2. Upgrading from 5.1 to 5.2 is still affected by this issue.
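The qualification rule described in the Doc Text can be shown with a small shell sketch. The `normalize_image` function name and the short-name patterns below are assumptions for illustration, not the actual `cephadm` implementation:

```sh
#!/bin/sh
# Illustrative sketch of the 5.2 behavior described above: only names
# that look like upstream Ceph images get the docker.io prefix; any
# other reference (for example, a local registry image) is left as-is.
# The function name and the pattern list are assumptions, not the
# actual cephadm code.
normalize_image() {
    case "$1" in
        ceph/ceph:*|ceph/ceph@*|ceph/daemon-base:*|ceph/daemon:*)
            echo "docker.io/$1" ;;
        *)
            echo "$1" ;;
    esac
}

normalize_image 'ceph/ceph:v17'
# -> docker.io/ceph/ceph:v17 (upstream image, still qualified)
normalize_image 'local-registry:5000/rhceph/rhceph-5-rhel8:latest'
# -> local-registry:5000/rhceph/rhceph-5-rhel8:latest (left untouched)
```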
Description

Kritik Sachdeva 2022-04-22 12:09:37 UTC

Comment:

Cloned this BZ for Ceph 5.3 - https://bugzilla.redhat.com/show_bug.cgi?id=2100553

Comment from Christoph Dwertmann:

@akraj "Upgrading from 5.0 or 5.1 to 5.2 is still affected by this issue."

In my RHCS 5.0 cluster I am able to upgrade; only the clusters that I have already upgraded to 5.1 are stuck on that version because of this bug. Upgrading from 5.0 to 5.2 should hopefully work?

(In reply to Christoph Dwertmann from comment #16):

Yes, you're correct about this. Just looked back, and it seems the issue was introduced with https://github.com/ceph/ceph/pull/40577, which did not make its way into the 5.0 release. So it is only upgrades started from a 5.1 build that will face the issue.

Closing comment:

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997
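For reference, this is the kind of disconnected-environment invocation the fix enables. The registry host and image name below are placeholders, not values taken from this bug report:

```sh
# Placeholder local registry and image; substitute your own values.
# On a cluster running the fixed build (5.2), cephadm pulls exactly
# the image given here instead of prefixing it with docker.io:
ceph orch upgrade start --image local-registry:5000/rhceph/rhceph-5-rhel8:latest

# Watch the upgrade progress:
ceph orch upgrade status
```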