Bug 2096262
| Summary: | [RFE] : ceph orch upgrade : block upgrade of cluster with iscsi service | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Vasishta <vashastr> |
| Component: | Cephadm | Assignee: | Adam King <adking> |
| Status: | CLOSED ERRATA | QA Contact: | Manasa <mgowri> |
| Severity: | high | Docs Contact: | Masauso Lungu <mlungu> |
| Priority: | unspecified | | |
| Version: | 6.0 | CC: | adking, idryomov, mlungu, pasik, pnataraj, tserlin, vereddy |
| Target Milestone: | --- | Keywords: | FutureFeature |
| Target Release: | 6.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-17.2.3-39.el9cp | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-03-20 18:56:39 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Vasishta
2022-06-13 11:58:13 UTC
The issue is fixed in the latest RHCS 6.0 build.
Below are the relevant snippets:
[ceph: root@magna021 /]# ceph orch upgrade status
{
"target_image": "registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:9067726d198edafd890e84376796de613d2f63221374d104078b8a0ceec7c529",
"in_progress": true,
"which": "Upgrading all daemon types on all hosts",
"services_complete": [],
"progress": "1/89 daemons upgraded",
"message": "Error: UPGRADE_ISCSI_UNSUPPORTED: Upgrade attempted to RHCS release not supporting iscsi with iscsi daemons present"
}
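The status above was captured after the check had flagged the iscsi daemons, with only 1/89 daemons upgraded, so the cluster is briefly in a mixed-version state. As a hypothetical follow-up (these commands and their output are not part of the captured snippets), `ceph versions` summarizes how many daemons run each release and `ceph orch ps` shows the per-daemon version:
[ceph: root@magna021 /]# ceph versions
[ceph: root@magna021 /]# ceph orch ps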
[ceph: root@magna021 /]# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094  1/1      2m ago     5M   count:1
crash                                   12/12    2m ago     5M   *
grafana                    ?:3000       1/1      2m ago     5M   count:1
iscsi.test                              0/2      103s ago   3M   plena001;plena002
mgr                                     2/2      2m ago     5M   count:2
mon                                     5/5      2m ago     4M   magna021;magna022;magna024;magna025;magna026
node-exporter              ?:9100       12/12    2m ago     5w   count:12
osd                                     38       2m ago     -    <unmanaged>
osd.all-available-devices               14       2m ago     2w   <unmanaged>
prometheus                 ?:9095       1/1      2m ago     4w   count:1
rbd-mirror                              1/1      2m ago     4M   magna026
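Only iscsi.test in this listing is an iscsi service (placed on plena001 and plena002), and it is what triggers the new block. As an illustrative extra step (not part of the original capture), the listing can be narrowed to the iscsi service type, and --export can dump the service spec so it can be kept before the service is removed:
[ceph: root@magna021 /]# ceph orch ls iscsi
[ceph: root@magna021 /]# ceph orch ls iscsi --export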
[root@magna021 yum.repos.d]# ceph health detail
HEALTH_ERR 2 failed cephadm daemon(s); Upgrade attempted to RHCS release not supporting iscsi with iscsi daemons present
[WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
daemon iscsi.test.plena001.konnne on plena001 is in error state
daemon iscsi.test.plena002.wgcgle on plena002 is in error state
[ERR] UPGRADE_ISCSI_UNSUPPORTED: Upgrade attempted to RHCS release not supporting iscsi with iscsi daemons present
Iscsi is no longer supported in RHCS 6.
Please remove any iscsi services/daemons from the cluster before upgrading.
If you instead would rather keep using iscsi than upgrade, please manually downgrade any
upgraded daemons with `ceph orch daemon redeploy <daemon-name> --image <previous-5.x-image-name>`
[root@magna021 yum.repos.d]#
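Per the health message, clearing the block on this cluster would mean removing the iscsi service and then letting the upgrade continue. A sketch using the service name from the `ceph orch ls` output above (these commands are not part of the captured output; depending on the upgrade state, `ceph orch upgrade resume` or a fresh `ceph orch upgrade start --image <target-image>` applies):
[ceph: root@magna021 /]# ceph orch rm iscsi.test
[ceph: root@magna021 /]# ceph orch upgrade resume
[ceph: root@magna021 /]# ceph orch upgrade status
The alternative, keeping iscsi on RHCS 5.x, is the `ceph orch daemon redeploy <daemon-name> --image <previous-5.x-image-name>` path quoted in the health message itself.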
cephadm version: ceph-17.2.3-39.el9cp
ceph version: ceph-17.2.3-39.el9cp
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2023:1360