Bug 2243570 - No warning generated when the 'require-osd-release' flag does not match current release, when upgraded from Quincy to Reef
Summary: No warning generated when the 'require-osd-release' flag does not match current release, when upgraded from Quincy to Reef
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 7.1z5
Assignee: Sridhar Seshasayee
QA Contact: Vipin M S
Docs Contact: Rivka Pollack
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-10-12 03:17 UTC by Pawan
Modified: 2025-06-23 02:51 UTC
CC List: 9 users

Fixed In Version: ceph-18.2.1-331.el9cp
Doc Type: Bug Fix
Doc Text:
.OSDMap checks now ensure that a health warning is reported until the release flag is updated after a cluster upgrade

Previously, after all OSDs were upgraded to a new release, the `require-osd-release` flag in the OSDMap was updated to reflect the new release name. However, the check that verifies this flag against the running version was not updated appropriately to include the 'reef' release, so no cluster warning was raised once the upgrade was completed. As a result, users could mistakenly continue operations, risking catastrophic outcomes including cluster unavailability. With this fix, the OSDMap check now includes the 'reef' release, ensuring that a health warning is reported until the `require-osd-release` flag is updated to the appropriate release after a cluster upgrade.
Clone Of:
Environment:
Last Closed: 2025-06-23 02:51:33 UTC
Embargoed:




Links
Ceph Project Bug Tracker 69150 (last updated 2024-12-10 08:30:08 UTC)
GitHub ceph/ceph pull 60981 (open): reef: osd: adding 'reef' to pending_require_osd_release (last updated 2024-12-10 08:30:08 UTC)
Red Hat Issue Tracker RHCEPH-7712 (last updated 2023-10-12 03:17:52 UTC)
Red Hat Product Errata RHBA-2025:9335 (last updated 2025-06-23 02:51:36 UTC)

Description Pawan 2023-10-12 03:17:29 UTC
Description of problem:

When an upgrade is in progress and the OSDs have been upgraded from Quincy to Reef, the expected warning message does not show up in ceph status.



We observed the warning when the cluster was upgraded from:

1. Nautilus to Pacific.
# ceph health detail
HEALTH_WARN all OSDs are running pacific or later but require_osd_release < pacific
[WRN] OSD_UPGRADE_FINISHED: all OSDs are running pacific or later but require_osd_release < pacific
all OSDs are running pacific or later but require_osd_release < pacific

2. Pacific to Quincy.

"OSD_UPGRADE_FINISHED":{"severity":"HEALTH_WARN","summary":{"message":"all OSDs are running quincy or later but require_osd_release < quincy","count":0},"detail":[{"message":"all OSDs are running quincy or later but require_osd_release < quincy"}]

Test-run log: http://magna002.ceph.redhat.com/cephci-jenkins/cephci-run-LU2F6M/Upgrade_ceph_cluster_0.log

3. Pacific to Reef.

"OSD_UPGRADE_FINISHED":{"severity":"HEALTH_WARN","summary":{"message":"all OSDs are running quincy or later but require_osd_release < quincy","count":0},"detail":[{"message":"all OSDs are running quincy or later but require_osd_release < quincy"}],"muted":false}

Test-run log: http://magna002.ceph.redhat.com/cephci-jenkins/cephci-run-ILSU49/Upgrade_ceph_cluster_0.log
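
For reference, the JSON snippets above are the OSD_UPGRADE_FINISHED health check as captured in the test-run logs; equivalent output can be pulled from a live cluster with the standard formatting options (the jq filter is just one way to isolate the check, shown here as an illustration):

# ceph health detail --format json-pretty
# ceph health detail -f json | jq '.checks.OSD_UPGRADE_FINISHED'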

Version-Release number of selected component (if applicable):
ceph version 18.2.0-84.el9cp (4d8b4718f998b40ce8c0995ad6d2b3b3745756ea) reef (stable)

How reproducible:
3/3

Steps to Reproduce:
1. Deploy a RHCS 6.1 cluster.
2. Upgrade the cluster to the latest nightly 7.0 builds.
3. Observe that no warning is generated for the mismatch between the require-osd-release flag and the running OSD release during the upgrade (see the verification commands after this list).
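
A minimal way to confirm the mismatch by hand during step 3, using standard Ceph CLI commands (the release names shown are illustrative for this reproduction):

# ceph versions
# ceph osd dump | grep require_osd_release
require_osd_release quincy
# ceph health detail

With all OSDs reporting a reef version while require_osd_release is still quincy, ceph health detail should raise OSD_UPGRADE_FINISHED, but on the affected builds it does not.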

Actual results:
The warning is not seen in ceph status during the upgrade from Quincy to Reef.

Expected results:
The warning is seen in ceph status during the upgrade from Quincy to Reef.

Additional info:
The feature was introduced with BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1988773
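
For manual (non-cephadm-managed) upgrades, the usual final step is to advance the flag yourself; a minimal sketch, assuming all OSDs are already running Reef:

# ceph osd require-osd-release reef
# ceph osd dump | grep require_osd_release
require_osd_release reef

Setting the flag is what clears the OSD_UPGRADE_FINISHED warning on releases where the check covers the running release.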

Comment 1 RHEL Program Management 2023-10-12 03:17:40 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 10 errata-xmlrpc 2025-06-23 02:51:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.1 security and bug fix updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2025:9335

