Bug 2277756 - rolling update fails unless mon_mds_skip_sanity=true is set
Summary: rolling update fails unless mon_mds_skip_sanity=true is set
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 5.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 5.3z8
Assignee: Teoman ONAY
QA Contact: Sayalee
URL:
Whiteboard:
Depends On:
Blocks: 2160009
 
Reported: 2024-04-29 12:51 UTC by John Fulton
Modified: 2025-02-13 19:22 UTC
CC: 11 users

Fixed In Version: ceph-ansible-6.0.28.17-1.el8cp
Doc Type: Bug Fix
Doc Text:
.Ceph Monitor (`ceph-mon`) no longer fails during rolling upgrades
Previously, when running a rolling upgrade from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, the upgrade would fail with the following error message:
`/builddir/build/BUILD/ceph-14.2.22/src/mds/FSMap.cc: 766: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)`
With this fix, the FSMap sanity check is disabled before the upgrade and the upgrade completes as expected.
Clone Of:
Environment:
Last Closed: 2025-02-13 19:22:43 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OSPRH-6533 0 None None None 2024-04-29 12:51:37 UTC
Red Hat Issue Tracker RHCEPH-8923 0 None None None 2024-05-02 11:08:53 UTC
Red Hat Product Errata RHBA-2025:1478 0 None None None 2025-02-13 19:22:48 UTC

Description John Fulton 2024-04-29 12:51:37 UTC
Description of problem:

When running the following playbook to upgrade from RHCSv4 to RHCSv5:

https://github.com/ceph/ceph-ansible/blob/v6.0.28.7/infrastructure-playbooks/rolling_update.yml

The first ceph-mon was upgraded to the RHCS 5 version, but the not-yet-upgraded ceph-mon failed as described in the following KCS:

  https://access.redhat.com/solutions/7020523


Version-Release number of selected component (if applicable):

ceph-ansible-6.0.28.7-1.el8 provided by rhceph-5-tools-for-rhel-8-x86_64-rpms

How reproducible:

Deterministic

Steps to Reproduce:

Follow the docs to do an update

https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.1/html-single/framework_for_upgrades_16.2_to_17.1/index#proc_installing-ceph-ansible_upgrading-ceph

Actual results:

The ceph-mon fails and the workaround described here must be applied:

 https://access.redhat.com/solutions/7020523
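
For illustration, the workaround named in the summary boils down to setting mon_mds_skip_sanity=true on the monitors. One way to express that with ceph-ansible is its ceph_conf_overrides group variable; the section and key placement below are an assumption for the sketch, not copied from the KCS:

  # group_vars/all.yml (sketch; assumes ceph_conf_overrides renders these
  # keys into ceph.conf on the mon nodes and that the mons are restarted
  # afterwards so the option takes effect)
  ceph_conf_overrides:
    mon:
      # let the not-yet-upgraded mons skip the FSMap compat sanity check
      mon_mds_skip_sanity: true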

Expected results:

The upgrade does not fail and the workaround is not required.


Additional info:

Perhaps the playbook could apply the workaround from https://access.redhat.com/solutions/7020523, unless the product itself gets the fix.
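
As a rough sketch of what that could look like (purely illustrative; the play, module arguments, and service unit name below are assumptions about a default non-containerized layout, not the patch that eventually shipped):

  # Hypothetical pre-flight play a rolling_update.yml wrapper could run
  # before touching the mons; the setting would be removed again once the
  # upgrade finishes.
  - name: temporarily disable the FSMap sanity check on the monitors
    hosts: mons
    become: true
    serial: 1                        # restart the mons one at a time
    tasks:
      - name: set mon_mds_skip_sanity in ceph.conf
        ini_file:
          path: /etc/ceph/ceph.conf  # assumes the default cluster name "ceph"
          section: mon
          option: mon_mds_skip_sanity
          value: "true"
        notify: restart ceph-mon
    handlers:
      - name: restart ceph-mon
        service:
          name: "ceph-mon@{{ ansible_facts['hostname'] }}"  # assumes package-based (non-containerized) mons
          state: restarted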

Comment 1 RHEL Program Management 2024-04-29 12:51:51 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 10 John Fulton 2025-01-06 14:12:52 UTC
Could the ceph-ansible patch which provides the fix please be linked from this BZ?

Is it one of these?

  https://github.com/ceph/ceph-ansible/releases/tag/v6.0.28

Comment 11 John Fulton 2025-01-06 14:14:01 UTC
Removing my needinfo as it was redirected in comment #9

Comment 29 errata-xmlrpc 2025-02-13 19:22:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.3 security and bug fix updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2025:1478

