Bug 2277756
| Summary: | rolling update fails unless mon_mds_skip_sanity=true is set | | |
| --- | --- | --- | --- |
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | John Fulton <johfulto> |
| Component: | Ceph-Ansible | Assignee: | Teoman ONAY <tonay> |
| Status: | CLOSED ERRATA | QA Contact: | Sayalee <saraut> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | 5.3 | CC: | alfrgarc, ceph-eng-bugs, cephqe-warriors, gfidente, gmeno, jpretori, rpollack, saraut, tonay, tserlin, vdas |
| Target Milestone: | --- | | |
| Target Release: | 5.3z8 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-ansible-6.0.28.17-1.el8cp | Doc Type: | Bug Fix |
| Doc Text: | .Ceph Monitor (`ceph-mon`) no longer fails during rolling upgrades<br>Previously, when running a rolling upgrade from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, the upgrade would fail with the following error message: `/builddir/build/BUILD/ceph-14.2.22/src/mds/FSMap.cc: 766: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)`<br>With this fix, the FSMap sanity check is disabled before the upgrade, and the upgrade completes as expected. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2025-02-13 19:22:43 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2160009 | | |
Description
John Fulton
2024-04-29 12:51:37 UTC
Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Could the ceph-ansible patch which provides the fix please be linked from this BZ? Is it one of these? https://github.com/ceph/ceph-ansible/releases/tag/v6.0.28

Removing my needinfo as it was redirected in comment #9.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.3 security and bug fix updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2025:1478
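
For reference, the workaround named in the summary can be expressed through ceph-ansible's standard `ceph_conf_overrides` variable before running `rolling_update.yml`. The snippet below is only a minimal sketch of that manual workaround, assuming overrides are kept in `group_vars/all.yml`; it is not the patch shipped in ceph-ansible-6.0.28.17-1.el8cp, which handles the check for you during the upgrade.

```yaml
# group_vars/all.yml (illustrative location for ceph-ansible overrides)
# Sketch of the manual workaround: inject mon_mds_skip_sanity=true into the
# [mon] section of ceph.conf so the monitors skip the FSMap compat sanity
# check that otherwise aborts the RHCS 4 -> 5 rolling upgrade.
ceph_conf_overrides:
  mon:
    mon_mds_skip_sanity: true
```

Equivalently, the option can be set at runtime with `ceph config set mon mon_mds_skip_sanity true` and unset again once the upgrade has completed; with the fixed ceph-ansible build neither step should be necessary.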