Bug 2349078 - [7.x] [Read Balancer] Make rm-pg-upmap-primary able to remove mappings by force
Summary: [7.x] [Read Balancer] Make rm-pg-upmap-primary able to remove mappings by force
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 7.1z4
Assignee: Laura Flores
QA Contact: Pawan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2025-02-28 20:37 UTC by Laura Flores
Modified: 2025-05-07 13:27 UTC
CC List: 11 users

Fixed In Version: ceph-18.2.1-308.el9cp
Doc Type: Enhancement
Doc Text:
.New `ceph osd rm-pg-upmap-primary-all` command for OSDMap cleanup

Previously, users had to remove `pg_upmap_primary` mappings individually with `ceph osd rm-pg-upmap-primary PGID`, which was time-consuming and error-prone, especially when cleaning up invalid mappings left behind after pool deletion. With this enhancement, users can run the new `ceph osd rm-pg-upmap-primary-all` command to clear all `pg_upmap_primary` mappings from the OSDMap at once, simplifying management and cleanup.
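
A minimal usage sketch of the two commands named above (the PGIDs are hypothetical placeholders, and the `grep` pattern assumes `pg_upmap_primary` entries appear in `ceph osd dump` output, which may vary by release):

  # Old approach: remove mappings one PGID at a time
  ceph osd rm-pg-upmap-primary 1.0
  ceph osd rm-pg-upmap-primary 1.1

  # New approach: clear every pg_upmap_primary mapping in one step
  ceph osd rm-pg-upmap-primary-all

  # Verify nothing is left in the OSDMap (output format assumed)
  ceph osd dump | grep pg_upmap_primary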
Clone Of:
Environment:
Last Closed: 2025-05-07 12:48:26 UTC
Embargoed:




Links
Github ceph/ceph pull 62190: mon, osd: add command to remove invalid pg-upmap-primary entries (Merged, last updated 2025-03-19 21:28:01 UTC)
Github ceph/ceph pull 62191: reef: mon, osd: add command to remove invalid pg-upmap-primary entries (Merged, last updated 2025-03-20 21:52:42 UTC)
Red Hat Bugzilla 2349077: [8.x] [Read Balancer] Make rm-pg-upmap-primary able to remove mappings by force (VERIFIED, last updated 2025-05-07 07:28:38 UTC)
Red Hat Issue Tracker RHCEPH-10692 (last updated 2025-02-28 20:40:33 UTC)
Red Hat Product Errata RHSA-2025:4664 (last updated 2025-05-07 12:48:29 UTC)

Description Laura Flores 2025-02-28 20:37:48 UTC
Description of problem:

Corresponding upstream tracker here: https://tracker.ceph.com/issues/69760

Essentially, the user was running a v18.2.1 cluster and hit BZ#2290580, which we know occurs when clients older than Reef are erroneously allowed to connect to the cluster while pg_upmap_primary, a Reef-only feature, is in use.

The user also hit BZ#2348970, which occurs when a pool is deleted and "phantom" pg_upmap_primary entries for that pool are left behind in the OSDMap. As a result, the user cannot remove the pg_upmap_primary entries before upgrading from the broken encoder to the fixed one, which is the suggested workaround for BZ#2290580.

The idea for a fix is to provide an option to force removal of a "phantom" pg_upmap_primary mapping, and potentially to relax the assertion in the OSDMap encoder.

The net effect: although fixes for BZ#2290580 are already included in v18.2.4, the user still experiences difficulty if they hit the crash and then try to upgrade.
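
A hedged sketch of the cleanup-before-upgrade workflow this implies, assuming a build that already ships the new command; the grep pattern is an assumption about `ceph osd dump` output and should be checked against your release:

  # Count any pg_upmap_primary entries left in the OSDMap
  # (pattern assumed; verify against your `ceph osd dump` output)
  COUNT=$(ceph osd dump | grep -c pg_upmap_primary)

  # If any remain (including "phantom" entries for deleted pools),
  # clear them all at once rather than one PGID at a time
  if [ "${COUNT:-0}" -gt 0 ]; then
      ceph osd rm-pg-upmap-primary-all
  fi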

Version-Release number of selected component (if applicable):
v18.2.1

Comment 1 Storage PM bot 2025-02-28 20:38:00 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 17 errata-xmlrpc 2025-05-07 12:48:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 7.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:4664

