Bug 2349077 - [8.x] [Read Balancer] Make rm-pg-upmap-primary able to remove mappings by force
Summary: [8.x] [Read Balancer] Make rm-pg-upmap-primary able to remove mappings by force
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 8.1
Assignee: Laura Flores
QA Contact: Pawan
Docs Contact: Rivka Pollack
URL:
Whiteboard:
Duplicates: 2353072
Depends On:
Blocks: 2351689 2357063
 
Reported: 2025-02-28 20:35 UTC by Laura Flores
Modified: 2025-06-26 12:26 UTC (History)
CC List: 11 users

Fixed In Version: ceph-19.2.1-70.el9cp
Doc Type: Enhancement
Doc Text:
.`pg-upmap-primary` mappings can now be removed from the OSDMap

With this enhancement, the new `ceph osd rm-pg-upmap-primary-all` command is introduced. It allows users to clear all `pg-upmap-primary` mappings in the OSDMap with a single command at any time, and can also be used to remove invalid mappings when required.

IMPORTANT: Use the command carefully, as it directly modifies primary PG mappings and can impact read performance.
Clone Of:
Clones: 2357063
Environment:
Last Closed: 2025-06-26 12:26:44 UTC
Embargoed:
lflores: needinfo-




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 67179 0 None None None 2025-03-19 21:27:21 UTC
Github ceph ceph pull 62190 0 None Merged mon, osd: add command to remove invalid pg-upmap-primary entries 2025-03-19 21:27:21 UTC
Github ceph ceph pull 62421 0 None open squid: mon, osd: add command to remove invalid pg-upmap-primary entries 2025-03-20 21:53:12 UTC
Red Hat Issue Tracker RHCEPH-10691 0 None None None 2025-02-28 20:36:44 UTC
Red Hat Product Errata RHSA-2025:9775 0 None None None 2025-06-26 12:26:56 UTC

Internal Links: 2349078

Description Laura Flores 2025-02-28 20:35:36 UTC
Description of problem:

Corresponding upstream tracker here: https://tracker.ceph.com/issues/69760

Essentially, the user was running a v18.2.1 cluster and hit BZ#2290580, which we know occurs when clients older than Reef are erroneously allowed to connect to the cluster while pg_upmap_primary, a Reef-only feature, is in use.

The user also hit BZ#2348970, which occurs when a pool is deleted and "phantom" pg_upmap_primary entries for that pool are left in the OSDMap. Therefore, the user cannot remove the pg_upmap_primary entries prior to upgrading from the broken encoder to the fixed encoder, which is the suggested workaround for BZ#2290580.

The idea for a fix is to provide an option to force removal of a "phantom" pg_upmap_primary mapping, and potentially to relax the assertion in the OSDMap encoder.

The net effect: Although fixes for BZ#2290580 are already included in v18.2.4, the user still experiences difficulty if they hit the crash and then try to upgrade.
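As a rough sketch of the workflow this fix enables (command names are taken from the summary and the linked pull requests; the PG ID used here is illustrative, and these commands require a live cluster with the fixed release):

```shell
# Inspect current pg_upmap_primary entries in the OSDMap
ceph osd dump | grep pg_upmap_primary

# Remove a single mapping by PG ID (the fix makes this possible even
# for "phantom" entries left behind by a deleted pool; "1.0" is an
# illustrative PG ID)
ceph osd rm-pg-upmap-primary 1.0

# Or clear every pg_upmap_primary mapping at once with the new command
# introduced by this enhancement (use with care: it directly modifies
# primary PG mappings and can affect read performance)
ceph osd rm-pg-upmap-primary-all
```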

Version-Release number of selected component (if applicable):
v18.2.1

Comment 1 Storage PM bot 2025-02-28 20:35:48 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 12 errata-xmlrpc 2025-06-26 12:26:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775

