Bug 2302230
Summary: [Reads Balancer] PGs not getting scaled down post removal of bulk flag on the cluster
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: RADOS
Status: ASSIGNED
Severity: high
Priority: unspecified
Version: 7.1
Target Milestone: ---
Target Release: 9.0
Hardware: Unspecified
OS: Unspecified
Reporter: Pawan <pdhiran>
Assignee: Laura Flores <lflores>
QA Contact: Pawan <pdhiran>
Docs Contact:
CC: bhubbard, ceph-eng-bugs, cephqe-warriors, lflores, ngangadh, nojha, rpollack, rzarzyns, vumrao, yhatuka
Keywords: Automation, TestBlocker
Flags: lflores: needinfo-
Whiteboard:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
.Placement groups are not scaled down in `upmap-read` and `read` balancer modes
Currently, `pg_upmap_primary` entries are not properly removed for placement groups (PGs) that are pending merge, for example, when the bulk flag is removed from a pool, or in any other case where the number of PGs in a pool decreases. As a result, the PG scale-down process gets stuck, and the number of PGs in the affected pool does not decrease as expected.
As a workaround, remove the `pg_upmap_primary` entries in the OSD map of the affected pool.
To view the entries, run the `ceph osd dump` command, and then run `ceph osd rm-pg-upmap-primary PG_ID` for each PG in the affected pool.
After applying the workaround, the PG scale-down process resumes as expected.
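The workaround above can be sketched as a small shell loop. This is a hypothetical sketch, not a tested procedure: the pool ID (`2`) and the sample `ceph osd dump` lines are made-up assumptions for illustration, and the removal command is only echoed so it can be reviewed before being run against a real cluster.

```shell
# Hypothetical sketch of the documented workaround.
# The pool id and the sample dump lines are illustrative assumptions;
# on a real cluster you would instead use: dump="$(ceph osd dump)"
pool_id=2
dump='pg_upmap_primary 2.a 3
pg_upmap_primary 2.1f 5
pg_upmap_primary 4.0 1'

# Collect the PG ids of pg_upmap_primary entries that belong to the pool.
pgs=$(printf '%s\n' "$dump" | awk -v p="$pool_id" \
    '$1 == "pg_upmap_primary" && index($2, p".") == 1 { print $2 }')

# Echo the removal commands for review; drop "echo" to actually run them.
for pg in $pgs; do
    echo "ceph osd rm-pg-upmap-primary $pg"
done
```

With the sample input above, this prints removal commands for `2.a` and `2.1f` only, leaving the entry for the unaffected pool (`4.0`) untouched.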
Story Points: ---
Clone Of:
: 2357061 (view as bug list)
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2357061, 2317218
Description
Pawan 2024-08-01 10:04:31 UTC