Bug 2042417
Summary: [RADOS Stretch cluster] PGs stuck in `remapped+peering` after deployment of stretch mode

| Field | Value |
|---|---|
| Product | [Red Hat Storage] Red Hat Ceph Storage |
| Component | RADOS |
| Version | 5.1 |
| Target Release | 5.2 |
| Target Milestone | --- |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | unspecified |
| Reporter | Greg Farnum <gfarnum> |
| Assignee | Greg Farnum <gfarnum> |
| QA Contact | Pawan <pdhiran> |
| Docs Contact | Akash Raj <akraj> |
| Keywords | Rebase |
| Flags | gfarnum: needinfo- |
| CC | akraj, akupczyk, amathuri, bhubbard, ceph-eng-bugs, choffman, gfarnum, jdurgin, kdreyer, ksirivad, lflores, nojha, pdhange, pdhiran, rfriedma, rzarzyns, skanta, sseshasa, vereddy, vumrao |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | ceph-16.2.8-2.el8cp |
| Doc Type | Bug Fix |
Doc Text:

.PGs no longer get incorrectly stuck in `remapped+peering` state in stretch mode

Previously, due to a logical error, some placement groups (PGs) in a cluster operating in stretch mode could get permanently stuck in the `remapped+peering` state under certain cluster conditions, making their data unavailable until the affected OSDs were taken offline. With this fix, PGs choose stable OSD sets and no longer get incorrectly stuck in the `remapped+peering` state in stretch mode.
| Field | Value |
|---|---|
| Story Points | --- |
| Clone Of | 2025800 |
| Bug Depends On | 2025800 |
| Bug Blocks | 2102272 |
| Last Closed | 2022-08-09 17:37:27 UTC |
Comment 13 (errata-xmlrpc, 2022-08-09 17:37:27 UTC)
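The fix note above concerns PGs that become permanently stuck in the `remapped+peering` state. As a hedged illustration (not part of the original report), the sketch below filters the JSON emitted by a command such as `ceph pg dump pgs_brief --format json` to flag PGs in that combined state; the field names (`pgid`, `state`) are assumed from the Pacific-era schema and should be verified against your cluster's actual output, and the sample input here is fabricated.

```python
import json

def stuck_pgs(pg_dump_json: str, states=("remapped", "peering")):
    """Return the PG ids whose state string contains every state in `states`.

    Expects JSON shaped like `ceph pg dump pgs_brief --format json` output:
    either a bare list of PG entries or an object with a "pg_stats" list.
    Field names are assumptions; check them against your Ceph version.
    """
    data = json.loads(pg_dump_json)
    entries = data if isinstance(data, list) else data.get("pg_stats", [])
    result = []
    for pg in entries:
        # Ceph reports combined states joined with "+", e.g. "remapped+peering".
        pg_states = set(pg.get("state", "").split("+"))
        if all(s in pg_states for s in states):
            result.append(pg["pgid"])
    return result

# Fabricated sample resembling pgs_brief output.
sample = json.dumps([
    {"pgid": "1.0", "state": "active+clean"},
    {"pgid": "1.1", "state": "remapped+peering"},
    {"pgid": "1.2", "state": "active+remapped+backfilling"},
])

print(stuck_pgs(sample))  # → ['1.1']
```

Only `1.1` is reported: `1.2` is remapped but not peering, so it is excluded. A PG that stays in this list across repeated dumps would match the stuck condition this bug describes.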