Bug 2040528
Summary: | pgs wait for read lease after osd start | | |
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Vikhyat Umrao <vumrao> |
Component: | RADOS | Assignee: | Neha Ojha <nojha> |
Status: | CLOSED ERRATA | QA Contact: | Pawan <pdhiran> |
Severity: | high | Docs Contact: | |
Priority: | unspecified | ||
Version: | 5.0 | CC: | agunn, akupczyk, amathuri, bhubbard, ceph-eng-bugs, ceph-qe-bugs, choffman, ksirivad, lflores, nojha, owasserm, pdhange, rfriedma, rmandyam, rzarzyns, sseshasa, stephen.blinick, tserlin, vereddy, vumrao |
Target Milestone: | --- | ||
Target Release: | 5.1 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | ceph-16.2.7-37.el8cp | Doc Type: | Bug Fix |
Doc Text: |
.The `prior_readable_until_ub` parameter is cleared at the end of the peering cycle
Previously, when the primary OSD restarted, the `prior_readable_until_ub` parameter, which is the upper bound on how long the prior interval remains readable for a placement group (PG), was cleared early in the peering stage, before it could be propagated to the peer OSDs, so knowledge of the prior interval was lost.
As a result, PGs went into a WAIT state and blocked OSD requests for that period.
With this release, the `prior_readable_until_ub` parameter is cleared at the end of the peering cycle, just before activation and after it has been communicated to the peer OSDs, so PGs no longer go into the WAIT state after an OSD restart.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2022-04-04 10:23:35 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 2059329, 2031073 |
Description
Vikhyat Umrao 2022-01-14 00:05:29 UTC

*** Bug 2034712 has been marked as a duplicate of this bug. ***

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174