Bug 2303115
| Summary: | [IBM Support] PGs not being deep scrubbed in time | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Mike Hackett <mhackett> |
| Component: | RADOS | Assignee: | Michael J. Kidd <linuxkidd> |
| Status: | CLOSED ERRATA | QA Contact: | skanta |
| Severity: | high | Docs Contact: | Rivka Pollack <rpollack> |
| Priority: | unspecified | | |
| Version: | 5.3 | CC: | bhubbard, ceph-eng-bugs, cephqe-warriors, gsitlani, linuxkidd, ngangadh, nojha, pdhiran, rfriedma, rpollack, rzarzyns, tserlin, vumrao |
| Target Milestone: | --- | Flags: | linuxkidd: needinfo- |
| Target Release: | 5.3z8 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.2.10-271.el9cp | Doc Type: | Bug Fix |
| Doc Text: | .`noscrub` and `nodeep-scrub` flags now work as expected. Previously, in some cases, using the `noscrub` or `nodeep-scrub` flag resulted in incorrect handling of these flags. As a result, placement groups (PGs) were not scrubbed until the primary OSD was restarted. In addition, until the OSD was restarted, the number of concurrent scrubs that any OSD storing the PG could perform was reduced; this limitation applied to both primary OSDs and OSDs serving as replicas. If the maximum was configured as 1, which is the default, the affected OSDs could not participate in any new scrub operations for any PG. With this fix, the `noscrub` and `nodeep-scrub` flags work as expected. *(A hedged operational sketch follows this table.)* | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2025-02-13 19:23:02 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
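The Doc Text above describes PGs that went un-deep-scrubbed until their primary OSD was restarted. Purely as an illustration (this sketch is not part of the fix and is not taken from the report), the script below shows one way an operator might list PGs overdue for a deep scrub. It shells out to the standard `ceph pg dump pgs --format json` command and assumes a Pacific-era (16.2.x) cluster where each PG entry carries a `last_deep_scrub_stamp` field; exact JSON field names and timestamp formats can vary by release, and the seven-day cutoff mirrors the default `osd_deep_scrub_interval`.

```python
#!/usr/bin/env python3
"""Hedged sketch: flag PGs whose last deep scrub is older than a cutoff.

Assumptions (not taken from the bug report): a 16.2.x cluster, the `ceph`
CLI on the PATH with admin credentials, and per-PG `last_deep_scrub_stamp`
fields in the output of `ceph pg dump pgs --format json`.
"""
import json
import subprocess
from datetime import datetime, timedelta, timezone

# osd_deep_scrub_interval defaults to one week; PGs older than this
# would normally trip the PG_NOT_DEEP_SCRUBBED health warning.
CUTOFF = timedelta(days=7)


def pg_stats():
    """Return the per-PG stats list from `ceph pg dump pgs`."""
    out = subprocess.check_output(
        ["ceph", "pg", "dump", "pgs", "--format", "json"])
    # The "pg_stats" key is what 16.2.x emits here; adjust for other releases.
    return json.loads(out)["pg_stats"]


def parse_stamp(stamp):
    """Parse a Ceph timestamp; the exact format differs across releases."""
    for fmt in ("%Y-%m-%dT%H:%M:%S.%f%z", "%Y-%m-%d %H:%M:%S.%f"):
        try:
            ts = datetime.strptime(stamp, fmt)
            # Treat naive timestamps as UTC, which is how Ceph reports them.
            return ts if ts.tzinfo else ts.replace(tzinfo=timezone.utc)
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {stamp!r}")


def overdue(stats, cutoff=CUTOFF):
    """Yield (pgid, last_deep_scrub) pairs older than the cutoff."""
    now = datetime.now(timezone.utc)
    for pg in stats:
        stamp = parse_stamp(pg["last_deep_scrub_stamp"])
        if now - stamp > cutoff:
            yield pg["pgid"], stamp


if __name__ == "__main__":
    for pgid, stamp in overdue(pg_stats()):
        print(f"{pgid} last deep scrubbed {stamp:%Y-%m-%d %H:%M} UTC")
```

On an affected build, the Doc Text implies these PGs would stay on the list even after `ceph osd unset noscrub` and `ceph osd unset nodeep-scrub`, until the primary OSD was restarted; on the fixed build (`ceph-16.2.10-271.el9cp` and later) clearing the flags should let deep scrubbing resume on its own.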
Description

Mike Hackett 2024-08-06 13:18:45 UTC

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.3 security and bug fix updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2025:1478