Bug 2243105
Summary: MDS: "1 MDSs behind on trimming" and "2 clients failing to respond to cache pressure"

| Field | Value | Field | Value |
| --- | --- | --- | --- |
| Product | [Red Hat Storage] Red Hat Ceph Storage | Reporter | Manny <mcaldeir> |
| Component | CephFS | Assignee | Patrick Donnelly <pdonnell> |
| Status | CLOSED ERRATA | QA Contact | Hemanth Kumar <hyelloji> |
| Severity | high | Docs Contact | Akash Raj <akraj> |
| Priority | medium | CC | akraj, ceph-eng-bugs, cephqe-warriors, gfarnum, hyelloji, linuxkidd, mcaldeir, pdonnell, tserlin, vereddy, vshankar |
| Version | 5.3 | Keywords | Rebase |
| Target Release | 7.1 | Type | Bug |
| Hardware | All | OS | All |
| Fixed In Version | ceph-18.2.1-2.el9cp | Doc Type | Bug Fix |
| Clones | 2244866, 2244868 | Last Closed | 2024-06-13 14:22:04 UTC |
| Bug Blocks | 2244866, 2244868, 2267614, 2272099, 2298578, 2298579 | | |

Doc Text:

> .MDS now queues the next client replay request automatically as part of request cleanup
>
> Previously, the MDS would sometimes fail to queue the next client request for replay in the `up:clientreplay` state, causing the MDS to hang.
>
> With this fix, the next client replay request is queued automatically as part of request cleanup, and the MDS proceeds with failover recovery normally.
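To make the mechanism in the doc text concrete, here is a minimal sketch of the pattern, using hypothetical names (`ReplayQueue`, `request_cleanup`, `ClientRequest`) rather than Ceph's actual MDS internals: the next replayed request is advanced from the cleanup path that every request passes through, success or failure, so a request that errors out can no longer stall `up:clientreplay`.

```cpp
// Sketch only -- not Ceph's MDS code. Illustrates queuing the next replayed
// client request from request cleanup rather than from the reply path.
#include <cstdio>
#include <deque>
#include <functional>
#include <utility>

struct ClientRequest { int id; };

class ReplayQueue {
  std::deque<ClientRequest> pending_;             // replayed requests awaiting dispatch
  std::function<void(ClientRequest&)> dispatch_;  // hands a request to the dispatcher
public:
  explicit ReplayQueue(std::function<void(ClientRequest&)> d)
      : dispatch_(std::move(d)) {}

  void push(ClientRequest r) { pending_.push_back(std::move(r)); }

  // Advance to the next replayed request, if any. Once the queue drains,
  // failover recovery can move past client replay.
  void queue_next_replay() {
    if (pending_.empty()) return;
    ClientRequest next = std::move(pending_.front());
    pending_.pop_front();
    dispatch_(next);
  }
};

// Cleanup runs for completed *and* aborted requests, so hooking the replay
// advance here guarantees forward progress -- the property the doc text
// attributes to the fix.
void request_cleanup(ReplayQueue& queue /*, ClientRequest& finished */) {
  // ... release locks, drop pins, free request state (elided) ...
  queue.queue_next_replay();
}

int main() {
  ReplayQueue queue([](ClientRequest& r) { std::printf("dispatch replay %d\n", r.id); });
  queue.push({1});
  queue.push({2});
  request_cleanup(queue);  // finishing (or failing) one request pulls in the next
  request_cleanup(queue);
  request_cleanup(queue);  // queue empty: no-op, recovery moves on
  return 0;
}
```

The design point is that cleanup is the one choke point every request reaches; queuing the next replay anywhere earlier (for example, only when a reply is sent) leaves a gap for requests that fail before replying.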
Comment 11
Manny
2023-10-12 03:07:44 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925
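For operators checking whether the symptoms quoted in the summary persist after updating: the two warnings correspond to the health checks `MDS_TRIM` and `MDS_CLIENT_RECALL` and appear in `ceph health detail`. The sketch below reads the same report programmatically via librados; it assumes a reachable cluster, a readable `ceph.conf`, and `client.admin` credentials, and is illustrative rather than part of the fix.

```cpp
// Fetch the cluster health report via librados (typically built with
// `g++ health.cpp -lrados`). Diagnostic sketch; error handling abridged.
#include <rados/librados.hpp>
#include <iostream>
#include <string>

int main() {
  librados::Rados cluster;
  if (cluster.init2("client.admin", "ceph", 0) < 0 ||  // user, cluster name
      cluster.conf_read_file(nullptr) < 0 ||           // default ceph.conf search path
      cluster.connect() < 0) {
    std::cerr << "failed to connect to cluster\n";
    return 1;
  }

  // Equivalent of `ceph health detail`; MDS_TRIM ("behind on trimming") and
  // MDS_CLIENT_RECALL ("failing to respond to cache pressure") appear here.
  librados::bufferlist inbl, outbl;
  std::string outs;
  int r = cluster.mon_command(
      R"({"prefix": "health", "detail": "detail"})", inbl, &outbl, &outs);
  if (r < 0) {
    std::cerr << "mon_command failed: " << r << " " << outs << "\n";
    cluster.shutdown();
    return 1;
  }
  std::cout << std::string(outbl.c_str(), outbl.length()) << std::endl;
  cluster.shutdown();
  return 0;
}
```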