
Bug 2133786

Summary: [GSS][Doc] How to handle MDSs behind on trimming errors
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Karun Josy <kjosy>
Component: Documentation
Assignee: Akash Raj <akraj>
Documentation sub component: File System Guide
QA Contact: Amarnath <amk>
Status: CLOSED DUPLICATE
Docs Contact: Rivka Pollack <rpollack>
Severity: medium
Priority: unspecified
CC: akraj, tpetr, vshankar
Version: 4.2
Target Milestone: ---
Target Release: Backlog
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 2260300 (view as bug list)
Environment:
Last Closed: 2024-01-25 06:03:55 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2260300

Description Karun Josy 2022-10-11 11:52:21 UTC
* Describe the issue:

Recently, with the increased number of OCS/ODF users, we are seeing a high number of situations where the MDS complains that it is behind on trimming.
What is the best way to handle this situation?
We have a KCS article [1] (unverified) which says to increase the mds_log_max_segments value from 128 to 256. Is this a verified solution applicable to all clusters, including ODF, or should the value be determined per cluster?
What caveats, if any, should we be aware of when increasing this value?

[1] https://access.redhat.com/solutions/6639511
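For context, a minimal sketch of how the setting discussed above can be inspected and raised with the `ceph` CLI on RHCS 4/5. The value 256 comes from the unverified KCS above, not a confirmed recommendation, and `mds.a` is a placeholder daemon name:

```shell
# Show health warning detail (look for "MDSs behind on trimming")
ceph health detail

# Inspect the current centralized setting for all MDS daemons
ceph config get mds mds_log_max_segments

# Raise the limit cluster-wide (256 is the unverified KCS suggestion)
ceph config set mds mds_log_max_segments 256

# Confirm the running daemon picked up the change
# (mds.a is a hypothetical daemon name; substitute your own)
ceph tell mds.a config get mds_log_max_segments
```

These are administrative commands against a live cluster, so they are shown here only as a fragment; whether a higher segment limit is appropriate for a given cluster (especially ODF) is exactly the open question of this bug.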

* Document URL:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/file_system_guide/health-messages-for-the-ceph-file-system_fs


Product Version:
RHCS 4 and 5

Comment 7 Akash Raj 2024-01-25 06:03:55 UTC

*** This bug has been marked as a duplicate of bug 2260300 ***