Bug 2133786

Summary: [GSS][Doc] How to handle MDSs behind on trimming errors
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Documentation
Documentation sub component: File System Guide
Version: 4.2
Hardware: All
OS: Linux
Status: NEW
Severity: medium
Priority: unspecified
Target Milestone: ---
Target Release: Backlog
Reporter: Karun Josy <kjosy>
Assignee: Anjana Suparna Sriram <asriram>
QA Contact: Amarnath <amk>
CC: akraj, tpetr, vshankar
Type: Bug

Description Karun Josy 2022-10-11 11:52:21 UTC
* Describe the issue:

Recently, with the increased number of OCS/ODF users, we are seeing a high number of cases where the MDS complains that it is behind on trimming.
What is the recommended way to handle this situation?
We have a KCS article [1] (unverified) that suggests increasing the mds_log_max_segments value from 128 to 256. Is this a verified solution applicable to all clusters, including ODF, or should the value be determined per cluster?
What caveats, if any, should we be aware of when increasing this value?



[1] https://access.redhat.com/solutions/6639511
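For context, a rough sketch of the workaround as the KCS describes it, assuming the cluster-wide configuration database (ceph config) is used; the 256 value comes from the KCS and is not a verified recommendation:

    # Confirm the "MDSs behind on trimming" (MDS_TRIM) health warning:
    ceph health detail

    # Check the current value (128 by default):
    ceph config get mds mds_log_max_segments

    # Raise the limit to the value suggested in the KCS:
    ceph config set mds mds_log_max_segments 256

Whether 256 is appropriate for all clusters (including ODF), or should instead be derived from the cluster's workload, is exactly what this bug asks the documentation to clarify.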

* Document URL:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/file_system_guide/health-messages-for-the-ceph-file-system_fs


* Product Version:
RHCS 4 and 5