Bug 2133786 - [GSS][Doc] How to handle MDSs behind on trimming errors
Summary: [GSS][Doc] How to handle MDSs behind on trimming errors
Keywords:
Status: NEW
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Version: 4.2
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: Backlog
Assignee: Anjana Suparna Sriram
QA Contact: Amarnath
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-10-11 11:52 UTC by Karun Josy
Modified: 2023-07-25 18:12 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:




Links
Red Hat Issue Tracker RHCEPH-5426 (last updated 2022-10-11 12:01:05 UTC)

Description Karun Josy 2022-10-11 11:52:21 UTC
* Describe the issue:

Recently, with the increased number of OCS/ODF users, we are seeing a high number of situations where the MDS reports that it is behind on trimming.
What is the best way to handle this situation?
We have an unverified KCS article [1] that recommends increasing the mds_log_max_segments value from 128 to 256. Is this a verified solution applicable to all clusters, including ODF, or should the value be determined per cluster?
What caveats, if any, should we be aware of when increasing this value?



[1] https://access.redhat.com/solutions/6639511
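
For context, a minimal sketch of how this warning is typically inspected and the setting adjusted, assuming the ceph CLI is run from an admin node; the value 256 below comes from the unverified KCS, not from a confirmed recommendation:

  # Identify which MDS daemons report the MDS_TRIM health warning
  ceph health detail

  # Check the current value (the default is 128)
  ceph config get mds mds_log_max_segments

  # Raise the limit for all MDS daemons via the monitor config database
  ceph config set mds mds_log_max_segments 256

  # Or inject the setting into running daemons without a restart
  ceph tell mds.* injectargs '--mds_log_max_segments=256'

A commonly cited caveat is that a longer MDS journal can increase memory use and replay time after a failover, which the documentation should confirm or correct.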

* Document URL:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/file_system_guide/health-messages-for-the-ceph-file-system_fs


Product Version:
RHCS 4 and 5

