Bug 2218688

Summary: [GSS] MGR not getting updates about PG states from OSDs
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: kelwhite
Component: CephFS
Assignee: Venky Shankar <vshankar>
Status: CLOSED COMPLETED
QA Contact: Hemanth Kumar <hyelloji>
Severity: urgent
Priority: unspecified
Version: 5.1
CC: bhubbard, bmcmurra, ceph-eng-bugs, cephqe-warriors, csharpe, gfarnum, jquinn, khiremat, kjosy, linuxkidd, lithomas, mcaldeir, pdonnell, rsachere, vshankar, vumrao, xiubli
Target Milestone: ---
Flags: khiremat: needinfo-
Target Release: 6.1z1
Hardware: All
OS: All
Last Closed: 2023-07-12 00:31:27 UTC
Type: Bug

Description kelwhite 2023-06-29 20:44:42 UTC

Comment 17 jquinn 2023-06-30 02:23:48 UTC
Below are questions from a conversation with Brad earlier in gmeet around the trimming / logging backlog. I'm adding them here to get official responses to these topics.

1. The MDS is behind on trimming - are there any tunables or other settings to help speed up the rate at which trimming happens? (A sketch of possible commands follows this list.)
2. How do we track the current logging backlog? I believe we covered this in a previous discussion above - ceph health detail, per Brad's conversation. If there is another preferred method, please let us know.
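
A minimal sketch of how both could be checked from the CLI, assuming a recent Ceph release (option and counter names are taken from upstream Ceph and should be verified against this version):

~~~
# 1. Trimming rate: the MDS trims its journal down to
#    mds_log_max_segments segments; inspect the current value
#    (option name from upstream Ceph -- verify for this release).
ceph config get mds mds_log_max_segments

# 2. Logging backlog: health detail surfaces behind-on-trimming
#    warnings, and the mds_log perf counters expose the journal
#    positions directly.
ceph health detail
ceph tell mds.<id> perf dump mds_log
~~~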

Thanks, 
Joe

Comment 67 Manny 2023-06-30 23:54:01 UTC
New Perf Dump uploaded to SS

~~~
{
    "magic": "ceph fs volume v011",
    "write_pos": 214678395331440,
    "expire_pos": 214677827382519,
    "trimmed_pos": 214677824995328,
    "stream_format": 1,
    "layout": {
        "stripe_unit": 4194304,
        "stripe_count": 1,
        "object_size": 4194304,
        "pool_id": 9,
        "pool_ns": ""
    }
}
~~~
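
For reference, assuming the usual CephFS journal semantics (write_pos is the head, expire_pos trails it, and trimmed_pos trails expire_pos), the backlog can be read straight off the header above:

~~~
# write_pos - expire_pos: journal data written but not yet expired (~542 MiB)
echo $((214678395331440 - 214677827382519))   # 567948921 bytes

# expire_pos - trimmed_pos: expired but not yet trimmed (~2.3 MiB)
echo $((214677827382519 - 214677824995328))   # 2387191 bytes
~~~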

BR
Manny