Bug 2091491

Summary: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Xiubo Li <xiubli>
Component: CephFS
Assignee: Xiubo Li <xiubli>
Status: CLOSED ERRATA
QA Contact: Yogesh Mane <ymane>
Severity: high
Docs Contact: Eliska <ekristov>
Priority: unspecified    
Version: 5.1
CC: ceph-eng-bugs, ekristov, tserlin, vereddy
Target Milestone: ---   
Target Release: 6.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-17.2.3-1.el9cp
Doc Type: Bug Fix
Doc Text:
.A replica MDS no longer gets stuck when a client sends a `getattr` client request just after a file is created
Previously, if a client sent a `getattr` client request just after a file was created, the client would build a path of the form `#INODE-NUMBER` because the `CInode` was not linked yet. The replica MDS would keep retrying until the auth MDS flushed the `mdlog` and `C_MDS_openc_finish` and `link_primary_inode` were called, which could take up to 5 seconds. With this fix, if the replica MDS cannot find the `CInode` from the auth MDS, it manually triggers an `mdlog` flush instead of waiting for the periodic one.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-03-20 18:56:39 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2126050    

Description Xiubo Li 2022-05-30 06:06:38 UTC
Description of problem:

When a client sends a getattr client request for a file that has just been created, the replica MDS cannot find the ino from the auth MDS because the new `CInode` is not linked yet. The replica MDS gets stuck for 2 seconds and keeps retrying until the auth MDS flushes the mdlog, which can take up to 5 seconds.

Version-Release number of selected component (if applicable):

Red Hat Ceph Storage 5.1

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:

The replica MDS gets stuck for 2 seconds and keeps retrying to find the ino from the auth MDS until the mdlog is flushed and the new `CInode` is linked.

Expected results:

The replica MDS should trigger an mdlog flush on the auth MDS as soon as it fails to find the `CInode`, instead of waiting for the periodic flush.

Additional info:
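
For context, here is a minimal, self-contained C++ sketch of the control flow described in the Doc Text. This is not the actual Ceph MDS code; all names (AuthMDS, ReplicaMDS, flush_on_miss, find_ino) are hypothetical stand-ins chosen for illustration only. The old behavior keeps retrying until the auth MDS's periodic mdlog flush links the new CInode; the fixed behavior triggers the flush as soon as the lookup misses, so the next retry succeeds.

// Standalone sketch (NOT Ceph source): models the retry-vs-flush logic only.
#include <cstdint>
#include <iostream>
#include <optional>
#include <unordered_map>

using inodeno_t = std::uint64_t;

// Stands in for the auth MDS: a newly created inode is journaled in the
// mdlog but stays unlinked (invisible to lookups) until the log is flushed
// and the openc completion links it.
struct AuthMDS {
  std::unordered_map<inodeno_t, bool> inodes;  // ino -> linked?

  void create_inode(inodeno_t ino) { inodes[ino] = false; }  // journaled, not linked
  void flush_mdlog() {                                       // completion links the inode
    for (auto &entry : inodes)
      entry.second = true;
  }
  std::optional<inodeno_t> lookup(inodeno_t ino) const {
    auto it = inodes.find(ino);
    if (it != inodes.end() && it->second)
      return ino;
    return std::nullopt;  // CInode exists but is not linked yet
  }
};

// Stands in for the replica MDS resolving a getattr on a "#INODE-NUMBER" path.
struct ReplicaMDS {
  AuthMDS &auth;
  bool flush_on_miss;  // false = old behavior, true = fixed behavior

  // Returns the number of attempts needed, or -1 if the retry budget ran out
  // (old behavior: it keeps retrying until the periodic mdlog flush happens,
  // up to 5 seconds later).
  int find_ino(inodeno_t ino, int max_retries = 10) {
    for (int attempt = 1; attempt <= max_retries; ++attempt) {
      if (auth.lookup(ino))
        return attempt;
      if (flush_on_miss)
        auth.flush_mdlog();  // the fix: trigger the flush instead of waiting
    }
    return -1;
  }
};

int main() {
  {
    AuthMDS auth;
    auth.create_inode(0x10000000001ULL);
    ReplicaMDS replica{auth, /*flush_on_miss=*/false};
    std::cout << "old:   attempts = " << replica.find_ino(0x10000000001ULL)
              << " (-1: still stuck, waiting for the periodic flush)\n";
  }
  {
    AuthMDS auth;
    auth.create_inode(0x10000000001ULL);
    ReplicaMDS replica{auth, /*flush_on_miss=*/true};
    std::cout << "fixed: attempts = " << replica.find_ino(0x10000000001ULL) << "\n";
  }
  return 0;
}

Built with g++ -std=c++17, the first run returns -1 (still waiting for the periodic flush) while the second finds the inode on the second attempt, mirroring the before/after behavior described in the Doc Text.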

Comment 1 RHEL Program Management 2022-05-30 06:06:44 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 27 errata-xmlrpc 2023-03-20 18:56:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360