Bug 1794781

Summary: MDS scrub fails on all files with errors such as "Scrub error on inode 0x2000001b1da (file-path) see mds.log and `damage ls` output for details"
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Hemanth Kumar <hyelloji>
Component: CephFS
Assignee: Milind Changire <mchangir>
Status: CLOSED ERRATA
QA Contact: Hemanth Kumar <hyelloji>
Severity: high
Docs Contact: Amrita <asakthiv>
Priority: high
Version: 5.0
CC: asakthiv, assingh, ceph-eng-bugs, linuxkidd, mchangir, mhackett, pdonnell, rmandyam, sweil, tserlin, vereddy
Target Milestone: ---
Keywords: Reopened
Target Release: 5.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-16.2.0-59.el8cp
Doc Type: Known Issue
Doc Text:
.Backtrace now works as expected for CephFS scrub operations
Previously, scrub activity reported a failure when an inode's backtrace had not yet been written to stable storage and therefore did not match the in-memory copy for a new, unsynced entry. The same mismatch occurred for a stray entry that was about to be purged permanently, since there is no need to save its backtrace to disk. Under heavy metadata I/O, the raw stats could also fail to match, because raw stats accounting is not instantaneous. To work around this issue, rerun the scrub when the system is idle and has had enough time to flush its in-memory state to disk; once the metadata has been flushed, these errors are resolved. With the fix, backtrace validation succeeds when no backtrace is found on disk and the file is new, or when the entry is stray and about to be purged. See the KCS article link:https://access.redhat.com/solutions/6123271[_Ceph status shows HEALTH_ERR with MDSs report damaged metadata_] for more details. (A command-level sketch of the workaround follows the attachments list below.)
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-08-30 08:23:40 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1907706, 1959686
Attachments: mgr logs (flags: none)
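
For reference, a minimal command-level sketch of the workaround described in the Doc Text above, assuming a file system named `cephfs` with its active MDS at rank 0 (the file system name, the placeholder <id>, and the exact syntax are assumptions and may vary between releases):

  # Optionally flush in-memory metadata first (run on the active MDS host;
  # <id> is the daemon name shown by `ceph fs status`).
  ceph daemon mds.<id> flush journal

  # Rerun the scrub while the cluster is idle.
  ceph tell mds.cephfs:0 scrub start / recursive

  # Check scrub progress and any remaining damage entries.
  ceph tell mds.cephfs:0 scrub status
  ceph tell mds.cephfs:0 damage ls

If the scrub errors persist after the metadata has been flushed, the KCS article linked in the Doc Text covers further diagnostic and repair steps.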

Comment 9 Hemanth Kumar 2021-04-26 11:33:14 UTC
@Patrick, any update on this?

Comment 29 Hemanth Kumar 2021-06-02 16:35:21 UTC
Created attachment 1788734 [details]
mgr logs

Hi Milind, please ignore the previous log; I uploaded a different log by mistake.

Comment 52 Michael J. Kidd 2021-06-16 15:14:53 UTC
I've created a KCS with the relevant diagnostic and repair steps:
https://access.redhat.com/solutions/6123271

Please let me know if there are any changes needed.

Comment 53 Michael J. Kidd 2021-06-16 15:20:55 UTC
--clearing needinfo state

Comment 54 Patrick Donnelly 2021-06-16 16:13:35 UTC
(In reply to Michael J. Kidd from comment #52)
> I've created a KCS with the relevant diagnostic and repair steps:
> https://access.redhat.com/solutions/6123271
> 
> Please let me know if there are any changes needed.

Looks good! Thanks Michael.

Comment 56 errata-xmlrpc 2021-08-30 08:23:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294