+++ This bug was initially created as a clone of Bug #2161478 +++

Description of problem:

After commit 28227872295ae657a0d26a3f004c54c41794db18 ("mds: automatically fragment stray dirs"), stray dirs are allowed to be fragmented. But scan_stray_dir doesn't walk through all fragments of the stray inodes correctly: it doesn't reset next.frag after finishing each stray dir inode. Therefore, the scan of the next stray inode starts from the last fragment of the previous stray inode, and the preceding fragments are all skipped.

Version-Release number of selected component (if applicable):
5.2

How reproducible:
Always

Steps to Reproduce:
1. Delete a huge number of files in CephFS to fill up the stray directories.
2. Restart the MDS while the inodes are still in stray.

Actual results:
Not all stray inodes get evaluated.

Expected results:
The MDS should correctly evaluate all stray directory inodes on restart.
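To make the cursor bug concrete, here is a minimal C++ sketch of the traversal pattern. The StrayInode and Cursor types are hypothetical stand-ins for the MDS's stray inodes and the dirfrag_t resume cursor; this is not the actual MDCache::scan_stray_dir code, just an illustration of the missing reset.

// Minimal sketch of the scan_stray_dir traversal bug (not actual Ceph source).
// "StrayInode" and "Cursor" are hypothetical stand-ins for the MDS's stray
// inodes and the dirfrag_t resume cursor described in the bug.
#include <cstdio>
#include <vector>

struct Cursor {
  int ino = 0;   // which stray inode to resume from
  int frag = 0;  // which fragment within that inode to resume from
};

struct StrayInode {
  int ino;
  int nfrags;  // fragments this stray dir has been split into
};

// Walks every fragment of every stray inode, resuming at `next`.
void scan_strays(const std::vector<StrayInode>& strays, Cursor next) {
  for (const auto& stray : strays) {
    if (stray.ino < next.ino)
      continue;  // already scanned on a previous pass
    for (int f = 0; f < stray.nfrags; ++f) {
      if (f < next.frag)
        continue;  // already scanned
      std::printf("scanning stray %d frag %d\n", stray.ino, f);
    }
    // The fix: without this reset, the next stray inode inherits the
    // previous inode's fragment offset, so its earlier fragments are
    // silently skipped.
    next.frag = 0;
  }
}

int main() {
  scan_strays({{1, 3}, {2, 3}}, {1, 2});  // resume at stray 1, frag 2
}

With the reset, resuming at stray 1 frag 2 still scans all three fragments of stray 2; without it, stray 2's fragments 0 and 1 are skipped, matching the symptom described above.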
The fix, https://github.com/ceph/ceph/pull/49670, is included in v17.2.6.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3623