Bug 2161478 - MDS: scan_stray_dir doesn't walk through all stray inode fragment
Summary: MDS: scan_stray_dir doesn't walk through all stray inode fragment
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.3z1
Assignee: Venky Shankar
QA Contact: Hemanth Kumar
Blocks: 2161479
 
Reported: 2023-01-17 04:34 UTC by Venky Shankar
Modified: 2023-02-28 10:07 UTC
CC List: 5 users

Fixed In Version: ceph-16.2.10-105.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2161479 (view as bug list)
Environment:
Last Closed: 2023-02-28 10:06:24 UTC
Embargoed:




Links:
- Ceph Project Bug Tracker 58294 (2023-01-17 04:34:41 UTC)
- Red Hat Issue Tracker RHCEPH-5939 (2023-01-17 04:36:42 UTC)
- Red Hat Product Errata RHSA-2023:0980 (2023-02-28 10:07:26 UTC)

Description Venky Shankar 2023-01-17 04:34:42 UTC
Description of problem:

After commit 28227872295ae657a0d26a3f004c54c41794db18 (mds: automatically fragment stray dirs),
stray dirs are allowed to fragment.

However, scan_stray_dir does not walk through all fragments of the stray inodes correctly:
it fails to reset next.frag after processing each stray directory inode.
As a result, the scan of the next stray inode starts from the last fragment of the previous stray inode, and all of its preceding fragments are skipped.


Version-Release number of selected component (if applicable):
5.2

How reproducible:
Always

Steps to Reproduce:
Delete a huge number of files in CephFS to fill up the stray directories, then restart the MDS while the inodes are still in stray.

Actual results:
Not all stray inodes are evaluated after the MDS restarts.

Expected results:
The MDS should correctly evaluate all stray directory inodes on restart.

Comment 8 errata-xmlrpc 2023-02-28 10:06:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 5.3 Bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0980

