Bug 2129968 - mds only stores damage for up to one dentry per dirfrag
Summary: mds only stores damage for up to one dentry per dirfrag
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.1
Hardware: All
OS: All
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 6.0
Assignee: Patrick Donnelly
QA Contact: Amarnath
Docs Contact: Masauso Lungu
URL:
Whiteboard:
Depends On:
Blocks: 2126050
 
Reported: 2022-09-26 19:42 UTC by Patrick Donnelly
Modified: 2023-03-20 18:58 UTC (History)
CC List: 7 users

Fixed In Version: ceph-17.2.3-46.el9cp
Doc Type: Bug Fix
Doc Text:
.MDS now stores all damaged dentries
Previously, the Metadata Server (MDS) stored dentry damage for a `dirfrag` only if no dentry damage was already recorded for that `dirfrag`. As a result, only the first damaged dentry was stored in the damage table and subsequent damage in the same `dirfrag` was forgotten. With this fix, the MDS properly stores all damaged dentries.
Clone Of:
Environment:
Last Closed: 2023-03-20 18:58:17 UTC
Embargoed:




Links:
System                     ID              Last Updated
Ceph Project Bug Tracker   57670           2022-09-26 19:42:11 UTC
Red Hat Issue Tracker      RHCEPH-5352     2022-09-26 19:49:38 UTC
Red Hat Product Errata     RHBA-2023:1360  2023-03-20 18:58:54 UTC

Description Patrick Donnelly 2022-09-26 19:42:12 UTC
Description of problem:

See: https://tracker.ceph.com/issues/57249
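For context, the MDS damage table that this bug affects can be listed with the commands below. This is a minimal sketch, assuming a file system named cephfs served by rank 0; the fs name, rank, and <damage_id> are placeholders. With the fix, every damaged dentry in a dirfrag should appear as its own entry.

# list the MDS damage table; each damaged dentry should show up as a separate entry
ceph tell mds.cephfs:0 damage ls
# an individual entry can be removed by its id once the damage has been repaired
ceph tell mds.cephfs:0 damage rm <damage_id>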

Comment 12 Amarnath 2022-11-03 18:02:08 UTC
Hi Patrick,

I tried the above steps, but I am not able to list the omap keys.

I created two directories, test and test_2, and got their inode values, but the omap key listing fails.
I think I am missing something; can you help me with this?


[root@ceph-amk-snap-7jmafh-node7 cephfs_fuse]# stat test
  File: test
  Size: 8714966   	Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d	Inode: 1099511627781  Links: 3
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2022-11-03 13:41:24.916110015 -0400
Modify: 2022-11-03 13:41:56.290478216 -0400
Change: 2022-11-03 13:41:56.290478216 -0400
 Birth: -
[root@ceph-amk-snap-7jmafh-node7 cephfs_fuse]# stat test_2
  File: test_2
  Size: 0         	Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d	Inode: 1099511627785  Links: 3
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2022-11-03 13:56:05.319185051 -0400
Modify: 2022-11-03 13:56:13.675764568 -0400
Change: 2022-11-03 13:56:13.675764568 -0400
 Birth: -

[root@ceph-amk-snap-7jmafh-node7 cephfs_fuse]# rados --pool cephfs.cephfs.meta listomapkeys 1099511627785.00000000
error getting omap key set cephfs.cephfs.meta/1099511627785.00000000: (2) No such file or directory
[root@ceph-amk-snap-7jmafh-node7 cephfs_fuse]# rados --pool cephfs.cephfs.meta listomapkeys 1099511627781.00000000
error getting omap key set cephfs.cephfs.meta/1099511627781.00000000: (2) No such file or directory

Regards,
Amarnath
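(A note on the errors above, confirmed by the hex conversion in the next comment: objects in the metadata pool are named by the directory's inode number in hexadecimal plus the fragment id, so the decimal inode printed by stat must be converted first. A minimal sketch, using the inode of the test directory from the stat output above:)

# metadata-pool dirfrag objects are named <hex inode>.<frag>, not decimal
printf '%x.00000000\n' 1099511627781    # prints 10000000005.00000000
rados --pool cephfs.cephfs.meta listomapkeys 10000000005.00000000
# note: as the later comments show, the object only exists once the MDS has written the directory back to the pool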

Comment 14 Amarnath 2022-11-07 19:13:51 UTC
Hi Patrick, 

I tried the hex value as you suggested, but it still fails with the same error.
[root@ceph-amk-snap-7jmafh-node7 74086dfb-850b-427a-8c80-1f28e8e9f5a4]# ls -lrt
total 1
drwxr-xr-x. 3 root root 8714966 Nov  7 01:37 test
drwxr-xr-x. 3 root root 8714966 Nov  7 01:37 test_2
[root@ceph-amk-snap-7jmafh-node7 74086dfb-850b-427a-8c80-1f28e8e9f5a4]# stat test
  File: test
  Size: 8714966   	Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d	Inode: 2199023255554  Links: 3
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2022-11-07 01:37:02.884768091 -0500
Modify: 2022-11-07 01:37:02.896205212 -0500
Change: 2022-11-07 01:37:02.896205212 -0500
 Birth: -
[root@ceph-amk-snap-7jmafh-node7 74086dfb-850b-427a-8c80-1f28e8e9f5a4]# INODE=$(printf %x 2199023255554).00000000

[root@ceph-amk-snap-7jmafh-node7 74086dfb-850b-427a-8c80-1f28e8e9f5a4]# echo $INODE
20000000002.00000000
[root@ceph-amk-snap-7jmafh-node7 74086dfb-850b-427a-8c80-1f28e8e9f5a4]# rados --pool=cephfs.cephfs.meta listomapkeys $INODE
error getting omap key set cephfs.cephfs.meta/20000000002.00000000: (2) No such file or directory
[root@ceph-amk-snap-7jmafh-node7 74086dfb-850b-427a-8c80-1f28e8e9f5a4]# ceph fs ls
name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ]
[root@ceph-amk-snap-7jmafh-node7 74086dfb-850b-427a-8c80-1f28e8e9f5a4]# 

Sorry for bothering you so much.

Regards,
Amarnath
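(The follow-up comments are not visible here, but one likely cause of the ENOENT above is that the newly created directories had not yet been flushed from the MDS journal to the metadata pool, so the dirfrag object did not exist yet. A sketch of the kind of step that would make it appear; the file system name and rank are placeholders:)

# flush the MDS journal so in-memory dirfrags are written back to the metadata pool
ceph tell mds.cephfs:0 flush journal
# the dirfrag object should now exist; its omap keys are the dentries
rados --pool=cephfs.cephfs.meta listomapkeys 20000000002.00000000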

Comment 18 Amarnath 2022-11-16 04:44:36 UTC
Thanks, Patrick.

Moving the bug to the Verified state.
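(The exact verification steps are not shown in the visible comments. A rough sketch of the kind of check the fix implies, with placeholder object and key names, is to damage several dentries in one dirfrag and confirm that each one gets its own entry in the damage table:)

# remove two dentry omap keys from the same dirfrag object to inject damage
rados --pool=cephfs.cephfs.meta rmomapkey 20000000002.00000000 file1_head
rados --pool=cephfs.cephfs.meta rmomapkey 20000000002.00000000 file2_head
# drop the MDS cache and scrub the directory, then expect one damage entry per removed dentry
ceph tell mds.cephfs:0 cache drop
ceph tell mds.cephfs:0 scrub start /test recursive,force
ceph tell mds.cephfs:0 damage ls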

Comment 28 errata-xmlrpc 2023-03-20 18:58:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360

