Bug 2129968

Summary: mds only stores damage for up to one dentry per dirfrag
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Patrick Donnelly <pdonnell>
Component: CephFS    Assignee: Patrick Donnelly <pdonnell>
Status: CLOSED ERRATA QA Contact: Amarnath <amk>
Severity: medium Docs Contact: Masauso Lungu <mlungu>
Priority: medium    
Version: 5.1    CC: ceph-eng-bugs, cephqe-warriors, hyelloji, mlungu, pasik, vereddy, vshankar
Target Milestone: ---   
Target Release: 6.0   
Hardware: All   
OS: All   
Whiteboard:
Fixed In Version: ceph-17.2.3-46.el9cp Doc Type: Bug Fix
Doc Text:
.MDS now stores all damaged dentries
Previously, the Metadata Server (MDS) stored dentry damage for a `dirfrag` only if no dentry damage was already recorded for that `dirfrag`. As a result, only the first damaged dentry was stored in the damage table and any subsequent damage in the same `dirfrag` was forgotten. With this fix, the MDS properly stores all damaged dentries.
Story Points: ---
Clone Of: Environment:
Last Closed: 2023-03-20 18:58:17 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2126050    

Description Patrick Donnelly 2022-09-26 19:42:12 UTC
Description of problem:

See: https://tracker.ceph.com/issues/57249
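
A minimal sketch of how the fixed behavior can be checked from the command line, assuming the filesystem is named cephfs and rank 0 is its active MDS (both names are taken from the QA transcript below, not from the tracker issue):

# List the MDS damage table; with the fix, each damaged dentry should appear
# as its own entry. Before the fix, only the first damaged dentry per dirfrag
# was recorded here.
ceph tell mds.cephfs:0 damage ls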

Comment 12 Amarnath 2022-11-03 18:02:08 UTC
Hi Patrick,

I tried the above steps, but I am not able to list the omap keys.

I created two directories, test and test_2, and got their inode numbers, but the omap key listing does not come up. I think I am missing something; can you help me with this?


[root@ceph-amk-snap-7jmafh-node7 cephfs_fuse]# stat test
  File: test
  Size: 8714966   	Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d	Inode: 1099511627781  Links: 3
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2022-11-03 13:41:24.916110015 -0400
Modify: 2022-11-03 13:41:56.290478216 -0400
Change: 2022-11-03 13:41:56.290478216 -0400
 Birth: -
[root@ceph-amk-snap-7jmafh-node7 cephfs_fuse]# stat test_2
  File: test_2
  Size: 0         	Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d	Inode: 1099511627785  Links: 3
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2022-11-03 13:56:05.319185051 -0400
Modify: 2022-11-03 13:56:13.675764568 -0400
Change: 2022-11-03 13:56:13.675764568 -0400
 Birth: -

[root@ceph-amk-snap-7jmafh-node7 cephfs_fuse]# rados --pool cephfs.cephfs.meta listomapkeys 1099511627785.00000000
error getting omap key set cephfs.cephfs.meta/1099511627785.00000000: (2) No such file or directory
[root@ceph-amk-snap-7jmafh-node7 cephfs_fuse]# rados --pool cephfs.cephfs.meta listomapkeys 1099511627781.00000000
error getting omap key set cephfs.cephfs.meta/1099511627781.00000000: (2) No such file or directory
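
(The metadata pool stores dirfrag objects under the hexadecimal inode number, not the decimal value printed by stat; a minimal sketch of the conversion, reusing the pool name and inode from the transcript above:)

INO_DEC=1099511627781                          # decimal inode reported by stat
OBJ="$(printf '%x' "$INO_DEC").00000000"       # dirfrag objects are named <hex inode>.<frag>
rados --pool cephfs.cephfs.meta listomapkeys "$OBJ"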

Regards,
Amarnath

Comment 14 Amarnath 2022-11-07 19:13:51 UTC
Hi Patrick,

I tried the hex value as you suggested, but it still fails with the same error:
[root@ceph-amk-snap-7jmafh-node7 74086dfb-850b-427a-8c80-1f28e8e9f5a4]# ls -lrt
total 1
drwxr-xr-x. 3 root root 8714966 Nov  7 01:37 test
drwxr-xr-x. 3 root root 8714966 Nov  7 01:37 test_2
[root@ceph-amk-snap-7jmafh-node7 74086dfb-850b-427a-8c80-1f28e8e9f5a4]# stat test
  File: test
  Size: 8714966   	Blocks: 1          IO Block: 4096   directory
Device: 2ah/42d	Inode: 2199023255554  Links: 3
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2022-11-07 01:37:02.884768091 -0500
Modify: 2022-11-07 01:37:02.896205212 -0500
Change: 2022-11-07 01:37:02.896205212 -0500
 Birth: -
[root@ceph-amk-snap-7jmafh-node7 74086dfb-850b-427a-8c80-1f28e8e9f5a4]# INODE=$(printf %x 2199023255554).00000000

[root@ceph-amk-snap-7jmafh-node7 74086dfb-850b-427a-8c80-1f28e8e9f5a4]# echo $INODE
20000000002.00000000
[root@ceph-amk-snap-7jmafh-node7 74086dfb-850b-427a-8c80-1f28e8e9f5a4]# rados --pool=cephfs.cephfs.meta listomapkeys $INODE
error getting omap key set cephfs.cephfs.meta/20000000002.00000000: (2) No such file or directory
[root@ceph-amk-snap-7jmafh-node7 74086dfb-850b-427a-8c80-1f28e8e9f5a4]# ceph fs ls
name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ]
[root@ceph-amk-snap-7jmafh-node7 74086dfb-850b-427a-8c80-1f28e8e9f5a4]# 
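
(If the hexadecimal object name is correct but rados still reports ENOENT, a likely cause is that the newly created directory's dirfrag has not yet been committed to the metadata pool; a minimal sketch, assuming rank 0 of the cephfs filesystem is the active MDS:)

ceph tell mds.cephfs:0 flush journal           # persist journaled metadata so the dirfrag object exists
rados --pool cephfs.cephfs.meta listomapkeys 20000000002.00000000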

Sorry for bothering you so much.

Regards,
Amarnath

Comment 18 Amarnath 2022-11-16 04:44:36 UTC
Thanks, Patrick.

Moving the bug to the Verified state.

Comment 28 errata-xmlrpc 2023-03-20 18:58:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360