If an inode has more than one link, then after one of its dentries is unlinked the dentry is moved to the stray directory. If a link request arrives before the linkmerge/migrate finishes, it will fail with -EXDEV. In the single-link case it is also possible for a client to pass an invalid ino, one that is still in the middle of being unlinked.
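To make the race concrete, here is a minimal hypothetical reproducer sketch; the mount point and file names below are assumptions, not taken from this report:

# Hypothetical reproducer sketch -- /mnt/cephfs and the file names are assumptions.
# With more than one link, unlinking a dentry moves it to the MDS stray directory;
# a link request that arrives before the linkmerge/migrate finishes may fail with -EXDEV.
cd /mnt/cephfs
touch fileA
ln fileA fileB     # the inode now has 2 links
rm -f fileB        # the unlinked dentry is moved to the stray directory
ln fileA fileC || echo "link failed: possibly -EXDEV while linkmerge/migrate is pending"

In practice the window is short, so hitting it reliably would likely require running the rm/ln pair in a tight loop or under load.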
Hi All,

Steps Followed:
1. Created a file fileA
2. Created 10000 hard links for it using ln in a loop
3. Deleted all the created files in a loop
4. Not seeing any crash

[root@ceph-hk6-i3ej66-node8 ~]# cd /mnt/amk_2196405/
[root@ceph-hk6-i3ej66-node8 amk_2196405]# ls -lrt
total 20480
-rwSrwSrw-. 1 root root 10485760 Jul 13 15:10 file1
-rw-rw-rw-. 1 root root 10485760 Jul 13 15:22 file
[root@ceph-hk6-i3ej66-node8 amk_2196405]# touch fileA
[root@ceph-hk6-i3ej66-node8 amk_2196405]# vi fileA
[root@ceph-hk6-i3ej66-node8 amk_2196405]# stat fileA
  File: fileA
  Size: 24        Blocks: 1          IO Block: 4194304 regular file
Device: 31h/49d   Inode: 1099511630284  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-07-14 02:33:32.188126859 -0400
Modify: 2023-07-14 02:33:32.188793190 -0400
Change: 2023-07-14 02:33:32.196889494 -0400
 Birth: -
[root@ceph-hk6-i3ej66-node8 amk_2196405]# ceph fs status
cephfs - 3 clients
======
RANK  STATE                  MDS                    ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  cephfs.ceph-hk6-i3ej66-node4.zngiwp  Reqs:    0 /s    17     16     12      9
 1    active  cephfs.ceph-hk6-i3ej66-node3.snjejd  Reqs:    0 /s    12     16     14      5
        POOL           TYPE     USED  AVAIL
cephfs.cephfs.meta  metadata   516k  56.7G
cephfs.cephfs.data    data    57.0M  56.7G
          STANDBY MDS
cephfs.ceph-hk6-i3ej66-node6.mavujt
cephfs.ceph-hk6-i3ej66-node7.mpmcmr
cephfs.ceph-hk6-i3ej66-node5.lsfxkv
MDS version: ceph version 17.2.6-96.el9cp (3c9b67d46bf428c8eb52f31dfd4c722a2e896cf7) quincy (stable)
[root@ceph-hk6-i3ej66-node8 amk_2196405]# for i in {1..10000};do echo $i;ln -v fileA fileA_$i; ls -lrt fileA_$i;echo "##########################################";done
##########################################
10000
'fileA_10000' => 'fileA'
-rw-r--r--. 10001 root root 24 Jul 14 02:33 fileA_10000
##########################################
[root@ceph-hk6-i3ej66-node8 amk_2196405]# stat fileA_10000
  File: fileA_10000
  Size: 24        Blocks: 1          IO Block: 4194304 regular file
Device: 31h/49d   Inode: 1099511630284  Links: 10001
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-07-14 02:33:32.188126859 -0400
Modify: 2023-07-14 02:33:32.188793190 -0400
Change: 2023-07-14 02:36:17.725269206 -0400
 Birth: -
[root@ceph-hk6-i3ej66-node8 amk_2196405]# stat fileA
  File: fileA
  Size: 24        Blocks: 1          IO Block: 4194304 regular file
Device: 31h/49d   Inode: 1099511630284  Links: 10001
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-07-14 02:33:32.188126859 -0400
Modify: 2023-07-14 02:33:32.188793190 -0400
Change: 2023-07-14 02:36:17.725269206 -0400
 Birth: -
[root@ceph-hk6-i3ej66-node8 amk_2196405]# for i in {1..10000};do echo $i;rm -rf fileA_$i; stat fileA;echo "##########################################";done
##########################################
10000
  File: fileA
  Size: 24        Blocks: 1          IO Block: 4194304 regular file
Device: 31h/49d   Inode: 1099511630284  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-07-14 02:33:32.188126859 -0400
Modify: 2023-07-14 02:33:32.188793190 -0400
Change: 2023-07-14 02:40:03.836465538 -0400
 Birth: -
##########################################
[root@ceph-hk6-i3ej66-node8 amk_2196405]# stat fileA
  File: fileA
  Size: 24        Blocks: 1          IO Block: 4194304 regular file
Device: 31h/49d   Inode: 1099511630284  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-07-14 02:33:32.188126859 -0400
Modify: 2023-07-14 02:33:32.188793190 -0400
Change: 2023-07-14 02:40:03.836465538 -0400
 Birth: -
[root@ceph-hk6-i3ej66-node8 amk_2196405]# ceph versions
{
    "mon": {
        "ceph version 17.2.6-96.el9cp (3c9b67d46bf428c8eb52f31dfd4c722a2e896cf7) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.6-96.el9cp (3c9b67d46bf428c8eb52f31dfd4c722a2e896cf7) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.6-96.el9cp (3c9b67d46bf428c8eb52f31dfd4c722a2e896cf7) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.6-96.el9cp (3c9b67d46bf428c8eb52f31dfd4c722a2e896cf7) quincy (stable)": 5
    },
    "overall": {
        "ceph version 17.2.6-96.el9cp (3c9b67d46bf428c8eb52f31dfd4c722a2e896cf7) quincy (stable)": 22
    }
}

Regards,
Amarnath
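For anyone re-running this verification, the steps above condense to the following sketch (the mount point and file names are taken from the transcript; the stat format string is my own shorthand for checking the link count):

# Condensed form of the verification steps above.
cd /mnt/amk_2196405
touch fileA
for i in {1..10000}; do ln fileA "fileA_$i"; done
stat -c 'Links: %h' fileA          # expect Links: 10001
for i in {1..10000}; do rm -f "fileA_$i"; done
stat -c 'Links: %h' fileA          # expect Links: 1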
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.1 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:4473