Bug 2196405 - mds: wait for unlink operation to finish
Summary: mds: wait for unlink operation to finish
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 6.1z1
Assignee: Xiubo Li
QA Contact: Amarnath
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2221020
 
Reported: 2023-05-09 02:12 UTC by Xiubo Li
Modified: 2023-08-03 16:45 UTC
CC List: 6 users

Fixed In Version: ceph-17.2.6-88.el9cp
Doc Type: Bug Fix
Doc Text:
.Link requests no longer fail with `-EXDEV`
Previously, when an _inode_ had more than one link and one of its dentries was unlinked, the inode was moved to a stray directory. If a link request arrived before the link merge/migrate finished, it failed with an `-EXDEV` error. In the non-multiple-link case, it was also possible for a client to pass an inode that was still being unlinked. As a result, some link requests failed outright. With this fix, the MDS waits for the link merge, migrate, or purge to finish, and link requests no longer fail with `-EXDEV`.
Clone Of:
Environment:
Last Closed: 2023-08-03 16:45:09 UTC
Embargoed:




Links
System                    ID              Last Updated
Ceph Project Bug Tracker  56695           2023-05-09 02:16:47 UTC
Red Hat Issue Tracker     RHCEPH-6628     2023-05-09 02:13:05 UTC
Red Hat Product Errata    RHBA-2023:4473  2023-08-03 16:45:54 UTC

Description Xiubo Li 2023-05-09 02:12:51 UTC
If an inode has more than one link, then after one of its dentries
is unlinked the inode will be moved to the stray directory. Before
the linkmerge/migrate finishes, if a link request comes in it will
fail with -EXDEV.

While in the non-multiple-link case, it's also possible that the
clients could pass an invalidated ino which is still being unlinked.
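
A minimal shell sketch of the kind of sequence that can hit this race (the mount point /mnt/cephfs and file names are hypothetical, and whether -EXDEV is actually returned depends on the timing of the MDS linkmerge/migrate):

# assumes a CephFS client mount at /mnt/cephfs (hypothetical path)
cd /mnt/cephfs
echo data > fileA
ln fileA fileB      # the inode now has two links
rm fileA            # unlinking the primary dentry can move the inode to a stray directory
ln fileB fileC      # before the fix, a link request sent while the linkmerge/migrate
                    # was still in progress could fail with -EXDEV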

Comment 1 RHEL Program Management 2023-05-09 02:16:02 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 7 Amarnath 2023-07-14 06:46:53 UTC
Hi All,


Steps Followed: 
1. Created a file fileA
2. Created 10000 hard links to it using ln in a loop
3. Deleted all of the created links in a loop
4. Not seeing any crash (a condensed sketch of these steps is included below, followed by the full transcript)
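
A condensed sketch of the steps above, assuming the CephFS mount at /mnt/amk_2196405 shown in the transcript:

cd /mnt/amk_2196405
echo "hard link test" > fileA                     # 1. create the test file
for i in {1..10000}; do ln fileA fileA_$i; done   # 2. create 10000 hard links
stat fileA                                        # Links: should now report 10001
for i in {1..10000}; do rm -f fileA_$i; done      # 3. delete all the created links
stat fileA                                        # Links: back to 1
ceph fs status                                    # 4. confirm the MDS daemons stayed healthy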

[root@ceph-hk6-i3ej66-node8 ~]# cd /mnt/amk_2196405/
[root@ceph-hk6-i3ej66-node8 amk_2196405]# 
[root@ceph-hk6-i3ej66-node8 amk_2196405]# 
[root@ceph-hk6-i3ej66-node8 amk_2196405]# ls -lrt
total 20480
-rwSrwSrw-. 1 root root 10485760 Jul 13 15:10 file1
-rw-rw-rw-. 1 root root 10485760 Jul 13 15:22 file
[root@ceph-hk6-i3ej66-node8 amk_2196405]# touch fileA
[root@ceph-hk6-i3ej66-node8 amk_2196405]# vi fileA
[root@ceph-hk6-i3ej66-node8 amk_2196405]# stat fileA
  File: fileA
  Size: 24        	Blocks: 1          IO Block: 4194304 regular file
Device: 31h/49d	Inode: 1099511630284  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-07-14 02:33:32.188126859 -0400
Modify: 2023-07-14 02:33:32.188793190 -0400
Change: 2023-07-14 02:33:32.196889494 -0400
 Birth: -
[root@ceph-hk6-i3ej66-node8 amk_2196405]# ceph fs status
cephfs - 3 clients
======
RANK  STATE                   MDS                     ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  cephfs.ceph-hk6-i3ej66-node4.zngiwp  Reqs:    0 /s    17     16     12      9   
 1    active  cephfs.ceph-hk6-i3ej66-node3.snjejd  Reqs:    0 /s    12     16     14      5   
       POOL           TYPE     USED  AVAIL  
cephfs.cephfs.meta  metadata   516k  56.7G  
cephfs.cephfs.data    data    57.0M  56.7G  
            STANDBY MDS              
cephfs.ceph-hk6-i3ej66-node6.mavujt  
cephfs.ceph-hk6-i3ej66-node7.mpmcmr  
cephfs.ceph-hk6-i3ej66-node5.lsfxkv  
MDS version: ceph version 17.2.6-96.el9cp (3c9b67d46bf428c8eb52f31dfd4c722a2e896cf7) quincy (stable)
[root@ceph-hk6-i3ej66-node8 amk_2196405]# for i in {1..10000};do echo $i;ln -v fileA fileA_$i; ls -lrt fileA_$i;echo "##########################################";done

##########################################
10000
'fileA_10000' => 'fileA'
-rw-r--r--. 10001 root root 24 Jul 14 02:33 fileA_10000
##########################################
[root@ceph-hk6-i3ej66-node8 amk_2196405]# stat fileA_10000
  File: fileA_10000
  Size: 24        	Blocks: 1          IO Block: 4194304 regular file
Device: 31h/49d	Inode: 1099511630284  Links: 10001
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-07-14 02:33:32.188126859 -0400
Modify: 2023-07-14 02:33:32.188793190 -0400
Change: 2023-07-14 02:36:17.725269206 -0400
 Birth: -
[root@ceph-hk6-i3ej66-node8 amk_2196405]# stat fileA
  File: fileA
  Size: 24        	Blocks: 1          IO Block: 4194304 regular file
Device: 31h/49d	Inode: 1099511630284  Links: 10001
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-07-14 02:33:32.188126859 -0400
Modify: 2023-07-14 02:33:32.188793190 -0400
Change: 2023-07-14 02:36:17.725269206 -0400
 Birth: -
[root@ceph-hk6-i3ej66-node8 amk_2196405]# for i in {1..10000};do echo $i;rm -rf fileA_$i; stat fileA;echo "##########################################";done

##########################################
10000
  File: fileA
  Size: 24        	Blocks: 1          IO Block: 4194304 regular file
Device: 31h/49d	Inode: 1099511630284  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-07-14 02:33:32.188126859 -0400
Modify: 2023-07-14 02:33:32.188793190 -0400
Change: 2023-07-14 02:40:03.836465538 -0400
 Birth: -
##########################################
[root@ceph-hk6-i3ej66-node8 amk_2196405]# stat fileA
  File: fileA
  Size: 24        	Blocks: 1          IO Block: 4194304 regular file
Device: 31h/49d	Inode: 1099511630284  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-07-14 02:33:32.188126859 -0400
Modify: 2023-07-14 02:33:32.188793190 -0400
Change: 2023-07-14 02:40:03.836465538 -0400
 Birth: -
[root@ceph-hk6-i3ej66-node8 amk_2196405]# ceph versions
{
    "mon": {
        "ceph version 17.2.6-96.el9cp (3c9b67d46bf428c8eb52f31dfd4c722a2e896cf7) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.6-96.el9cp (3c9b67d46bf428c8eb52f31dfd4c722a2e896cf7) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.6-96.el9cp (3c9b67d46bf428c8eb52f31dfd4c722a2e896cf7) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.6-96.el9cp (3c9b67d46bf428c8eb52f31dfd4c722a2e896cf7) quincy (stable)": 5
    },
    "overall": {
        "ceph version 17.2.6-96.el9cp (3c9b67d46bf428c8eb52f31dfd4c722a2e896cf7) quincy (stable)": 22
    }
}

Regards,
Amarnath

Comment 9 errata-xmlrpc 2023-08-03 16:45:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:4473

