Description of problem: A file with a 256-byte filename is not accessible after fscrypt unlock. The file was accessible after creation and, while locked, was listed with its encrypted filename, but after fscrypt unlock the same file is no longer accessible.

Logs:
[root@ceph-sumar-fscrypt-az0v8f-node6 fuse_sv1]# date
Fri Apr 25 09:45:54 UTC 2025
[root@ceph-sumar-fscrypt-az0v8f-node6 fuse_sv1]# ls -l testdir1/
ls: cannot access 'testdir1/???': No such file or directory
total 1583333
-?????????? ? ? ? ? ? '???'
[root@ceph-sumar-fscrypt-az0v8f-node6 fuse_sv1]# date
Fri Apr 25 09:51:48 UTC 2025
[root@ceph-sumar-fscrypt-az0v8f-node6 fuse_sv1]# ls -l testdir1/
ls: cannot access 'testdir1/???': No such file or directory
total 1583333
-?????????? ? ? ? ? ? '???'

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Set up fscrypt on a test directory in a ceph-fuse mount of a subvolume.
2. Create a file with a filename size of 256 bytes.
3. Perform fscrypt lock and verify the encryption of the filename from step 2.
   Observation: the filename in encrypted format could be listed and its size was 249 bytes.
4. Perform fscrypt unlock and verify the filename encryption is removed and the file is accessible.

Actual results:
The file is not accessible, and file stats such as size and timestamp are not printed.

Expected results:
The file should be accessible after fscrypt unlock.

Additional info:
Detailed logs for the steps followed are below, followed by a scripted sketch of the reproduction. Note that this issue was not seen with filename sizes up to 255 bytes. Logs for this test are at https://docs.google.com/document/d/1RIPmYgoqAfmsvUEjhNOziYUeITstbAKqY815fcFUhrw/edit?tab=t.0

Set filename size to 256 bytes
------------------------------
[root@ceph-sumar-fscrypt-az0v8f-node6 fuse_sv1]# mv testdir1/abcdefghijklmnopqrstuvwxyz01234567890_.@com+abcdefghijklmnopqrstuvwxyz01234567890_.@com#%-xw_abcdefghijklmnopqrstuvwxyz01234567890_.@com+abcdefghijklmnopqrstuvwxyz01234567890_.@com#%-xwaaaaaaaaaaaaaaaaaaaaaaaaa01234567890_.@com#%-xwaaaaaaaaaaaaaa testdir1/abcdefghijklmnopqrstuvwxyz01234567890_.@com+abcdefghijklmnopqrstuvwxyz01234567890_.@com#%-xw_abcdefghijklmnopqrstuvwxyz01234567890_.@com+abcdefghijklmnopqrstuvwxyz01234567890_.@com#%-xwaaaaaaaaaaaaaaaaaaaaaaaaa01234567890_.@com#%-xwaaaaaaaaaaaaaaa
[root@ceph-sumar-fscrypt-az0v8f-node6 fuse_sv1]# echo -n testdir1/abcdefghijklmnopqrstuvwxyz01234567890_.@com+abcdefghijklmnopqrstuvwxyz01234567890_.@com#%-xw_abcdefghijklmnopqrstuvwxyz01234567890_.@com+abcdefghijklmnopqrstuvwxyz01234567890_.@com#%-xwaaaaaaaaaaaaaaaaaaaaaaaaa01234567890_.@com#%-xwaaaaaaaaaaaaaaa | wc -c
256

Perform fscrypt lock and verify encryption of filename
------------------------------------------------------
[root@ceph-sumar-fscrypt-az0v8f-node6 fuse_sv1]# fscrypt lock testdir1/
"testdir1/" is now locked.
[root@ceph-sumar-fscrypt-az0v8f-node6 fuse_sv1]# ls -l testdir1/
-rw-r--r--. 1 root root 9 Apr 25 09:21 i7mz,1d9qlRiyqZg1tE5Sf99LyLJfIYcgZ34dQU2dTz1MTMrHd7CZy4zVhkCu7BnMkKkl4DeYqXRoB75sl9MVQ2ns1yBtzqsGYtW7Zqztned3e70VBgeVOU5IN01w6g1Z2g9iv5E1VGv8e7HQH7d7Ggd1TasJzhXnsMXEVgJP9FHPBRAXW5ezKW,R,phlACaBSsTckkmTmzMlW9xqy,lvkqEDNLHZfUiyWjXVsSxFh6F6tXf
[root@ceph-sumar-fscrypt-az0v8f-node6 fuse_sv1]# echo -n testdir1/i7mz,1d9qlRiyqZg1tE5Sf99LyLJfIYcgZ34dQU2dTz1MTMrHd7CZy4zVhkCu7BnMkKkl4DeYqXRoB75sl9MVQ2ns1yBtzqsGYtW7Zqztned3e70VBgeVOU5IN01w6g1Z2g9iv5E1VGv8e7HQH7d7Ggd1TasJzhXnsMXEVgJP9FHPBRAXW5ezKW,R,phlACaBSsTckkmTmzMlW9xqy,lvkqEDNLHZfUiyWjXVsSxFh6F6tXf | wc -c
249
[root@ceph-sumar-fscrypt-az0v8f-node6 fuse_sv1]# fscrypt unlock testdir1/
Enter key file for protector "cephfs_key": secret.key
"testdir1/" is now unlocked and ready for use.
[root@ceph-sumar-fscrypt-az0v8f-node6 fuse_sv1]# ls -l testdir1/
ls: cannot access 'testdir1/???': No such file or directory
total 1583333
-?????????? ? ? ? ? ? '???'
-rw-r--r--. 1 root root 5 Apr 25 09:15 a
-rw-r--r--. 1 root root 25254625 Apr 15 13:17 ceph-client.1.log
-rw-r--r--. 1 root root 4 Apr 17 11:09 file
-rw-r--r--. 1 cephuser cephuser 1024 Apr 17 10:32 file1
-rw-r--r--. 1 root root 1024 Jan 1 1970 file2
-rw-r--r--. 1 root root 629145600 Apr 17 10:56 file_1g
drwxr-xr-x. 3 root root 0 Apr 15 13:39 file_dstdir
drwxr-xr-x. 3 root root 0 Apr 15 13:39 file_srcdir
-rw-r--r--. 1 root root 9437184 Apr 15 13:37 fio_file_10m.0.0
-rw-r--r--. 1 root root 1658 Apr 15 13:45 iozone_report.xls
-rw-r--r--. 1 root root 957096993 Apr 17 11:45 maillog
-rw-------. 1 root root 388101 Apr 15 13:24 messages
drwxr-xr-x. 2 root root 8192 Apr 15 13:48 network_shared
-rw-------. 1 root root 15 Apr 15 13:23 smallfile
lrwxrwxrwx. 1 root root 46 Apr 17 10:36 symlink -> file1
drwxr-xr-x. 4 root root 44692440 Apr 15 13:28 thread0
drwxr-xr-x. 4 root root 42660824 Apr 15 13:28 thread1
-rw-r--r--. 1 root root 1 Apr 17 11:00 trunc_file
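A minimal scripted sketch of the reproduction steps, assuming a ceph-fuse mount of the subvolume at /mnt/fuse_sv1 (hypothetical path) and an existing fscrypt setup with the "cephfs_key" protector backed by secret.key, as seen in the logs:

# Step 1: enable fscrypt on a test directory in the ceph-fuse mount
# (protector selection is interactive; protector/key names are from the logs above).
cd /mnt/fuse_sv1
mkdir -p testdir1
fscrypt encrypt testdir1/

# Step 2: create the long-named file. As in the logs, wc -c counts the full
# "testdir1/<name>" string, so a 247-byte name plus the 9-byte "testdir1/"
# prefix yields the 256 bytes reported above.
name=$(head -c 247 /dev/zero | tr '\0' 'a')
touch "testdir1/$name"
echo -n "testdir1/$name" | wc -c    # expect: 256

# Step 3: lock and confirm the encrypted filename is listed.
fscrypt lock testdir1/
ls -l testdir1/

# Step 4: unlock; the plaintext filename should be listed again.
fscrypt unlock testdir1/
ls -l testdir1/    # observed: ls: cannot access 'testdir1/???': No such file or directory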
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775