Created attachment 1242928 [details]
ceph-client log

Description of problem:
When we specify a specific path in the caps and use ceph-fuse to mount that path, we are unable to create directories recursively beyond a certain depth. We see the same results on RHEL, Ubuntu, and CentOS. We are able to create 3 directories recursively, but unable to create 4 or more. The same test works correctly when the kernel client is used to mount.

Version-Release number of selected component (if applicable):
10.2.3-17.el7cp.x86_64

How reproducible:
Reproduced on RHEL; the customer hits the issue every time on Ubuntu and CentOS.

Steps to Reproduce:
1. Create the specific path to be used by ceph-fuse and create a user. Create the caps that restrict the user to this pool and path (see below for more details).
2. Try to create directories using: mkdir -p /ceph/test/path/1/2/3/4/
3.

Actual results:
[root@icehouse1 ceph]# mkdir -p /ceph/test/1/test_long/2/5/6/7/8/8
mkdir: cannot create directory ‘/ceph/test/1/test_long/2/5/6/7’: Permission denied
[root@icehouse1 ceph]#

Expected results:
The directories are created recursively, the same as they are for a smaller number of directories.

Additional info:
Below is the ceph-fuse configuration I used; I can provide detailed steps on creating the path and mounting it if you would like.
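Step 1 can be sketched roughly as below. This is a hedged sketch, not the reporter's exact commands: the names (client.marge, pool cephfs_data, path /admin/hosts/marge) are taken from the "ceph auth list" output in this report, and the commands require admin access to a live Ceph cluster, so they are not runnable standalone.

```shell
# Sketch of step 1, using the names from this report (assumptions:
# client.marge, pool cephfs_data, restricted path /admin/hosts/marge).
# Requires a running Ceph cluster and the admin keyring.

# Create the restricted user and write its keyring:
ceph auth get-or-create client.marge \
    mds 'allow rw path=/admin/hosts/marge' \
    mon 'allow rw' \
    osd 'allow rw pool=cephfs_data' \
    -o /etc/ceph/ceph.client.marge.keyring

# Mount only that subtree with ceph-fuse; step 2 then runs mkdir -p
# against the /ceph mount point:
ceph-fuse --id marge -r /admin/hosts/marge /ceph
```

The fstab entry shown later in this report achieves the same mount via client_mountpoint= rather than -r.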
-- ceph auth list --
client.marge
	key: AQAK2ndYrhUZKBAAT86oMPEaurR5GcxhPmFNSw==
	caps: [mds] allow rw path=/admin/hosts/marge
	caps: [mon] allow rw
	caps: [osd] allow rw pool=cephfs_data

-- /etc/fstab --
id=marge,conf=/etc/ceph/ceph.conf,keyring=/etc/ceph/ceph.client.marge.keyring,client_mountpoint=/admin/hosts/marge /ceph fuse.ceph noatime,_netdev 0 0

[root@icehouse1 ceph]# df -k
Filesystem             1K-blocks    Used Available Use% Mounted on
/dev/mapper/rhel-root   52403200 1886816  50516384   4% /
devtmpfs                 1928120       0   1928120   0% /dev
tmpfs                    1939032       0   1939032   0% /dev/shm
tmpfs                    1939032  194340   1744692  11% /run
tmpfs                    1939032       0   1939032   0% /sys/fs/cgroup
/dev/vda1                1038336  141284    897052  14% /boot
/dev/mapper/rhel-home   47285700   34340  47251360   1% /home
tmpfs                     387808       0    387808   0% /run/user/0
ceph-fuse              176066560  282624 175783936   1% /ceph
10.65.3.4:6789:/       176066560  282624 175783936   1% /ceph2

[root@icehouse1 log]# rpm -qa | grep -i ceph
libcephfs1-10.2.3-17.el7cp.x86_64
ceph-common-10.2.3-17.el7cp.x86_64
ceph-mds-10.2.3-17.el7cp.x86_64
ceph-ansible-1.0.5-46.el7scon.noarch
python-cephfs-10.2.3-17.el7cp.x86_64
ceph-selinux-10.2.3-17.el7cp.x86_64
ceph-radosgw-10.2.3-17.el7cp.x86_64
ceph-fuse-10.2.3-17.el7cp.x86_64
ceph-base-10.2.3-17.el7cp.x86_64
[root@icehouse1 log]#

[root@icehouse1 ceph]# mkdir -p /ceph/test/1
[root@icehouse1 ceph]#
[root@icehouse1 ceph]# ll /ceph
total 1
drwxr-xr-x 1 root root 0 Jan 20 22:00 test
[root@icehouse1 ceph]#
[root@icehouse1 ceph]# mkdir -p /ceph/test/1/test_long/2/5/6/7/8/8
mkdir: cannot create directory ‘/ceph/test/1/test_long/2/5/6/7’: Permission denied
[root@icehouse1 ceph]#

--- From debug logs ---
2017-01-20 22:03:08.022928 7f437869a700 10 mds.0.server reply_client_request -13 ((13) Permission denied) client_request(client.344503:31 mkdir #10000000050/7 2017-01-20 22:03:08.021217) v3
2017-01-20 22:03:08.022925 7f437869a700 10 MDSAuthCap is_capable inode(path /1000000004f/6 owner 0:0 mode 040755) by caller 0:0 mask 2 new 0:0 cap: MDSAuthCaps[allow rw path="/admin/hosts/marge"]
2017-01-20 22:03:08.022928 7f437869a700 10 mds.0.server reply_client_request -13 ((13) Permission denied) client_request(client.344503:31 mkdir #10000000050/7 2017-01-20 22:03:08.021217) v3
2017-01-20 22:03:08.022941 7f437869a700 10 mds.0.server apply_allocated_inos 0 / [] / 0

-- debugging set --
debug_client = 20
debug_mds = 10
Created attachment 1242930 [details]
mds log
If we check the MDS logs:

[root@icehouse1 ceph]# mkdir -p /ceph/test/1/test_long/2/5/6/7/8/8
mkdir: cannot create directory ‘/ceph/test/1/test_long/2/5/6/7’: Permission denied

2017-01-20 22:03:08.019669 7f437869a700 10 MDSAuthCap is_capable inode(path /admin/hosts/marge/test/1/test_long/2/5 owner 0:0 mode 040755) by caller 0:0 mask 2 new 0:0 cap: MDSAuthCaps[allow rw path="/admin/hosts/marge"]
^^ Up to ‘/ceph/test/1/test_long/2/5’ the auth cap check was successful.

2017-01-20 22:03:08.019675 7f437869a700 10 mds.0.server prepare_new_inode used_prealloc 10000000050 ([10000000051~3b8,1000000040d~d,1000000041c~1a], 991 left)
^^ A new inode was prepared.

2017-01-20 22:03:08.019685 7f437869a700 10 mds.0.server dir mode 040755 new mode 040755

--- But for directory name '6' ---
2017-01-20 22:03:08.022925 7f437869a700 10 MDSAuthCap is_capable inode(path /1000000004f/6 owner 0:0 mode 040755) by caller 0:0 mask 2 new 0:0 cap: MDSAuthCaps[allow rw path="/admin/hosts/marge"]
^^ For directory '6' (‘/ceph/test/1/test_long/2/5/6’) the auth cap check failed: the inode's path was resolved only as /1000000004f/6 (starting from an inode number rather than the filesystem root), which cannot match the cap's path restriction.
2017-01-20 22:03:08.022928 7f437869a700 10 mds.0.server reply_client_request -13 ((13) Permission denied) client_request(client.344503:31 mkdir #10000000050/7 2017-01-20 22:03:08.021217) v3
^^ ...and Permission denied was returned.
2017-01-20 22:03:08.022941 7f437869a700 10 mds.0.server apply_allocated_inos 0 / [] / 0
^^ No allocated inodes were applied.
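The failing check above comes down to a path-prefix comparison. A minimal local model (plain POSIX shell, illustrative only; this is not the actual MDSAuthCaps::is_capable code) shows why a path that resolves from an inode number, such as /1000000004f/6, can never satisfy a cap restricted to /admin/hosts/marge:

```shell
#!/bin/sh
# Toy model of a path-restricted cap check (an assumption-laden sketch;
# the real logic lives in MDSAuthCaps::is_capable in the Ceph MDS).
cap_path="/admin/hosts/marge"

is_capable() {
    # Allow only the cap path itself and anything under it.
    case "$1" in
        "$cap_path"|"$cap_path"/*) echo allow ;;
        *)                         echo deny ;;
    esac
}

# Fully resolved path, as in the successful checks in the log:
is_capable /admin/hosts/marge/test/1/test_long/2/5   # -> allow
# Partially resolved path starting at an inode number, as in the
# failing check for directory '6':
is_capable /1000000004f/6                            # -> deny
```

Whether the MDS sees the full path or only the inode-rooted fragment depends on what is in its cache at that depth, which is why shallow mkdir -p succeeds while deeper ones fail.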
This is a known issue: http://tracker.ceph.com/issues/17858

It is pending backport to the upstream jewel branch, so it will be in the next jewel release and should be picked up the next time downstream rebases.
jewel backport tracker: http://tracker.ceph.com/issues/18008
jewel backport PR: https://github.com/ceph/ceph/pull/12154
Moving this bug to verified state. Issue not seen.

Verified in build ceph: 10.2.5-22.el7cp (5cec6848b914e87dd6178e559dedae8a37cc08a3)

[root@host87 ~]$ ceph-fuse --cluster cephv1 -m host80:6789,host86:6789,host87:6789 /mnt/cephfs/
ceph-fuse[2446609]: starting ceph client
2017-02-10 07:36:40.320421 7f955596cec0 -1 init, newargv = 0x7f9560f9ec60 newargc=11
ceph-fuse[2446609]: starting fuse
[root@host87 cephfs]# mkdir -p test/test1/test/test1/test2/5/6/7
[root@host87 cephfs]# ls
test
[root@host87 cephfs]# cd test/test1/test/test1/test2/5/6/7/
[root@host87 7]#
""Operation not permitted" errors are no longer returned" should probably have " incorrectly" at the end, or something similar -- we still return EPERM, just not in places where we shouldn't.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2017-0514.html