Description of problem:
-----------------------
The corresponding buckets for created directories still remain even after the directories are removed from the NFS mountpoint.

Version-Release number of selected component (if applicable):
------------------------------------------------------------
# rpm -qa | grep nfs
nfs-ganesha-rgw-2.4.2-5.el7cp.x86_64
nfs-ganesha-2.4.2-5.el7cp.x86_64
# rpm -qa | grep ceph
ceph-radosgw-10.2.5-26.el7cp.x86_64

Steps to Reproduce:
1. Upgrade the nfs-ganesha and ceph builds to the latest (nfs-2.4.2.4, ceph-10.2.5.26).
2. Delete all the available directories from the nfsshare mounted on the clients by running rm -rf *.
3. Verify whether the corresponding buckets also get deleted.

[root@magna0xx ~]# cd /home/nfsshare/
[root@magna0xx nfsshare]# ls
[root@magna0xx nfsshare]# ls
[root@magna0xx nfsshare]# mkdir dir1
mkdir: cannot create directory ‘dir1’: File exists
[root@magna0xx nfsshare]# mount
magna020.ceph.redhat.com:/ on /home/nfsshare type nfs4 (rw,relatime,sync,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.8.128.48,local_lock=none,addr=10.8.128.20)

[root@magna0xx ~]# rados ls -p default.rgw.data.root
.bucket.meta.dir6:e825fa53-71c5-4242-8456-6dce7981f130.162789.1
.bucket.meta.folder3:e825fa53-71c5-4242-8456-6dce7981f130.25196.1
.bucket.meta.dir8:e825fa53-71c5-4242-8456-6dce7981f130.226915.3
folder3
dir8
dir1
.bucket.meta.folder1:e825fa53-71c5-4242-8456-6dce7981f130.14300.1
.bucket.meta.dir1:e825fa53-71c5-4242-8456-6dce7981f130.113352.1
dir6
folder1
[root@magna0xx ~]#

NOTE: Only a few buckets were remaining out of many.
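For reference only: one way to cross-check and, if needed, manually clean up the leftover buckets on the RGW side is with radosgw-admin. This is just a sketch of a possible manual workaround run on the RGW node, assuming admin access and using the bucket names from the rados ls output above; it is not the fix for this bug.

# list buckets known to RGW and confirm the stale ones are still present
radosgw-admin bucket list

# inspect one of the stale buckets (dir1, taken from the rados ls output above)
radosgw-admin bucket stats --bucket=dir1

# if needed, remove the stale bucket and any leftover objects directly through RGW
radosgw-admin bucket rm --bucket=dir1 --purge-objects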
Hi Matt,

Since the NFS testing began I have been using these folders to add data. I have mostly used crefi, smallfiles, dd, and iozone for writing data. The S3 API tests were run on different buckets called "bucketlist", "bucketlist1", and so on; those all got deleted. I tried to reproduce the issue by creating new folders, adding data, and deleting them, and that works fine. I am not sure how these buckets landed in the undeletable state.
Verified. I was able to delete the existing directory "bigbucket", which used to fail in previous builds. Creating a directory with the same name also works fine.

[ubuntu@magna003 nfs]$ ls
bigbucket  ceph  client1_run4  client1_run5  client1_run6  client2_run1  client2_run2  nfs
[ubuntu@magna003 nfs]$ rm -rf bigbucket
[ubuntu@magna003 nfs]$ ls
ceph  client1_run4  client1_run5  client1_run6  client2_run1  client2_run2  nfs
[ubuntu@magna003 nfs]$ mkdir bigbucket
[ubuntu@magna003 nfs]$ ls
bigbucket  ceph  client1_run4  client1_run5  client1_run6  client2_run1  client2_run2  nfs
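If useful for the verification record, the RGW side can also be checked with the same kind of commands used in the original report. This is a sketch run on the RGW/ceph node, assuming the cluster uses the default.rgw.data.root pool shown above:

# after rm -rf bigbucket on the NFS mount, the bucket entry and its
# .bucket.meta.* object should no longer appear in the data root pool
rados ls -p default.rgw.data.root | grep bigbucket

# likewise, RGW itself should no longer find the deleted bucket
radosgw-admin bucket stats --bucket=bigbucket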
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0514.html