Bug 1422059 - nfs: The corresponding buckets for created directories still remain even after removing them from the nfs mountpoint
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 2.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 2.2
Assignee: Matt Benjamin (redhat)
QA Contact: Hemanth Kumar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-14 12:04 UTC by Hemanth Kumar
Modified: 2022-02-21 18:42 UTC
CC: 10 users

Fixed In Version: RHEL: ceph-10.2.5-29.el7cp, nfs-ganesha-2.4.2-6.el7cp Ubuntu: ceph_10.2.5-21redhat1xenial, nfs-ganesha_2.4.2-6redhat1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-14 15:49:25 UTC
Embargoed:




Links:
- Red Hat Issue Tracker RHCEPH-3529 (last updated 2022-02-21 18:42:26 UTC)
- Red Hat Product Errata RHBA-2017:0514, SHIPPED_LIVE: Red Hat Ceph Storage 2.2 bug fix and enhancement update (2017-03-21 07:24:26 UTC)

Description Hemanth Kumar 2017-02-14 12:04:12 UTC
Description of problem:
-----------------------
The corresponding buckets for directories created over NFS still remain even after the directories are removed from the nfs mountpoint.

Version-Release number of selected component (if applicable):
------------------------------------------------------------
# rpm -qa | grep nfs
nfs-ganesha-rgw-2.4.2-5.el7cp.x86_64
nfs-ganesha-2.4.2-5.el7cp.x86_64
# rpm -qa | grep ceph
ceph-radosgw-10.2.5-26.el7cp.x86_64



Steps to Reproduce:
1. Upgrade the nfs and ceph builds to the latest (nfs-2.4.2.4, ceph-10.2.5.26).
2. Delete all the available directories from the nfsshare mounted on the clients by running rm -rf *.
3. Verify whether the corresponding buckets also get deleted.

[root@magna0xx ~]# cd /home/nfsshare/
[root@magna0xx nfsshare]# ls
[root@magna0xx nfsshare]# ls
[root@magna0xx nfsshare]# mkdir dir1
mkdir: cannot create directory ‘dir1’: File exists

[root@magna0xx nfsshare]# mount
magna020.ceph.redhat.com:/ on /home/nfsshare type nfs4 (rw,relatime,sync,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.8.128.48,local_lock=none,addr=10.8.128.20)

[root@magna0xx ~]# rados ls -p default.rgw.data.root
.bucket.meta.dir6:e825fa53-71c5-4242-8456-6dce7981f130.162789.1
.bucket.meta.folder3:e825fa53-71c5-4242-8456-6dce7981f130.25196.1
.bucket.meta.dir8:e825fa53-71c5-4242-8456-6dce7981f130.226915.3
folder3
dir8
dir1
.bucket.meta.folder1:e825fa53-71c5-4242-8456-6dce7981f130.14300.1
.bucket.meta.dir1:e825fa53-71c5-4242-8456-6dce7981f130.113352.1
dir6
folder1
[root@magna0xx ~]# 

NOTE: Only a few buckets out of many remained.
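Step 3 above amounts to a set comparison: any bucket that survives after its directory was removed from the mountpoint is a leftover. A minimal sketch in Python; the listings are hypothetical stand-ins for `ls` on the mountpoint and the plain bucket names from `rados ls -p default.rgw.data.root`, not output captured from this cluster:

```python
def leftover_buckets(mountpoint_dirs, rgw_buckets):
    """Return RGW buckets that no longer have a matching NFS directory."""
    return sorted(set(rgw_buckets) - set(mountpoint_dirs))

# Hypothetical listings mirroring the transcript above: the mountpoint
# is empty after `rm -rf *`, yet several buckets remain in RGW.
dirs_after_rm = []  # `ls /home/nfsshare` shows nothing
buckets = ["dir1", "dir6", "dir8", "folder1", "folder3"]

print(leftover_buckets(dirs_after_rm, buckets))
# → ['dir1', 'dir6', 'dir8', 'folder1', 'folder3'] — all leftovers
```

On a fixed build this difference should be empty once the directories are removed.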

Comment 4 Hemanth Kumar 2017-02-16 13:05:27 UTC
Hi Matt,

Since the nfs testing began I have been using these folders to add data. I have mostly used crefi, smallfiles, dd, and iozone for writing data. The s3 api tests were run on different buckets called "bucketlist", "bucketlist1", and so on; those all got deleted.

I tried reproducing the issue by creating new folders, adding data, and deleting them; it works fine. Not sure how these buckets landed in the undeletable state.

Comment 19 Hemanth Kumar 2017-02-24 08:55:45 UTC
Verified. I was able to delete the existing directory "bigbucket", which used to fail in previous builds. Creating a directory with the same name also works fine.

[ubuntu@magna003 nfs]$ ls
bigbucket  ceph  client1_run4  client1_run5  client1_run6  client2_run1  client2_run2  nfs

[ubuntu@magna003 nfs]$ rm -rf bigbucket

[ubuntu@magna003 nfs]$ ls
ceph  client1_run4  client1_run5  client1_run6  client2_run1  client2_run2  nfs

[ubuntu@magna003 nfs]$ mkdir bigbucket
[ubuntu@magna003 nfs]$ ls
bigbucket  ceph  client1_run4  client1_run5  client1_run6  client2_run1  client2_run2  nfs

Comment 21 errata-xmlrpc 2017-03-14 15:49:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0514.html

