Bug 1398843 - [Ganesha] : rm -rf * is unsuccessful in cleaning up the Ganesha mount point.
Summary: [Ganesha] : rm -rf * is unsuccessful in cleaning up the Ganesha mount point.
Keywords:
Status: CLOSED CANTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: nfs-ganesha
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Jiffin
QA Contact: Ambarish
URL:
Whiteboard:
Depends On:
Blocks: 1351530
 
Reported: 2016-11-26 15:13 UTC by Ambarish
Modified: 2023-09-14 03:35 UTC
CC List: 15 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
When a parallel rm -rf is run from multiple NFS clients against a data set with a large, deep directory hierarchy and many files, client-side caching causes the deletion of certain files to fail with ESTALE, and the parent directory then cannot be removed, failing with ENOTEMPTY. Workaround: Run rm -rf * on the mount point again.
Clone Of:
Environment:
Last Closed: 2017-08-23 12:32:49 UTC
Embargoed:



Description Ambarish 2016-11-26 15:13:11 UTC
Description of problem:
----------------------

4-node cluster. Mounted the volume via NFSv4 on 4 clients and created a huge, deep directory data set.
Ran rm -rf <mount-point>/* from multiple clients. It should have cleaned the mount point completely, but that wasn't the case.
It threw a lot of "Stale file handle" messages on the application side, which I understand is expected per BZ#1396776.
But it left 143 directories behind, undeleted. There were no files left, though:

[root@gqac010 gluster-mount]# ll
total 12
drwxr-xr-x 3 root root 4096 Nov 26 07:02 d1
drwxr-xr-x 3 root root 4096 Nov 26 03:40 d2
drwxrwxr-x 3 root root 4096 Nov 26 07:09 linux-4.8.9
[root@gqac010 gluster-mount]# 

[root@gqac010 gluster-mount]# find . -mindepth 1 -type f


[root@gqac010 gluster-mount]# find . -mindepth 1 -type d | wc -l
143
[root@gqac010 gluster-mount]# 

I see this in SSL as well as non-SSL environments.
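
For reference, a quick way to confirm that only empty directories were left behind, followed by the workaround noted in the Doc Text (rerunning the removal); the mount path below is illustrative:

# Verify that nothing other than directories remains under the mount:
find /gluster-mount -mindepth 1 ! -type d

# Count the leftover directories that are completely empty:
find /gluster-mount -mindepth 1 -type d -empty | wc -l

# Workaround from the Doc Text: run the removal on the mount point again.
rm -rf /gluster-mount/*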

Version-Release number of selected component (if applicable):
------------------------------------------------------------

nfs-ganesha-gluster-2.4.1-1.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-5.el7rhgs.x86_64


How reproducible:
-----------------

2/2

Steps to Reproduce:
-------------------

Given in the description; a condensed sketch follows.
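
A condensed sketch of the reproduction, assuming an NFSv4 mount of testvol on each of the four clients; the VIP, mount point, and data set below are illustrative:

# On each of the 4 clients: mount the Ganesha export over NFSv4
mount -t nfs -o vers=4.0 <ganesha-vip>:/testvol /gluster-mount

# Populate the mount with a large, deep directory tree
# (the data set in this run included a linux-4.8.9 source tree, for example)
tar -xf linux-4.8.9.tar.xz -C /gluster-mount

# From all clients in parallel: remove everything under the mount point
rm -rf /gluster-mount/*

# Check what is left behind afterwards
find /gluster-mount -mindepth 1 | wc -l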

Actual results:
--------------

rm -rf * does not clean up properly.

Expected results:
-----------------

rm -rf * should clean up everything.

Additional info:
----------------

OS : RHEL 7.3

*Vol Config* :

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: efe39b14-0eed-498c-b3cd-3946e7f9769c
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas008.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas016.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas009.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
ganesha.enable: on
features.cache-invalidation: on
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
performance.stat-prefetch: off
server.allow-insecure: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable
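
For completeness, the non-default options above would typically be applied along these lines; this is a sketch, not the exact sequence used on this setup:

gluster volume set testvol features.cache-invalidation on
gluster volume set testvol performance.stat-prefetch off
gluster volume set testvol server.allow-insecure on

# Shared storage and the Ganesha cluster are enabled cluster-wide:
gluster volume set all cluster.enable-shared-storage enable
gluster nfs-ganesha enable

# Export the volume through NFS-Ganesha:
gluster volume set testvol ganesha.enable on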

Comment 6 Ambarish 2016-11-29 10:55:04 UTC
Quick update.

I ran rm -rf from multiple FUSE mounts (2x2 volume), and it cleared everything from the mount point.

Comment 7 Soumya Koduri 2016-11-29 15:11:11 UTC
Ravi and I have taken a look at the setup and the logs provided. We do not see any obvious errors logged or other issues. Maybe we should try to reproduce this, take a tcpdump, and observe the traffic between the NFS client, NFS-Ganesha, and the Gluster servers.
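
A possible capture along those lines, run on the NFS-Ganesha node; the ports assume the defaults (2049 for NFS, 24007 for glusterd, 49152 onwards for bricks) and should be adjusted to the actual setup:

# Traffic between the NFS clients and NFS-Ganesha:
tcpdump -i any -s 0 -w /tmp/nfs-clients.pcap port 2049

# Traffic between NFS-Ganesha (gfapi) and glusterd/the bricks:
tcpdump -i any -s 0 -w /tmp/gluster.pcap port 24007 or portrange 49152-49251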

Comment 11 Bhavana 2017-03-14 01:40:29 UTC
Hi jiffin,

I have updated the doc text for the release notes. Let me know if this looks OK and whether I have captured the workaround correctly.

Comment 14 Kaleb KEITHLEY 2017-08-23 12:32:49 UTC
Known Issue. The behaviour comes from the client side.

Comment 15 Red Hat Bugzilla 2023-09-14 03:35:07 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

