Bug 1679004 - With parallel-readdir enabled, deleting a directory containing stale linkto files fails with "Directory not empty"
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: 6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Nithya Balachandran
QA Contact:
URL:
Whiteboard:
Depends On: 1672851
Blocks: 1672869 1678183
 
Reported: 2019-02-20 05:02 UTC by Nithya Balachandran
Modified: 2019-03-25 16:33 UTC
CC: 1 user

Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1672851
Environment:
Last Closed: 2019-02-22 03:35:04 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Gluster.org Gerrit 22245 0 None Merged cluster/dht: Request linkto xattrs in dht_rmdir opendir 2019-02-22 03:35:03 UTC

Description Nithya Balachandran 2019-02-20 05:02:43 UTC
+++ This bug was initially created as a clone of Bug #1672851 +++

Description of problem:

If parallel-readdir is enabled on a volume, rm -rf <dir> fails with "Directory not empty" if <dir> contains stale linkto files.


Version-Release number of selected component (if applicable):


How reproducible:
Consistently

Steps to Reproduce:
1. Create a 3 brick distribute volume
2. Enable parallel-readdir and readdir-ahead on the volume
3. Fuse mount the volume and mkdir dir0
4. Create some files inside dir0 and rename them so linkto files are created on the bricks
5. Check the bricks to see which files have linkto files. Delete the data files directly on the bricks, leaving the linkto files behind. These are now stale linkto files.
6. Remount the volume
7. rm -rf dir0

Actual results:
[root@rhgs313-6 fuse1]# rm -rf dir0/
rm: cannot remove ‘dir0/’: Directory not empty


Expected results:
dir0 should be deleted without errors

Additional info:

--- Additional comment from Nithya Balachandran on 2019-02-06 04:10:11 UTC ---

RCA:

rm -rf <dir> works by first listing and unlinking all entries in <dir> and then calling rmdir on <dir>.
As DHT readdirp does not return linkto files in the listing, they are not unlinked as part of the rm -rf itself. dht_rmdir handles this by performing a readdirp internally on <dir> and deleting all stale linkto files before proceeding with the actual rmdir operation.
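The cleanup logic above can be sketched as a toy simulation (this is not Gluster code; the function names and entry layout are hypothetical, and the xattr key is the one named in this report):

```python
# Simulate dht_rmdir's internal cleanup: entries carrying the linkto xattr
# are stale linkto files and may be unlinked; anything else is a real data
# file, so the rmdir must fail with ENOTEMPTY.

LINKTO_XATTR = "trusted.glusterfs.dht.linkto"

def dht_rmdir(entries):
    """entries: dict of name -> xattrs dict, as seen by the internal readdirp.
    Returns "OK" if the directory can be removed, else "ENOTEMPTY"."""
    for name, xattrs in list(entries.items()):
        if LINKTO_XATTR in xattrs:
            # Stale linkto file: safe to unlink before the actual rmdir.
            del entries[name]
    return "OK" if not entries else "ENOTEMPTY"

# A directory holding only stale linkto files: cleanup empties it.
stale = {"file1": {LINKTO_XATTR: "subvol-1"},
         "file2": {LINKTO_XATTR: "subvol-2"}}
print(dht_rmdir(stale))  # OK
```

This is why, without parallel-readdir, the rm -rf in the reproduction steps succeeds: the internal readdirp sees the linkto xattr on each leftover entry.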

When parallel-readdir is enabled, the rda xlator is loaded below dht in the graph and proactively lists and caches entries when an opendir is performed. Entries are returned from this cache for any subsequent readdirp calls on the directory that was opened.
DHT uses the presence of the trusted.glusterfs.dht.linkto xattr to determine whether a file is a linkto file. As the opendir call issued by dht_rmdir does not include trusted.glusterfs.dht.linkto in its list of requested xattrs, the cached entries do not contain this xattr. Since none of the entries returned carry the xattr, DHT treats them all as data files and fails the rmdir with ENOTEMPTY.
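The interaction can be illustrated with another toy simulation (again not Gluster code; the rda caching and DHT classification are reduced to hypothetical helpers). Only xattrs requested at opendir time survive into the cache, which is exactly what the merged patch changes:

```python
# Simulate the rda xlator's opendir-time cache and DHT's classification.

LINKTO_XATTR = "trusted.glusterfs.dht.linkto"

def rda_opendir(brick_entries, requested_xattrs):
    # The cache keeps, for each entry, only the xattrs the opendir asked for.
    return {name: {k: v for k, v in xattrs.items() if k in requested_xattrs}
            for name, xattrs in brick_entries.items()}

def dht_rmdir(cached_entries):
    # DHT treats any entry without the linkto xattr as a data file.
    leftover = [n for n, x in cached_entries.items() if LINKTO_XATTR not in x]
    return "ENOTEMPTY" if leftover else "OK"

on_brick = {"stale1": {LINKTO_XATTR: "subvol-1"}}

# Before the fix: opendir requests no xattrs, the cached entry loses its
# linkto marker, and the rmdir fails.
print(dht_rmdir(rda_opendir(on_brick, requested_xattrs=set())))  # ENOTEMPTY

# With the linkto xattr requested in dht_rmdir's opendir, the cache keeps
# the marker and the stale file is recognized.
print(dht_rmdir(rda_opendir(on_brick, requested_xattrs={LINKTO_XATTR})))  # OK
```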

Turning off parallel-readdir allows the rm -rf to succeed.

--- Additional comment from Worker Ant on 2019-02-06 04:37:57 UTC ---

REVIEW: https://review.gluster.org/22160 (cluster/dht: Request linkto xattrs in dht_rmdir opendir) posted (#1) for review on master by N Balachandran

--- Additional comment from Worker Ant on 2019-02-13 18:24:39 UTC ---

REVIEW: https://review.gluster.org/22160 (cluster/dht: Request linkto xattrs in dht_rmdir opendir) merged (#3) on master by Raghavendra G

Comment 1 Worker Ant 2019-02-21 05:18:21 UTC
REVIEW: https://review.gluster.org/22245 (cluster/dht: Request linkto xattrs in dht_rmdir opendir) posted (#1) for review on release-6 by N Balachandran

Comment 2 Worker Ant 2019-02-22 03:35:04 UTC
REVIEW: https://review.gluster.org/22245 (cluster/dht: Request linkto xattrs in dht_rmdir opendir) merged (#2) on release-6 by Shyamsundar Ranganathan

Comment 3 Shyamsundar 2019-03-25 16:33:20 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

