Description of problem: Created a distributed-replicate volume with stat-prefetch enabled, ran a glusterfs untar in a loop from one client and rm -rf from another. The rm started failing with "Stale NFS file handle" errors. After disabling stat-prefetch, these errors did not occur.

client log:
[2012-01-06 13:11:27.892716] W [client3_1-fops.c:2249:client3_1_lookup_cbk] 4-vol-client-2: remote operation failed: Stale NFS file handle. Path: /glusterfs-3.3.0qa18/xlators/cluster/afr/src/afr-mem-types.h
[2012-01-06 13:11:27.892787] W [client3_1-fops.c:2249:client3_1_lookup_cbk] 4-vol-client-3: remote operation failed: Stale NFS file handle. Path: /glusterfs-3.3.0qa18/xlators/cluster/afr/src/afr-mem-types.h
[2012-01-06 13:11:27.892815] W [stat-prefetch.c:2614:sp_unlink_helper] 4-vol-stat-prefetch: lookup-behind has failed for path (/glusterfs-3.3.0qa18/xlators/cluster/afr/src/afr-mem-types.h)(Stale NFS file handle), unwinding unlink call waiting on it
[2012-01-06 13:11:27.892834] W [fuse-bridge.c:1050:fuse_unlink_cbk] 0-glusterfs-fuse: 4754388: UNLINK() /glusterfs-3.3.0qa18/xlators/cluster/afr/src/afr-mem-types.h => -1 (Stale NFS file handle)
[2012-01-06 13:11:27.919645] W [client3_1-fops.c:2249:client3_1_lookup_cbk] 4-vol-client-4: remote operation failed: Stale NFS file handle. Path: /glusterfs-3.3.0qa18/xlators/cluster/afr/src/afr-self-heal-common.c
[2012-01-06 13:11:27.919721] W [client3_1-fops.c:2249:client3_1_lookup_cbk] 4-vol-client-5: remote operation failed: Stale NFS file handle. Path: /glusterfs-3.3.0qa18/xlators/cluster/afr/src/afr-self-heal-common.c
[2012-01-06 13:11:27.919751] W [stat-prefetch.c:2614:sp_unlink_helper] 4-vol-stat-prefetch: lookup-behind has failed for path (/glusterfs-3.3.0qa18/xlators/cluster/afr/src/afr-self-heal-common.c)(Stale NFS file handle), unwinding unlink call waiting on it
[2012-01-06 13:11:27.941187] W [fuse-bridge.c:1050:fuse_unlink_cbk] 0-glusterfs-fuse: 4754424: UNLINK() /glusterfs-3.3.0qa18/xlators/cluster/afr/src/afr-self-heal-common.c => -1 (Stale NFS file handle)

Steps to Reproduce:
1. From one client: while [ 1 ]; do tar -xvf glusterfs-3.3.0qa18.tar.gz ; echo 3 > /proc/sys/vm/drop_caches ; sleep 5; done
2. From another client: rm -rf glusterfs-3.3.0qa18/*

Actual results:
rm: cannot remove `glusterfs-3.3.0qa18/xlators/cluster/afr/src/afr-self-heal-algorithm.c': Stale NFS file handle
rm: cannot remove `glusterfs-3.3.0qa18/xlators/cluster/afr/src/afr-self-heal-common.h': Stale NFS file handle
rm: cannot remove `glusterfs-3.3.0qa18/xlators/cluster/afr/src/afr-mem-types.h': Stale NFS file handle
rm: cannot remove `glusterfs-3.3.0qa18/xlators/cluster/afr/src/afr-self-heal-common.c': Stale NFS file handle
rm: cannot remove `glusterfs-3.3.0qa18/xlators/cluster/dht/src/Makefile.am': Stale NFS file handle
rm: cannot remove `glusterfs-3.3.0qa18/xlators/cluster/dht/src/Makefile.in': Stale NFS file handle
rm: cannot remove `glusterfs-3.3.0qa18/xlators/cluster/dht/src/switch.c': Stale NFS file handle
rm: cannot remove `glusterfs-3.3.0qa18/xlators/cluster/Makefile.in': Stale NFS file handle

Expected results:
root@Dagobah:~/mount# rm -rf glusterfs-3.3.0qa18/*
rm: cannot remove `glusterfs-3.3.0qa18/libglusterfs/src': Directory not empty
rm: cannot remove `glusterfs-3.3.0qa18/xlators/cluster/afr/src': Directory not empty
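For completeness, the two reproduction steps above can be sketched as a pair of shell functions, one per client. The mount-point default, the MOUNT variable, and the function names are assumptions added for illustration, not part of the original report; only the tar/drop_caches loop and the rm -rf come from the steps themselves.

```shell
#!/bin/sh
# Hypothetical reproducer sketch for the stat-prefetch ESTALE race above.
# MOUNT and the function names are illustrative; adjust for your setup.
MOUNT=${1:-/mnt/gluster}
TARBALL=glusterfs-3.3.0qa18.tar.gz

# Client 1: untar in a loop, dropping the VFS caches so subsequent
# lookups are forced back to the bricks (requires root for drop_caches).
untar_loop() {
    cd "$MOUNT" || return 1
    while true; do
        tar -xf "$TARBALL"
        echo 3 > /proc/sys/vm/drop_caches
        sleep 5
    done
}

# Client 2 (a different machine mounting the same volume): delete the
# tree concurrently and watch for the ESTALE failures.
remove_loop() {
    cd "$MOUNT" || return 1
    rm -rf glusterfs-3.3.0qa18/* 2>&1 | grep 'Stale NFS file handle'
}
```

Each function is meant to run on a separate client mounting the same volume; with stat-prefetch enabled the remove_loop side surfaces the "Stale NFS file handle" errors shown above.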
I need server-side logs, since the server is the originator of the ESTALE errors; stat-prefetch is merely delaying the act of communicating the ESTALE error back to FUSE. On the other hand, stat-prefetch is being replaced with md-cache on master. Hence, closing this bug for the time being, citing insufficient data.