Bug 1365468

Summary: [Perf/Stress]: "Stale File Handle" during smallfile listing on Ganesha v4 mounts.

Product: [Community] GlusterFS
Component: ganesha-nfs
Version: 3.8
Status: CLOSED EOL
Severity: high
Priority: medium
Hardware: x86_64
OS: Linux
Keywords: Triaged
Reporter: Ambarish <asoman>
Assignee: Soumya Koduri <skoduri>
CC: asoman, bugs, info, jthottan, kkeithle, mzywusko, ndevos, skoduri
Fixed In Version: nfs-ganesha-next.20160727.5ba03b2-1
Last Closed: 2017-11-07 10:40:08 UTC
Type: Bug

Description Ambarish 2016-08-09 10:58:27 UTC
Description of problem:
----------------------

Recursive ls from a Ganesha v4 mount point results in "Stale file handle" errors.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------


glusterfs-server-3.8.1-0.4.git56fcf39.el7rhgs.x86_64
nfs-ganesha-gluster-2.4-0.dev.26.el7rhgs.x86_64
pacemaker-libs-1.1.13-10.el7.x86_64
pcs-0.9.143-15.el7.x86_64


How reproducible:
-----------------

2/2 (reproduced on both attempts)

Steps to Reproduce:
-------------------

1. Run smallfile creates:
   python /small-files/smallfile/smallfile_cli.py --operation create --threads 8 --file-size 64 --files 10000 --top /gluster-mount --host-set "`echo $CLIENT | tr ' ' ','`"

2. Run smallfile ls:
   python /small-files/smallfile/smallfile_cli.py --operation ls-l --threads 8 --file-size 64 --files 10000 --top /gluster-mount --host-set "`echo $CLIENT | tr ' ' ','`"

3. Run ls -R manually from the mount point as well. (A consolidated shell sketch of these steps follows below.)
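
For convenience, here is a minimal shell sketch consolidating the steps above. It is an illustration only: the export pseudo-path /testvol and the $GANESHA_HOST variable (Ganesha server address or VIP) are assumptions, while the smallfile commands and $CLIENT usage come from the original steps.

# Mount the export over NFSv4 ($GANESHA_HOST and the /testvol export path are assumed)
mount -t nfs -o vers=4.0 "$GANESHA_HOST:/testvol" /gluster-mount

# Step 1: smallfile create workload across the clients listed in $CLIENT
python /small-files/smallfile/smallfile_cli.py --operation create \
    --threads 8 --file-size 64 --files 10000 --top /gluster-mount \
    --host-set "`echo $CLIENT | tr ' ' ','`"

# Step 2: smallfile ls-l workload over the same fileset
python /small-files/smallfile/smallfile_cli.py --operation ls-l \
    --threads 8 --file-size 64 --files 10000 --top /gluster-mount \
    --host-set "`echo $CLIENT | tr ' ' ','`"

# Step 3: recursive listing; ESTALE errors show up on stderr
cd /gluster-mount && ls -R . 2>&1 | grep -i 'stale file handle'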

Actual results:
--------------

"stale File Handle" on a couple of files/dirs :

<snip>

drwxr-xr-x 2 root root  4096 Aug  9 05:45 d_005
drwxr-xr-x 2 root root  4096 Aug  9 05:45 d_006
drwxr-xr-x 2 root root  4096 Aug  9 05:45 d_007
drwxr-xr-x 2 root root  4096 Aug  9 05:45 d_008
drwxr-xr-x 2 root root  4096 Aug  9 05:45 d_009
ls: cannot open directory ./file_srcdir/gqac006.sbu.lab.eng.bos.redhat.com/thrd_00/d_004/d_000: Stale file handle
ls: cannot open directory ./file_srcdir/gqac006.sbu.lab.eng.bos.redhat.com/thrd_00/d_004/d_001: Stale file handle
ls: cannot open directory ./file_srcdir/gqac006.sbu.lab.eng.bos.redhat.com/thrd_00/d_004/d_002: Stale file handle
ls: cannot open directory ./file_srcdir/gqac006.sbu.lab.eng.bos.redhat.com/thrd_00/d_004/d_003: Stale file handle
ls: cannot open directory ./file_srcdir/gqac006.sbu.lab.eng.bos.redhat.com/thrd_00/d_004/d_004: Stale file handle

</snip>
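
To narrow down which objects go stale, something like the following may help (a sketch only; the nfs-ganesha log location varies by packaging, commonly /var/log/ganesha.log or /var/log/ganesha/ganesha.log):

# Collect only the failing paths from the recursive listing (stderr only)
cd /gluster-mount && ls -R . 2>&1 >/dev/null | grep 'Stale file handle' > /tmp/stale-paths.txt

# Retry a fresh lookup on each failing path; a stale handle from a cached
# entry sometimes resolves on a new LOOKUP
while read -r line; do
    path=${line#*directory }   # strip the "ls: cannot open directory " prefix
    path=${path%: Stale*}      # strip the ": Stale file handle" suffix
    stat "$path" >/dev/null 2>&1 && echo "lookup OK: $path" || echo "still stale: $path"
done < /tmp/stale-paths.txt

# Look for corresponding ESTALE noise in the ganesha log (path may differ per build)
grep -i stale /var/log/ganesha.log | tail -n 20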

Expected results:
------------------

No stale file handles; the listing should succeed.

Additional info:
---------------

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 3ee2c046-939b-4915-908b-859bfcad0840
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gqas001.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas016.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
client.event-threads: 4
server.event-threads: 4
cluster.lookup-optimize: on
ganesha.enable: on
features.cache-invalidation: on
nfs.disable: on
performance.readdir-ahead: on
performance.stat-prefetch: off
server.allow-insecure: on
nfs-ganesha: enable
cluster.enable-shared-storage: enable
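
For anyone recreating the environment, the volume layout and options above roughly correspond to the CLI sequence below (a sketch; hostN and brick paths are placeholders, and the ganesha HA cluster plus shared storage are assumed to be set up already):

# 2x2 distributed-replicate volume; hostN:/... are placeholder bricks
gluster volume create testvol replica 2 \
    host1:/bricks/testvol_brick0 host2:/bricks/testvol_brick1 \
    host3:/bricks/testvol_brick2 host4:/bricks/testvol_brick3
gluster volume start testvol

# Options reconfigured in this report
gluster volume set testvol client.event-threads 4
gluster volume set testvol server.event-threads 4
gluster volume set testvol cluster.lookup-optimize on
gluster volume set testvol features.cache-invalidation on
gluster volume set testvol performance.readdir-ahead on
gluster volume set testvol performance.stat-prefetch off
gluster volume set testvol server.allow-insecure on
gluster volume set testvol nfs.disable on

# Export the volume through nfs-ganesha (HA setup assumed to be in place)
gluster volume set testvol ganesha.enable on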

Comment 4 Soumya Koduri 2016-08-24 06:20:19 UTC
Thanks Ambarish. We shall move this bug to the MODIFIED state.

Comment 5 Niels de Vos 2016-09-12 05:39:48 UTC
All 3.8.x bugs are now reported against version 3.8 (without .x). For more information, see http://www.gluster.org/pipermail/gluster-devel/2016-September/050859.html

Comment 6 Niels de Vos 2017-11-07 10:40:08 UTC
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.