Bug 1436086 - Parallel readdir on Gluster NFS displays fewer dentries than expected
Summary: Parallel readdir on Gluster NFS displays fewer dentries than expected
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: unclassified
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1439148
 
Reported: 2017-03-27 06:38 UTC by Poornima G
Modified: 2017-05-30 18:48 UTC (History)
CC List: 1 user

Fixed In Version: glusterfs-3.11.0
Clone Of:
: 1439148 (view as bug list)
Environment:
Last Closed: 2017-05-30 18:48:21 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Poornima G 2017-03-27 06:38:40 UTC
Description of problem:

    In the readdirp fop, op_errno is overloaded to signal EOD (end of
    directory): if op_errno contains ENOENT, there are no further entries
    left to read in the directory. Gluster NFS currently relies on this
    ENOENT to detect EOD.
    
    Issue:
    The NFS client issues readdirp with a 4K buffer; readdir-ahead enlarges
    the request to a 128K buffer because it reads ahead. If the directory
    holds 100 entries on the bricks, the 128K request fetches all 100 and
    stores them in readdir-ahead's cache, but only the ~23 entries that fit
    in 4K are returned to NFS. Because all 100 entries were read from the
    brick, op_errno is set to ENOENT, and that op_errno is propagated
    unchanged to NFS. The NFS client therefore concludes, after reading
    only 23 entries, that it has reached EOD. (A small illustration of the
    arithmetic follows.)
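
The sketch below is not GlusterFS source code; it only makes the numbers above concrete. The ~180-byte per-entry size is an assumed round figure, chosen so the arithmetic roughly reproduces the 23-entries-in-4K example.

#include <stdio.h>

/* Illustrative sketch only, not GlusterFS source. */
#define ASSUMED_ENTRY_SIZE 180          /* hypothetical size of one dentry on the wire */
#define NFS_BUF  (4 * 1024)             /* buffer the NFS client asks for              */
#define RDA_BUF  (128 * 1024)           /* buffer readdir-ahead uses to read ahead     */
#define DIR_ENTRIES 100                 /* entries actually present on the bricks      */

int main(void)
{
    int fetched  = RDA_BUF / ASSUMED_ENTRY_SIZE;   /* what readdir-ahead pulls from the brick */
    int returned = NFS_BUF / ASSUMED_ENTRY_SIZE;   /* what fits in the reply sent to NFS      */

    if (fetched > DIR_ENTRIES)
        fetched = DIR_ENTRIES;          /* the brick is exhausted here, so it sets ENOENT */
    if (returned > fetched)
        returned = fetched;

    /* The brick reported ENOENT because *it* has no more entries, but the
     * client has only seen `returned` of them; the rest still sit in the
     * readdir-ahead cache and are lost if the client treats this reply as
     * end-of-directory. */
    printf("brick exhausted after %d entries, client received %d, %d still buffered\n",
           fetched, returned, fetched - returned);
    return 0;
}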

Reproducer:
Run the following test case with parallel-readdir enabled:
./tests/bugs/distribute/bug-1190734.t

Comment 1 Worker Ant 2017-03-27 06:39:19 UTC
REVIEW: https://review.gluster.org/16953 (reddir-ahead: Fix EOD propagation problem) posted (#1) for review on master by Poornima G (pgurusid)

Comment 2 Worker Ant 2017-04-05 06:24:36 UTC
COMMIT: https://review.gluster.org/16953 committed in master by Raghavendra G (rgowdapp) 
------
commit 61f76f318faed395660f5bbcfe39616b39c158f0
Author: Poornima G <pgurusid>
Date:   Mon Mar 27 11:38:28 2017 +0530

    reddir-ahead: Fix EOD propagation problem
    
    In the readdirp fop, op_errno is overloaded to signal EOD (end of
    directory): if op_errno contains ENOENT, there are no further entries
    left to read in the directory. Gluster NFS currently relies on this
    ENOENT to detect EOD.
    
    Issue:
    The NFS client issues readdirp with a 4K buffer; readdir-ahead enlarges
    the request to a 128K buffer because it reads ahead. If the directory
    holds 100 entries on the bricks, the 128K request fetches all 100 and
    stores them in readdir-ahead's cache, but only the ~23 entries that fit
    in 4K are returned to NFS. Because all 100 entries were read from the
    brick, op_errno is set to ENOENT, and that op_errno is propagated
    unchanged to NFS. The NFS client therefore concludes, after reading
    only 23 entries, that it has reached EOD.
    
    Solution:
    Do not propagate the ENOENT errno unless all the entries have been
    read from the readdir-ahead buffer.
    
    Change-Id: I4f173a77b21ab9e98ae35e291a45b8fc0cde65bd
    BUG: 1436086
    Signed-off-by: Poornima G <pgurusid>
    Reviewed-on: https://review.gluster.org/16953
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Raghavendra G <rgowdapp>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
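
A minimal sketch of the rule the Solution above describes. The names used here (rda_fill_ctx, brick_eod, buffered_bytes, rda_unwind_errno) are illustrative stand-ins, not the identifiers used by the actual readdir-ahead translator.

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

struct rda_fill_ctx {
    int    brick_eod;        /* the brick already answered with ENOENT (EOD)  */
    size_t buffered_bytes;   /* entries still held in the readdir-ahead cache */
};

/* Decide which op_errno to unwind toward the NFS server after serving
 * one readdirp reply of `bytes_sent` worth of entries from the cache. */
static int
rda_unwind_errno(struct rda_fill_ctx *ctx, size_t bytes_sent)
{
    ctx->buffered_bytes -= bytes_sent;   /* entries handed out leave the cache */

    /* Only claim end-of-directory (ENOENT) when the brick has hit EOD
     * *and* nothing is left in our own buffer; otherwise report success
     * so the NFS client keeps asking for the remaining entries. */
    if (ctx->brick_eod && ctx->buffered_bytes == 0)
        return ENOENT;

    return 0;
}

int main(void)
{
    /* 100 entries of an assumed 180 bytes each are already cached and the
     * brick has signalled EOD. */
    struct rda_fill_ctx ctx = { .brick_eod = 1, .buffered_bytes = 100 * 180 };

    /* First reply drains only part of the cache: errno must stay 0. */
    printf("after first reply: errno = %d\n", rda_unwind_errno(&ctx, 22 * 180));
    /* Once the cache is empty, the real EOD is forwarded. */
    printf("after final reply: errno = %d\n", rda_unwind_errno(&ctx, 78 * 180));
    return 0;
}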

Comment 3 Shyamsundar 2017-05-30 18:48:21 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/

