Bug 1136159 - Open fails with ENOENT while renames/readdirs are in progress
Summary: Open fails with ENOENT while renames/readdirs are in progress
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: posix
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1136622 1136821
 
Reported: 2014-09-02 04:27 UTC by Pranith Kumar K
Modified: 2015-05-14 17:43 UTC (History)
2 users

Fixed In Version: glusterfs-3.7.0
Clone Of:
: 1136622 1136821 (view as bug list)
Environment:
Last Closed: 2015-05-14 17:27:27 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Pranith Kumar K 2014-09-02 04:27:32 UTC
Description of problem:
Executing renames/readdirp/cat in a loop can lead to opens failing with ENOENT.

Version-Release number of selected component (if applicable):


How reproducible:
Very

Steps to Reproduce:
1. Created a plain replicate volume, disabled all performance xlators.
gluster volume set $1 performance.quick-read off
gluster volume set $1 performance.io-cache off
gluster volume set $1 performance.write-behind off
gluster volume set $1 performance.stat-prefetch off
gluster volume set $1 performance.read-ahead off

2. Mounted the volume on 2 mounts using -o direct-io-mode=yes
3. On one mount execute ls -lR
4. On the other mount execute:
echo abc > abc-ln
while true; do ln abc-ln abc; mv abc-ln abc; echo 3 > /proc/sys/vm/drop_caches; cat abc; ln abc abc-ln; mv abc abc-ln; echo 3 > /proc/sys/vm/drop_caches; cat abc-ln; done
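
Note on the drop_caches writes in the loop: the space before '>' matters. Written as 'echo 3>/proc/sys/vm/drop_caches', the shell parses '3>' as a redirection of file descriptor 3, so the string "3" is never written to the file and caches are not dropped. A minimal demonstration (using a temporary file instead of /proc):

```shell
# Pitfall demo: 'echo 3> file' redirects fd 3; 'echo 3 > file' writes "3".
out=$(mktemp)
echo 3> "$out"                       # fd-3 redirection: the file stays empty
test -s "$out" || echo "file is empty"
echo 3 > "$out"                      # intended form: writes "3" into the file
cat "$out"
```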

Actual results:
brick logs print 'Not able to open file, No such file or directory' quite a few times even though the file is always present.

Expected results:
Opens of the files should not fail.

Additional info:

Comment 1 Anand Avati 2014-09-02 04:29:40 UTC
REVIEW: http://review.gluster.org/8575 (storage/posix: Prefer gfid links for inode-handle) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 2 Anand Avati 2014-09-02 08:14:19 UTC
REVIEW: http://review.gluster.org/8575 (storage/posix: Prefer gfid links for inode-handle) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 3 Anand Avati 2014-09-02 12:10:23 UTC
COMMIT: http://review.gluster.org/8575 committed in master by Vijay Bellur (vbellur) 
------
commit 2c0a694b8d910c530899077c1d242ad1ea250965
Author: Pranith Kumar K <pkarampu>
Date:   Tue Sep 2 09:40:44 2014 +0530

    storage/posix: Prefer gfid links for inode-handle
    
    Problem:
    A file's path can be changed by other in-flight entry operations, so if
    renames are in progress at the time of other operations like open, those
    operations may fail. We observed that this issue can also happen while
    renames and readdirps/lookups are in progress, because the dentry table
    sometimes goes stale.
    
    Fix:
    Prefer gfid-handles over paths for files. For directory handles, preferring
    gfid-handles hits performance issues because resolving a path requires
    traversing up the symlinks.
    After this change, tests which check whether files are opened should check
    the gfid path, so a couple of tests have been changed accordingly.
    
    Note:
    This patch doesn't fix the issue for directories. I think a complete fix is
    to come up with an entry-operation serialization xlator. Until then, let's
    live with this.
    
    Change-Id: I10bda1083036d013f3a12588db7a71039d9da6c3
    BUG: 1136159
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/8575
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
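
For context on the fix: the gfid handle it prefers is the hard link that the posix xlator keeps under the brick's .glusterfs directory, addressed by the first two pairs of hex digits of the gfid. A minimal sketch of how that handle path is derived (the brick path and gfid below are made-up examples):

```shell
# Sketch: brick-side gfid handle layout is
#   <brick>/.glusterfs/<gfid[0:2]>/<gfid[2:4]>/<full-gfid>
# BRICK and GFID are hypothetical example values.
BRICK=/bricks/brick1
GFID=d2a339a6-3a5f-4c8a-9e2a-0b1c2d3e4f50
HANDLE="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
echo "$HANDLE"
```

Because this handle is keyed by the immutable gfid rather than the current path, it stays valid while renames move the file around the namespace.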

Comment 4 Niels de Vos 2015-05-14 17:27:27 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


