Bug 854633 - ENOENT when Node replaced by another machine
Summary: ENOENT when Node replaced by another machine
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Amar Tumballi
QA Contact: Anush Shetty
URL:
Whiteboard:
Depends On: GLUSTER-3831
Blocks:
 
Reported: 2012-09-05 13:31 UTC by Vidya Sakar
Modified: 2013-12-19 00:08 UTC (History)
7 users

Fixed In Version: glusterfs-3.4.0qa6
Doc Type: Bug Fix
Doc Text:
Clone Of: GLUSTER-3831
Environment:
Last Closed: 2013-09-23 22:39:10 UTC
Embargoed:


Attachments (Terms of Use)

Description Vidya Sakar 2012-09-05 13:31:08 UTC
+++ This bug was initially created as a clone of Bug #765563 +++

From gluster-devel mailing list

After machine2 replaced the node, machine1 gets the new inode number via READDIR, but the inode number returned by LOOKUP is still the one for the deleted node. The only way to work around it is to unmount and remount the filesystem.

machine1# cd /gfs/stale && ls -l
total 4
drwxr-xr-x  2 root  wheel  2048 Nov 27 06:47 a

machine2# cd /gfs/stale && cp -r a b && rm -Rf a && mv b a

machine1# ls -l 
ls: a: No such file or directory
machine1# cd a
machine1# ls
ls: .: No such file or directory
machine1# cd /gfs/stale 
machine1# ls -l
ls: a: No such file or directory
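The stale-LOOKUP symptom in the transcript above hinges on the fact that a copy + delete + rename replacement gives the directory a brand-new inode number, while the client keeps serving the old one from its lookup cache. A minimal local sketch of that inode change (on a plain local filesystem under a hypothetical /tmp/demo path, not the gluster mount; `stat -c %i` is GNU coreutils):

```shell
# Replace directory 'a' the same way machine2 did: copy, delete, rename.
mkdir -p /tmp/demo/a
ino_before=$(stat -c %i /tmp/demo/a)           # inode of the original 'a'
cp -r /tmp/demo/a /tmp/demo/b && rm -rf /tmp/demo/a && mv /tmp/demo/b /tmp/demo/a
ino_after=$(stat -c %i /tmp/demo/a)            # inode of the replacement 'a'
echo "before=$ino_before after=$ino_after"
[ "$ino_before" != "$ino_after" ] && echo "inode changed"
```

On a single machine the kernel sees the rename and revalidates; in the reported bug, machine1's cached LOOKUP result still points at the deleted inode, hence ENOENT until remount.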

--- Additional comment from amarts on 2011-12-01 02:38:40 EST ---

Has anyone tried to reproduce this? I see that fuse-bridge already has a mechanism to re-validate inodes.

Please try to reproduce it on the master branch and confirm the behavior.

Comment 2 Anush Shetty 2013-01-16 06:03:36 UTC
Could you guys please point us to the fix here?

Comment 3 Amar Tumballi 2013-01-21 05:17:19 UTC
Anush, the changes for the GFID-based backend and proper fd migration should fix these things. There are specific bugs tracking those enhancements, hence no mention of any patches/fixes here. This is fixed in the 3.4.x releases (currently in QA); none of our testing yields this behavior anymore.

Comment 6 Anush Shetty 2013-08-07 12:22:22 UTC
Verified with glusterfs-fuse-3.4.0.17rhs-1.el6rhs.x86_64

Comment 7 Scott Haines 2013-09-23 22:39:10 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html

