Bug 1396113 - many file related inconsistencies with gnfs
Summary: many file related inconsistencies with gnfs
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gluster-nfs
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Niels de Vos
QA Contact: surabhi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-17 13:13 UTC by Nag Pavan Chilakam
Modified: 2017-02-20 08:31 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-15 14:31:44 UTC
Target Upstream Version:


Attachments

Description Nag Pavan Chilakam 2016-11-17 13:13:57 UTC
Description of problem:
=========================
Hit these issues while trying to validate the "Reset cs->resolvedhard while resolving an entry" code change from http://review.gluster.org/#/c/14941/ and https://bugzilla.redhat.com/show_bug.cgi?id=1328451.

I have a distributed volume with just two bricks, b1 and b2, hosted by nodes n1 and n2 respectively. There are two more nodes, n3 and n4, in the cluster.

I mounted the volume on two clients, c1 and c2, using the NFS servers on n3 and n4 respectively.
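
For reference, a rough sketch of that setup (the volume name "distvol", the brick paths and the mount points are assumptions, not taken from this report):

# on one of the storage nodes: a 2-brick distribute volume across n1 and n2
gluster volume create distvol n1:/bricks/b1 n2:/bricks/b2
gluster volume start distvol

# on client c1, mounting through the gluster-nfs server on n3
mount -t nfs -o vers=3 n3:/distvol /mnt/distvol

# on client c2, mounting through the gluster-nfs server on n4
mount -t nfs -o vers=3 n4:/distvol /mnt/distvol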


1) Created a directory dir1.
2) Created files f{1..10} inside it.

Problems/Observations:
1) From c1, inside dir1, I ran rm -rf * (to delete all of f{1..10}). Then I went to c2 and did an ls; the output was:
ls: cannot access f1: No such file or directory
ls: cannot access f2: No such file or directory
ls: cannot access f3: No such file or directory
ls: cannot access f5: No such file or directory
ls: cannot access f10: No such file or directory
ls: cannot access f4: No such file or directory
ls: cannot access f6: No such file or directory
ls: cannot access f7: No such file or directory
ls: cannot access f8: No such file or directory
ls: cannot access f9: No such file or directory
f1  f10  f2  f3  f4  f5  f6  f7  f8  f9
(Ideally I should not see anything here; ls should just return an empty listing.)
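
To spell out the sequence (using the assumed mount point from the sketch above):

# on c1
cd /mnt/distvol/dir1
rm -rf *

# immediately afterwards on c2
cd /mnt/distvol/dir1
ls    # expected an empty listing, but got the stale names plus the ENOENT errors above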


2) I then immediately created files f{1..10} again on c1, without doing an ls on c2.
Now, the first time I do an "ls f1" or "stat f1" from c2, it says:
[root@dhcp35-103 dirx]# stat f1
stat: cannot stat ‘f1’: No such file or directory

After that it continuously says "stat: cannot stat ‘f1’: Stale file handle"
until I do a lookup of the whole directory using ls.

3) I cleaned up all of the above files in dirx, then touched x{1..10} from c1 and issued ls from c2.
c2 does not list any files for at least 4-5 attempts before finally returning the list of x{1..10}.


4)
Version-Release number of selected component (if applicable):
[root@dhcp35-37 ~]# rpm -qa|grep gluster
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-server-3.8.4-3.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-3.el7rhgs.x86_64
glusterfs-api-3.8.4-3.el7rhgs.x86_64
glusterfs-libs-3.8.4-3.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-3.el7rhgs.x86_64
nfs-ganesha-gluster-2.3.1-8.el7rhgs.x86_64
glusterfs-cli-3.8.4-3.el7rhgs.x86_64
python-gluster-3.8.4-3.el7rhgs.noarch
glusterfs-devel-3.8.4-3.el7rhgs.x86_64
glusterfs-events-3.8.4-3.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-3.el7rhgs.x86_64
glusterfs-fuse-3.8.4-3.el7rhgs.x86_64
glusterfs-api-devel-3.8.4-3.el7rhgs.x86_64
glusterfs-rdma-3.8.4-3.el7rhgs.x86_64
gluster-nagios-addons-0.2.7-1.el7rhgs.x86_64
glusterfs-3.8.4-3.el7rhgs.x86_64


Expected results:


Additional info:

Comment 2 Nag Pavan Chilakam 2016-11-18 07:17:49 UTC
Hit this bug as part of validating bug 1328451 - observing "Too many levels of symbolic links" after adding bricks and then issuing a replace-brick.

Comment 6 Niels de Vos 2016-11-29 14:54:56 UTC
To me this also sounds like caching done on the NFS-client, and not something that Gluster/NFS can (or is expected to) fix. When files are deleted on one NFS-client, it is common to see those files for a little longer on other NFS-clients. Mounting with "noac" or dropping the cached dentries and inodes might help in this case (echo 2 > /proc/sys/vm/drop_caches).
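
A minimal sketch of both workarounds, assuming the volume name and mount point from earlier:

# remount with attribute caching disabled on the NFS-client
mount -t nfs -o vers=3,noac n3:/distvol /mnt/distvol

# or drop the cached dentries and inodes on the NFS-client
echo 2 > /proc/sys/vm/drop_caches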

Otherwise doing the operations on the 2nd NFS-client with some delay may be sufficient too.

It is unclear to me if this problem is newly introduced with this particular code change, or if this has existed before (what I expect).

Note that "ls" also executes a stat() systemcall by default on RHEL. In order to prevent executing the stat(), you will need to run "/bin/ls" or escape the bash alias by running "\ls". The NFS-client can have dentries cached, causing no new READDIR to be sent to the server. In case the attributes already have expired, only the stat() would be done. Depending on the state of the caches and the changes on the Gluster volume, either ENOENT or ESTALE would get returned.

If none of the above hints help, we need a tcpdump captured on the Gluster/NFS server. The capture should include the mounting of the NFS-clients, the NFS traffic and the GlusterFS traffic. It would also be helpful to have the rpcdebug output from the NFS-clients. This information makes it possible for us to track the operations done on, and the results returned for, the (NFS) filehandles and the (GlusterFS) GFIDs.
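
Something along these lines would do (the interface name and output path are assumptions):

# on the Gluster/NFS server, capture full packets from before the clients mount
tcpdump -i any -s 0 -w /var/tmp/gnfs-capture.pcap

# on each NFS-client, enable NFS debug logging (output goes to the kernel log),
# reproduce the issue, then clear the flags again
rpcdebug -m nfs -s all
rpcdebug -m nfs -c all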

Comment 8 Nag Pavan Chilakam 2016-12-01 07:35:44 UTC
If I mount the volume over NFS using the noac option, I don't see any of the problems.
Hence this looks more like a design limitation that we can live with; the only problem is that applications relying on this data can fail.
However, we can move it to 3.2.0-beyond.

