Bug 1501253 - [GSS]Issues in accessing renamed file from multiple clients
Summary: [GSS]Issues in accessing renamed file from multiple clients
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: fuse
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
: RHGS 3.4.0
Assignee: Raghavendra G
QA Contact: Vivek Das
Depends On:
Blocks: 1503135
Reported: 2017-10-12 09:27 UTC by Abhishek Kumar
Modified: 2018-09-11 06:21 UTC
13 users

Fixed In Version: glusterfs-3.12.2-2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2018-09-04 06:36:52 UTC
Target Upstream Version:

Attachments (Terms of Use)
lock-rename.c (2.67 KB, text/x-csrc)
2017-10-21 12:06 UTC, Soumya Koduri

System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:2607 None None None 2018-09-04 06:38:34 UTC

Description Abhishek Kumar 2017-10-12 09:27:50 UTC
Description of problem:

Updating a file from multiple clients sometimes results in 'No such file or directory' errors, and the overall completion time is much longer than when a single client performs the update.

Version-Release number of selected component (if applicable):


How reproducible:

Customer Environment

Steps to Reproduce:

- A file named 'property' is accessed by multiple clients.
- Client 1 takes a lock on the file 'property'.
- Client 1 creates a temporary file, e.g. 'property.tmp'.
- Client 1 updates the tmp file.
- Client 1 renames the tmp file to 'property', replacing the older 'property' file.
- The other clients repeat the same process.

Additional info:

The initial thought was that this could be a cache-coherency issue. Imagine the following hypothetical scenario:
A file f1 is looked up by client2, and the inode number of f1 (say inode1) is cached by client2's VFS. client1 then performs rename(f2, f1), which changes f1's inode number to inode2 (inode2 being f2's inode number before the rename). When client2 next accesses f1, its VFS passes inode1 to glusterfs because the cache has not been updated with the new inode2 for f1. Since inode1 no longer exists, the access fails with ENOENT.

To test this hypothesis, client1 and client2 were mounted with the entry-timeout and attribute-timeout values set to 0:
# mount -t glusterfs <volfile-server>:/<volfile-name> -o entry-timeout=0,attribute-timeout=0 <mount-path>

Even with these settings, the same issue occurs.

Comment 14 Soumya Koduri 2017-10-21 12:06:47 UTC
Created attachment 1341564 [details]

Comment 39 errata-xmlrpc 2018-09-04 06:36:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

