Bug 1501253 - [GSS]Issues in accessing renamed file from multiple clients
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: fuse
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: Raghavendra G
QA Contact: Vivek Das
Depends On:
Blocks: 1503135
Reported: 2017-10-12 05:27 EDT by Abhishek Kumar
Modified: 2018-09-11 02:21 EDT
CC: 13 users

See Also:
Fixed In Version: glusterfs-3.12.2-2
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-04 02:36:52 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
lock-rename.c (2.67 KB, text/x-csrc)
2017-10-21 08:06 EDT, Soumya Koduri


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:2607 None None None 2018-09-04 02:38 EDT

Description Abhishek Kumar 2017-10-12 05:27:50 EDT
Description of problem:

Updating a file from multiple clients occasionally fails with 'No such file or directory', and the overall completion time is much longer than when a single client performs the update.


Version-Release number of selected component (if applicable):

glusterfs-server-3.8.4-44.el7rhgs.x86_64

How reproducible:

Customer Environment

Steps to Reproduce:

- A file named 'property' is accessed by multiple clients.
- Client 1 takes a lock on the file 'property'.
- It creates a temporary file such as 'property.tmp'.
- It updates the temporary file.
- It renames 'property.tmp' to 'property', replacing the older 'property' file.
- The other clients repeat the same process.


Additional info:

The initial thought was that this could be a cache-coherency issue. Imagine the following hypothetical scenario:
A file f1 is looked up by client2, and the inode number of f1 - say inode1 - is cached by client2's VFS. client1 then performs rename(f2, f1), which changes the inode number of f1 to inode2 (inode2 being the inode number of f2 before the rename). When client2 tries to access f1, its VFS passes inode1 to glusterfs because the cache has not been updated with the new inode2 for f1. Since inode1 no longer exists, the access fails with ENOENT.

To test this hypothesis, client1 and client2 were mounted with the entry-timeout and attribute-timeout values set to 0:
# mount -t glusterfs <volfile-server>:/<volfile-name> -o entry-timeout=0,attribute-timeout=0 <mount-path>

Even with these settings, the same issue occurs.
Comment 14 Soumya Koduri 2017-10-21 08:06 EDT
Created attachment 1341564 [details]
lock-rename.c
Comment 39 errata-xmlrpc 2018-09-04 02:36:52 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
