Bug 1734305 - ctime: When healing ctime xattr for legacy files, if multiple clients access and modify the same file, the ctime might be updated incorrectly.
Summary: ctime: When healing ctime xattr for legacy files, if multiple clients access and modify the same file, the ctime might be updated incorrectly.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: core
Version: rhgs-3.5
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.0
Assignee: Kotresh HR
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On: 1734299 1737745 1739436
Blocks: 1696809
 
Reported: 2019-07-30 08:19 UTC by Kotresh HR
Modified: 2019-10-31 13:22 UTC (History)
8 users

Fixed In Version: glusterfs-6.0-12
Doc Type: No Doc Update
Doc Text:
Clone Of: 1734299
Environment:
Last Closed: 2019-10-30 12:22:31 UTC
Target Upstream Version:




Links
Red Hat Product Errata RHEA-2019:3249 (Last Updated: 2019-10-30 12:22:54 UTC)

Description Kotresh HR 2019-07-30 08:19:21 UTC
+++ This bug was initially created as a clone of Bug #1734299 +++

Description of problem:
Ctime heals the ctime xattr ("trusted.glusterfs.mdata") during lookup
if it is not present. In a multi-client scenario, there is a race
that can update the ctime xattr to an older value.

e.g. Let c1 and c2 be two clients, and let file1 be a file that does
not have the ctime xattr. Let the ctime of file1 on the backend be t1
(ctime heals the time attributes from the backend when the xattr is
not present).

Now following operations are done on mount
c1 -> ls -l /mnt1/file1  |   c2 -> ls -l /mnt2/file1;echo "append" >> /mnt2/file1;

The race is that both c1 and c2 fail to fetch the ctime xattr in lookup,
so both of them try to heal the ctime to 't1'. If c2 wins the race and
appends to the file before c1 heals it, c2 sets the time to 't1' and then
updates it to 't2' (because of the append). c1 then proceeds to heal and
sets it back to 't1', which is incorrect.
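The interleaving above can be sketched with a toy model (all names here are hypothetical, not GlusterFS code; a plain dict stands in for the on-disk xattr and a lock makes each operation atomic, as the brick does):

```python
import threading

# Toy stand-in for the on-disk "trusted.glusterfs.mdata" xattr.
xattr = {}                     # path -> ctime value
lock = threading.Lock()

def heal_naive(path, backend_ctime):
    # Buggy heal: unconditionally writes the backend ctime,
    # even if another client already updated the xattr.
    with lock:
        xattr[path] = backend_ctime

def append(path, new_ctime):
    # A write updates the ctime to the time of the operation.
    with lock:
        xattr[path] = new_ctime

# c2 wins the race: it heals to t1, then appends (moving ctime to t2)...
heal_naive("file1", backend_ctime=1)   # c2's lookup heal -> t1
append("file1", new_ctime=2)           # c2's append      -> t2
# ...then c1's delayed heal lands and rolls time backwards:
heal_naive("file1", backend_ctime=1)   # c1's lookup heal -> t1 again
print(xattr["file1"])                  # 1: ctime went back in time
```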


Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always

Steps to Reproduce:
1. Create single brick gluster volume and start it
2. Mount at /mnt1 and /mnt2
3. Disable ctime
    gluster volume set <volname> ctime off
4. Create a file
    touch /mnt1/file1
5. Enable ctime
    gluster volume set <volname> ctime on
6. Put a breakpoint at gf_utime_set_mdata_lookup_cbk on '/mnt1'
7. ls -l /mnt1/file1
      This hits the breakpoint. Continue for the root gfid, but do not continue when stbuf->ia_gfid equals file1's gfid.
8. ls -l /mnt2/file1
9. The ctime xattr is healed from /mnt2. Capture it.
    getfattr -d -m . -e hex /<brickpath>/file1 | grep mdata
10. echo "append" >> /mnt2/file1 and capture mdata
    getfattr -d -m . -e hex /<brickpath>/file1 | grep mdata
11. Continue from the breakpoint hit in step 7 and capture the mdata
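To compare the values captured in steps 9-11, the hex blob from getfattr can be decoded into timestamps. The layout assumed below (1-byte version, 8-byte flags, then ctime/mtime/atime as big-endian 64-bit sec/nsec pairs) is a sketch based on posix_mdata_t; verify it against the glusterfs sources for your build before relying on it:

```python
import struct

def decode_mdata(hexstr):
    """Decode a trusted.glusterfs.mdata hex dump (assumed layout:
    1-byte version, 8-byte flags, then ctime/mtime/atime as
    big-endian 64-bit sec/nsec pairs)."""
    raw = bytes.fromhex(hexstr.removeprefix("0x"))
    version = raw[0]
    flags, = struct.unpack_from(">Q", raw, 1)
    times = {}
    for i, name in enumerate(("ctime", "mtime", "atime")):
        sec, nsec = struct.unpack_from(">QQ", raw, 9 + i * 16)
        times[name] = (sec, nsec)
    return version, flags, times

# Round-trip on a synthetic blob (not a real capture):
blob = struct.pack(">BQ", 1, 7) + struct.pack(">QQ", 1564474761, 5) * 3
v, f, t = decode_mdata(blob.hex())
print(v, f, t["ctime"])   # 1 7 (1564474761, 5)
```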


Actual results:
The mdata xattr at step 11 is equal to the value at step 9 (the ctime went back in time).

Expected results:
The mdata xattr at step 11 should be equal to the value at step 10.

Additional info:

--- Additional comment from Worker Ant on 2019-07-30 08:14:18 UTC ---

REVIEW: https://review.gluster.org/23131 (posix/ctime: Fix race during lookup ctime xattr heal) posted (#1) for review on master by Kotresh HR
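The review above contains the actual patch; as a sketch of one way to make the heal race-free (not necessarily the mechanics of the posted fix), the heal can be given create-only semantics, analogous to calling setxattr with the XATTR_CREATE flag so that a heal which loses the race fails with EEXIST instead of overwriting a newer value. Continuing the toy model (hypothetical names, a dict standing in for the xattr):

```python
import threading

xattr = {}                     # path -> ctime value
lock = threading.Lock()

def heal_create_only(path, backend_ctime):
    # Race-free heal: only create the xattr if it is still absent,
    # analogous to setxattr(..., XATTR_CREATE) failing with EEXIST.
    with lock:
        if path in xattr:
            return False       # someone else healed or updated first
        xattr[path] = backend_ctime
        return True

def append(path, new_ctime):
    with lock:
        xattr[path] = new_ctime

heal_create_only("file1", 1)                 # c2's heal      -> t1
append("file1", 2)                           # c2's append    -> t2
late = heal_create_only("file1", 1)          # c1's late heal -> no-op
print(late, xattr["file1"])                  # False 2: no rollback
```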

Comment 10 errata-xmlrpc 2019-10-30 12:22:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249

