Bug 1593538 - ctime: Access time is different within the same replica/EC volume
Summary: ctime: Access time is different within the same replica/EC volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: ctime
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1633015
 
Reported: 2018-06-21 03:54 UTC by Kotresh HR
Modified: 2023-09-14 04:30 UTC
CC List: 3 users

Fixed In Version: glusterfs-6.0
Clone Of:
Cloned To: 1633015
Environment:
Last Closed: 2019-03-25 16:30:27 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kotresh HR 2018-06-21 03:54:56 UTC
Description of problem:
EC sends several read fops (access, getxattr and seek) to only one subvolume. It also sends readv requests to a subset of subvolumes, but not all of them. AFR behaves similarly for the same fops. Hence, atime does not get updated on all the subvolumes.
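
A rough way to observe the divergence directly on the brick backends is to compare the time metadata each brick stores for the same file after reading it through the client mount. The sketch below is illustrative only: the brick paths are hypothetical, and trusted.glusterfs.mdata is my understanding of the xattr the ctime feature maintains, so verify the name on the version under test.

/* Compare the ctime feature's time-metadata xattr across two replica bricks.
 * Brick paths are hypothetical examples; run on a brick host as root. */
#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>

#define MDATA_XATTR "trusted.glusterfs.mdata"  /* assumed xattr name */

int main(void)
{
    const char *bricks[] = {
        "/bricks/brick1/file.txt",   /* hypothetical backend copy on brick 1 */
        "/bricks/brick2/file.txt",   /* hypothetical backend copy on brick 2 */
    };
    char buf[2][256];
    ssize_t len[2];

    for (int i = 0; i < 2; i++) {
        len[i] = getxattr(bricks[i], MDATA_XATTR, buf[i], sizeof(buf[i]));
        if (len[i] < 0) {
            perror(bricks[i]);
            return 1;
        }
    }

    if (len[0] != len[1] || memcmp(buf[0], buf[1], (size_t)len[0]) != 0)
        printf("%s differs across the replica bricks\n", MDATA_XATTR);
    else
        printf("%s is consistent\n", MDATA_XATTR);
    return 0;
}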


Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always


Actual results:
atime is not consistent across the replica/EC set

Expected results:
atime should also be consistent across the replica/EC set

Additional info:

Comment 1 Kotresh HR 2018-06-21 03:58:30 UTC
Fixing this would affect performance. Most applications depend on the consistency of ctime and mtime; I doubt any application strictly depends on atime's consistency. So should we really consider fixing this? On the other hand, if it's not fixed, it would cause self-heal traffic because this xattr would be inconsistent across the replica.

Amar/Xavi,

What do you think?

Comment 2 Xavi Hernandez 2018-06-21 07:26:30 UTC
Keeping atime synchronized on all bricks in the naive way will probably have a significant performance impact. EC and AFR would need to send all read requests to all bricks just to update the atime. In the case of DHT, the problem is worse because accessing a directory (readdir) should update its atime, which means that all subvolumes would need to be updated. This is not scalable.

However, there are applications that use atime (though I'm not sure what degree of consistency they need), and not updating atime consistently could trigger self-heal constantly unless we explicitly handle and parse the ctime xattr in AFR and EC.

I think we should provide this as an option (or depend on the atime/noatime/relatime mount options). It should be implemented as a special update operation (maybe a setattr sent internally by the utime xlator after successful reads), because otherwise we would need to make big changes in DHT, AFR and EC. In this case, read operations shouldn't update atime at all; it should only be updated by this special internal operation. This way we make sure it's kept consistent all the time.

We could provide options to update it continuously, lazily in the background, using the same semantics as relatime, or not updating it at all. This way the user can decide what's really needed.
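
For reference, a minimal sketch of the relatime-style decision mentioned above (kernel semantics as I understand them, not GlusterFS code; the helper name is illustrative). The point is that the internal atime setattr would only be issued when it carries new information, which bounds the extra traffic:

/* relatime-like rule: refresh atime only if it is not newer than mtime/ctime,
 * or if the stored atime is more than a day old. */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static bool should_update_atime(time_t atime, time_t mtime, time_t ctime_,
                                time_t now)
{
    if (atime <= mtime || atime <= ctime_)
        return true;                      /* atime is stale w.r.t. changes */
    if (now - atime > 24 * 60 * 60)
        return true;                      /* refresh at most once a day */
    return false;
}

int main(void)
{
    time_t now = time(NULL);

    /* File modified after it was last read: atime must be refreshed. */
    printf("%d\n", should_update_atime(now - 100, now - 10, now - 10, now));
    /* File read recently and not modified since: skip the update. */
    printf("%d\n", should_update_atime(now - 10, now - 100, now - 100, now));
    return 0;
}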

Comment 3 Worker Ant 2018-09-04 07:27:03 UTC
REVIEW: https://review.gluster.org/21073 (ctime: Provide noatime option) posted (#1) for review on master by Kotresh HR

Comment 4 Worker Ant 2018-09-25 17:21:29 UTC
COMMIT: https://review.gluster.org/21073 committed in master by "Amar Tumballi" <amarts> with a commit message- ctime: Provide noatime option

Most applications are {c|m}time dependent and very
few are atime dependent, so provide a noatime option
to avoid updating atime when the ctime feature is
enabled.

This option should be enabled along with the ctime
feature to avoid unnecessary self-heal: since AFR/EC
reads data from a single subvolume, atime gets
updated on only one subvolume, triggering self-heal.

updates: bz#1593538
Change-Id: I085fb33c882296545345f5df194cde7b6cbc337e
Signed-off-by: Kotresh HR <khiremat>
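
To illustrate the behaviour the commit describes, here is a hypothetical sketch (not the actual patch): with noatime enabled, the atime bit is simply dropped from the set of time attributes the utime layer asks the bricks to update, so reads stop generating per-subvolume atime writes. The flag names below are illustrative, not GlusterFS's real valid-mask constants.

#include <stdbool.h>
#include <stdio.h>

enum {
    SET_ATIME = 1 << 0,   /* illustrative bits, not gluster's actual values */
    SET_MTIME = 1 << 1,
    SET_CTIME = 1 << 2,
};

/* Strip the atime bit when the (hypothetical) noatime option is enabled. */
static int filter_time_mask(int mask, bool noatime)
{
    return noatime ? (mask & ~SET_ATIME) : mask;
}

int main(void)
{
    int mask = SET_ATIME | SET_CTIME;   /* what a read would normally request */

    printf("with noatime:    0x%x\n", (unsigned)filter_time_mask(mask, true));
    printf("without noatime: 0x%x\n", (unsigned)filter_time_mask(mask, false));
    return 0;
}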

Comment 5 Shyamsundar 2019-03-25 16:30:27 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 6 Red Hat Bugzilla 2023-09-14 04:30:08 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

