+++ This bug was initially created as a clone of Bug #1593538 +++

Description of problem:
EC sends several read requests to only one subvolume: access, getxattr and seek. It also sends readv requests to a subset of subvolumes, but not all. AFR has similar behavior for the same fops. Hence atime would not get updated on all the subvolumes.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always

Actual results:
atime is not consistent across the replica/EC set

Expected results:
atime should also be consistent

Additional info:

--- Additional comment from Kotresh HR on 2018-06-20 23:58:30 EDT ---

Fixing this would affect performance. Most applications depend on the consistency of ctime and mtime; I doubt any application strictly depends on atime consistency. So should we really consider fixing this? On the other hand, if it is not fixed, this xattr being inconsistent across the replica would cause self-heal traffic.

Amar/Xavi, what do you think?

--- Additional comment from Xavi Hernandez on 2018-06-21 03:26:30 EDT ---

Keeping atime synchronized on all bricks in the naive way will probably have a significant performance impact. EC and AFR would need to send all read requests to all bricks just to update the atime. In the case of DHT the problem is worse, because accessing a directory (readdir) should update atime, which means that all subvolumes would have to be updated. This is not scalable.

However, there are applications that use atime (though I'm not sure what degree of consistency they need), and not updating atime consistently could trigger self-heal constantly unless we explicitly handle and parse the ctime xattr in AFR and EC.

I think we should provide this as an option (or depend on the atime/noatime/relatime mount options). It should be implemented as a special update operation (maybe a setattr sent internally by the utime xlator after successful reads), because otherwise we would need to make big changes in DHT, AFR and EC. In this case, read operations shouldn't update atime at all; it should only be updated by this special internal operation. This way we make sure it is kept consistent all the time.

We could provide options to update it continuously, lazily in the background, with the same semantics as relatime, or to not update it at all. This way the user can decide what is really needed.

--- Additional comment from Worker Ant on 2018-09-04 03:27:03 EDT ---

REVIEW: https://review.gluster.org/21073 (ctime: Provide noatime option) posted (#1) for review on master by Kotresh HR

--- Additional comment from Worker Ant on 2018-09-25 13:21:29 EDT ---

COMMIT: https://review.gluster.org/21073 committed in master by "Amar Tumballi" <amarts> with a commit message:

ctime: Provide noatime option

Most applications are {c|m}time dependent and very few are atime dependent, so provide a noatime option to not update atime when the ctime feature is enabled. This option has to be enabled together with the ctime feature to avoid unnecessary self-heal: since AFR/EC reads data from a single subvolume, atime would otherwise be updated on only one subvolume, triggering self-heal.

updates: bz#1593538
Change-Id: I085fb33c882296545345f5df194cde7b6cbc337e
Signed-off-by: Kotresh HR <khiremat>
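For context on the relatime behaviour and the noatime option discussed above, here is a minimal standalone C sketch of the three atime-update policies (strict, relatime-like, noatime). It is illustrative only: the enum and function names are invented for this example and this is not GlusterFS code; the actual change is the utime xlator patch linked above.

/*
 * Hedged illustration only, NOT GlusterFS code. It sketches the decision
 * an internal atime updater could make after a successful read, under the
 * three policies discussed in this bug. All identifiers are made up.
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

enum atime_policy {
    ATIME_NONE,     /* "noatime": never update atime                      */
    ATIME_RELATIME, /* update only when atime is meaningfully stale       */
    ATIME_STRICT    /* update on every read (costly: touches all bricks)  */
};

/* Decide whether a read at 'now' should trigger the internal atime update. */
static bool should_update_atime(enum atime_policy policy,
                                time_t atime, time_t mtime, time_t ctime_,
                                time_t now)
{
    switch (policy) {
    case ATIME_NONE:
        return false;
    case ATIME_RELATIME:
        /* relatime semantics: update if atime is older than mtime or
         * ctime, or if it is more than 24 hours old. */
        return atime <= mtime || atime <= ctime_ ||
               (now - atime) > 24 * 60 * 60;
    case ATIME_STRICT:
    default:
        return true;
    }
}

int main(void)
{
    time_t now    = time(NULL);
    time_t atime  = now - 2 * 60 * 60; /* last read two hours ago   */
    time_t mtime  = now - 60;          /* written one minute ago    */
    time_t ctime_ = mtime;

    /* The write made atime stale, so a relatime-like policy refreshes it
     * on the next read, while noatime never does. */
    printf("relatime: %d\n",
           should_update_atime(ATIME_RELATIME, atime, mtime, ctime_, now));
    printf("noatime:  %d\n",
           should_update_atime(ATIME_NONE, atime, mtime, ctime_, now));
    return 0;
}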
REVIEW: https://review.gluster.org/21281 (ctime: Provide noatime option) posted (#1) for review on release-5 by Kotresh HR
COMMIT: https://review.gluster.org/21281 committed in release-5 by "Shyamsundar Ranganathan" <srangana> with a commit message:

ctime: Provide noatime option

Most applications are {c|m}time dependent and very few are atime dependent, so provide a noatime option to not update atime when the ctime feature is enabled. This option has to be enabled together with the ctime feature to avoid unnecessary self-heal: since AFR/EC reads data from a single subvolume, atime would otherwise be updated on only one subvolume, triggering self-heal.

Backport of:
> Patch: https://review.gluster.org/21073
> BUG: 1593538
> Change-Id: I085fb33c882296545345f5df194cde7b6cbc337e
> Signed-off-by: Kotresh HR <khiremat>

(cherry picked from commit 89636be4c73b12de2e11c75d8e59527bb243f147)

updates: bz#1633015
Change-Id: I085fb33c882296545345f5df194cde7b6cbc337e
Signed-off-by: Kotresh HR <khiremat>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/