Bug 1585894 - posix/ctime: EC self heal of directory is blocked with ctime feature enabled
Summary: posix/ctime: EC self heal of directory is blocked with ctime feature enabled
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: posix
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Depends On: 1584981
Blocks:
 
Reported: 2018-06-05 02:40 UTC by Kotresh HR
Modified: 2019-04-23 10:09 UTC
CC List: 2 users

Fixed In Version: glusterfs-v4.1.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1584981
Environment:
Last Closed: 2018-06-20 18:07:27 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kotresh HR 2018-06-05 02:40:52 UTC
+++ This bug was initially created as a clone of Bug #1584981 +++

Description of problem:
EC self heal of a directory is blocked with the ctime feature enabled.
It was found that the value of the trusted.glusterfs.mdata xattr differs
across the bricks of the EC subvolume set. When that happens, EC cannot
heal the xattr, and the directory heal is therefore blocked.
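
For anyone checking this by hand, the comparison amounts to reading the
trusted.glusterfs.mdata xattr for the same directory on each brick of the EC
set (e.g. with getfattr -n trusted.glusterfs.mdata -e hex on each brick path)
and comparing the values. Below is a minimal C sketch of the same check; the
brick paths are placeholders and not taken from this report.

/* Illustrative sketch: dump trusted.glusterfs.mdata from each brick's copy
 * of a directory so the values can be compared. Brick paths are
 * hypothetical. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/xattr.h>

int
main (void)
{
        const char *bricks[] = {
                "/bricks/brick0/testdir",   /* placeholder backend paths */
                "/bricks/brick1/testdir",
                "/bricks/brick2/testdir",
        };
        unsigned char buf[256];

        for (size_t i = 0; i < sizeof (bricks) / sizeof (bricks[0]); i++) {
                ssize_t len = lgetxattr (bricks[i], "trusted.glusterfs.mdata",
                                         buf, sizeof (buf));
                if (len < 0) {
                        perror (bricks[i]);
                        continue;
                }
                printf ("%s: ", bricks[i]);
                for (ssize_t j = 0; j < len; j++)
                        printf ("%02x", buf[j]);   /* print value in hex */
                printf ("\n");
        }
        return 0;
}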

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Most of the time

Steps to Reproduce:
Run the following EC test case from the gluster repo:
1. prove -v ./tests/bugs/ec/bug-1547662.t


Actual results:
The test case fails most of the time.

Expected results:
The test case should always pass

Additional info:

--- Additional comment from Worker Ant on 2018-06-01 02:09:28 EDT ---

REVIEW: https://review.gluster.org/20120 (posix/ctime: Fix fops racing in updating mtime/atime) posted (#1) for review on master by Kotresh HR

--- Additional comment from Worker Ant on 2018-06-03 05:10:16 EDT ---

COMMIT: https://review.gluster.org/20120 committed in master by "Amar Tumballi" <amarts> with a commit message- posix/ctime: Fix fops racing in updating mtime/atime

In distributed systems, there can be races between fops
updating mtime/atime, which can result in different
mtime/atime values for the same file. Updating them only
when the new time is greater than the existing one ensures
that only the highest time is retained. If the mtime/atime
update comes from an explicit utime syscall, it is allowed
to set an earlier time.

Thanks to Xavi for helping to root-cause the issue.

fixes: bz#1584981
Change-Id: If1230a75b96d7f9a828795189fcc699049e7826e
Signed-off-by: Kotresh HR <khiremat>
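
The rule described in the commit message can be summarized with a minimal
sketch. The names below (mdata_time, time_is_newer, update_time,
from_explicit_utime) are made up for illustration and are not the actual
posix xlator code from the patch:

#include <stdbool.h>
#include <stdint.h>

struct mdata_time {
        int64_t sec;
        int64_t nsec;
};

/* true if a is strictly later than b */
static bool
time_is_newer (const struct mdata_time *a, const struct mdata_time *b)
{
        return (a->sec > b->sec) ||
               (a->sec == b->sec && a->nsec > b->nsec);
}

/* Store the incoming time only if it is newer than the stored one, or if
 * it comes from an explicit utime-style call, which may legitimately set
 * an earlier timestamp. Racing fops then converge on the highest time on
 * every brick, keeping trusted.glusterfs.mdata identical across the EC
 * set. */
static void
update_time (struct mdata_time *stored, const struct mdata_time *incoming,
             bool from_explicit_utime)
{
        if (from_explicit_utime || time_is_newer (incoming, stored))
                *stored = *incoming;
}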

Comment 1 Worker Ant 2018-06-05 02:45:51 UTC
REVIEW: https://review.gluster.org/20146 (posix/ctime: Fix fops racing in updating mtime/atime) posted (#1) for review on release-4.1 by Kotresh HR

Comment 2 Worker Ant 2018-06-08 12:57:21 UTC
COMMIT: https://review.gluster.org/20146 committed in release-4.1 by "Shyamsundar Ranganathan" <srangana> with a commit message- posix/ctime: Fix fops racing in updating mtime/atime

In distributed systems, there can be races between fops
updating mtime/atime, which can result in different
mtime/atime values for the same file. Updating them only
when the new time is greater than the existing one ensures
that only the highest time is retained. If the mtime/atime
update comes from an explicit utime syscall, it is allowed
to set an earlier time.

Thanks to Xavi for helping to root-cause the issue.

Backport of:
> Patch: https://review.gluster.org/#/c/20120/
> BUG: 1584981
> Change-Id: If1230a75b96d7f9a828795189fcc699049e7826e
> Signed-off-by: Kotresh HR <khiremat>
(cherry picked from commit a6f0e7a4f1ca203762cae2ed5e426b52124c74dc)


fixes: bz#1585894
Change-Id: If1230a75b96d7f9a828795189fcc699049e7826e
Signed-off-by: Kotresh HR <khiremat>

Comment 3 Shyamsundar 2018-06-20 18:07:27 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

