Bug 1593537

Summary: posix/ctime: Mdata value of a directory is different across replica/EC subvolume
Product: [Community] GlusterFS
Component: posix
Version: 4.1
Status: CLOSED CURRENTRELEASE
Reporter: Kotresh HR <khiremat>
Assignee: Kotresh HR <khiremat>
CC: bugs, nchilaka
Fixed In Version: glusterfs-4.1.2
Clone Of: 1592275
Bug Depends On: 1592275
Last Closed: 2018-07-30 18:57:21 UTC
Type: Bug

Description Kotresh HR 2018-06-21 03:42:30 UTC
+++ This bug was initially created as a clone of Bug #1592275 +++

Description of problem:
On an EC/replica volume, the trusted.glusterfs.mdata xattr is expected to be the same on every brick of a subvolume, but it differs.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Mostly

Steps to Reproduce:
1. Create a 2 x (4+2) Gluster disperse volume
2. Mount the volume
3. Create 1000 directories on the mount (a minimal sketch in C follows below)
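
A minimal sketch of step 3 in C, assuming the volume from steps 1-2 is already mounted at /mnt/testvol (the mount path and directory naming are assumptions, not taken from this report):

    /* repro.c: step 3 above - create 1000 directories on the mounted volume.
     * The mount point /mnt/testvol is an assumption. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    int main(void)
    {
        char path[64];

        for (int i = 0; i < 1000; i++) {
            snprintf(path, sizeof(path), "/mnt/testvol/dir%04d", i);
            if (mkdir(path, 0755) != 0) {   /* each mkdir is an entry operation */
                perror(path);
                return EXIT_FAILURE;
            }
        }
        return EXIT_SUCCESS;
    }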

Actual results:
The value of trusted.glusterfs.mdata differs across the bricks of a replica/EC subvolume.

Expected results:
The value of trusted.glusterfs.mdata should be the same on every brick of a replica/EC subvolume.
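
A minimal sketch in C of the check behind the actual/expected results, assuming the same directory is visible on two brick backends at /bricks/b1 and /bricks/b2 (placeholder paths, not from this report): it reads trusted.glusterfs.mdata directly from each brick and compares the raw values.

    /* check_mdata.c: compare trusted.glusterfs.mdata for one directory
     * across two bricks of the same subvolume. Brick paths are placeholders. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    #define MDATA_XATTR "trusted.glusterfs.mdata"

    int main(void)
    {
        const char *brick_a = "/bricks/b1/dir0001";
        const char *brick_b = "/bricks/b2/dir0001";
        char val_a[256], val_b[256];
        ssize_t len_a, len_b;

        len_a = lgetxattr(brick_a, MDATA_XATTR, val_a, sizeof(val_a));
        len_b = lgetxattr(brick_b, MDATA_XATTR, val_b, sizeof(val_b));
        if (len_a < 0 || len_b < 0) {
            perror("lgetxattr");
            return 1;
        }

        if (len_a == len_b && memcmp(val_a, val_b, (size_t)len_a) == 0)
            printf("mdata matches across bricks\n");
        else
            printf("mdata DIFFERS across bricks (this bug)\n");
        return 0;
    }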

Additional info:

--- Additional comment from Worker Ant on 2018-06-18 06:12:06 EDT ---

REVIEW: https://review.gluster.org/20281 (posix/ctime: Fix differential ctime during entry operations) posted (#3) for review on master by Kotresh HR

--- Additional comment from Worker Ant on 2018-06-20 02:51:50 EDT ---

COMMIT: https://review.gluster.org/20281 committed in master by "Amar Tumballi" <amarts> with commit message: posix/ctime: Fix differential ctime during entry operations

We should not rely on the backend file's time attributes
to load the initial ctime attribute structure. This is
incorrect, as each replica would have witnessed the file
creation at a different time.

For new file creation, ctime, atime and mtime should be the
same, so initialize the ctime structure with the time from
the frame. For files created before the ctime feature was
enabled this is not accurate, but still acceptable, as the
times will eventually converge to accurate values.

fixes: bz#1592275
Change-Id: I206a469c83ee7b26da2fe096ae7bf8ff5986ad67
Signed-off-by: Kotresh HR <khiremat>
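
A hedged sketch of the idea in the commit message above; the structure and function names below are illustrative, not the actual posix translator code. The point is that every brick seeds the initial mdata times from the single timestamp carried with the operation (the frame time), instead of from a stat() of its own backend file, which each brick would report differently.

    #include <time.h>

    /* Illustrative stand-in for the on-disk mdata layout; not the real
     * glusterfs structure. */
    struct mdata_times {
        struct timespec ctime;
        struct timespec mtime;
        struct timespec atime;
    };

    /* Buggy approach (sketched): each brick stats its own backend file,
     * so the seeded times differ per brick:
     *
     *     stat(backend_path, &st);
     *     md->ctime = st.st_ctim;  ...
     *
     * Fixed approach: every brick uses the same frame timestamp, so the
     * resulting trusted.glusterfs.mdata value is identical everywhere. */
    static void
    mdata_init_from_frame_time(struct mdata_times *md, struct timespec frame_ts)
    {
        md->ctime = frame_ts;   /* for a new entry, ctime == mtime == atime */
        md->mtime = frame_ts;
        md->atime = frame_ts;
    }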

Comment 1 Worker Ant 2018-06-21 03:44:53 UTC
REVIEW: https://review.gluster.org/20340 (posix/ctime: Fix differential ctime during entry operations) posted (#1) for review on release-4.1 by Kotresh HR

Comment 2 Worker Ant 2018-07-02 17:28:16 UTC
COMMIT: https://review.gluster.org/20340 committed in release-4.1 by "Shyamsundar Ranganathan" <srangana> with commit message: posix/ctime: Fix differential ctime during entry operations

We should not rely on the backend file's time attributes
to load the initial ctime attribute structure. This is
incorrect, as each replica would have witnessed the file
creation at a different time.

For new file creation, ctime, atime and mtime should be the
same, so initialize the ctime structure with the time from
the frame. For files created before the ctime feature was
enabled this is not accurate, but still acceptable, as the
times will eventually converge to accurate values.

Backport of:
  > Patch: https://review.gluster.org/#/c/20281/
  > BUG: 1592275
  > Change-Id: I206a469c83ee7b26da2fe096ae7bf8ff5986ad67
  > Signed-off-by: Kotresh HR <khiremat>
  (cherry picked from commit 841991130c94e3fcf4076917be6da9ce90406932)

fixes: bz#1593537
Change-Id: I206a469c83ee7b26da2fe096ae7bf8ff5986ad67
Signed-off-by: Kotresh HR <khiremat>

Comment 3 Shyamsundar 2018-07-30 18:57:21 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-4.1.2, please open a new bug report.

glusterfs-4.1.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-July/000106.html
[2] https://www.gluster.org/pipermail/gluster-users/