+++ This bug was initially created as a clone of Bug #1171048 +++

Description of problem:
In a replicated setup, the same file has different timestamps on the different nodes; all three (Access, Modify, and Change) differ. The difference is small, in the millisecond range, and roughly matches the offset between the nodes' system clocks. The underlying filesystem is ext4.

Version-Release number of selected component (if applicable):
Tested on: 3.6.1, 3.5.1, 3.4.2

How reproducible:
Always

Steps to Reproduce:
0. Mount the gluster volume through glusterfs-fuse on both replica nodes, each using its own hostname:
   node1# mount -t glusterfs node1:gv_share /mnt
   node2# mount -t glusterfs node2:gv_share /mnt
1. Create a file on node1, e.g.:
   node1# dd if=/dev/zero of=/mnt/testfile bs=1k count=1
2. node1# stat /mnt/testfile
3. node2# stat /mnt/testfile

Actual results:
The two stat results differ by a few milliseconds. After touch-ing the file, the Modify timestamp is updated (rounded to whole seconds) and then matches on both nodes; the Change timestamp, however, still differs.

Expected results:
The two stat results should be identical on both nodes.

Additional info:
I use NTP to synchronise the system clocks of the nodes, but there is inevitably some fluctuation in the millisecond range. The timestamp differences are a problem for the IBM TSM backup client: when the TSM client runs on both servers and does an incremental backup to a common TSM node, the two clients see different timestamps for the same unchanged file, and each wants to re-send it to the TSM server, since the timestamp it sees depends on which server the client runs on.

--- Additional comment from Anuradha on 2016-06-15 05:18:18 EDT ---

Hi,

We have taken this requirement to be fixed in the upcoming releases.

Thanks,
Anuradha.
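The backup problem described above comes down to comparing sub-second mtimes. A minimal single-host sketch of that comparison, assuming GNU coreutils stat and a filesystem with sub-second timestamp resolution (the mktemp path is illustrative, standing in for /mnt/testfile as seen from node1 and node2; the sleep stands in for the small clock offset between the nodes):

```shell
#!/bin/sh
# Illustrative only: two mtimes that differ by less than a second are
# already enough for a backup client to treat the file as changed.
f=$(mktemp)

touch "$f"
t1=$(stat -c '%.9Y' "$f")   # mtime, seconds with nanosecond precision (GNU stat)

sleep 0.1                   # stand-in for the ms-range drift between nodes
touch "$f"
t2=$(stat -c '%.9Y' "$f")

echo "node1 view: $t1"
echo "node2 view: $t2"
if [ "$t1" != "$t2" ]; then
    echo "mtimes differ: backup client would re-send the file"
fi
rm -f "$f"
```

Comparing only whole seconds (stat -c '%Y') would hide the drift, which is consistent with the observation that the Modify timestamp agrees on both nodes once it is rounded to whole seconds after a touch.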
All 3.8.x bugs are now reported against version 3.8 (without .x). For more information, see http://www.gluster.org/pipermail/gluster-devel/2016-September/050859.html
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.