+++ This bug was initially created as a clone of Bug #1025951 +++

Description of problem:
Currently, the slave's timestamp marker is "*.xtime", but for correct processing and pruning of changelogs from the passive node, the 'stime' key helps in identifying how much the slave is lagging. For this, the slave's timestamp marker key needs to be "*.stime" instead of "*.xtime", as 'stime' has special processing (min/max) in GlusterFS that correctly identifies the lag.

Version-Release number of selected component (if applicable): mainline
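As a rough illustration of why a min-aggregated 'stime' bounds the slave's lag correctly (this is a hedged sketch, not GlusterFS source code; the `(seconds, nanoseconds)` tuple representation and the helper names are assumptions for illustration):

```python
# Illustrative sketch only: min/max aggregation of per-brick stime values.
# Assumes each brick's stime is a (seconds, nanoseconds) tuple.

def aggregate_stime(brick_stimes):
    """Conservative slave timestamp: the slowest brick bounds how far
    the slave is guaranteed to have caught up, so take the minimum."""
    return min(brick_stimes)

def slave_lag_ns(master_xtime, brick_stimes):
    """Lag in nanoseconds between the master's xtime and the aggregated
    slave stime (illustrative arithmetic only)."""
    sec, nsec = aggregate_stime(brick_stimes)
    msec, mnsec = master_xtime
    return (msec - sec) * 10**9 + (mnsec - nsec)

bricks = [(1384514416, 907747), (1384514410, 0), (1384514420, 5)]
print(aggregate_stime(bricks))                        # slowest brick wins
print(slave_lag_ns((1384514416, 907747), bricks))     # lag in nanoseconds
```

With an 'xtime' marker there is no such conservative aggregation, which is why changelog pruning on the passive node could not safely determine how far the slave had progressed.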
https://code.engineering.redhat.com/gerrit/#/c/15079/
Please provide steps to verify.
Steps to verify:
1. Create and start a GlusterFS volume and mount the volume.
2. Perform some I/O on the mount.
3. Create and start a geo-replication session between this volume and a slave GlusterFS volume.
4. Allow the files/directories to get synced.
5. List the extended attributes on the bricks -- a key of the format "trusted.glusterfs.<master-uuid>.<slave-uuid>.stime" should exist.
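The final check above can be automated by scanning `getfattr` output for the expected key. A minimal sketch (the helper name is hypothetical; the key pattern is taken from this report and the standard 8-4-4-4-12 UUID format):

```python
import re

# Pattern for the geo-rep stime key: trusted.glusterfs.<master-uuid>.<slave-uuid>.stime
UUID = r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
STIME_KEY = re.compile(rf"trusted\.glusterfs\.{UUID}\.{UUID}\.stime")

def has_stime_key(getfattr_output: str) -> bool:
    """Return True if the output of `getfattr -d -m . -e hex <brick>`
    contains a geo-replication stime key (hypothetical helper)."""
    return bool(STIME_KEY.search(getfattr_output))

sample = ("trusted.glusterfs.b2a5d205-79ec-471e-a72f-782a2063c683."
          "c261bc11-9898-4e49-83e8-758c494a82e9.stime=0x52860370000dd9e3")
print(has_stime_key(sample))  # True
```

Note that the plain "*.xtime" key carries only a single volume UUID, so it does not match this two-UUID pattern.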
Verified on build glusterfs-3.4.0.44rhs:

# getfattr -d -m . -e hex /bricks/brick1/
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.b2a5d205-79ec-471e-a72f-782a2063c683.c261bc11-9898-4e49-83e8-758c494a82e9.stime=0x52860370000dd9e3
trusted.glusterfs.b2a5d205-79ec-471e-a72f-782a2063c683.xtime=0x52860370000dd9e3
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
trusted.glusterfs.volume-id=0xb2a5d20579ec471ea72f782a2063c683
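The hex value shown for the stime and xtime keys appears to pack seconds and nanoseconds as two big-endian 32-bit integers; that layout is an assumption inferred from the value above, not confirmed from source. A small decoding sketch:

```python
import struct
from datetime import datetime, timezone

def decode_time_xattr(hexval: str):
    """Decode a timestamp xattr value like 0x52860370000dd9e3 into
    (seconds, nanoseconds), assuming two big-endian 32-bit integers."""
    sec, nsec = struct.unpack(">II", bytes.fromhex(hexval.removeprefix("0x")))
    return sec, nsec

sec, nsec = decode_time_xattr("0x52860370000dd9e3")
print(sec, nsec)                                            # epoch seconds, nanoseconds
print(datetime.fromtimestamp(sec, tz=timezone.utc).isoformat())
```

Under that assumption, the identical stime and xtime values in the output indicate the slave was fully caught up at verification time.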
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1769.html