REVIEW: http://review.gluster.org/13571 (georep: avoid creating multiple entries with same gfid) posted (#1) for review on release-3.7 by Milind Changire (email@example.com)
Description of problem:
When the application rolls its logs on the Master, the geo-replication session goes FAULTY.
Investigation revealed an issue with CREATE + RENAME changelog replay by geo-rep.
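To make the failure mode concrete, here is a rough sketch of how a log roll on the Master can turn into a duplicate entry on the Slave when replay does not check for an existing gfid. The op names, paths, and the slave object are illustrative only, not the actual changelog format or syncdaemon code:

# Illustrative only: a log roll renames the live log and creates a fresh one.
# On the Master this is recorded, roughly, as:
#
#   CREATE  gfid-A  "app.log"
#   RENAME  gfid-A  "app.log" -> "app.log.1"
#   CREATE  gfid-B  "app.log"
#
# If the CREATE for gfid-A is replayed again on the Slave (e.g. after a
# geo-rep restart) without checking whether gfid-A already exists, a stale
# "app.log" entry is created alongside "app.log.1": two names, one gfid.

def naive_replay(ops, slave):
    """Hypothetical replay loop without a gfid existence check."""
    for op in ops:
        if op["type"] == "CREATE":
            slave.create_entry(op["name"], op["gfid"])   # may recreate the old name
        elif op["type"] == "RENAME":
            slave.rename_entry(op["src"], op["dst"], op["gfid"])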
COMMIT: http://review.gluster.org/13571 committed in release-3.7 by Vijay Bellur (firstname.lastname@example.org)
Author: Milind Changire <email@example.com>
Date: Fri Jan 29 13:53:07 2016 +0530
georep: avoid creating multiple entries with same gfid
CREATE + RENAME changelogs replayed by geo-replication cause
stale old-name entries with the same gfid on slave nodes.
A gfid is a unique key in the file-system and should not be
assigned to multiple entries.
Create entry on slave only if lstat(gfid) at aux-mount fails.
This applies to files as well as directories.
Signed-off-by: Milind Changire <firstname.lastname@example.org>
Smoke: Gluster Build System <email@example.com>
Reviewed-by: Kotresh HR <firstname.lastname@example.org>
Reviewed-by: Aravinda VK <email@example.com>
NetBSD-regression: NetBSD Build System <firstname.lastname@example.org>
CentOS-regression: Gluster Build System <email@example.com>
(cherry picked from commit 87d93fac9fcc4b258b7eb432ac4151cdd043534f)
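A minimal sketch of the guard described in the commit message above, assuming a hypothetical helper that resolves a gfid through the gfid aux-mount; the path layout and function names are assumptions for illustration, not the actual syncdaemon code:

import errno
import os

def entry_exists(aux_mount, gfid):
    """Probe the Slave for an existing entry via the gfid aux-mount."""
    try:
        os.lstat(os.path.join(aux_mount, ".gfid", gfid))
        return True
    except OSError as e:
        if e.errno == errno.ENOENT:
            return False
        raise

def replay_create(aux_mount, gfid, create_entry):
    # Create the entry (file or directory) only when lstat(gfid) fails,
    # so a gfid is never given a second name on the Slave.
    if not entry_exists(aux_mount, gfid):
        create_entry()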
Has any performance characterization been done to ascertain the percentage of creates affected by the additional stat()?
No performance characterization tests have been done specifically.
However, the lstat() is done for _every_ entry creation, i.e. 100% of the time, since there is no way to tell whether the changelogs are being replayed for the first time or after a geo-rep restart, which is what would be needed to perform the lstat() conditionally.
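As a rough, hypothetical way to estimate that per-create cost (nothing like this was actually run as part of this fix), one could time the extra lstat() against a Slave aux-mount path:

import os
import time

def time_lstat(probe_path, iterations=10000):
    """Hypothetical micro-benchmark for the extra per-create lstat() cost."""
    start = time.monotonic()
    for _ in range(iterations):
        try:
            os.lstat(probe_path)
        except OSError:
            pass  # ENOENT is the expected case for a not-yet-created gfid
    return (time.monotonic() - start) / iterations

# e.g. avg_seconds = time_lstat("/mnt/slave-aux/.gfid/<some-gfid>")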
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.9, please open a new bug report.
glusterfs-3.7.9 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and on the update infrastructure for your distribution.