Description of problem:

Changelog should be treated as discontinuous only on changelog enable/disable. On a brick restart, the changelog can be treated as continuous based on the following rationale:

1. In a plain distributed volume, if a brick goes down, no I/O can happen on that brick. Hence the changelog is intact with the data on disk.
2. In a distributed-replicate volume, if a brick goes down, self-heal traffic is captured in the changelog. Hence the I/O that happened while the brick was down is eventually captured in the changelog.

This will help consumers like glusterfind and geo-replication, which depend on historical changelog consumption. It especially helps glusterfind, as it has no fallback mechanism when changelog history fails. See the sketch under "Additional info" below.

Version-Release number of selected component (if applicable):
mainline

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
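To make the impact concrete, here is a minimal, self-contained sketch (not GlusterFS code; all names and the timestamp values are hypothetical). It models each HTIME.TSTAMP file as a contiguous span of changelog timestamps: a history query for a [start, end] window can be served only if a single span covers it, which is why opening a new HTIME file on every brick restart breaks history consumers whose window crosses the restart.

    /* Hypothetical model of changelog history availability. */
    #include <stdio.h>
    #include <time.h>

    struct htime_span {
        time_t first;   /* timestamp of the first changelog in the HTIME file */
        time_t last;    /* timestamp of the last changelog in the HTIME file  */
    };

    /* Returns 0 if [start, end] is covered by one span, -1 otherwise. */
    static int
    history_available(const struct htime_span *spans, int nspans,
                      time_t start, time_t end)
    {
        for (int i = 0; i < nspans; i++) {
            if (start >= spans[i].first && end <= spans[i].last)
                return 0;
        }
        return -1;  /* window crosses an HTIME boundary: history fails */
    }

    int
    main(void)
    {
        /* Brick restarted at t=2000: old behaviour opens a second HTIME file. */
        struct htime_span old_behaviour[] = { {1000, 1999}, {2000, 3000} };
        /* Continuous behaviour keeps appending to the same HTIME file. */
        struct htime_span new_behaviour[] = { {1000, 3000} };

        printf("old: %d\n", history_available(old_behaviour, 2, 1500, 2500));  /* -1 */
        printf("new: %d\n", history_available(new_behaviour, 1, 1500, 2500));  /*  0 */
        return 0;
    }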
REVIEW: http://review.gluster.org/10222 (features/changelog: Consider only changelog on/off as changelog breakage) posted (#1) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10222 (features/changelog: Consider only changelog on/off as changelog breakage) posted (#2) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10222 (features/changelog: Consider only changelog on/off as changelog breakage) posted (#3) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10222 (features/changelog: Consider only changelog on/off as changelog breakage) posted (#4) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10222 (features/changelog: Consider only changelog on/off as changelog breakage) posted (#5) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/10222 (features/changelog: Consider only changelog on/off as changelog breakage) posted (#6) for review on master by Kotresh HR (khiremat)
COMMIT: http://review.gluster.org/10222 committed in master by Vijay Bellur (vbellur)
------
commit 275f7244ff9bfae085cfc8ee103990100e41057f
Author: Kotresh HR <khiremat>
Date:   Mon Apr 13 20:28:21 2015 +0530

    features/changelog: Consider only changelog on/off as changelog breakage

    Earlier, both changelog on/off and brick restart were considered to be
    changelog breakage and treated as the changelog not being continuous. As a
    result, a new HTIME.TSTAMP file was created in both of the above cases. Now
    the change is made such that the changelog is considered discontinuous only
    on changelog enable/disable. A new HTIME.TSTAMP file is not created on
    brick restart; the changelog files are appended to the last HTIME.TSTAMP
    file.

    Treating the changelog as continuous in the above scenario is important, as
    the changelog history API would fail otherwise. It can successfully get
    changes between start and end timestamps only when the changelog is
    continuous (changelogs in a single HTIME.TSTAMP file are treated as
    continuous). Without this change, the changelog history API would fail, and
    it would become necessary to fall back to other mechanisms, such as xsync
    FSCrawl in the case of geo-rep, to detect changes in this time window. But
    xsync FSCrawl is not applicable to other consumers like glusterfind.

    Rationale:
    1. In a plain distributed volume, if a brick goes down, no I/O can happen
       on that brick. Hence the changelog is intact with the data on disk.
    2. In a distributed-replicate volume, if a brick goes down, self-heal
       traffic is captured in the changelog. Hence the I/O that happened while
       the brick was down is eventually captured in the changelog.

    Change-Id: I2eb66efe6ee9a9228fb1fcb38d6e7696b9559d5b
    BUG: 1211327
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/10222
    Reviewed-by: Venky Shankar <vshankar>
    Tested-by: Venky Shankar <vshankar>
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System
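For illustration, here is a minimal sketch of the decision the commit describes (hypothetical names; this is not the actual changelog xlator code): a fresh HTIME.TSTAMP is opened only when the changelog feature itself is toggled on, while a plain brick restart reuses the most recent HTIME.TSTAMP so the changelog series stays continuous.

    /* Illustrative sketch only; find_latest_htime()/new_htime_name() are
     * stand-ins for directory-scanning and naming logic. */
    #include <stdio.h>

    enum start_reason {
        CHANGELOG_ENABLED,   /* changelog feature just turned on */
        BRICK_RESTART,       /* brick restarted, changelog stayed enabled */
    };

    /* Pretend helpers: the first would scan the changelog directory for the
     * newest HTIME file, the second would derive a name from the current time. */
    static const char *find_latest_htime(void)  { return "HTIME.1428900000"; }
    static const char *new_htime_name(void)     { return "HTIME.1428936000"; }

    static const char *
    select_htime_file(enum start_reason reason)
    {
        const char *latest = find_latest_htime();

        /* Old behaviour: any (re)start opened a new HTIME file.
         * New behaviour: only enabling the changelog does. */
        if (reason == CHANGELOG_ENABLED || latest == NULL)
            return new_htime_name();

        return latest;   /* brick restart: keep appending to the last HTIME file */
    }

    int
    main(void)
    {
        printf("on enable : %s\n", select_htime_file(CHANGELOG_ENABLED));
        printf("on restart: %s\n", select_htime_file(BRICK_RESTART));
        return 0;
    }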
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user