Bug 1217944 - Changelog: Changelog should be treated as discontinuous only on changelog enable/disable
Summary: Changelog: Changelog should be treated as discontinuous only on changelog enable/disable
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.7.0
Hardware: All
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Depends On: 1211327
Blocks: glusterfs-3.7.0
 
Reported: 2015-05-03 07:58 UTC by Kotresh HR
Modified: 2015-05-14 17:35 UTC (History)
CC List: 4 users

Fixed In Version: glusterfs-3.7.0beta2
Doc Type: Bug Fix
Doc Text:
Clone Of: 1211327
Environment:
Last Closed: 2015-05-14 17:27:29 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kotresh HR 2015-05-03 07:58:18 UTC
+++ This bug was initially created as a clone of Bug #1211327 +++

Description of problem:
The changelog should be treated as discontinuous only on changelog enable/disable.
In the brick-restart scenario, the changelog can be treated as continuous, based
on the following rationale:

1. In a plain distributed volume, if a brick goes down, no I/O can
   happen on that brick. Hence the changelog is intact with the data
   on disk.
2. In a distributed-replicate volume, if a brick goes down, self-heal
   traffic is captured in the changelog. Eventually, the I/O that
   happened while the brick was down is captured in the changelog.

This will help consumers like glusterfind and geo-replication, which depend on
historical changelog consumption; glusterfind especially, as it has no fallback
mechanism when changelog history fails (see the sketch below).
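
For illustration, here is a minimal C sketch of the requested behaviour: only an
explicit changelog enable/disable opens a new HTIME.TSTAMP index, while a plain
brick restart appends to the existing one. All names here (changelog_reinit,
htime_open_new, htime_append_to_last and the reason enum) are hypothetical, not
the actual changelog translator API.

    #include <stdio.h>

    /* Hypothetical reasons why the changelog subsystem is (re)initialised. */
    typedef enum {
        CHANGELOG_INIT_BRICK_RESTART, /* brick process came back up   */
        CHANGELOG_INIT_ENABLED,       /* changelog switched off -> on */
        CHANGELOG_INIT_DISABLED       /* changelog switched on -> off */
    } changelog_init_reason_t;

    /* Stand-ins for the real HTIME index handling. */
    static void htime_open_new(const char *brick) {
        printf("%s: new HTIME.TSTAMP, history is discontinuous\n", brick);
    }

    static void htime_append_to_last(const char *brick) {
        printf("%s: reuse last HTIME.TSTAMP, history stays continuous\n", brick);
    }

    /* Only an explicit enable/disable breaks continuity; a brick restart
     * reuses the last index so the history API keeps working across it. */
    static void changelog_reinit(const char *brick, changelog_init_reason_t why) {
        if (why == CHANGELOG_INIT_ENABLED || why == CHANGELOG_INIT_DISABLED)
            htime_open_new(brick);
        else
            htime_append_to_last(brick);
    }

    int main(void) {
        changelog_reinit("/bricks/b1", CHANGELOG_INIT_BRICK_RESTART);
        changelog_reinit("/bricks/b1", CHANGELOG_INIT_ENABLED);
        return 0;
    }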

Version-Release number of selected component (if applicable):
mainline

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:

Comment 1 Anand Avati 2015-05-03 08:01:02 UTC
REVIEW: http://review.gluster.org/10507 (features/changelog: Consider only changelog on/off as changelog breakage) posted (#1) for review on release-3.7 by Kotresh HR (khiremat)

Comment 2 Anand Avati 2015-05-04 05:21:23 UTC
REVIEW: http://review.gluster.org/10507 (features/changelog: Consider only changelog on/off as changelog breakage) posted (#2) for review on release-3.7 by Kotresh HR (khiremat)

Comment 3 Anand Avati 2015-05-05 07:05:35 UTC
COMMIT: http://review.gluster.org/10507 committed in release-3.7 by Vijay Bellur (vbellur) 
------
commit baac2c28ee98e47a3fc0ecf1db3779c7372df526
Author: Kotresh HR <khiremat>
Date:   Mon Apr 13 20:28:21 2015 +0530

    features/changelog: Consider only changelog on/off as changelog breakage
    
    Earlier, both changelog enable/disable and brick restart were
    considered changelog breakage and treated as the changelog not
    being continuous. As a result, a new HTIME.TSTAMP file was created
    in both of the above cases. Now the change is made such that only
    on changelog enable/disable is the changelog considered
    discontinuous. A new HTIME.TSTAMP file is not created on brick
    restart; the changelog files are appended to the last HTIME.TSTAMP
    file.
    
    Treating the changelog as continuous in the above scenario is
    important, as the changelog history API would fail otherwise. It
    can successfully get changes between start and end timestamps only
    when the changelog is continuous (changelogs in a single
    HTIME.TSTAMP file are treated as continuous). Without this change,
    the changelog history API would fail, and it would become necessary
    to fall back to other mechanisms, like xsync FSCrawl in the case of
    geo-rep, to detect changes in this time window. But xsync FSCrawl
    is not applicable to other consumers like glusterfind.
    
    Rationale:
    1. In a plain distributed volume, if a brick goes down, no I/O can
       happen on that brick. Hence the changelog is intact with the
       data on disk.
    2. In a distributed-replicate volume, if a brick goes down,
       self-heal traffic is captured in the changelog. Eventually, the
       I/O that happened while the brick was down is captured in the
       changelog.
    
    BUG: 1217944
    Change-Id: Ifa6d932818fe1a3a914e87ac84f1d2ded01c1288
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/10222
    Reviewed-on: http://review.gluster.org/10507
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Aravinda VK <avishwan>
    Reviewed-by: Vijay Bellur <vbellur>
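
For context, below is a rough consumer-side sketch (in the spirit of
geo-replication and glusterfind) of the history API that this commit keeps
usable across brick restarts. The gf_history_changelog prototype is paraphrased
from gluster's libgfchangelog of this era and may differ in detail; treat the
signature and return-value semantics as assumptions, and verify against the
header of your gluster version.

    #include <stdio.h>

    /* Assumed prototype, paraphrased from libgfchangelog (link with
     * -lgfchangelog); not guaranteed to match the exact header. */
    int gf_history_changelog(char *changelog_dir, unsigned long start,
                             unsigned long end, int n_parallel,
                             unsigned long *actual_end);

    int main(void) {
        unsigned long actual_end = 0;
        /* Request changelogs between two timestamps. With this fix, a brick
         * restart inside the window no longer splits the HTIME.TSTAMP index,
         * so the request can still be served. */
        int ret = gf_history_changelog("/bricks/b1/.glusterfs/changelogs",
                                       1430000000UL, 1430600000UL,
                                       4 /* parallel consumers */,
                                       &actual_end);
        if (ret < 0) {
            fprintf(stderr, "history failed: range spans a discontinuity\n");
            return 1;
        }
        printf("history available up to %lu\n", actual_end);
        return 0;
    }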

Comment 4 Niels de Vos 2015-05-14 17:27:29 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

