Bug 1247882 - [geo-rep]: killing brick from replica pair makes geo-rep session faulty with Traceback "ChangelogException"
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Depends On: 1236546 1239044
Blocks: 1236554
 
Reported: 2015-07-29 07:25 UTC by Kotresh HR
Modified: 2015-09-09 09:38 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.7.4
Clone Of: 1239044
Environment:
Last Closed: 2015-09-09 09:38:41 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kotresh HR 2015-07-29 07:25:41 UTC
+++ This bug was initially created as a clone of Bug #1239044 +++

+++ This bug was initially created as a clone of Bug #1236546 +++

Description of problem:
=======================
Even when NTP is configured and the systems are time-synced in the same timezone, killing the Active bricks makes the Passive worker go Faulty too, with the history crawl failing:

[2015-07-01 15:31:06.146286] I [master(/rhs/brick1/b1):1123:crawl] _GMaster: starting history crawl... turns: 1, stime: (1435744752, 0)
[2015-07-01 15:31:06.147336] E [repce(agent):117:worker] <top>: call failed:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 113, in worker
    res = getattr(self.obj, rmeth)(*in_data[2:])
  File "/usr/libexec/glusterfs/python/syncdaemon/changelogagent.py", line 54, in history
    num_parallel)
  File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py", line 100, in cl_history_changelog
    cls.raise_changelog_err()
  File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py", line 27, in raise_changelog_err
    raise ChangelogException(errn, os.strerror(errn))
ChangelogException: [Errno 2] No such file or directory
[2015-07-01 15:31:06.149779]

It fails the first time and succeeds later.

Version-Release number of selected component (if applicable):
=============================================================

mainline


How reproducible:
=================

Always


Steps Carried:
==============

1. Create Master and Slave Cluster
2. Create and Start Master volume (4x2) from four nodes (node1..node4)
3. Create and Start slave volume (2x2)
4. Create Meta volume (1x3) (node1..node3)
5. Create geo-rep session between master and slave volume
6. Set the config use_meta_volume to true
7. Start the geo-rep session
8. Mount the master volume with a FUSE client
9. Start creating data from the FUSE client
10. While data creation is in progress, kill a few Active bricks (kill -9 <pid>), making sure the corresponding replica bricks stay UP
11. Check the geo-rep status and logs (a scripted sketch of steps 5-11 follows this list)
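
The steps above can be scripted. A minimal sketch, assuming the gluster CLI from a 3.7.x install; the volume name, the slave host::volume and the brick PID placeholder are illustrative, not taken from this report:

import subprocess

MASTER_VOL = "mastervol"          # assumed name of the 4x2 master volume
SLAVE = "slavenode1::slavevol"    # assumed <slave-host>::<slave-volume> of the 2x2 slave

def sh(cmd):
    print("+ " + cmd)
    subprocess.check_call(cmd, shell=True)

# Steps 5-7: create the session, enable the meta volume, start syncing.
sh("gluster volume geo-replication %s %s create push-pem" % (MASTER_VOL, SLAVE))
sh("gluster volume geo-replication %s %s config use_meta_volume true" % (MASTER_VOL, SLAVE))
sh("gluster volume geo-replication %s %s start" % (MASTER_VOL, SLAVE))

# Step 10: while a FUSE client is writing, kill one Active brick process
# (take its PID from `gluster volume status`) and keep its replica pair up.
# sh("kill -9 <active-brick-pid>")

# Step 11: the worker on the surviving replica flips to Faulty once,
# logs the ChangelogException, and recovers on its next registration.
sh("gluster volume geo-replication %s %s status" % (MASTER_VOL, SLAVE))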

--- Additional comment from Kotresh HR on 2015-07-03 07:02:44 EDT ---

I found the reason for the first-time failure. The register time is the end time we pass to the history API. The PASSIVE worker registers much earlier, along with the ACTIVE worker, and the start time it passes is the stime; by the time it switches to ACTIVE, register time < stime.

For the history API that means start time > end time, which obviously fails.

When the worker registers a second time, register time > stime and the call passes.

There are no side effects with respect to DATA sync; it is just the worker going down and coming back. We will fix this, but it is definitely not a BLOCKER.
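
To make the timing argument concrete, here is a minimal sketch (the numbers and the helper are illustrative, not gsyncd code) of why the first history call fails and the retry succeeds:

stime = 1435744752          # stime from the log above
register_time = 1435744700  # assumed: PASSIVE worker registered before stime advanced past it

def history_window_valid(start, end):
    # The history scan needs end >= start to have a non-empty window to replay.
    return end >= start

print(history_window_valid(stime, register_time))        # False -> ChangelogException, worker goes Faulty
print(history_window_valid(stime, register_time + 300))  # True  -> the second registration succeeds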

Comment 1 Anand Avati 2015-08-03 06:56:12 UTC
REVIEW: http://review.gluster.org/11784 (geo-rep: Fix history failure) posted (#2) for review on release-3.7 by Kotresh HR (khiremat)

Comment 2 Anand Avati 2015-08-06 05:50:03 UTC
COMMIT: http://review.gluster.org/11784 committed in release-3.7 by Venky Shankar (vshankar) 
------
commit b7118970edab7c3ab9c7039ef340c40326ff6930
Author: Kotresh HR <khiremat>
Date:   Fri Jul 3 16:32:56 2015 +0530

    geo-rep: Fix history failure
    
    Both ACTIVE and PASSIVE workers register to changelog
    at almost same time. When PASSIVE worker becomes ACTIVE,
    the start and end time would be current stime and register_time
    respectively for the history API. Hence register_time would be less
    than stime, for which history obviously fails. But it will
    be successful for the next restart as new register_time > stime.
    
    Fix is to pass current time as the end time to history call
    instead of the register_time.
    
    Also improved the logging for ACTIVE/PASSIVE switching.
    
    BUG: 1247882
    Change-Id: I40c582cc32fe29a6c30340ec81a3b5d30e461e71
    Reviewed-on: http://review.gluster.org/11524
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>
    Reviewed-by: Venky Shankar <vshankar>
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/11784
    Reviewed-by: Milind Changire <mchangir>
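
For readers skimming the patch, the idea boils down to the window calculation below; this is a hypothetical sketch, not the actual master.py change, and the function name and parameters are made up for illustration:

import time

def history_crawl_window(stime, register_time):
    # Pre-fix: end = register_time, which can be < stime right after a
    # PASSIVE -> ACTIVE switch, so the history API rejects the window.
    # Post-fix: end = current time, which is always >= stime.
    end_time = int(time.time())
    return (stime, end_time)

# The ACTIVE worker then asks the changelog agent for history over this
# window instead of (stime, register_time).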

Comment 3 Kaushal 2015-09-09 09:38:41 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.4, please open a new bug report.

glusterfs-3.7.4 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12496
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

