Bug 1247882 - [geo-rep]: killing brick from replica pair makes geo-rep session faulty with Traceback "ChangelogException"
Product: GlusterFS
Classification: Community
Component: geo-replication
x86_64 Linux
unspecified Severity urgent
Assigned To: Kotresh HR
: ZStream
Depends On: 1236546 1239044
Blocks: 1236554
Reported: 2015-07-29 03:25 EDT by Kotresh HR
Modified: 2015-09-09 05:38 EDT (History)
8 users

See Also:
Fixed In Version: glusterfs-3.7.4
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1239044
Last Closed: 2015-09-09 05:38:41 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Kotresh HR 2015-07-29 03:25:41 EDT
+++ This bug was initially created as a clone of Bug #1239044 +++

+++ This bug was initially created as a clone of Bug #1236546 +++

Description of problem:
Even when NTP is configured and the systems are time-synced and in the same timezone, killing the active bricks makes the passive bricks go faulty as well, with the history crawl failing:

[2015-07-01 15:31:06.146286] I [master(/rhs/brick1/b1):1123:crawl] _GMaster: starting history crawl... turns: 1, stime: (1435744752, 0)
[2015-07-01 15:31:06.147336] E [repce(agent):117:worker] <top>: call failed:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 113, in worker
    res = getattr(self.obj, rmeth)(*in_data[2:])
  File "/usr/libexec/glusterfs/python/syncdaemon/changelogagent.py", line 54, in history
  File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py", line 100, in cl_history_changelog
  File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py", line 27, in raise_changelog_err
    raise ChangelogException(errn, os.strerror(errn))
ChangelogException: [Errno 2] No such file or directory
[2015-07-01 15:31:06.149779]

It fails the first time and succeeds later.

Version-Release number of selected component (if applicable):


How reproducible:


Steps carried out:

1. Create master and slave clusters.
2. Create and start the master volume (4x2) from four nodes (node1..node4).
3. Create and start the slave volume (2x2).
4. Create the meta volume (1x3) (node1..node3).
5. Create a geo-rep session between the master and slave volumes.
6. Set the config use_meta_volume to true.
7. Start the geo-rep session.
8. Mount the master volume via FUSE.
9. Start creating data from the FUSE client.
10. While data creation is in progress, kill a few active bricks (kill -9 <pid>), making sure the corresponding replica bricks stay up.
11. Check the geo-rep status and logs.

--- Additional comment from Kotresh HR on 2015-07-03 07:02:44 EDT ---

I found the reason for the first-time failure. The register time is the end time we pass to the history API. Since the PASSIVE worker registers much earlier, along with the ACTIVE worker, and passes the stime as the start time, we end up with register time < stime.

For the history API this means start time > end time, which obviously fails.

When it registers a second time, register time > stime, and hence it passes.

There are no side effects with respect to data sync; it is just the worker going down and coming back. We will fix this, but it is definitely not a blocker.
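The failing timeline above can be sketched as a minimal simulation (the function and variable names here are hypothetical stand-ins, not the actual syncdaemon code; the real call chain is changelogagent.py -> libgfchangelog.py -> cl_history_changelog):

```python
import time

class ChangelogException(OSError):
    pass

def history(start, end):
    """Hypothetical stand-in for the changelog history API:
    fails when the requested window is inverted or empty."""
    if end <= start:
        # mirrors the [Errno 2] failure seen in the geo-rep log
        raise ChangelogException(2, "No such file or directory")
    return (start, end)

# Both ACTIVE and PASSIVE workers register at roughly the same time.
register_time = 1435744000
stime = 1435744752  # last synced time; here it is ahead of register_time

# Buggy behaviour: the PASSIVE worker, on turning ACTIVE, passes the
# register_time it captured earlier as the end time -> end < start.
try:
    history(stime, register_time)
except ChangelogException as e:
    print("first attempt fails:", e)

# Fix (per the commit below): pass the current time as the end time.
print("after fix:", history(stime, int(time.time())))
```

On the first attempt the window is inverted, so the worker goes faulty and restarts; after the fix the end time is always ahead of stime, so the history call succeeds on the first try.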
Comment 1 Anand Avati 2015-08-03 02:56:12 EDT
REVIEW: http://review.gluster.org/11784 (geo-rep: Fix history failure) posted (#2) for review on release-3.7 by Kotresh HR (khiremat@redhat.com)
Comment 2 Anand Avati 2015-08-06 01:50:03 EDT
COMMIT: http://review.gluster.org/11784 committed in release-3.7 by Venky Shankar (vshankar@redhat.com) 
commit b7118970edab7c3ab9c7039ef340c40326ff6930
Author: Kotresh HR <khiremat@redhat.com>
Date:   Fri Jul 3 16:32:56 2015 +0530

    geo-rep: Fix history failure
    Both ACTIVE and PASSIVE workers register to changelog
    at almost the same time. When a PASSIVE worker becomes ACTIVE,
    the start and end time passed to the history API would be the
    current stime and the register_time respectively. Hence
    register_time would be less than stime, for which history
    obviously fails. But it will be successful on the next restart,
    as the new register_time > stime.
    Fix is to pass the current time as the end time to the history
    call instead of the register_time.
    Also improved the logging for ACTIVE/PASSIVE switching.
    BUG: 1247882
    Change-Id: I40c582cc32fe29a6c30340ec81a3b5d30e461e71
    Reviewed-on: http://review.gluster.org/11524
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/11784
    Reviewed-by: Milind Changire <mchangir@redhat.com>
Comment 3 Kaushal 2015-09-09 05:38:41 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.4, please open a new bug report.

glusterfs-3.7.4 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12496
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
