Bug 1422760 - [Geo-rep] Recreating geo-rep session with same slave after deleting with reset-sync-time fails to sync
Summary: [Geo-rep] Recreating geo-rep session with same slave after deleting with reset-sync-time fails to sync
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1205162 1422811 1422818 1422819
 
Reported: 2017-02-16 05:49 UTC by Kotresh HR
Modified: 2017-05-30 18:44 UTC (History)

Fixed In Version: glusterfs-3.11.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1422811 1422818 1422819 (view as bug list)
Environment:
Last Closed: 2017-05-30 18:44:10 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kotresh HR 2017-02-16 05:49:57 UTC
Description of problem:
When a geo-rep session is deleted with the 'reset-sync-time' option and the data on the slave is then removed, recreating the session with the same slave volume does not sync data from master to slave. This is observed only when the data had been synced to the slave via the xsync crawl before the session was deleted; after recreation, only the entries directly under the root are synced.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always

Steps to Reproduce:
1.  Create a geo-rep session between the 'master' and 'slave' volumes.
2.  Set change-detector to 'xsync'
3.  Create data on master and let it sync to slave
4.  Delete the geo-rep session with 'reset-sync-time'
5.  Delete data on slave volume.
6.  Recreate geo-rep session with same master and slave volume
7.  Start the geo-rep session (example commands are sketched below)
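
For illustration, the sequence could look roughly like the following. This is a sketch, not a verified transcript: the volume names 'master' and 'slave', the slave host 'fedora1', and the exact config key for the change detector are assumptions and may vary by setup and release.

gluster volume geo-replication master fedora1::slave create push-pem
gluster volume geo-replication master fedora1::slave config change_detector xsync
gluster volume geo-replication master fedora1::slave start
# create data on the master volume and wait for it to reach the slave
gluster volume geo-replication master fedora1::slave stop
gluster volume geo-replication master fedora1::slave delete reset-sync-time
# remove the data on the slave volume
gluster volume geo-replication master fedora1::slave create push-pem
gluster volume geo-replication master fedora1::slave start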

Actual results:
Only the first-level entries under the root are synced; the rest of the data is not.

Expected results:
All the data from the master is synced to the slave again.

Additional info:

Comment 1 Kotresh HR 2017-02-16 05:53:46 UTC
Patch posted: https://review.gluster.org/#/c/16629/1

Comment 2 Worker Ant 2017-02-16 05:58:48 UTC
REVIEW: https://review.gluster.org/16629 (geo-rep: Fix xsync crawl) posted (#2) for review on master by Kotresh HR (khiremat)

Comment 3 Worker Ant 2017-02-16 07:50:35 UTC
COMMIT: https://review.gluster.org/16629 committed in master by Aravinda VK (avishwan) 
------
commit 267578ec0d6b29483a1bd402165ea8c388ad825e
Author: Kotresh HR <khiremat>
Date:   Wed Feb 15 03:44:17 2017 -0500

    geo-rep: Fix xsync crawl
    
    If stime is set to (0, 0) on master brick root, it
    is expected to do complete sync ignoring the stime
    set on sub directories. But while initializing the
    stime variable for comparison, it was initialized
    to (-1, 0) instead of (0, 0). Fixed the same.
    
    The stime is set to (0, 0) with the 'reset-sync-time' option
    while deleting session.
    
    'gluster vol geo-rep master fedora1::slave delete reset-sync-time'
    
    The scenario happens when geo-rep session is deleted as above and
    for some reason the session is re-established with same slave volume
    after deleting data on slave volume.
    
    Change-Id: Ie5bc8f008dead637a09495adeef5577e2b33bc90
    BUG: 1422760
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: https://review.gluster.org/16629
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>
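
To make the comparison issue described in the commit message concrete, here is a minimal, standalone Python sketch. It is not the gsyncd implementation; the function and variable names are illustrative, and it only models the behaviour the commit message describes: stimes/xtimes as (seconds, nanoseconds) tuples and a marker value that decides whether per-directory stimes are honoured.

# Sketch of the stime comparison described above (illustrative, not gsyncd code).
# Python compares tuples lexicographically, which models the (sec, nsec) ordering.

RESET_STIME = (0, 0)  # written to the master brick root by 'delete reset-sync-time'

def needs_sync(entry_xtime, entry_stime, root_stime, full_sync_marker):
    """Decide whether an entry must be (re)synced during an xsync crawl."""
    # If the root stime equals the full-sync marker, resync everything and
    # ignore stale per-directory stimes left behind by the old session.
    if root_stime == full_sync_marker:
        return True
    # Otherwise sync only entries that changed after the recorded stime.
    return entry_xtime > entry_stime

# A subdirectory still carries the stime from the old session, and nothing
# on the master changed after that, so its xtime <= stime.
subdir_xtime = (1487225000, 0)
subdir_stime = (1487226000, 0)

# Buggy initialization: a marker of (-1, 0) never matches the (0, 0) written
# by reset-sync-time, so the stale subdirectory stime suppresses the sync.
print(needs_sync(subdir_xtime, subdir_stime, RESET_STIME, (-1, 0)))  # False -> skipped

# Fixed initialization: marker (0, 0) matches, so a complete sync is done.
print(needs_sync(subdir_xtime, subdir_stime, RESET_STIME, (0, 0)))   # True -> synced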

Comment 4 Shyamsundar 2017-05-30 18:44:10 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/

