Bug 1357772 - [georep]: If a geo-rep session is recreated, existing files that were deleted from the slave do not get synced again from the master
Summary: [georep]: If a geo-rep session is recreated, existing files that were deleted from the slave do not get synced again from the master
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.7.13
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Milind Changire
QA Contact:
URL:
Whiteboard:
Depends On: 1205162 1311926 1357773
Blocks:
 
Reported: 2016-07-19 06:32 UTC by Milind Changire
Modified: 2016-11-30 19:25 UTC (History)
CC: 12 users

Fixed In Version: glusterfs-3.7.14
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1311926
Environment:
Last Closed: 2016-08-02 07:24:47 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Milind Changire 2016-07-19 06:32:24 UTC
+++ This bug was initially created as a clone of Bug #1311926 +++

+++ This bug was initially created as a clone of Bug #1205162 +++

Description of problem:
=======================

If files are deleted from the slave volume after the session between the master and slave volumes has been deleted, those files are never synced again once the session is recreated. This is because the master keeps a record (the stime extended attribute) of the files that have already been synced.
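
For context, the "already synced" record mentioned above is the stime extended attribute that geo-replication stores on each master brick root (see the comments below). A minimal inspection sketch, assuming the xattr follows the trusted.glusterfs.<MASTER_UUID>.<SLAVE_UUID>.stime naming and an illustrative brick path of /bricks/brick1:

    # Run as root on a master node; dumps the trusted.glusterfs.* xattrs on the
    # brick root. The stime entry encodes the last-synced (seconds, nanoseconds).
    getfattr -d -m 'trusted.glusterfs' -e hex /bricks/brick1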

Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.6.0.53-1.el6rhs.x86_64

How reproducible:
=================
1/1

Steps to Reproduce:
==================
1. Create and start a geo-rep session between the master and slave volumes.
2. Create data on the master volume.
3. Let geo-rep sync the data to the slave volume.
4. Once the data is synced to the slave volume, stop and delete the session between master and slave.
5. Delete the files from the slave volume.
6. Re-create and start the session between the master and slave volumes.
7. The files that were deleted from the slave volume do not get synced again from the master (a condensed command sequence is sketched below).
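
For reference, a condensed command sequence for the steps above; the volume names mastervol/slavevol, the slave host slavehost, and the slave mount path are all illustrative, and create may need additional options depending on the setup:

    # Steps 1/6: create and start the session (re-run after the delete in step 4)
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start

    # Step 4: once data is in sync, tear the session down
    gluster volume geo-replication mastervol slavehost::slavevol stop
    gluster volume geo-replication mastervol slavehost::slavevol delete

    # Step 5: remove the already-synced files via a mount of the slave volume
    rm -rf /mnt/slavevol/*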

--- Additional comment from Aravinda VK on 2015-12-08 04:27:51 EST ---

As part of the geo-rep delete command, we should remove the stime xattrs from the master brick roots, so that on re-creation syncing starts from the beginning.
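
For illustration only, removing that xattr by hand would look roughly like the following; the xattr name format and brick path are assumptions, and the eventual fix (below) resets the value via the CLI instead of requiring manual removal:

    # On every master node, for each brick root of the master volume:
    setfattr -x trusted.glusterfs.<MASTER_UUID>.<SLAVE_UUID>.stime /bricks/brick1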

--- Additional comment from Vijay Bellur on 2016-04-22 07:31:29 EDT ---

REVIEW: http://review.gluster.org/14051 (georep: [WIP] delete stime xattr on session delete) posted (#1) for review on master by Milind Changire (mchangir)

--- Additional comment from Vijay Bellur on 2016-05-12 01:47:15 EDT ---

REVIEW: http://review.gluster.org/14051 (georep: delete stime xattr on session delete) posted (#2) for review on master by Milind Changire (mchangir)

--- Additional comment from Vijay Bellur on 2016-05-19 08:47:58 EDT ---

REVIEW: http://review.gluster.org/14051 (georep: reset stime xattr on session delete) posted (#3) for review on master by Milind Changire (mchangir)

--- Additional comment from Vijay Bellur on 2016-05-24 10:59:12 EDT ---

REVIEW: http://review.gluster.org/14051 (georep: reset stime xattr on session delete) posted (#4) for review on master by Milind Changire (mchangir)

--- Additional comment from Vijay Bellur on 2016-05-24 12:54:35 EDT ---

REVIEW: http://review.gluster.org/14051 (georep: reset stime xattr on session delete) posted (#5) for review on master by Milind Changire (mchangir)

--- Additional comment from Vijay Bellur on 2016-05-27 02:58:20 EDT ---

REVIEW: http://review.gluster.org/14051 (georep: reset stime xattr on session delete) posted (#6) for review on master by Milind Changire (mchangir)

--- Additional comment from Vijay Bellur on 2016-05-27 03:01:31 EDT ---

REVIEW: http://review.gluster.org/14051 (georep: reset stime xattr on session delete) posted (#7) for review on master by Milind Changire (mchangir)

--- Additional comment from Vijay Bellur on 2016-05-27 03:22:08 EDT ---

REVIEW: http://review.gluster.org/14051 (georep: reset stime xattr on session delete) posted (#8) for review on master by Milind Changire (mchangir)

--- Additional comment from Vijay Bellur on 2016-06-02 03:15:38 EDT ---

REVIEW: http://review.gluster.org/14051 (georep: add reset_sync_time option for session delete) posted (#9) for review on master by Milind Changire (mchangir)

--- Additional comment from Vijay Bellur on 2016-06-02 07:46:01 EDT ---

REVIEW: http://review.gluster.org/14051 (georep: add reset-sync-time option for session delete) posted (#10) for review on master by Milind Changire (mchangir)

--- Additional comment from Vijay Bellur on 2016-06-03 01:12:20 EDT ---

REVIEW: http://review.gluster.org/14051 (georep: add reset-sync-time option for session delete) posted (#11) for review on master by Milind Changire (mchangir)

--- Additional comment from Vijay Bellur on 2016-06-03 05:05:43 EDT ---

REVIEW: http://review.gluster.org/14051 (georep: add reset-sync-time option for session delete) posted (#12) for review on master by Milind Changire (mchangir)

--- Additional comment from Vijay Bellur on 2016-06-09 04:56:42 EDT ---

REVIEW: http://review.gluster.org/14051 (georep: add reset-sync-time option for session delete) posted (#13) for review on master by Milind Changire (mchangir)

--- Additional comment from Vijay Bellur on 2016-06-27 07:05:06 EDT ---

REVIEW: http://review.gluster.org/14051 (georep: add reset-sync-time option for session delete) posted (#14) for review on master by Milind Changire (mchangir)

--- Additional comment from Vijay Bellur on 2016-06-29 02:41:58 EDT ---

COMMIT: http://review.gluster.org/14051 committed in master by Aravinda VK (avishwan) 
------
commit 70fd68d94f768c098b3178c151fa92c5079a8cfd
Author: Milind Changire <mchangir>
Date:   Fri Apr 22 16:56:47 2016 +0530

    georep: add reset-sync-time option for session delete
    
    Set the stime xattr at all the brick roots to (0,0) if the argument
    reset-sync-time has been provided on the command-line.
    To avoid testing against directory specific stime, the remote
    stime is assumed to be minus_infinity, if the root directory
    stime is set to (0,0), before the directory scan begins.
    This triggers a full volume resync to slave in the case of a
    geo-rep session recreation with the same master-slave volume
    pair.
    
    Command synopsis:
    gluster volume geo-replication <MASTERVOL> <SLAVE>::<SLAVEVOL> delete \
        [reset-sync-time]
    
    Update gluster cli man page to include new sub-command reset-sync-time.
    
    Change-Id: Ie4ce03b9425ed9bb81eda8681058c0fc6f990948
    BUG: 1311926
    Signed-off-by: Milind Changire <mchangir>
    Reviewed-on: http://review.gluster.org/14051
    Reviewed-by: Kotresh HR <khiremat>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>
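
For reference, a usage sketch of the new option based on the synopsis above, with the same illustrative names as earlier; after the delete, the stime on each master brick root should read as (0,0), which can be spot-checked with getfattr (xattr name format assumed):

    # Delete the session and reset stime so a recreated session resyncs everything
    gluster volume geo-replication mastervol slavehost::slavevol stop
    gluster volume geo-replication mastervol slavehost::slavevol delete reset-sync-time

    # Spot-check on a master brick root (run as root)
    getfattr -d -m 'trusted.glusterfs' -e hex /bricks/brick1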

Comment 1 Vijay Bellur 2016-07-19 06:34:59 UTC
REVIEW: http://review.gluster.org/14952 (georep: add reset-sync-time option for session delete) posted (#1) for review on release-3.7 by Milind Changire (mchangir)

Comment 2 Vijay Bellur 2016-07-21 10:04:27 UTC
COMMIT: http://review.gluster.org/14952 committed in release-3.7 by Aravinda VK (avishwan) 
------
commit 301e4e8366759c45aaff03a7953ab5248b5f61de
Author: Milind Changire <mchangir>
Date:   Fri Apr 22 16:56:47 2016 +0530

    georep: add reset-sync-time option for session delete
    
    Set the stime xattr at all the brick roots to (0,0) if the argument
    reset-sync-time has been provided on the command-line.
    To avoid testing against directory specific stime, the remote
    stime is assumed to be minus_infinity, if the root directory
    stime is set to (0,0), before the directory scan begins.
    This triggers a full volume resync to slave in the case of a
    geo-rep session recreation with the same master-slave volume
    pair.
    
    Command synopsis:
    gluster volume geo-replication <MASTERVOL> <SLAVE>::<SLAVEVOL> delete \
        [reset-sync-time]
    
    Update gluster cli man page to include new sub-command reset-sync-time.
    
    Change-Id: Ie4ce03b9425ed9bb81eda8681058c0fc6f990948
    BUG: 1357772
    Signed-off-by: Milind Changire <mchangir>
    Reviewed-on: http://review.gluster.org/14051
    Reviewed-by: Kotresh HR <khiremat>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>
    (cherry picked from commit 70fd68d94f768c098b3178c151fa92c5079a8cfd)
    Reviewed-on: http://review.gluster.org/14952

Comment 3 Kaushal 2016-08-02 07:24:47 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.14, please open a new bug report.

glusterfs-3.7.14 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-August/050319.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

