Bug 1600405 - [geo-rep]: Geo-replication not syncing renamed symlink
Summary: [geo-rep]: Geo-replication not syncing renamed symlink
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1601314 1611113
 
Reported: 2018-07-12 07:57 UTC by Kotresh HR
Modified: 2018-10-23 15:13 UTC (History)

Fixed In Version: glusterfs-5.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1601314 1611113
Environment:
Last Closed: 2018-10-23 15:13:48 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kotresh HR 2018-07-12 07:57:23 UTC
Description of problem:
Geo-rep sometimes fails to sync the rename of a symlink
    if the I/O is as follows:
    
      1. touch file1
      2. ln -s "./file1" sym_400
      3. mv sym_400 renamed_sym_400
      4. mkdir sym_400

The file 'renamed_sym_400' fails to sync to the slave.


Version-Release number of selected component (if applicable):
mainline

How reproducible:
Intermittent; appears to be a race.

Steps to Reproduce:
1. Set up geo-rep and start it.
2. Stop geo-rep 
3. On master do following I/O
        1. touch file1
        2. ln -s "./file1" sym_400
        3. mv sym_400 renamed_sym_400
        4. mkdir sym_400
4. Find the brick on master on which 'renamed_sym_400' is present,
   and kill that brick.
5. Start geo-rep so that the other bricks process their changelogs first.
6. Once the other bricks are in changelog crawl, bring back the brick that was down.
7. It also moves to changelog crawl, but 'renamed_sym_400' doesn't sync.
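The master-side I/O from step 3 can be run as a plain shell sequence. MASTER_MNT is a hypothetical stand-in for the master volume's mount point; any writable directory reproduces the name reuse itself:

```shell
#!/bin/sh
# Step-3 I/O pattern; MASTER_MNT is a stand-in (not from the bug)
# for the geo-rep master volume's mount point.
MASTER_MNT="${MASTER_MNT:-$(mktemp -d)}"
cd "$MASTER_MNT" || exit 1

touch file1                     # 1. regular file
ln -s "./file1" sym_400         # 2. symlink to it
mv sym_400 renamed_sym_400      # 3. rename the symlink
mkdir sym_400                   # 4. a directory reuses the old name

# 'renamed_sym_400' is now the symlink and 'sym_400' a directory;
# the bug is that 'renamed_sym_400' never reaches the slave.
```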


Actual results:
'renamed_sym_400' doesn't sync

Expected results:
'renamed_sym_400' should sync

Additional info:

Comment 1 Worker Ant 2018-07-12 08:34:29 UTC
REVIEW: https://review.gluster.org/20496 (geo-rep: Fix symlink rename syncing issue) posted (#1) for review on master by Kotresh HR

Comment 2 Worker Ant 2018-07-12 14:46:17 UTC
COMMIT: https://review.gluster.org/20496 committed in master by "Kotresh HR" <khiremat> with a commit message- geo-rep: Fix symlink rename syncing issue

Problem:
   Geo-rep sometimes fails to sync the rename of a symlink
if the I/O is as follows:

  1. touch file1
  2. ln -s "./file1" sym_400
  3. mv sym_400 renamed_sym_400
  4. mkdir sym_400

 The file 'renamed_sym_400' fails to sync to the slave.

Cause:
  Assume there are three distribute subvolumes (brick1, brick2, brick3).
  The changelogs are recorded as follows for the above I/O pattern.
  Note that the MKDIR is recorded on all bricks.

  1. brick1:
     -------

     CREATE file1
     SYMLINK sym_400
     RENAME sym_400 renamed_sym_400
     MKDIR sym_400

  2. brick2:
     -------

     MKDIR sym_400

  3. brick3:
     -------

     MKDIR sym_400

  The operations on 'brick1' should be processed sequentially. But
  since the MKDIR is recorded on all the bricks, 'brick2'/'brick3'
  processed the MKDIR before 'brick1', causing out-of-order syncing,
  and the directory sym_400 was created first.

  Now 'brick1' processes its changelog:

     CREATE file1 -> succeeds
     SYMLINK sym_400 -> no longer present on master; ignored
     RENAME sym_400 renamed_sym_400
            While processing the RENAME, if the source ('sym_400') is
            not present, the destination ('renamed_sym_400') is created.
            But geo-rep stats only the name 'sym_400' to confirm the
            source file's presence. In this race, since the name
            'sym_400' is present as a directory, geo-rep doesn't create
            the destination. Hence the RENAME is ignored.
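The broken check can be sketched on a plain filesystem, no gluster required; `source_exists_name_only` is a hypothetical stand-in for geo-rep's stat-based presence test:

```python
import os
import tempfile

def source_exists_name_only(path):
    """Flawed check (sketch): any entry with this name, even an
    unrelated directory, counts as 'source still present'."""
    try:
        os.lstat(path)
        return True
    except FileNotFoundError:
        return False

# Recreate the slave's view after brick2/brick3 replayed MKDIR first.
d = tempfile.mkdtemp()
src = os.path.join(d, "sym_400")
os.mkdir(src)  # MKDIR landed before brick1's RENAME was replayed

# Replaying brick1's RENAME: the name exists, so this check wrongly
# concludes the source symlink is still there and skips creating
# 'renamed_sym_400'.
print(source_exists_name_only(src))  # True, though it is a directory
```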

Fix:
  The fix is to not rely solely on a stat of the source name during
  RENAME. Geo-rep should stat the name and, if the name is present,
  also verify that the gfid matches. Only then can it conclude the
  source is present.
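A minimal sketch of the fixed check, again on a plain filesystem: gluster compares the gfid xattr, for which the inode number stands in here (an assumption for the demo; the function and variable names are illustrative, not geo-rep's actual API):

```python
import os
import tempfile

def source_exists(path, expected_id):
    """Fixed check (sketch): the name must exist AND refer to the same
    object the changelog describes. Gluster compares the gfid xattr;
    st_ino stands in for it in this demo."""
    try:
        st = os.lstat(path)
    except FileNotFoundError:
        return False
    return st.st_ino == expected_id

d = tempfile.mkdtemp()
open(os.path.join(d, "file1"), "w").close()
sym = os.path.join(d, "sym_400")
os.symlink("./file1", sym)
sym_id = os.lstat(sym).st_ino      # identity the changelog refers to

os.rename(sym, os.path.join(d, "renamed_sym_400"))
os.mkdir(sym)                      # unrelated directory reuses the name

# The name 'sym_400' exists, but its identity differs, so the source
# is correctly treated as gone and the RENAME creates the destination.
print(source_exists(sym, sym_id))  # False
```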

fixes: bz#1600405
Change-Id: I9fbec4f13ca6a182798a7f81b356fe2003aff969
Signed-off-by: Kotresh HR <khiremat>

Comment 3 Shyamsundar 2018-10-23 15:13:48 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/

