Bug 1432046 - symlinks trigger faulty geo-replication state (rsnapshot usecase)
Summary: symlinks trigger faulty geo-replication state (rsnapshot usecase)
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: Linux
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
Depends On:
Blocks: 1431081 1486120 1503174
Reported: 2017-03-14 12:15 UTC by Niels de Vos
Modified: 2017-12-08 17:32 UTC (History)
4 users

Fixed In Version: glusterfs-3.13.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1431081
: 1486120 1503174
Last Closed: 2017-12-08 17:32:42 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:


Description Niels de Vos 2017-03-14 12:15:29 UTC
+++ This bug was initially created as a clone of Bug #1431081 +++
+++                                                           +++
+++ Use this bug to get a fix in the master branch before     +++
+++ backporting it to the maintained versions.                +++

Description of problem:
symlink operations as performed by rsnapshot easily put geo-replication into a Faulty state

Version-Release number of selected component (if applicable):
3.10, but 3.8 is affected the same way

How reproducible:

Steps to Reproduce:
0) create volumes on master and slave and set up geo-replication between them
1) mount the master volume and cd into it (NFS or FUSE makes no difference)
2) simulate a rsnapshot run that updates a symlink like this:

mkdir /tmp/symlinkbug
ln -f -s /does/not/exist /tmp/symlinkbug/a_symlink
rsync -a /tmp/symlinkbug ./
cp -al symlinkbug symlinkbug.0
ln -f -s /does/not/exist2 /tmp/symlinkbug/a_symlink
rsync -a /tmp/symlinkbug ./
cp -al symlinkbug symlinkbug.1

(rsnapshot uses hardlinks between rotations; that is why cp -al is used)
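The hardlink-of-symlink rotation at the heart of this report can be sketched with plain coreutils, independent of gluster; the rsync step is left out here because it merely copies the tree into the volume, and all paths are illustrative:

```shell
#!/bin/sh
# Local sketch of the rsnapshot rotation pattern (no gluster needed):
# a symlink is snapshotted with cp -al, replaced, then snapshotted again.
set -e
work=$(mktemp -d)
cd "$work"

mkdir symlinkbug
ln -f -s /does/not/exist symlinkbug/a_symlink
cp -al symlinkbug symlinkbug.0                   # rotation 0: hardlink of the symlink

ln -f -s /does/not/exist2 symlinkbug/a_symlink   # new inode, new target
cp -al symlinkbug symlinkbug.1                   # rotation 1: hardlink of the new symlink

readlink symlinkbug.0/a_symlink                  # /does/not/exist
readlink symlinkbug.1/a_symlink                  # /does/not/exist2
```

After the second rotation, symlinkbug.0/a_symlink keeps the old target while symlinkbug.1/a_symlink carries the new one; it is this combination of a hardlinked symlink whose original source has changed that geo-replication mishandles.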

Actual results:
geo-replication goes into a Faulty state, and on the slave symlinkbug/a_symlink still points to the old target

Expected results:
geo-replication should update the symlink target instead of going into a Faulty state on the second cp -al

Additional info:
pausing between the steps, setting checkpoints, and verifying them so that each step is synced separately makes no difference.
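For reference, a checkpoint can be set and then watched with the geo-replication CLI; MASTERVOL, SLAVEHOST and SLAVEVOL below are placeholders for the actual volume and host names:

gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL config checkpoint now
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL status detail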

Comment 1 Worker Ant 2017-08-09 10:36:51 UTC
REVIEW: https://review.gluster.org/18011 (geo-rep: Fix syncing of hardlink of symlink) posted (#2) for review on master by Kotresh HR (khiremat@redhat.com)

Comment 2 Worker Ant 2017-08-24 03:13:56 UTC
COMMIT: https://review.gluster.org/18011 committed in master by Aravinda VK (avishwan@redhat.com) 
commit e893962deaabab8e934813f8a0443a8f94e009f2
Author: Kotresh HR <khiremat@redhat.com>
Date:   Tue Aug 8 10:12:14 2017 -0400

    geo-rep: Fix syncing of hardlink of symlink

    If there is a hardlink to a symlink on master
    and the symlink file is deleted on master,
    geo-rep fails to sync the hardlink.

    Typical Usecase:
    It is easily hit with the rsnapshot use case,
    which relies on hardlinks.

    Example Reproducer:
    Set up geo-replication between the master and
    slave volumes and, at the master mount point,
    do the following.
     1. mkdir /tmp/symlinkbug
     2. ln -f -s /does/not/exist /tmp/symlinkbug/a_symlink
     3. rsync -a /tmp/symlinkbug ./
     4. cp -al symlinkbug symlinkbug.0
     5. ln -f -s /does/not/exist2 /tmp/symlinkbug/a_symlink
     6. rsync -a /tmp/symlinkbug ./
     7. cp -al symlinkbug symlinkbug.1
    Previously, if the source was not present while
    syncing a hardlink, the blob was always packed
    as a regular file. With this fix, when the
    source is not present, the blob is packed based
    on the mode.
    Change-Id: Iaa12d6f99de47b18e0650e7c4eb455f23f8390f2
    BUG: 1432046
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reported-by: Christian Lohmaier <lohmaier+rhbz@gmail.com>
    Reviewed-on: https://review.gluster.org/18011
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
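The mode-based decision described in the commit message can be illustrated with a small shell sketch; this is not the gluster source, just the idea that a hardlink whose source is gone should be packed according to the recorded st_mode (octal 0120000 is S_IFLNK, 0100000 is S_IFREG):

```shell
# Illustrative sketch only: pick the blob type from the recorded mode
# instead of defaulting to "regular file" when the source file is gone.
mode=120777   # hypothetical st_mode (octal) of a symlink changelog entry
case "$mode" in
  120*) blob_type="symlink" ;;
  100*) blob_type="regular file" ;;
  *)    blob_type="other" ;;
esac
echo "pack blob as: $blob_type"
```

With the pre-fix behavior, the symlink entry above would have been treated as a regular file, which is what drove the session Faulty.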

Comment 3 Shyamsundar 2017-12-08 17:32:42 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.

glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html
[2] https://www.gluster.org/pipermail/gluster-users/
