Bug 1476212 - [geo-rep]: a few of the self-healed hardlinks on master did not sync to slave
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.10
Hardware: x86_64 Linux
Priority: unspecified
Severity: urgent
Assigned To: Kotresh HR
Depends On: 1475308 1474380
Blocks: 1476208
Reported: 2017-07-28 06:05 EDT by Kotresh HR
Modified: 2017-08-21 09:42 EDT (History)
CC List: 8 users

Fixed In Version: glusterfs-3.10.5
Clone Of: 1475308
Last Closed: 2017-08-21 09:42:02 EDT
Type: Bug


Description Kotresh HR 2017-07-28 06:05:38 EDT
Description of problem:
=======================

In the following scenarios, the hardlinks are not synced to the slave.

Scenario 1:

1. Create geo-rep between master and slave
2. Mount the volume
3. Create a file (file1)
4. Let the file sync to slave
5. Kill one set of the replica bricks for the subvolume containing file1
6. Create a hardlink of file1 (ln file1 file2). Ensure that file2 hashes to the same subvolume as file1
7. Start the master volume with force so that the killed bricks come back and file2 is healed. Wait for the heal to complete
8. Kill the other set of the replica bricks (the one left running in step 5)
9. Start the geo-replication

In the above scenario the hardlinks are not synced to the slave and no errors are reported (a rough repro script is sketched after Scenario 2).

Scenario 2:

Steps 1 to 5 remain the same
6. Create a hardlink of file1 (ln file1 file2). Ensure that file2 hashes to a different subvolume than file1
Steps 7 and 8 remain the same

In this scenario, the sync behaves as follows:
   a. If both ACTIVE bricks are the self-healed bricks which recorded the MKNOD, the sync happens.
   b. If the self-healed brick containing the MKNOD for the sticky-bit (DHT linkto) file becomes PASSIVE, the hardlinks are not synced.
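
For reference, a rough script for the Scenario 1 steps might look like the following. This is a hedged sketch under stated assumptions rather than a test from the repository: the volume name, slave URL, mount point and brick handling are placeholders, the sleeps stand in for proper wait loops, a 2x2 distributed-replicate master volume is assumed, and the geo-rep session is assumed to be stopped before step 5 and restarted in step 9. For Scenario 2, only the name chosen in step 6 changes so that file2 hashes to the other subvolume.

#!/usr/bin/env python
# Rough sketch of Scenario 1; names and paths below are placeholders.
import subprocess
import time

MASTER_VOL = "master"                 # placeholder master volume name
SLAVE_URL = "slavehost::slavevol"     # placeholder geo-rep slave
MNT = "/mnt/master"                   # placeholder FUSE mount of the master volume


def sh(cmd):
    """Run a shell command and fail loudly, so a broken step is obvious."""
    print("+ " + cmd)
    subprocess.check_call(cmd, shell=True)


# Steps 3-4: create file1 and let geo-rep sync it to the slave.
sh("touch %s/file1" % MNT)
time.sleep(60)                        # crude wait for the initial sync

# Step 5: stop geo-rep and kill one replica brick of file1's subvolume
# (look up the brick PID in 'gluster volume status' and kill it; omitted
# here to keep the sketch short).
sh("gluster volume geo-replication %s %s stop" % (MASTER_VOL, SLAVE_URL))

# Step 6: create a hardlink that hashes to the same subvolume as file1.
sh("ln %s/file1 %s/file2" % (MNT, MNT))

# Step 7: bring the killed brick back with a forced volume start and let
# AFR self-heal create file2 on it.
sh("gluster volume start %s force" % MASTER_VOL)
sh("gluster volume heal %s" % MASTER_VOL)
time.sleep(60)                        # crude wait for self-heal

# Step 8: kill the replica brick that stayed up in step 5 (again by PID).

# Step 9: restart geo-rep; file2 should now appear on the slave, but with
# the bug it silently never does.
sh("gluster volume geo-replication %s %s start" % (MASTER_VOL, SLAVE_URL))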


Version-Release number of selected component (if applicable):
=============================================================

mainline


How reproducible:
=================

Always with the above steps.

--- Additional comment from Worker Ant on 2017-07-26 08:24:07 EDT ---

REVIEW: https://review.gluster.org/17880 (geo-rep: Fix syncing of self healed hardlinks) posted (#1) for review on master by Kotresh HR (khiremat@redhat.com)
Comment 1 Worker Ant 2017-07-28 06:09:03 EDT
REVIEW: https://review.gluster.org/17907 (geo-rep: Fix syncing of self healed hardlinks) posted (#1) for review on release-3.10 by Kotresh HR (khiremat@redhat.com)
Comment 2 Worker Ant 2017-08-11 06:45:55 EDT
COMMIT: https://review.gluster.org/17907 committed in release-3.10 by Shyamsundar Ranganathan (srangana@redhat.com) 
------
commit a060485aae8cb508e05e0f9e46e3f0ec823a7f22
Author: Kotresh HR <khiremat@redhat.com>
Date:   Wed Jul 26 08:09:31 2017 -0400

    geo-rep: Fix syncing of self healed hardlinks
    
    Problem:
    In a distribute replicate volume, if the hardlinks
    are created when a subvolume is down, it gets
    healed from other subvolume when it comes up.
    If this subvolume becomes ACTIVE in geo-rep
    there are chances that those hardlinks won't
    be synced to slave.
    
    Cause:
    AFR can't detect hardlinks during self heal.
    It just create those files using mknod and
    the same is recorded in changelog. Geo-rep
    processes these mknod and ignores it as
    it finds gfid already on slave.
    
    Solution:
    Geo-rep should process the mknod as link
    if the gfid already exists on slave.
    
    > Change-Id: I2f721b462b38a74c60e1df261662db4b99b32057
    > BUG: 1475308
    > Signed-off-by: Kotresh HR <khiremat@redhat.com>
    > Reviewed-on: https://review.gluster.org/17880
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: Aravinda VK <avishwan@redhat.com>
    (cherry picked from commit d685e4238fafba8f58bf01174c79cb5ca35203e5)
    
    
    Change-Id: I2f721b462b38a74c60e1df261662db4b99b32057
    BUG: 1476212
    Signed-off-by: Kotresh HR <khiremat@redhat.com>
    Reviewed-on: https://review.gluster.org/17907
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Aravinda VK <avishwan@redhat.com>
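
In gsyncd terms, the decision described in the commit above can be pictured roughly as follows. This is a simplified, hypothetical sketch, not the actual syncdaemon code: the function name process_mknod, the entry dictionary and the slave gfid backend path argument are illustrative assumptions; the real change is in the review linked above.

import errno
import os


def process_mknod(slave_gfid_path, entry):
    """Replay a MKNOD changelog entry on the slave (illustrative only).

    slave_gfid_path is the .glusterfs backend path for entry['gfid'] on the
    slave; entry carries the path and mode recorded in the master changelog.
    """
    if os.path.lexists(slave_gfid_path):
        # The gfid already exists on the slave: this MKNOD was recorded by
        # AFR self-heal for a hardlink, so create a link to the existing
        # inode instead of silently skipping the entry (pre-fix behaviour).
        try:
            os.link(slave_gfid_path, entry["path"])
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
    else:
        # Genuinely new entry (regular file or DHT sticky-bit linkto file):
        # create it as before.
        os.mknod(entry["path"], entry["mode"])

The point is only that an entry whose gfid is already present on the slave is treated as a hardlink rather than ignored.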
Comment 3 Shyamsundar 2017-08-21 09:42:02 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.5, please open a new bug report.

glusterfs-3.10.5 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-August/000079.html
[2] https://www.gluster.org/pipermail/gluster-users/
