Red Hat Bugzilla – Bug 1475308
[geo-rep]: few of the self healed hardlinks on master did not sync to slave
Last modified: 2017-12-08 12:35:59 EST
Description of problem:
In the following scenario, hardlinks are not synced to the slave.
1. Create geo-rep between master and slave
2. Mount the volume
3. Create a file (file1)
4. Let the file sync to slave
5. Kill one replica brick of the subvolume containing file1
6. Create a hardlink of file1 (ln file1 file2) => ensure that file2 hashes to the same subvolume as file1
7. Start the master volume with force to heal file2. Wait for the heal to complete
8. Kill the other replica brick (the one not killed in step 5)
9. Start the geo-replication
In the above scenario, the hardlinks are not synced to the slave and there are no errors.
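One plausible reading of this silent miss can be sketched as a toy model (plain Python, not GlusterFS code; `replay`, the changelog tuples, and the gfid names are all invented for illustration): self-heal records the hardlink as a MKNOD on the surviving brick, and if geo-rep drops a MKNOD whose gfid already exists on the slave instead of turning it into a hardlink, file2 never arrives and nothing is logged.

```python
# Toy model of per-brick changelog replay (illustrative only; all names
# are invented and this is not GlusterFS code).

def replay(changelog, slave, gfids):
    """Replay a brick changelog onto a dict-based 'slave' namespace.

    Models the reported behavior: a MKNOD recorded by self-heal for an
    already-known gfid is silently skipped instead of being applied as
    a hardlink, and no error is raised.
    """
    skipped = []
    for op, name, gfid in changelog:
        if op == "CREATE":
            slave[name] = gfid
            gfids.add(gfid)
        elif op == "LINK":
            slave[name] = gfid           # hardlink: same gfid, new name
        elif op == "MKNOD":
            if gfid in gfids:
                skipped.append(name)     # buggy path: dropped silently
            else:
                slave[name] = gfid
                gfids.add(gfid)
    return skipped

# file1 was synced normally; file2 was healed onto the surviving
# brick, whose changelog records it as a MKNOD of the same gfid.
log = [("CREATE", "file1", "gfid-1"), ("MKNOD", "file2", "gfid-1")]
slave, gfids = {}, set()
skipped = replay(log, slave, gfids)
print(sorted(slave))   # ['file1']  -> file2 never reaches the slave
print(skipped)         # ['file2']  -> dropped without any error
```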
Steps 1 to 5 remain the same
6. Create a hardlink of file1 (ln file1 file2) => ensure that file2 hashes to a different subvolume than file1
Steps 7 to 8 remain the same
In this scenario, sync happens as follows:
a. If both ACTIVE bricks are self-healed bricks that have recorded the MKNOD, the sync happens.
b. If the self-healed brick containing the MKNOD for the sticky-bit file becomes PASSIVE, the hardlinks are not synced.
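Observations (a) and (b) fit the ACTIVE/PASSIVE worker split: per replica set, only the ACTIVE worker replays its own brick's changelog to the slave. A minimal sketch, assuming that behavior (plain Python, invented names, not GlusterFS code): if the MKNOD for the sticky-bit file was recorded only on the brick whose worker is PASSIVE, that entry is never replayed.

```python
# Toy model of ACTIVE/PASSIVE worker selection in geo-rep (illustrative
# only; not GlusterFS code). One worker per replica set is ACTIVE and
# replays its own brick's changelog; PASSIVE workers replay nothing.

def sync(bricks, active):
    """Replay only the ACTIVE brick's changelog; return slave names."""
    slave = set()
    for name, changelog in bricks.items():
        if name != active:
            continue                     # PASSIVE: changelog ignored
        slave.update(entry for _, entry in changelog)
    return slave

# brick-a holds only the original create; brick-b (the self-healed
# brick) additionally recorded the MKNOD, per observation (b) above.
bricks = {
    "brick-a": [("CREATE", "file1")],
    "brick-b": [("CREATE", "file1"), ("MKNOD", "file2")],
}

print(sorted(sync(bricks, active="brick-b")))  # ['file1', 'file2']
print(sorted(sync(bricks, active="brick-a")))  # ['file1'] -> file2 missing
```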
Version-Release number of selected component (if applicable):
How reproducible:
Always with the above steps.
REVIEW: https://review.gluster.org/17880 (geo-rep: Fix syncing of self healed hardlinks) posted (#1) for review on master by Kotresh HR (email@example.com)
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.13.0, please open a new bug report.
glusterfs-3.13.0 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.