Description of problem:
=======================
Performing a rename on a file which has a hardlink creates multiple entries of the same file on the client.

On a geo-rep tiered setup, performed the following:
1. Created 205 files and let them sync to the slave. Verified the checksums match.
2. Created hardlinks of the same files and let them sync to the slave. Verified the checksums match.
3. Rebalance migration reported failures for the hardlinks, which is expected.
4. Renamed the original 205 files. Ended up with multiple entries of the same files on the client. These renamed files never got synced to the slave.

Master:
=======
[root@mia for_hardlinks]# ll | grep o_rename.99
-rw-r--r--. 2 root root 204800 Feb 5 15:01 o_rename.99
-rw-r--r--. 2 root root 204800 Feb 5 15:01 o_rename.99
[root@mia for_hardlinks]#

Slave:
======
[root@mia for_hardlinks]# ll | grep .99
-rw-r--r--. 2 root root 204800 Feb 5 14:54 hl.199
-rw-r--r--. 2 root root 204800 Feb 5 15:01 hl.99
-rw-r--r--. 2 root root 204800 Feb 5 14:54 o_rename.199
-rw-r--r--. 2 root root 204800 Feb 5 15:01 original.99
[root@mia for_hardlinks]#

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.5-18.el7rhgs.x86_64
Geo-rep application

Consequence: Renaming a file that has a hardlink fails to sync correctly to the slave.

How reproducible:
=================
2/2
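The core of the failing sequence (steps 1, 2, and 4 above) can be sketched with plain filesystem operations against a local directory. This is only an illustration of the expected behavior: `for_hardlinks` here is an ordinary directory, not the actual geo-rep master mount, so the sync/replication path that exhibits the bug is not exercised.

```shell
set -e
rm -rf for_hardlinks && mkdir for_hardlinks

# Step 1: create a file (one of the 205; count reduced for brevity)
dd if=/dev/zero of=for_hardlinks/original.99 bs=1024 count=200 2>/dev/null

# Step 2: create a hardlink of the same file
ln for_hardlinks/original.99 for_hardlinks/hl.99

# Step 4: rename the original. On a healthy filesystem this leaves exactly
# one directory entry named o_rename.99; the bug produced duplicate entries
# on the client and the rename never synced to the slave.
mv for_hardlinks/original.99 for_hardlinks/o_rename.99

# Both names still refer to one inode, so the link count stays at 2
stat -c '%h' for_hardlinks/o_rename.99
```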
Verified the below specific case with build: glusterfs-3.7.5-19

Test Case:
1. Create 2k files {file.*}
2. Let migration start.
3. Create 2k hardlinks of files file.* {hardlink_1.*}
4. Migration will not happen for any file, as every file now has hardlinks.
5. Create another 2k hardlinks of files file.* {hardlink_2.*}
6. Rename the original files {file.* to rename_file.*}
7. Rename the 1st hardlinks hardlink_1.* to firsthardlink_rename.*
8. Delete the 2nd hardlinks hardlink_2.*
9. Create hardlinks with the same names hardlink_1.* and hardlink_2.*
10. Truncate the 2nd hardlinks to size 0
11. Truncate the original files again to size 10
12. Rename the 2nd hardlinks hardlink_2.* to secondhardlink_rename.*
13. chmod all files to 777
14. Create symlinks symlink_file.* pointing to rename_file.*
15. Rename rename_file.* back to the original file.*
16. Append to the original file.*
17. Remove all hardlinks {hardlink_1.*, hardlink_2.*, firsthardlink_rename.*, secondhardlink_rename.*}
18. Migration should resume, as there are no further hardlinks.

Result of migration before step 3:
==================================
[root@dhcp37-95 glusterfs]# gluster volume rebalance master tier status
Node          Promoted files  Demoted files  Status
------------  --------------  -------------  -----------
localhost     0               0              in progress
10.70.37.67   0               151            in progress
10.70.37.177  0               0              in progress
10.70.37.187  0               162            in progress
10.70.37.206  0               0              in progress
10.70.37.153  0               0              in progress
Tiering Migration Functionality: master: success
[root@dhcp37-95 glusterfs]#

Result of migration after step 17:
==================================
[root@dhcp37-95 glusterfs]# gluster volume rebalance master tier status
Node          Promoted files  Demoted files  Status
------------  --------------  -------------  -----------
localhost     313             0              in progress
10.70.37.67   0               151            in progress
10.70.37.177  0               0              in progress
10.70.37.187  0               162            in progress
10.70.37.206  0               0              in progress
10.70.37.153  0               0              in progress
Tiering Migration Functionality: master: success
[root@dhcp37-95 glusterfs]#

Verified the number of files, hardlinks, and symlinks after every step, and the counts match what is expected. Moving this bug to verified.
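The file-manipulation steps of the test case above can be sketched locally as follows. This is a stand-in, not the actual verification run: it uses N=3 files instead of 2k and an ordinary directory `tc` instead of the tiered geo-rep master volume, so the migration steps 2, 4, and 18 are not exercised here.

```shell
set -e
rm -rf tc && mkdir tc && cd tc

N=3
for i in $(seq 1 $N); do
    dd if=/dev/zero of=file.$i bs=1024 count=10 2>/dev/null        # step 1
    ln file.$i hardlink_1.$i                                       # step 3
    ln file.$i hardlink_2.$i                                       # step 5
    mv file.$i rename_file.$i                                      # step 6
    mv hardlink_1.$i firsthardlink_rename.$i                       # step 7
    rm hardlink_2.$i                                               # step 8
    ln rename_file.$i hardlink_1.$i                                # step 9
    ln rename_file.$i hardlink_2.$i                                # step 9
    truncate -s 0 hardlink_2.$i                                    # step 10
    truncate -s 10 rename_file.$i                                  # step 11
    mv hardlink_2.$i secondhardlink_rename.$i                      # step 12
    chmod 777 rename_file.$i                                       # step 13
    ln -s rename_file.$i symlink_file.$i                           # step 14
    mv rename_file.$i file.$i                                      # step 15
    echo appended >> file.$i                                       # step 16
    rm hardlink_1.$i firsthardlink_rename.$i secondhardlink_rename.$i  # step 17
done
cd ..

# After step 17 every file.$i is back to link count 1, which is what lets
# tier migration resume in step 18 on the real volume.
```

Note that after step 15 the symlinks created in step 14 dangle, since their target rename_file.* no longer exists; that is a consequence of the step ordering in the test case, not an error.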
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0193.html