Bug 1305172 - [Tier]: Multiple entries of the same file on the client after renaming a file that had hardlinks
Summary: [Tier]: Multiple entries of the same file on the client after renaming a file that had hardlinks
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: Nithya Balachandran
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks: 1260783 1305277 1311836
 
Reported: 2016-02-05 22:12 UTC by Rahul Hinduja
Modified: 2016-09-17 15:44 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.7.5-19
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned As: 1305277
Environment:
Last Closed: 2016-03-01 06:09:14 UTC
Embargoed:




Links
System: Red Hat Product Errata
ID: RHBA-2016:0193
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Gluster Storage 3.1 update 2
Last Updated: 2016-03-01 10:20:36 UTC

Description Rahul Hinduja 2016-02-05 22:12:23 UTC
Description of problem:
=======================

Renaming a file that had hardlinks creates multiple entries of the same file on the client.

Performed the following on a geo-replicated tiered setup:

1. Created 205 files and let them sync to the slave. Verified that the checksums matched.
2. Created hardlinks of the same files and let them sync to the slave. Verified that the checksums matched.
3. Rebalance migration reported failures for the hardlinks, which is expected.
4. Renamed the original 205 files.

Ended up with multiple entries of the same files on the client, and these renamed files never synced to the slave. A scripted sketch of these steps follows.
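
For reference, a minimal scripted sketch of steps 1 through 4, assuming the master volume is mounted at /mnt/master and the files live in a for_hardlinks directory (both paths hypothetical; the file names follow the listings below):

# Hypothetical mount point and directory; adjust to the actual setup.
cd /mnt/master/for_hardlinks

# Step 1: create 205 files of 204800 bytes each (matching the listings
# below), then wait for geo-rep to sync them to the slave.
for i in $(seq 1 205); do
    dd if=/dev/urandom of=original.$i bs=1024 count=200 2>/dev/null
done

# Step 2: create a hardlink for each file, then wait for geo-rep to sync.
for i in $(seq 1 205); do
    ln original.$i hl.$i
done

# Step 3: tier migration is expected to report failures for these files.

# Step 4: rename the original files.
for i in $(seq 1 205); do
    mv original.$i o_rename.$i
done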

Master:
=======

[root@mia for_hardlinks]# ll | grep o_rename.99
-rw-r--r--. 2 root root 204800 Feb  5 15:01 o_rename.99
-rw-r--r--. 2 root root 204800 Feb  5 15:01 o_rename.99
[root@mia for_hardlinks]# 

Slave:
======

[root@mia for_hardlinks]# ll | grep .99
-rw-r--r--. 2 root root 204800 Feb  5 14:54 hl.199
-rw-r--r--. 2 root root 204800 Feb  5 15:01 hl.99
-rw-r--r--. 2 root root 204800 Feb  5 14:54 o_rename.199
-rw-r--r--. 2 root root 204800 Feb  5 15:01 original.99
[root@mia for_hardlinks]# 
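
A quick way to spot the duplicated entries from the client side is to look for repeated names in the directory listing; this uses only standard coreutils and assumes nothing beyond the mount shown above:

ls -1 | sort | uniq -d    # prints any entry name that appears more than once

On a healthy mount this prints nothing; here it would print o_rename.99.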


Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.7.5-18.el7rhgs.x86_64

Consequence for geo-rep: a rename of a file that has hardlinks fails to sync to the slave.
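
To confirm which files failed to sync, the listings on the two mounts can be diffed; a sketch, assuming the master and slave volumes are mounted at /mnt/master and /mnt/slave (hypothetical paths):

diff <(ls /mnt/master/for_hardlinks | sort) \
     <(ls /mnt/slave/for_hardlinks | sort)
# o_rename.* entries present only on the master side indicate the failed renames.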

How reproducible:
=================

2/2

Comment 5 Rahul Hinduja 2016-02-09 13:26:09 UTC
Verified the specific case below with build glusterfs-3.7.5-19.

Test Case (a scripted sketch follows the list):

1. Create 2k files {file.*}.
2. Let migration start.
3. Create 2k hardlinks of the files: file.* {hardlink_1.*}.
4. Migration does not happen for any file, as every file now has hardlinks.
5. Create another 2k hardlinks of the files: file.* {hardlink_2.*}.
6. Rename the original files {file.* to rename_file.*}.
7. Rename the 1st hardlinks: hardlink_1.* to firsthardlink_rename.*.
8. Delete the 2nd hardlinks hardlink_2.*.
9. Create hardlinks with the same names hardlink_1.* and hardlink_2.*.
10. Truncate the 2nd hardlinks to size 0.
11. Truncate the original files again to size 10.
12. Rename the 2nd hardlinks: hardlink_2.* to secondhardlink_rename.*.
13. chmod all files to 777.
14. Create symlinks of the files: rename_file.* to symlink_file.*.
15. Rename rename_file.* back to the original names file.*.
16. Append to the original files file.*.
17. Remove all hardlinks {hardlink_1.*, hardlink_2.*, firsthardlink_rename.*, secondhardlink_rename.*}.
18. Migration should resume, as there are no further hardlinks.
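
For reference, a condensed scripted sketch of the steps above, assuming the master volume is mounted at /mnt/master (hypothetical) and eliding the waits for migration and geo-rep sync between steps:

N=2000
cd /mnt/master    # hypothetical mount point; adjust to the actual setup

# Steps 1-2: create 2k files; migration starts in the background.
for i in $(seq 1 $N); do echo data > file.$i; done

# Steps 3-5: two sets of hardlinks; migration now skips every file.
for i in $(seq 1 $N); do ln file.$i hardlink_1.$i; done
for i in $(seq 1 $N); do ln file.$i hardlink_2.$i; done

# Steps 6-7: rename the originals and the first hardlinks.
for i in $(seq 1 $N); do mv file.$i rename_file.$i; done
for i in $(seq 1 $N); do mv hardlink_1.$i firsthardlink_rename.$i; done

# Steps 8-9: delete the second hardlinks, then recreate both names
# (the link target is assumed to be the renamed originals).
for i in $(seq 1 $N); do rm hardlink_2.$i; done
for i in $(seq 1 $N); do
    ln rename_file.$i hardlink_1.$i
    ln rename_file.$i hardlink_2.$i
done

# Steps 10-11: truncate the second hardlinks to 0, then the originals to 10 bytes.
for i in $(seq 1 $N); do truncate -s 0 hardlink_2.$i; done
for i in $(seq 1 $N); do truncate -s 10 rename_file.$i; done

# Steps 12-13: rename the second hardlinks; chmod everything to 777.
for i in $(seq 1 $N); do mv hardlink_2.$i secondhardlink_rename.$i; done
chmod 777 ./*

# Steps 14-16: symlink, rename back to the original names, append.
for i in $(seq 1 $N); do ln -s rename_file.$i symlink_file.$i; done
for i in $(seq 1 $N); do mv rename_file.$i file.$i; done
for i in $(seq 1 $N); do echo more >> file.$i; done

# Step 17: remove all hardlink names; migration should then resume.
rm -f hardlink_1.* hardlink_2.* firsthardlink_rename.* secondhardlink_rename.*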

Result of migration before step 3:
==================================

[root@dhcp37-95 glusterfs]# gluster volume rebalance master tier status
Node                 Promoted files       Demoted files        Status              
---------            ---------            ---------            ---------           
localhost            0                    0                    in progress         
10.70.37.67          0                    151                  in progress         
10.70.37.177         0                    0                    in progress         
10.70.37.187         0                    162                  in progress         
10.70.37.206         0                    0                    in progress         
10.70.37.153         0                    0                    in progress         
Tiering Migration Functionality: master: success
[root@dhcp37-95 glusterfs]#

Result of migration after step 17:
==================================

[root@dhcp37-95 glusterfs]# gluster volume rebalance master tier status
Node                 Promoted files       Demoted files        Status              
---------            ---------            ---------            ---------           
localhost            313                  0                    in progress         
10.70.37.67          0                    151                  in progress         
10.70.37.177         0                    0                    in progress         
10.70.37.187         0                    162                  in progress         
10.70.37.206         0                    0                    in progress         
10.70.37.153         0                    0                    in progress         
Tiering Migration Functionality: master: success
[root@dhcp37-95 glusterfs]# 


Verified the number of files, hardlinks, and symlinks after every step; the counts matched the expected values. Moving this bug to verified.

Comment 7 errata-xmlrpc 2016-03-01 06:09:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

