Bug 1121059 - [Dist-geo-rep] : In a cascaded setup, after hardlink sync, slave level 2 volume has sticky bit files found on mount-point.
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Kotresh HR
QA Contact: Bhaskar Bandari
Depends On:
Blocks: 1121072 1122037
Reported: 2014-07-18 09:42 UTC by Vijaykumar Koppad
Modified: 2015-05-13 16:55 UTC

Fixed In Version: glusterfs-
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1121072 1122037
Last Closed: 2014-09-22 19:44:41 UTC

Attachments

System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:1278 normal SHIPPED_LIVE Red Hat Storage Server 3.0 bug fix and enhancement update 2014-09-22 23:26:55 UTC

Description Vijaykumar Koppad 2014-07-18 09:42:47 UTC
Description of problem: In a cascaded setup, after hardlink sync, the slave level 2 volume has sticky-bit files visible on the mount point.

This happened in a cascaded setup on the slave level 2 volume while syncing hardlinks. There are more files on the slave level 2 volume than on the master and the slave level 1 volume; this happened on the slave level 2 volume only.
File count on master: 17456
File count on slave: 17489

There was an error while calculating the md5sum:
Calculating slave checksum ...

Failed to get the checksum of the slave with the following error:
md5sum: /tmp/tmpZUlbzy/thread3/level01/level11/53c7ad33%%TI64COMAMS: No data available
/tmp/tmpZUlbzy/thread3/level01/level11/53c7ad33%%TI64COMAMS: short read
ftw (-p) returned -1 (Success), terminating

There are a few files with 2 entries in the directory, and we can also see sticky-bit files on the mount point.
# ls /mnt/slave/thread0/level02/level12/level22/level32/hardlink_to_files/ -l
total 8
---------T 1 root  root     0 Jul 17 18:08 53c7c386%%0OUTYNSNBL
-r-------- 2 60664  2735 1266 Jul 17 16:32 53c7c386%%5UI8FJ3P3V
---------T 1 root  root     0 Jul 17 18:08 53c7c386%%7323VONN1K
-rw--wxrwx 2 50486 51232 1461 Jul 17 16:41 53c7c386%%OZV5T9I51D
---------T 1 root  root     0 Jul 17 18:08 53c7c387%%1M171U4F6V
---------T 1 root  root     0 Jul 17 18:08 53c7c387%%2O0FVVBHUZ
--wx-wx--x 2 42173 37786 1222 Jul 17 16:32 53c7c387%%67QTB5HYS3
---xr-xrwx 2  7886 62050 1514 Jul 17 16:41 53c7c387%%7B9NWNYBGV
---xr-xrwx 2  7886 62050 1514 Jul 17 16:41 53c7c387%%7B9NWNYBGV
---------T 1 root  root     0 Jul 17 18:08 53c7c387%%9F3CMK6ZLX
---------T 1 root  root     0 Jul 17 18:08 53c7c387%%SM0CONAEGX

# ls /mnt/slave/thread0/level02/level12/level22/level32/hardlink_to_files/53c7c387%%7B9NWNYBGV -l
---------T 1 root root 0 Jul 17 18:08 /mnt/slave/thread0/level02/level12/level22/level32/hardlink_to_files/53c7c387%%7B9NWNYBGV

In the above paste, the file "53c7c387%%7B9NWNYBGV" has 2 entries, and there are also some files with the sticky bit set.
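As a quick triage aid (not part of the original report), the leaked files above look like DHT linkto files: zero-byte files whose only permission bit is the sticky bit, which ls shows as "---------T". A minimal sketch to flag them on a mount, assuming GNU find and a hypothetical mount path:

```shell
#!/bin/sh
# Sketch: flag suspected DHT linkto files leaked onto a geo-rep slave mount.
# A linkto file is zero-byte with exactly mode 1000 (sticky bit only),
# which ls renders as "---------T". None should be visible on a slave mount.
list_linkto_files() {
    find "$1" -type f -size 0 -perm 1000
}

# /mnt/slave is the hypothetical slave mount used in this report; adjust as needed.
[ -d /mnt/slave ] && list_linkto_files /mnt/slave || true
```

Any paths printed are candidates for the stray sticky-bit files described above; an empty result on a healthy slave is expected.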

On the intermediate master (slave level 1 volume), the active node that holds the sticky-bit file for 53c7c386%%0OUTYNSNBL has a changelog entry like this:
# grep -r "d90aff2a-d55f-454f-9794-df4eefd1b82d" *
1f8a8e6b046b00c682675ebf692f5968/.processed/CHANGELOG.1405600673:E d90aff2a-d55f-454f-9794-df4eefd1b82d MKNOD 33280 0 0 28571791-a541-4ab2-8e38-ca5924308b57%2F53c7c386%25%250OUTYNSNBL
1f8a8e6b046b00c682675ebf692f5968/.processed/CHANGELOG.1405600673:M d90aff2a-d55f-454f-9794-df4eefd1b82d NULL

This changelog entry should not be present on the node that holds the sticky-bit file for 53c7c386%%0OUTYNSNBL.

Version-Release number of selected component (if applicable): glusterfs-

How reproducible: Did not try to reproduce.

Steps to Reproduce:
1. Create a cascaded geo-rep setup between master, intermediate master (imaster), and slave.
2. Create some files on the master using the command "crefi -T 5 -n 5 --multi -b 10 -d 10 --random --min=1K --max=10K /mnt/master/"
3. After all the data has synced, create hardlinks to all the files using the command "crefi -T 2 -n 5 --multi -b 10 -d 10 --random --min=1K --max=10K --fop=hardlink /mnt/master/"
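After step 3, the mismatch reported above (17456 vs. 17489 files) can be checked with a simple count across the cascade. A minimal sketch, assuming the hypothetical mount paths /mnt/master, /mnt/imaster, and /mnt/slave; in a healthy setup all three counts match:

```shell
#!/bin/sh
# Sketch: compare regular-file counts across the cascade after sync settles.
# Mount paths are hypothetical; a higher count on the slave level 2 mount
# indicates leaked sticky-bit entries like those described in this bug.
count_files() {
    find "$1" -type f 2>/dev/null | wc -l
}

for m in /mnt/master /mnt/imaster /mnt/slave; do
    printf '%s: %s files\n' "$m" "$(count_files "$m")"
done
```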

Actual results: Sticky-bit files are created on the slave level 2 volume when hardlinks are created on the master.

Expected results: Sticky-bit files are not supposed to be created on any slave volume.

Additional info:

Comment 2 Vijaykumar Koppad 2014-07-18 13:06:41 UTC
Happens with renames too.

Comment 4 Vijaykumar Koppad 2014-08-12 10:51:05 UTC
Verified on the build glusterfs-

Comment 8 errata-xmlrpc 2014-09-22 19:44:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

