Bug 1122037 - [Dist-geo-rep] : In a cascaded setup, after hardlink sync, slave level 2 volume has sticky bit files found on mount-point.
Summary: [Dist-geo-rep] : In a cascaded setup, after hardlink sync, slave level 2 volume has sticky bit files found on mount-point.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Duplicates: 1121072
Depends On: 1121059
Blocks: 1121072
 
Reported: 2014-07-22 12:06 UTC by Kotresh HR
Modified: 2015-05-14 17:35 UTC (History)
CC: 10 users

Fixed In Version: glusterfs-3.7.0beta1
Doc Type: Bug Fix
Doc Text:
Clone Of: 1121059
Environment:
Last Closed: 2015-05-14 17:26:02 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Kotresh HR 2014-07-22 12:06:42 UTC
+++ This bug was initially created as a clone of Bug #1121059 +++

Description of problem:  In a cascaded setup, after hardlink sync, the slave level 2 volume has sticky bit files on the mount point.

This happened on the slave level 2 volume in a cascaded setup while syncing hardlinks. There are more files on the slave level 2 volume than on the master and the slave level 1 volume. This happened on the slave level 2 volume only.
===============================================================================
file count on master is 17456
file count on slave is 17489
===============================================================================

There was an error while calculating the md5sum:
===============================================================================
Calculating  slave checksum ...

Failed to get the checksum of slave with following error
md5sum: /tmp/tmpZUlbzy/thread3/level01/level11/53c7ad33%%TI64COMAMS: No data available
/tmp/tmpZUlbzy/thread3/level01/level11/53c7ad33%%TI64COMAMS: short read
ftw (-p) returned -1 (Success), terminating
===============================================================================

A few files have two entries in the directory, and sticky bit files are also visible on the mount point.
===============================================================================
# ls /mnt/slave/thread0/level02/level12/level22/level32/hardlink_to_files/ -l
total 8
---------T 1 root  root     0 Jul 17 18:08 53c7c386%%0OUTYNSNBL
-r-------- 2 60664  2735 1266 Jul 17 16:32 53c7c386%%5UI8FJ3P3V
---------T 1 root  root     0 Jul 17 18:08 53c7c386%%7323VONN1K
-rw--wxrwx 2 50486 51232 1461 Jul 17 16:41 53c7c386%%OZV5T9I51D
---------T 1 root  root     0 Jul 17 18:08 53c7c387%%1M171U4F6V
---------T 1 root  root     0 Jul 17 18:08 53c7c387%%2O0FVVBHUZ
--wx-wx--x 2 42173 37786 1222 Jul 17 16:32 53c7c387%%67QTB5HYS3
---xr-xrwx 2  7886 62050 1514 Jul 17 16:41 53c7c387%%7B9NWNYBGV
---xr-xrwx 2  7886 62050 1514 Jul 17 16:41 53c7c387%%7B9NWNYBGV
---------T 1 root  root     0 Jul 17 18:08 53c7c387%%9F3CMK6ZLX
---------T 1 root  root     0 Jul 17 18:08 53c7c387%%SM0CONAEGX

# ls /mnt/slave/thread0/level02/level12/level22/level32/hardlink_to_files/53c7c387%%7B9NWNYBGV -l
---------T 1 root root 0 Jul 17 18:08 /mnt/slave/thread0/level02/level12/level22/level32/hardlink_to_files/53c7c387%%7B9NWNYBGV
===============================================================================

In the listing above, the file "53c7c387%%7B9NWNYBGV" has two entries, and some files have the sticky bit set.


On the intermediate master (slave level 1 volume), the active node that holds the sticky bit file 53c7c386%%0OUTYNSNBL has a changelog entry like this:
=============================================================================
# grep -r "d90aff2a-d55f-454f-9794-df4eefd1b82d" *
1f8a8e6b046b00c682675ebf692f5968/.processed/CHANGELOG.1405600673:E d90aff2a-d55f-454f-9794-df4eefd1b82d MKNOD 33280 0 0 28571791-a541-4ab2-8e38-ca5924308b57%2F53c7c386%25%250OUTYNSNBL
1f8a8e6b046b00c682675ebf692f5968/.processed/CHANGELOG.1405600673:M d90aff2a-d55f-454f-9794-df4eefd1b82d NULL
==============================================================================

This changelog entry should not be present on the node that holds the sticky bit file for 53c7c386%%0OUTYNSNBL.
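A side note on the mode in the MKNOD record above: 33280 decodes to a regular file with only the sticky bit set, which matches the ---------T entries in the listing, i.e. DHT linkto files rather than real data. This can be verified with Python's stat module (illustrative check only):

```python
import stat

# Mode recorded in the changelog MKNOD entry on the intermediate master.
mode = 33280

# 33280 == 0o101000 == S_IFREG | S_ISVTX: a regular file whose only set
# bit besides the file type is the sticky bit -- a DHT linkto file.
assert mode == stat.S_IFREG | stat.S_ISVTX
print(oct(mode))            # 0o101000
print(stat.filemode(mode))  # ---------T, matching the ls output above
```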

Version-Release number of selected component (if applicable): glusterfs-3.6.0.24-1.el6rhs


How reproducible: Didn't try to reproduce. 


Steps to Reproduce:
1. Create a cascaded geo-rep setup between the master, the intermediate master (imaster), and the slave.
2. Create some files on the master using the command "crefi -T 5 -n 5 --multi -b 10 -d 10 --random --min=1K --max=10K /mnt/master/".
3. After all the data has synced, create hardlinks to all the files using the command "crefi -T 2 -n 5 --multi -b 10 -d 10 --random --min=1K --max=10K --fop=hardlink /mnt/master/".


Actual results: Sticky bit files are created on the slave level 2 volume when hardlinks are created on the master.


Expected results: Sticky bit files should not be created on any slave.
 

Additional info

Comment 1 Anand Avati 2014-07-22 12:08:18 UTC
REVIEW: http://review.gluster.org/8347 (features/changelog: Capture "correct" internal FOPs) posted (#2) for review on master by Kotresh HR (khiremat@redhat.com)


Comment 3 Anand Avati 2014-07-23 09:36:20 UTC
REVIEW: http://review.gluster.org/8347 (features/changelog: Capture "correct" internal FOPs) posted (#3) for review on master by Kotresh HR (khiremat@redhat.com)

Comment 4 Anand Avati 2014-07-23 23:15:27 UTC
COMMIT: http://review.gluster.org/8347 committed in master by Vijay Bellur (vbellur@redhat.com) 
------
commit 8a24b1487d440c23b00b56aba5b9bfb77af6ef03
Author: Venky Shankar <vshankar@redhat.com>
Date:   Fri Jul 18 15:36:42 2014 +0530

    features/changelog: Capture "correct" internal FOPs
    
    This patch fixes changelog capturing internal FOPs in a cascaded
    setup, where the intermediate master would record internal FOPs
    (generated by DHT on link()/rename()). This is due to I/O happening
    on the intermediate slave on geo-replication's auxiliary mount with
    client-pid -1. Currently, the internal FOP capturing logic depends
    on the client pid being non-negative and the presence of a special
    key in the dictionary. Due to this, internal FOPs on an intermediate
    master would be recorded in the changelog. Checking that the
    client-pid is non-negative was introduced to capture AFR self-heal
    traffic in the changelog, thereby breaking cascaded setups. By
    coincidence, the AFR self-heal daemon uses -1 as frame->root->pid,
    making it hard to differentiate between geo-rep's auxiliary mount
    and the self-heal daemon.
    
    Change-Id: Ib7bd71e80dd1856770391edb621ba9819cab7056
    BUG: 1122037
    Original-Author: Venky Shankar <vshankar@redhat.com>
    Signed-off-by: Kotresh H R <khiremat@redhat.com>
    Reviewed-on: http://review.gluster.org/8347
    Reviewed-by: Venky Shankar <vshankar@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
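The pre-fix versus post-fix decision described in the commit message can be sketched roughly as follows. This is an illustrative Python sketch only: the function names and the exact post-fix condition are assumptions, not GlusterFS's actual code.

```python
# Per the commit message, geo-rep's auxiliary mount runs with client-pid -1.
GEO_REP_AUX_PID = -1

def is_internal_fop_old(client_pid, has_internal_key):
    """Pre-fix logic: a FOP counts as internal (and is skipped by the
    changelog) only when the client pid is non-negative AND the special
    dictionary key is present."""
    return client_pid >= 0 and has_internal_key

def is_internal_fop_fixed(client_pid, has_internal_key):
    """Post-fix sketch: the special key alone marks the FOP as internal,
    so DHT-generated FOPs arriving over geo-rep's auxiliary mount
    (pid -1) are also skipped instead of being recorded on the
    intermediate master."""
    return has_internal_key

# Bug scenario: DHT creates a linkto file over geo-rep's aux mount.
# The old check misclassifies it as a normal FOP, so it gets recorded.
assert not is_internal_fop_old(GEO_REP_AUX_PID, True)   # recorded (bug)
assert is_internal_fop_fixed(GEO_REP_AUX_PID, True)     # skipped (fixed)
```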

Comment 5 Aravinda VK 2014-12-29 05:27:59 UTC
*** Bug 1121072 has been marked as a duplicate of this bug. ***

Comment 6 Niels de Vos 2015-05-14 17:26:02 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


