Bug 812314

Summary: Sticky bit is left when you do remove-brick while a geo-replication session is active between the master and slave
Product: [Community] GlusterFS
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: Linux
Status: CLOSED WORKSFORME
Severity: high
Priority: medium
Reporter: Vijaykumar Koppad <vkoppad>
Assignee: shishir gowda <sgowda>
QA Contact: Vijaykumar Koppad <vkoppad>
CC: amarts, bbandari, gluster-bugs, nsathyan
Target Milestone: ---
Target Release: ---
Doc Type: Bug Fix
Story Points: ---
Last Closed: 2012-05-25 11:46:40 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Attachments:
  cropped master log (flags: none)
  master-rebalance log file (flags: none)

Description Vijaykumar Koppad 2012-04-13 11:03:07 UTC
Created attachment 577299 [details]
cropped master log.

Description of problem:
While a geo-replication session is active between the master and slave, doing a remove-brick can leave only the sticky-bit (link) file synced to the slave for a given file. On the master, that same file may hold real data or may itself be only a sticky-bit file. The result is inconsistent between master and slave.
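
Background on the "---------T" entries shown below: these are DHT link files. During rebalance and remove-brick, DHT creates a zero-byte file with only the sticky bit set (no permission bits) as a pointer, and records the subvolume holding the real data in the trusted.glusterfs.dht.linkto xattr on the brick. A quick way to tell a link file from real data is to check directly on a brick backend; the brick path below is just an example, not from this setup:

# brick path is an example; run these on the brick backend, not the client mount
stat -c '%s %A %n' /export/brick1/large7/file0
getfattr -n trusted.glusterfs.dht.linkto -e text /export/brick1/large7/file0

A link file shows size 0 with mode ---------T and carries the linkto xattr; a fully migrated file has its real size and no linkto xattr.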


Version-Release number of selected component (if applicable): 3.3qa34 (with this patch applied: http://review.gluster.com/#change,3144)

How reproducible: Not consistent.


Steps to Reproduce:
1. Start a geo-replication session between the master and slave.
2. Create a finite data set of large files (100MB each), distributed across many directories.
3. Add bricks to the master and run a rebalance.
4. Remove the added bricks, or other bricks.
5. Do ll * on both the master and slave mount points (example CLI invocations are sketched after this list).
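
For reference, the steps above map roughly to the gluster CLI as follows. This is only a sketch: the volume name, slave spec, brick paths, and mount points are made-up examples, not values from this bug.

# all names and paths below are examples
gluster volume geo-replication master slavehost:/data/remote_dir start
gluster volume add-brick master node3:/export/brick3
gluster volume rebalance master start
gluster volume rebalance master status
gluster volume remove-brick master node3:/export/brick3 start
gluster volume remove-brick master node3:/export/brick3 status
gluster volume remove-brick master node3:/export/brick3 commit
ls -l /mnt/master/*    # on a master client; ll is the usual alias for ls -l
ls -l /mnt/slave/*     # on a slave client

The start/status/commit form of remove-brick is the data-migrating variant introduced in the 3.3 series, which is what makes this race with geo-replication possible.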

Actual results: One of the files is left as only a sticky-bit file.


Expected results: All files should be synced properly.


Additional info: On the master mount point:

large7:
total 878980
---------T. 1 root root         0 Apr 13 00:07 file0
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file1
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file2
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file3
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file4
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file5
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file6
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file7
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file8
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file9

On the slave mount point:
large7:
total 76
---------T. 1 root root         0 Apr 13 00:07 file0
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file1
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file2
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file3
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file4
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file5
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file6
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file7
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file8
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file9

Comment 1 Vijaykumar Koppad 2012-04-13 11:05:12 UTC
Created attachment 577300 [details]
master-rebalance log file.

Comment 2 shishir gowda 2012-04-27 05:39:31 UTC
Can you check if this issue is still open?

Comment 3 shishir gowda 2012-05-07 06:16:12 UTC
This issue should be fixed as part of bug 812287. Please reopen the bug if you are able to reproduce it.

Comment 4 Vijaykumar Koppad 2012-05-25 11:46:40 UTC
Bug 821139 looks similar to this bug to me, but I am not able to reproduce it, so I am closing it now.