Bug 812314 - Sticky bit is left when you do remove-brick while a geo-replication session is active between the master and slave
Summary: Sticky bit is left when you do remove-brick while geo-replication session is ...
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: shishir gowda
QA Contact: Vijaykumar Koppad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-04-13 11:03 UTC by Vijaykumar Koppad
Modified: 2015-12-01 16:45 UTC
4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-05-25 11:46:40 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments (Terms of Use)
cropped master log (25.62 KB, text/x-log)
2012-04-13 11:03 UTC, Vijaykumar Koppad
master-rebalance log file (432.62 KB, text/x-log)
2012-04-13 11:05 UTC, Vijaykumar Koppad

Description Vijaykumar Koppad 2012-04-13 11:03:07 UTC
Created attachment 577299 [details]
cropped master log

Description of problem:
While a geo-replication session is active between the master and slave, running remove-brick can leave the slave with only the sticky-bit placeholder for a file. On the master, that same file may hold real data or only the sticky-bit placeholder. The two sides end up inconsistent.


Version-Release number of selected component (if applicable): 3.3qa34 (after applying the patch http://review.gluster.com/#change,3144)

How reproducible: Not consistently reproducible.


Steps to Reproduce:
1. Start a geo-replication session between the master and slave.
2. Create a finite data set of large files (100 MB each) spread across many directories.
3. Add bricks to the master and run rebalance.
4. Remove the added bricks (or other bricks) while the session is active.
5. Run "ll *" on both the master and slave mount points.
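The steps above correspond roughly to the following GlusterFS CLI sequence. This is a sketch, not the exact commands from the report: the volume name, host names, brick paths, and mount points are all hypothetical, and the geo-replication slave URL syntax varies between releases. Running it requires a live GlusterFS cluster.

```shell
# Hypothetical names: volume "master", slave "slavehost:/data/slavevol",
# mount points /mnt/master and /mnt/slave.

# 1. Start geo-replication between the master volume and the slave.
gluster volume geo-replication master slavehost:/data/slavevol start

# 2. Create 100 MB files spread across several directories on the master mount.
for d in large1 large2 large3; do
    mkdir -p /mnt/master/$d
    for i in 0 1 2 3 4 5 6 7 8 9; do
        dd if=/dev/zero of=/mnt/master/$d/file$i bs=1M count=100
    done
done

# 3. Add a brick and rebalance so files migrate between bricks.
gluster volume add-brick master newhost:/bricks/b2
gluster volume rebalance master start

# 4. Remove the added brick while the geo-replication session is still active.
gluster volume remove-brick master newhost:/bricks/b2 start

# 5. Compare the listings on both mount points.
ls -l /mnt/master/* /mnt/slave/*
```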

Actual results: One of the files ends up with only the sticky-bit placeholder.


Expected results: All files should be synced properly.


Additional info: On the master mount point:

large7:
total 878980
---------T. 1 root root         0 Apr 13 00:07 file0
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file1
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file2
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file3
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file4
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file5
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file6
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file7
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file8
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file9

On the slave mount point:
large7:
total 76
---------T. 1 root root         0 Apr 13 00:07 file0
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file1
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file2
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file3
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file4
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file5
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file6
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file7
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file8
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file9
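The "---------T" entry in the listings above is the DHT link file left behind by rebalance: a zero-byte file whose only permission bit is the sticky bit. A minimal, self-contained sketch of how such leftovers could be spotted with find; the temporary directory here is a stand-in for the real slave mount point:

```shell
# Simulate the leftover DHT link file seen in this bug: a zero-byte
# file with only the sticky bit set (shown by ls as ---------T).
demo=$(mktemp -d)
touch "$demo/file0"
chmod 1000 "$demo/file0"   # sticky bit only, no rwx bits

# Zero-byte files with exactly mode 1000 are the stale link files;
# on a real system you would run this against the slave mount point.
leftovers=$(find "$demo" -type f -perm 1000 -size 0)
echo "$leftovers"
```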

Comment 1 Vijaykumar Koppad 2012-04-13 11:05:12 UTC
Created attachment 577300 [details]
master-rebalance log file

Comment 2 shishir gowda 2012-04-27 05:39:31 UTC
Can you check if this issue is still open?

Comment 3 shishir gowda 2012-05-07 06:16:12 UTC
This issue should be fixed as part of bug 812287. Please reopen the bug if you are able to reproduce it.

Comment 4 Vijaykumar Koppad 2012-05-25 11:46:40 UTC
Bug 821139 looks similar to this bug to me, but I am not able to reproduce this issue, so I am closing it now.

