Bug 812314 - Sticky bit is left when you do remove-brick while a geo-replication session is active between the master and slave
Status: CLOSED WORKSFORME
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64 Linux
Priority: medium
Severity: high
Assigned To: shishir gowda
Reported: 2012-04-13 07:03 EDT by Vijaykumar Koppad
Modified: 2015-12-01 11:45 EST
CC List: 4 users

Doc Type: Bug Fix
Last Closed: 2012-05-25 07:46:40 EDT
Type: Bug

Attachments
cropped master log (25.62 KB, text/x-log)
2012-04-13 07:03 EDT, Vijaykumar Koppad
master-rebalance log file (432.62 KB, text/x-log)
2012-04-13 07:05 EDT, Vijaykumar Koppad

Description Vijaykumar Koppad 2012-04-13 07:03:07 EDT
Created attachment 577299 [details]
cropped master log

Description of problem:
While a geo-replication session is active between the master and the slave, if you do a remove-brick, only the sticky bit is synced to the slave for some files. On the master, that particular file might hold the actual data or might itself be only a sticky-bit file. This is inconsistent.


Version-Release number of selected component (if applicable): 3.3qa34 (with the patch http://review.gluster.com/#change,3144 applied)

How reproducible: Not consistent.


Steps to Reproduce:
1. Start a geo-replication session between the master and the slave.
2. Create a finite amount of data as large files (100 MB each) distributed across many directories.
3. Add bricks to the master and run a rebalance.
4. Remove the added bricks, or other bricks.
5. Run ll * on both the master and slave mount points (a command sketch follows below).
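For reference, a rough command sketch of the steps above. The volume name (master-vol), the slave spec (slave-host::slave-vol), the brick path and the mount points are placeholders, and the exact geo-replication and remove-brick syntax may differ slightly on 3.3qa builds:

# 1. start the geo-replication session (placeholder names)
gluster volume geo-replication master-vol slave-host::slave-vol start

# 2. create ~100 MB files spread across directories on the master mount
mkdir -p /mnt/master/large7
for i in $(seq 0 9); do
    dd if=/dev/urandom of=/mnt/master/large7/file$i bs=1M count=100
done

# 3. add a brick to the master volume and rebalance
gluster volume add-brick master-vol node3:/bricks/brick3
gluster volume rebalance master-vol start

# 4. remove the added brick (or another brick)
gluster volume remove-brick master-vol node3:/bricks/brick3 start

# 5. compare the listings on both mount points
ls -l /mnt/master/large7 /mnt/slave/large7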

  
Actual results: One of the files ends up on the slave as only a sticky-bit file.


Expected results: All files should be synced properly.


Additional info: On the master mount point:

large7:
total 878980
---------T. 1 root root         0 Apr 13 00:07 file0
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file1
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file2
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file3
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file4
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file5
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file6
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file7
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file8
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file9

On the slave mount point:
large7:
total 76
---------T. 1 root root         0 Apr 13 00:07 file0
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file1
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file2
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file3
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file4
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file5
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file6
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file7
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file8
-rw-r--r--. 1 root root 100000000 Apr 12 21:48 file9
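
As an illustrative check (not from the original report), stray entries like file0 above (mode ---------T, zero bytes) can be listed on the slave mount with a find such as the following; /mnt/slave is a placeholder path:

# list zero-byte files that have the sticky bit set (DHT link-file pattern)
find /mnt/slave -type f -perm -1000 -size 0c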
Comment 1 Vijaykumar Koppad 2012-04-13 07:05:12 EDT
Created attachment 577300 [details]
master-rebalance log file
Comment 2 shishir gowda 2012-04-27 01:39:31 EDT
Can you check if this issue is still open?
Comment 3 shishir gowda 2012-05-07 02:16:12 EDT
This issue should be fixed as part of bug 812287. Please reopen the bug if you are able to reproduce it.
Comment 4 Vijaykumar Koppad 2012-05-25 07:46:40 EDT
Bug 821139 looks similar to this bug to me, but I am not able to reproduce this. I am closing it now.
