Bug 984942 - Dist-geo-rep : rebalance start on master volume triggers geo-rep and consequently deletes few files from slave.
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Assigned To: Venky Shankar
QA Contact: Vijaykumar Koppad
Keywords: TestBlocker
Depends On:
Blocks: 985236 989532
Reported: 2013-07-16 08:34 EDT by Vijaykumar Koppad
Modified: 2014-08-24 20:50 EDT
CC: 7 users
Fixed In Version: glusterfs-3.4.0.12rhs.beta6-1
Doc Type: Bug Fix
Last Closed: 2013-09-23 18:29:51 EDT
Type: Bug
Attachments: None

Description Vijaykumar Koppad 2013-07-16 08:34:09 EDT
Description of problem: Running add-brick followed by rebalance start on the master volume triggers geo-rep syncing, which results in a few files being deleted and re-created on the slave (i.e., the number of files on the slave goes up and down). Geo-rep also keeps logging "failed to sync" messages in the geo-rep log file, and the number of files in the .processing directory stays constant.

Version-Release number of selected component (if applicable): 3.4.0.12rhs.beta4-1.el6rhs.x86_64


How reproducible: Haven't tried to reproduce it. 


Steps to Reproduce:
1. Create and start a geo-rep session between the master (DIST_REP) volume and the slave.
2. Add a brick to the master and start rebalance.
3. Check the number of files on the slave and the geo-rep logs (see the command sketch below).
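
A minimal command sketch of these steps, with hypothetical host names (masterhost1, masterhost2, slavehost), volume names (mastervol, slavevol), and brick paths; the create push-pem form is the distributed geo-rep syntax introduced in RHS 2.1:

# 1. Create and start the geo-rep session between master and slave volumes
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start

# 2. Add a replica pair of bricks to the DIST_REP master, then start rebalance
gluster volume add-brick mastervol masterhost1:/bricks/brick3 masterhost2:/bricks/brick3
gluster volume rebalance mastervol start

# 3. Compare file counts on master and slave mounts, and watch the geo-rep log
find /mnt/master -type f | wc -l
find /mnt/slave -type f | wc -l
tail -f /var/log/glusterfs/geo-replication/mastervol/*.log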

Actual results: Starting rebalance triggers geo-rep syncing and deletes a few files from the slave.


Expected results: Rebalance should not trigger geo-rep syncing.


Additional info:
Comment 4 Vijaykumar Koppad 2013-07-24 04:51:45 EDT
Verified in glusterfs-3.4.0.12rhs.beta6-1.
Comment 5 Scott Haines 2013-09-23 18:29:51 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
