Description of problem:
Starting a rebalance after add-brick triggers geo-replication syncing, which deletes and recreates some files on the slave (i.e. the file count on the slave goes up and down). Geo-replication also repeatedly logs "failed to sync" messages in the geo-rep log file, and the number of files in the .processing directory stays constant.

Version-Release number of selected component (if applicable):
3.4.0.12rhs.beta4-1.el6rhs.x86_64

How reproducible:
Haven't tried to reproduce it.

Steps to Reproduce:
1. Create and start a geo-replication session between the master (DIST_REP) and the slave.
2. Add a brick and start a rebalance on the master.
3. Check the number of files on the slave and the geo-replication logs.

Actual results:
Starting the rebalance triggers geo-replication syncing, which deletes some files from the slave.

Expected results:
Rebalance should not trigger geo-replication syncing.

Additional info:
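The reproduction steps above can be sketched with the gluster CLI. The volume names, hostnames, brick paths, and mount point below are placeholders for illustration, not taken from this report, and the sketch assumes passwordless SSH between master and slave is already set up:

```shell
# 1. Create and start a geo-rep session from master volume to slave volume
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start

# 2. Add a brick pair to the distributed-replicated master, then rebalance
gluster volume add-brick mastervol server3:/bricks/b3 server4:/bricks/b4
gluster volume rebalance mastervol start

# 3. Watch the slave's file count fluctuate and check the geo-rep logs
ssh slavehost 'find /mnt/slavevol -type f | wc -l'
tail -f /var/log/glusterfs/geo-replication/mastervol/*.log
```

On an affected build, step 3 should show the slave's file count dropping and rising during the rebalance, alongside "failed to sync" entries in the log.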
https://code.engineering.redhat.com/gerrit/#/c/10516/
Verified in glusterfs-3.4.0.12rhs.beta6-1.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1262.html