Bug 1031687 - Dist-geo-rep: while doing the first xsync crawl, disconnection with the slave causes geo-rep to re-crawl the whole file system and generate XSYNC-CHANGELOGs again.
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 2.1.2
Assigned To: Kotresh HR
QA Contact: Vijaykumar Koppad
Keywords: ZStream
Duplicates: 1034238
Depends On:
Blocks:
Reported: 2013-11-18 09:42 EST by Vijaykumar Koppad
Modified: 2015-05-15 14:35 EDT
CC List: 12 users

See Also:
Fixed In Version: glusterfs-3.4.0.50rhs
Doc Type: Bug Fix
Doc Text:
Previously, when the first xsync crawl was in progress, disconnection with the slave volume caused Geo-replication to re-crawl the entire file system and generate XSYNC-CHANGELOGS. With this update, xsync skips the directories which are already synced to the slave volume.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-02-25 03:04:24 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Flags: khiremat: needinfo+


Attachments: None
Description Vijaykumar Koppad 2013-11-18 09:42:18 EST
Description of problem: While doing the first xsync crawl, disconnection with the slave causes geo-rep to re-crawl the whole file system and generate XSYNC-CHANGELOGs again.


Version-Release number of selected component (if applicable): glusterfs-3.4.0.44rhs

How reproducible: Happens every time.


Steps to Reproduce:
1. Create a geo-rep relationship between the master and slave volumes.
2. Create about 20 million files on the master.
3. Start the geo-rep session (see the command sketch after this list).
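
A rough sketch of the commands behind steps 1-3 (volume, host, and mount names are placeholders, and passwordless root SSH from a master node to the slave node is assumed to already be set up):

  # 1. Create the geo-rep relationship (placeholder names throughout)
  gluster volume geo-replication mastervol slavehost::slavevol create push-pem
  # 2. Populate the master volume from a client mount
  mount -t glusterfs masterhost:/mastervol /mnt/mastervol
  for i in $(seq 1 20000000); do echo data > /mnt/mastervol/file.$i; done
  # 3. Start the geo-rep session
  gluster volume geo-replication mastervol slavehost::slavevol start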

Actual results: Some of the sessions experience disconnections with the slave, and the first xsync crawl then does a full crawl again, generating the XSYNC-CHANGELOG with entries that have already been crawled.


Expected results: It should not re-crawl everything; it should resume from where the previous crawl left off (see the sketch below).
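
As a toy illustration of that resume behaviour (this is not the actual gsyncd xsync code; the mount point and state-file path below are hypothetical), a crawl can checkpoint each fully processed directory and skip it on the next run:

  #!/bin/bash
  # Toy sketch: record each fully crawled directory in a state file so that a
  # restart after a slave disconnect skips what was already processed instead
  # of re-crawling the whole file system.
  MASTER_MOUNT=${1:-/mnt/mastervol}            # hypothetical mount point
  STATE_FILE=${2:-/var/tmp/xsync-done.list}    # hypothetical checkpoint file
  touch "$STATE_FILE"
  find "$MASTER_MOUNT" -type d | while read -r dir; do
      grep -qxF "$dir" "$STATE_FILE" && continue    # already crawled earlier
      # ... generate changelog entries / sync "$dir" to the slave here ...
      echo "$dir" >> "$STATE_FILE"                  # checkpoint after success
  done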


Additional info:
Comment 4 Kotresh HR 2013-12-11 03:17:56 EST
*** Bug 1034238 has been marked as a duplicate of this bug. ***
Comment 8 Kotresh HR 2014-01-02 05:30:43 EST
Added Doc Text
Comment 9 Vijaykumar Koppad 2014-01-03 08:36:11 EST
Verified on the build glusterfs-3.4.0.53rhs-1.

Steps used to verify.

1. Create a geo-rep relationship between the master and slave volumes.
2. Create 500K files on the master.
3. Start the geo-rep session between the master and slave.
4. Run the following on one of the active master nodes to repeatedly kill the geo-rep ssh connections, simulating slave disconnections:
  while : ; do ps ax | grep "ssh " | awk '{print $1}' | xargs kill ; sleep 100 ; ps ax | grep "ssh " | awk '{print $1}' | xargs kill ; sleep 1000; done

5. Wait for the syncing to complete.
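
Sync progress can be checked from a master node, for example (placeholder names as in the earlier sketch):

  gluster volume geo-replication mastervol slavehost::slavevol status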
Comment 10 Pavithra 2014-01-08 05:03:49 EST
Can you please verify the doc text for technical accuracy?
Comment 11 Kotresh HR 2014-01-08 05:58:17 EST
Doc text looks fine.
Comment 13 errata-xmlrpc 2014-02-25 03:04:24 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
