Bug 1139156
| Field | Value |
|---|---|
| Summary | dist-geo-rep: Few files are not synced to slave when files are being created during geo-rep start |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | geo-replication |
| Version | rhgs-3.0 |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | high |
| Reporter | M S Vishwanath Bhat <vbhat> |
| Assignee | Kotresh HR <khiremat> |
| QA Contact | M S Vishwanath Bhat <vbhat> |
| CC | aavati, amainkar, avishwan, csaba, khiremat, mzywusko, nlevinki, nsathyan, sharne, smanjara, ssamanta |
| Keywords | ZStream |
| Target Release | RHGS 3.0.3 |
| Hardware | x86_64 |
| OS | Linux |
| Fixed In Version | glusterfs-3.6.0.31-1.el6rhs |
| Doc Type | Bug Fix |
| Doc Text | Previously, geo-replication missed synchronizing a few files to the slave when I/O occurred during geo-replication start. With this fix, the slave no longer misses any files when I/O happens during geo-replication start. |
| Cloned To | 1139196 |
| Type | Bug |
| Last Closed | 2015-01-15 13:39:34 UTC |
| Bug Blocks | 1139196, 1159205, 1162694 |
Description (M S Vishwanath Bhat, 2014-09-08 09:04:49 UTC)
Patch sent upstream: http://review.gluster.org/#/c/8650/

Upstream patch (status: Merged): http://review.gluster.org/#/c/8650/
Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/35561/

Verified on glusterfs-3.6.0.33-1.el6rhs.x86_64. All data created during geo-rep start was synced to the slave successfully; no files or directories were skipped.

```
# gluster v geo master acdc::slave stop
Stopping geo-replication session between master & acdc::slave has been successful
# gluster v geo master acdc::slave start
Starting geo-replication session between master & acdc::slave has been successful
# for i in {1..100}; do touch f$i; done
# ls
dir1    dir2   dir30  dir41  dir52  dir63  dir74  dir85  dir96  f16   f27  f38  f49  f6   f70  f81  f92
dir10   dir20  dir31  dir42  dir53  dir64  dir75  dir86  dir97  f17   f28  f39  f5   f60  f71  f82  f93
dir100  dir21  dir32  dir43  dir54  dir65  dir76  dir87  dir98  f18   f29  f4   f50  f61  f72  f83  f94
dir11   dir22  dir33  dir44  dir55  dir66  dir77  dir88  dir99  f19   f3   f40  f51  f62  f73  f84  f95
dir12   dir23  dir34  dir45  dir56  dir67  dir78  dir89  f1     f2    f30  f41  f52  f63  f74  f85  f96
dir13   dir24  dir35  dir46  dir57  dir68  dir79  dir9   f10    f20   f31  f42  f53  f64  f75  f86  f97
dir14   dir25  dir36  dir47  dir58  dir69  dir8   dir90  f100   f21   f32  f43  f54  f65  f76  f87  f98
dir15   dir26  dir37  dir48  dir59  dir7   dir80  dir91  f11    f22   f33  f44  f55  f66  f77  f88  f99
dir16   dir27  dir38  dir49  dir6   dir70  dir81  dir92  f12    f23   f34  f45  f56  f67  f78  f89
dir17   dir28  dir39  dir5   dir60  dir71  dir82  dir93  f13    f24   f35  f46  f57  f68  f79  f9
dir18   dir29  dir4   dir50  dir61  dir72  dir83  dir94  f14    f25   f36  f47  f58  f69  f8   f90
dir19   dir3   dir40  dir51  dir62  dir73  dir84  dir95  f15    f26   f37  f48  f59  f7   f80  f91
[root@ccr master]# gluster v geo master acdc::slave status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE               STATUS             CHECKPOINT STATUS    CRAWL STATUS
-----------------------------------------------------------------------------------------------------------------------
abc            master        /bricks/brick0    nirvana::slave      Initializing...    N/A                  N/A
dfg            master        /bricks/brick1    acdc::slave         Initializing...    N/A                  N/A
hij            master        /bricks/brick3    rammstein::slave    Initializing...    N/A                  N/A
klm            master        /bricks/brick2    led::slave          Initializing...    N/A                  N/A

# ls -l /mnt/master | wc -l
201
# ls -l /mnt/slave | wc -l
201
```

arequal-checksum of master:

```
Entry counts
Regular files   : 200
Directories     : 111
Symbolic links  : 0
Other           : 0
Total           : 311

Metadata checksums
Regular files   : 3e9
Directories     : 24d74c
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 00
Directories     : 30313130312e00
Symbolic links  : 0
Other           : 0
Total           : 30313130312e00
```

arequal-checksum of the geo_rep_slave slave:

```
Entry counts
Regular files   : 200
Directories     : 111
Symbolic links  : 0
Other           : 0
Total           : 311

Metadata checksums
Regular files   : 3e9
Directories     : 24d74c
Symbolic links  : 3e9
Other           : 3e9

Checksums
Regular files   : 00
Directories     : 30313130312e00
Symbolic links  : 0
Other           : 0
Total           : 30313130312e00
```

Successfully synced all the files from master to the slave.

Please review and sign off the edited doc text.

Doc text looks fine to me.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0038.html
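For reference, the verification flow above can be scripted end to end. This is a minimal sketch, not the exact QE procedure: it assumes the master volume is named `master`, the session slave is `acdc::slave`, and the volumes are mounted at `/mnt/master` and `/mnt/slave`, as in the transcript above. The fixed sleep is a stand-in for waiting until the session leaves the `Initializing...` state.

```bash
#!/bin/bash
# Sketch of the verification flow in this report. Names and paths are taken
# from the transcript above; adjust for your setup.
set -e

MASTER_MNT=/mnt/master
SLAVE_MNT=/mnt/slave

# Restart the geo-rep session so that file creation overlaps with session start.
gluster volume geo-replication master acdc::slave stop
gluster volume geo-replication master acdc::slave start

# Create files on the master while the session is still initializing.
cd "$MASTER_MNT"
for i in {1..100}; do touch "f$i"; done

# Allow time for the crawl/changelog to propagate the entries. (A fixed sleep
# is a simplification; the report waited for the session to become active.)
sleep 600

# Compare entry counts on the two mounts, as done above with ls -l | wc -l.
master_count=$(ls -l "$MASTER_MNT" | wc -l)
slave_count=$(ls -l "$SLAVE_MNT" | wc -l)
echo "master: $master_count  slave: $slave_count"
[ "$master_count" -eq "$slave_count" ] && echo "entry counts match" || echo "MISMATCH"
```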
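The checksum comparison can likewise be automated. A small sketch, assuming the `arequal-checksum` tool used in the report is installed and takes the directory to checksum via `-p`, as in Gluster QE scripts; adjust the invocation if your build differs:

```bash
# Compare the full arequal output of the master and slave mounts; identical
# output means entry counts, metadata checksums, and data checksums all match.
diff <(arequal-checksum -p /mnt/master) <(arequal-checksum -p /mnt/slave) \
  && echo "master and slave match" \
  || echo "master and slave differ"
```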