Bug 1179701
Summary: | dist-geo-rep: Geo-rep skipped some files after replacing a node with the same hostname and IP | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | shilpa <smanjara> |
Component: | geo-replication | Assignee: | Aravinda VK <avishwan> |
Status: | CLOSED ERRATA | QA Contact: | Rahul Hinduja <rhinduja> |
Severity: | unspecified | Docs Contact: | |
Priority: | high | ||
Version: | rhgs-3.0 | CC: | aavati, annair, asriram, avishwan, bmohanra, csaba, nlevinki, nsathyan, rcyriac, rhinduja, sauchter |
Target Milestone: | --- | ||
Target Release: | RHGS 3.1.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | consistency | ||
Fixed In Version: | glusterfs-3.7.0-2.el6rhs | Doc Type: | Bug Fix |
Doc Text: | Previously, when a new node was added to a Red Hat Gluster Storage cluster, historical changelogs were not available on that node, and due to an issue in comparing xtimes, the hybrid crawl missed a few files during sync. With this fix, the xtime comparison logic used by the geo-replication hybrid crawl is corrected and no files are missed when syncing to the slave. An illustrative sketch of this comparison appears just after this summary table. | |
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2015-07-29 04:37:46 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1179709, 1202842, 1223636 |
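
The Doc Text above attributes the missed files to the xtime comparison used by the hybrid (xsync) crawl when historical changelogs are unavailable on a newly added node. The following is a minimal illustrative sketch of that kind of comparison, not the actual gsyncd code; the function name and the tuple layout shown here are assumptions made for illustration only.

```python
# Illustrative sketch (not gsyncd source): the hybrid crawl decides per entry
# whether it still needs to be synced by comparing the master's xtime against
# the xtime recorded for the slave. xtimes are modeled here as
# (seconds, nanoseconds) tuples; the helper name 'needs_sync' is hypothetical.

def needs_sync(master_xtime, slave_xtime):
    """Return True when the entry (and, for directories, its subtree)
    must be crawled and synced to the slave.

    Sync when the slave has no xtime yet, or the master's xtime is newer.
    A comparison that skips entries too aggressively at this point is the
    kind of defect this bug describes (files missed by the hybrid crawl).
    """
    if slave_xtime is None:
        return True
    return master_xtime > slave_xtime


# Example: the master was modified after the last successful sync.
print(needs_sync((1420622841, 120), (1420622800, 0)))  # True: must sync
print(needs_sync((1420622800, 0), (1420622800, 0)))    # False: already in sync
```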
Description
shilpa
2015-01-07 11:27:21 UTC
Hi Aravinda,

Can you please review the edited doc text and sign off?

(In reply to Pavithra from comment #2)
> Hi Aravinda,
>
> Can you please review the edited doc text and sign off?

The doc text looks good to me. Made a minor edit.

Verified with build: glusterfs-3.7.1-10.el6rhs.x86_64

The node was re-installed and came back up with the same IP. Followed the steps mentioned in
http://documentation-devel.engineering.redhat.com/site/documentation/en-US/Red_Hat_Storage/3/html-single/Administration_Guide/index.html#Replacing_a_Host_Machine_with_the_Same_Hostname

Once geo-replication was started, it performed the hybrid crawl and synced the data to the slave.

Master:
=======

    [root@wingo ~]# find /mnt/6m | wc -l
    7565
    [root@wingo ~]#
    [root@wingo scripts]# arequal-checksum -p /mnt/6m

    Entry counts
    Regular files   : 4723
    Directories     : 871
    Symbolic links  : 1971
    Other           : 0
    Total           : 7565

    Metadata checksums
    Regular files   : 47a9e5
    Directories     : 24d481
    Symbolic links  : 5a815a
    Other           : 3e9

    Checksums
    Regular files   : 2bc0ad4daf6f43f398647ccb254094b5
    Directories     : 5f77734b7d784455
    Symbolic links  : 7a33023b4e214744
    Other           : 0
    Total           : 96e0a0f6b976d457
    [root@wingo scripts]#

Slave:
======

    [root@wingo ~]# find /mnt/6s | wc -l
    7565
    [root@wingo ~]#
    [root@wingo scripts]# arequal-checksum -p /mnt/6s

    Entry counts
    Regular files   : 4723
    Directories     : 871
    Symbolic links  : 1971
    Other           : 0
    Total           : 7565

    Metadata checksums
    Regular files   : 47a9e5
    Directories     : 24d481
    Symbolic links  : 5a815a
    Other           : 3e9

    Checksums
    Regular files   : 2bc0ad4daf6f43f398647ccb254094b5
    Directories     : 5f77734b7d784455
    Symbolic links   : 7a33023b4e214744
    Other           : 0
    Total           : 96e0a0f6b976d457
    [root@wingo scripts]#

The node that was re-installed was ACTIVE before re-installation and became ACTIVE again afterwards (without use-meta-volume). Data was synced to the replica brick using "heal full" before the geo-rep session was created.

A rough script for the entry-count part of this check appears after the errata note at the end of this report.

Hi Aravinda,

Please review the doc-text and sign off if this looks OK. Changing the doc text flag to +.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
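
The verification above compares the master and slave mounts with `find <mount> | wc -l` and arequal-checksum. Below is a rough, hypothetical helper (not part of RHGS tooling) that mirrors only the entry-count part of that check by walking both mounts and comparing the totals; it does not replace arequal-checksum, which also compares metadata and content checksums.

```python
#!/usr/bin/env python
# Hypothetical helper mirroring the 'find <mount> | wc -l' comparison from the
# verification steps above. It counts every directory, file and symlink under
# the master and slave mounts and reports whether the totals match.
import os
import sys


def count_entries(mount):
    """Count the mount point itself plus every entry below it,
    roughly matching what 'find <mount> | wc -l' would report."""
    total = 1  # the mount point itself
    for _root, dirs, files in os.walk(mount):
        total += len(dirs) + len(files)
    return total


if __name__ == "__main__":
    master, slave = sys.argv[1], sys.argv[2]   # e.g. /mnt/6m /mnt/6s
    m_count = count_entries(master)
    s_count = count_entries(slave)
    print("master (%s): %d entries" % (master, m_count))
    print("slave  (%s): %d entries" % (slave, s_count))
    sys.exit(0 if m_count == s_count else 1)
```

For example, `python compare_counts.py /mnt/6m /mnt/6s` (script name is hypothetical) would exit 0 when both mounts report 7565 entries, as in the run above.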