Bug 182638
| Summary: | Multi-replication nodes not replicating both ways | | |
| --- | --- | --- | --- |
| Product: | Red Hat Directory Server | Reporter: | Ben Le <ble> |
| Component: | Replication - General | Assignee: | Noriko Hosoi <nhosoi> |
| Status: | CLOSED WORKSFORME | QA Contact: | Chandrasekar Kannan <ckannan> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.1 | CC: | benl, nkinder, rmeggins |
| Target Milestone: | DS8.0 | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| URL: | tigerwoods.sfbay.redhat.com | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2007-09-23 23:22:14 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 152373, 240316 | | |
Description

Ben Le, 2006-02-23 19:53:49 UTC

Do you have any errors in the node B error log? What steps were followed in 1)? You should have configured a changelog on each server, configured each server as a multi-master, assigned each one a unique replica ID, created a replication consumer entry on each server, and created a replication agreement with the other master. For 2), you do not need to restart the servers. For 3), consumer? Both servers are masters, right? You should only have to initialize once.

Per today's bug council: To will reproduce this bug. Setting target tracking and target milestone to 7.2.

Ben, is this still an issue?

To, the customer is no longer asking about this issue. It would still be good to fix it in a future release.

I can reproduce the issue following the steps. At step 3, I issued "Initialize Consumer" on one of the two masters. Since then, replication only works from that master to the other, but not the other way around. I see the following error messages on the master where replication is not syncing:

```
[04/Jan/2007:13:21:45 -0800] NSMMReplicationPlugin - multimaster_be_state_change: replica dc=dsqa,dc=sjc2,dc=redhat,dc=com is coming online; enabling replication
[04/Jan/2007:13:21:45 -0800] NSMMReplicationPlugin - replica_reload_ruv: Warning: new data for replica dc=dsqa,dc=sjc2,dc=redhat,dc=com does not match the data in the changelog. Recreating the changelog file. This could affect replication with replica's consumers in which case the consumers should be reinitialized.
[04/Jan/2007:13:22:38 -0800] agmt="cn=repag1" (shadowfoot:9900) - Can't locate CSN 459d6fc1000000010000 in the changelog (DB rc=-30990). The consumer may need to be reinitialized.
```

More notes:

- Build used to reproduce: DS 7.1 SP3 on RHEL 4.
- Before step 2, I verified syncs were happening both ways and the db was in sync.
- Noriko thinks step 3 should still keep both masters in sync and should not require an "Initialize Consumer" from master B for replication to continue to work both ways.
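The multi-master setup described above (a changelog on each server, a replica entry with a unique replica ID, and an agreement pointing at the other master) roughly corresponds to config entries like the following. This is a sketch using DS 7.1-era attribute names; the directory paths, bind DN, host, and port are illustrative assumptions, not values taken from this bug:

```ldif
# Changelog (one per master)
dn: cn=changelog5,cn=config
objectClass: extensibleObject
cn: changelog5
nsslapd-changelogdir: /opt/redhat-ds/slapd-instance/changelogdb

# Replica entry -- nsDS5ReplicaId must be unique on each master
dn: cn=replica,cn="dc=dsqa,dc=sjc2,dc=redhat,dc=com",cn=mapping tree,cn=config
objectClass: nsDS5Replica
cn: replica
nsDS5ReplicaRoot: dc=dsqa,dc=sjc2,dc=redhat,dc=com
nsDS5ReplicaId: 1
nsDS5ReplicaType: 3
nsDS5Flags: 1
nsDS5ReplicaBindDN: cn=replication manager,cn=config

# Agreement pointing at the other master
dn: cn=repag1,cn=replica,cn="dc=dsqa,dc=sjc2,dc=redhat,dc=com",cn=mapping tree,cn=config
objectClass: nsDS5ReplicationAgreement
cn: repag1
nsDS5ReplicaHost: shadowfoot
nsDS5ReplicaPort: 9900
nsDS5ReplicaBindDN: cn=replication manager,cn=config
nsDS5ReplicaBindMethod: SIMPLE
nsDS5ReplicaRoot: dc=dsqa,dc=sjc2,dc=redhat,dc=com
```

The mirror-image entries (with a different nsDS5ReplicaId) would be created on the second master.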
To and Noriko ran some more experiments.

Steps to Reproduce:
1. Set up multi-master replication on both servers.
2. Restart both servers.
3. Run consumer initialization on A.
4. Update or delete an entry on node B.

Actual Results: The data change on node B does not appear on node A.

We removed the changelog on B; after that, operations made on B were replicated to A. As To noted in comment #5, this error is caused by an inconsistency between the changelog and the data in the main db. At first I thought that if B is reinitialized by A, the changelog on B should be cleaned up. But if there are changes recorded only on B, that information would be lost; to prevent this, the initialization does not touch the changelog. So there are two ways to resolve the inconsistency:

1. If you do not care about the changes on B, remove the changelog on B (see http://www.redhat.com/docs/manuals/dir-server/ag/7.1/replicat.html#1110322).
2. If the changes made on B need to be replicated to A, run consumer initialization on B as well.

DS7.2 is not a valid milestone anymore. Anything that's set to DS7.2 should be set to DS8.0. Will make further changes per the bug council of 07/24/2007, after this.
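As a toy illustration (not DS internals) of why the "Can't locate CSN" error appears: consumer initialization copies the data but leaves the changelog alone, and when the server later recreates its changelog because it no longer matches the data, the CSN that the peer would resume from is gone, so incremental replication from that server stalls:

```python
# Toy model of a replication supplier's changelog. Class and method names
# are invented for illustration; only the failure mode mirrors the bug.

class Supplier:
    def __init__(self):
        self.changelog = {}   # CSN -> change
        self.next_csn = 0

    def apply(self, change):
        """Apply a local change and record it in the changelog."""
        csn = self.next_csn
        self.next_csn += 1
        self.changelog[csn] = change
        return csn

    def recreate_changelog(self):
        # Analogue of replica_reload_ruv recreating the changelog file
        # when it no longer matches the data: old entries are discarded.
        self.changelog = {}

    def updates_since(self, last_csn):
        """Incremental protocol: replay everything after the peer's last
        known CSN. Fails if that CSN is no longer in the changelog."""
        if last_csn is not None and last_csn not in self.changelog:
            raise LookupError(f"Can't locate CSN {last_csn} in the changelog")
        return [c for s, c in sorted(self.changelog.items())
                if last_csn is None or s > last_csn]

b = Supplier()
peer_last_csn = b.apply("add uid=jdoe")   # change A has already seen
b.recreate_changelog()                    # triggered by the RUV mismatch
b.apply("delete uid=jdoe")                # change made on B afterwards

try:
    b.updates_since(peer_last_csn)
except LookupError as e:
    print(e)   # replication from B to A stalls here
```

Either remedy above works in this model: wiping the changelog and the peer's resume point together (resolution 1) or reinitializing the peer so it no longer resumes from a vanished CSN (resolution 2).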