Bug 624442 - MMR: duplicate replica ID
Status: CLOSED CURRENTRELEASE
Product: 389
Classification: Community
Component: Directory Server
Version: 1.1.2
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Noriko Hosoi
QA Contact: Viktor Ashirov
Blocks: 639035
Reported: 2010-08-16 09:39 EDT by reinhard nappert
Modified: 2015-12-07 11:31 EST (History)
CC: 5 users

Doc Type: Bug Fix
Last Closed: 2015-12-07 11:31:58 EST

Attachments
git patch file (master) (7.06 KB, patch)
2011-01-13 17:41 EST, Noriko Hosoi
nkinder: review+

Description reinhard nappert 2010-08-16 09:39:35 EDT
Let's say I have just two multi-masters, A <--> B. I configure the replica and agreement on A first and assign replica ID 1, then do the same on B with ID 2. Everything is fine. I then disable replication on both boxes and set the same thing up again, but this time I start with B and assign it ID 1; A gets ID 2. Now replication fails with the message: "Unable to acquire replica: error: duplicate replica ID detected"

I am pretty sure that it has to do with the RUV entry "nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff,dc=your,dc=suffix", because it still shows:

dn: nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff, dc=your,dc=suffix
 objectClass: top
 objectClass: nsTombstone
 objectClass: extensibleobject
 nsds50ruv: {replicageneration} 4c6445e4000000010000
 nsds50ruv: {replica 1 ldap://A:389}
 nsds50ruv: {replica 2 ldap://B:389}
 nsruvReplicaLastModified: {replica 1 ldap://A:389} 00000000
 nsruvReplicaLastModified: {replica 2 ldap://B:389} 00000000

My replica configuration objects use the correct IDs (1 for B and 2 for A).
All this said, I believe the server should internally delete the RUV entry once the replica configuration object is deleted.
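The mismatch the reporter describes can be seen by comparing the replica IDs recorded in the nsds50ruv values against the IDs in the replica configuration objects. As a minimal sketch (the helper `ruv_replica_ids` is hypothetical, not part of 389-ds), this extracts the IDs from RUV values like those quoted above:

```python
import re

def ruv_replica_ids(nsds50ruv_values):
    """Extract replica IDs from nsds50ruv attribute values.

    Values of the form '{replica <rid> ldap://host:port}' yield <rid>;
    the '{replicageneration} ...' value is skipped.
    """
    rids = []
    for value in nsds50ruv_values:
        m = re.match(r"\{replica (\d+) ldap://[^}]+\}", value)
        if m:
            rids.append(int(m.group(1)))
    return rids

# The RUV tombstone values from the report above:
ruv = [
    "{replicageneration} 4c6445e4000000010000",
    "{replica 1 ldap://A:389}",
    "{replica 2 ldap://B:389}",
]

# After the reconfiguration, A's replica object uses ID 2, but the stale
# RUV element for A still says 1 -- the same ID now assigned to B.
print(ruv_replica_ids(ruv))  # [1, 2]
```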
Comment 3 Noriko Hosoi 2011-01-12 17:17:14 EST
Thanks for the bug report.  I could reproduce the problem.

[12/Jan/2011:14:13:36 -0800] NSMMReplicationPlugin - agmt="cn=agmt1" (kiki:10390): Unable to aquire replica: the replica has the same Replica ID as this one. Replication is aborting.
[12/Jan/2011:14:13:36 -0800] NSMMReplicationPlugin - agmt="cn=agmt1" (kiki:10390): Incremental update failed and requires administrator action
Comment 4 Noriko Hosoi 2011-01-13 17:41:55 EST
Created attachment 473440 [details]
git patch file (master)

Description: Each replica has an RUV tombstone entry in the
backend db, which keeps nsds50ruv attribute values as follows:
nsds50ruv: {replicageneration} <replica_generation_csn>
nsds50ruv: {replica <rid> ldap://<host>:<port>} <last_modified>
...
When the replica is deleted, the RUV tombstone entry remains
in the db.  If the replica is then added back with a different
replica id <rid-2>, the original nsds50ruv value {replica <rid>
ldap://<host>:<port>} was not updated.  This caused a problem
if the peer replica server happened to be assigned the same
replica id <rid> that this server originally had.

This patch compares the replica id <rid> in the RUV tombstone
entry with the new id <rid-2>.  If they don't match, the RUV
tombstone entry is recreated.
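The comparison the patch performs (in C, in repl5_replica.c) can be sketched in Python as follows; `rid_changed` is a hypothetical stand-in for the real check, not the actual 389-ds code:

```python
import re

def rid_changed(ruv_values, local_url, new_rid):
    """Return True when the RUV element recorded for this server's own URL
    carries a replica ID different from the newly configured one, i.e. the
    tombstone is stale and should be recreated (as the patch does)."""
    for value in ruv_values:
        m = re.match(r"\{replica (\d+) (ldap://[^}]+)\}", value)
        if m and m.group(2) == local_url:
            return int(m.group(1)) != new_rid
    return False  # no element for this URL -> nothing stale to fix

# A's stale RUV after the replica was re-added with ID 2:
ruv = ["{replica 1 ldap://A:389}", "{replica 2 ldap://B:389}"]
print(rid_changed(ruv, "ldap://A:389", 2))  # True -> recreate the tombstone
```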
Comment 5 Noriko Hosoi 2011-01-13 18:05:42 EST
Reviewed by Nathan (Thanks!!!)

Pushed to master.

$ git merge work
Updating 7dfe817..d05faee
Fast-forward
 ldap/servers/plugins/replication/repl5_replica.c |   46 ++++++++++++++++++---
 ldap/servers/plugins/replication/repl5_ruv.c     |    4 +-
 2 files changed, 42 insertions(+), 8 deletions(-)

$ git push
Counting objects: 15, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (8/8), done.
Writing objects: 100% (8/8), 1.60 KiB, done.
Total 8 (delta 6), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
   7dfe817..d05faee  master -> master
Comment 6 Noriko Hosoi 2011-01-13 18:10:28 EST
Steps to verify:
1. Set up 2-way MMR: Master 1 with Replica ID 1
                     Master 2 with Replica ID 2
2. Disable the replica on both Master 1 and Master 2
3. Set up MMR again: Master 1 with Replica ID 2
                     Master 2 with Replica ID 1
4. Create an agreement on Master 1 and 2
5. Initialize the consumer from Master 1

If initializing the consumer succeeds, the bug is verified.
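Why swapping the IDs exercises the bug can be modeled in a few lines. This is a simplified sketch of the consumer-side rejection, not the actual replication protocol code; `duplicate_rid` and the sample RUVs are illustrative assumptions:

```python
import re

def duplicate_rid(supplier_ruv, consumer_rid, consumer_url):
    """A consumer refuses to be acquired when the supplier's RUV carries the
    consumer's own replica ID under some other server's URL."""
    for value in supplier_ruv:
        m = re.match(r"\{replica (\d+) (ldap://[^}]+)\}", value)
        if m and int(m.group(1)) == consumer_rid and m.group(2) != consumer_url:
            return True
    return False

stale = ["{replica 1 ldap://A:389}", "{replica 2 ldap://B:389}"]  # before the fix
fresh = ["{replica 2 ldap://A:389}"]                              # RUV recreated

# Master 2 now has replica ID 1, but A's stale RUV still claims ID 1 for A:
print(duplicate_rid(stale, 1, "ldap://B:389"))  # True -> "duplicate replica ID"
print(duplicate_rid(fresh, 1, "ldap://B:389"))  # False -> initialization succeeds
```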
Comment 7 Amita Sharma 2011-05-10 03:56:23 EDT
Test Steps
===========
1. I set up 2-way MMR between M1 and M2 with:
M1 - Replica ID 11
M2 - Replica ID 12

2. Then I deleted the replication suffixes from both M1 and M2.

3. Recreated the suffixes and assigned the Replica IDs as:
M1 - Replica ID 12
M2 - Replica ID 11

4. Replication is happening properly without any error.

Hence, marking the bug as VERIFIED.
Comment 9 Marc Sauton 2015-01-21 19:12:07 EST
Should this bugzilla 624442 be closed?
The patch in https://bugzilla.redhat.com/attachment.cgi?id=473440
has been in 389-ds-base for some time; it is certainly in 389-ds-base-1.2.11.15-48.
The salesforce case number 01336038 is about RHDS 8.2.
