Bug 1274430 - [RFE] Handling replication conflict entries
Status: ASSIGNED
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.0
Hardware: Unspecified / OS: Unspecified
Priority: high / Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Ludwig
QA Contact: Viktor Ashirov
URL: http://www.port389.org/docs/389ds/des...
Keywords: FutureFeature
Duplicates: 747701 1213787 1395848 1437887
Depends On:
Blocks: 1113520 1399979 1467835 1472344 695797 756082
 
Reported: 2015-10-22 13:21 EDT by Noriko Hosoi
Modified: 2017-08-14 09:45 EDT
CC List: 10 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Noriko Hosoi 2015-10-22 13:21:58 EDT
This bug is created as a clone of upstream ticket:
https://fedorahosted.org/389/ticket/47784

The test case uses two masters, M1 and M2: provision entries on M1 and wait until they are replicated to M2.
Then disable the replication agreements (RA) M1<->M2.
On M1, delete an entry (new_account1); on M2, add a child entry under that same entry.
Finally, do some MODs on test entries on M1 (new_account19) and M2 (new_account18).
Re-enable replication.
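
For reference, a minimal reproduction sketch using python-ldap. The hostnames, credentials, agreement DNs (AGMT_M1/AGMT_M2), the modified "description" attribute, and the use of nsds5ReplicaEnabled to pause the agreements are assumptions about the test setup, not details taken from the original report:

    import ldap
    import ldap.modlist

    BASE = "cn=staged user,dc=example,dc=com"
    # Placeholder agreement DNs; adjust to the actual agreement entries.
    AGMT_M1 = "cn=to-m2,cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config"
    AGMT_M2 = "cn=to-m1,cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config"

    def connect(uri):
        conn = ldap.initialize(uri)
        conn.simple_bind_s("cn=Directory Manager", "secret")
        return conn

    m1 = connect("ldap://m1.example.com:389")
    m2 = connect("ldap://m2.example.com:389")

    # Pause replication in both directions.
    for conn, agmt in ((m1, AGMT_M1), (m2, AGMT_M2)):
        conn.modify_s(agmt, [(ldap.MOD_REPLACE, "nsds5ReplicaEnabled", b"off")])

    # Conflicting operations while the masters are isolated:
    m1.delete_s("cn=new_account1," + BASE)            # DEL the parent on M1
    m2.add_s("cn=child,cn=new_account1," + BASE,      # ADD a child on M2
             ldap.modlist.addModlist({"objectClass": [b"top", b"person"],
                                      "cn": [b"child"], "sn": [b"child"]}))

    # Independent MODs on other test entries, one per master.
    m1.modify_s("cn=new_account19," + BASE,
                [(ldap.MOD_REPLACE, "description", b"mod on M1")])
    m2.modify_s("cn=new_account18," + BASE,
                [(ldap.MOD_REPLACE, "description", b"mod on M2")])

    # Re-enable replication; each master now replays its changes to the other.
    for conn, agmt in ((m1, AGMT_M1), (m2, AGMT_M2)):
        conn.modify_s(agmt, [(ldap.MOD_REPLACE, "nsds5ReplicaEnabled", b"on")])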

When replication resumes, both operations (the DEL and the child ADD) fail to replay:

M1:
[18/Apr/2014:10:46:39 +0200] conn=3 op=32 DEL dn="cn=new_account1,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:39 +0200] conn=3 op=32 RESULT err=0 tag=107 nentries=0 etime=1 csn=5350e66f000000010000
...
[18/Apr/2014:10:46:45 +0200] conn=5 op=5 ADD dn="cn=child,cn=new_account1,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:47 +0200] conn=5 op=5 RESULT err=1 tag=105 nentries=0 etime=2 csn=5350e671000000020000

M2:
[18/Apr/2014:10:46:41 +0200] conn=3 op=18 ADD dn="cn=child,cn=new_account1,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:42 +0200] conn=3 op=18 RESULT err=0 tag=105 nentries=0 etime=1 csn=5350e671000000020000
...
[18/Apr/2014:10:46:48 +0200] conn=5 op=5 DEL dn="cn=new_account1,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:48 +0200] conn=5 op=5 RESULT err=66 tag=107 nentries=0 etime=0 csn=5350e66f000000010000


It does not break replication:
    M1:
    ...
    [18/Apr/2014:10:46:43 +0200] conn=5 op=3 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
    [18/Apr/2014:10:46:44 +0200] conn=5 op=3 RESULT err=0 tag=120 nentries=0 etime=1
    [18/Apr/2014:10:46:44 +0200] conn=5 op=4 SRCH base="cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config" scope=0 filter="(objectClass=*)" attrs="nsDS5ReplicaId"
    [18/Apr/2014:10:46:44 +0200] conn=5 op=4 RESULT err=0 tag=101 nentries=1 etime=0
    [18/Apr/2014:10:46:45 +0200] conn=5 op=5 ADD dn="cn=child,cn=new_account1,cn=staged user,dc=example,dc=com"
    [18/Apr/2014:10:46:47 +0200] conn=5 op=5 RESULT err=1 tag=105 nentries=0 etime=2 csn=5350e671000000020000
    [18/Apr/2014:10:46:47 +0200] conn=5 op=6 MOD dn="cn=new_account18,cn=staged user,dc=example,dc=com"
    [18/Apr/2014:10:46:49 +0200] conn=5 op=6 RESULT err=0 tag=103 nentries=0 etime=2 csn=5350e672000000020000
    [18/Apr/2014:10:46:52 +0200] conn=3 op=41 RESULT err=0 tag=103 nentries=0 etime=2 csn=5350e67a000000010000
    [18/Apr/2014:10:46:53 +0200] conn=5 op=7 EXT oid="2.16.840.1.113730.3.5.5" name="Netscape Replication End Session"
    [18/Apr/2014:10:46:53 +0200] conn=5 op=7 RESULT err=0 tag=120 nentries=0 etime=0
    ... 


    M2:
    [18/Apr/2014:10:46:45 +0200] conn=5 op=3 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
    [18/Apr/2014:10:46:45 +0200] conn=5 op=3 RESULT err=0 tag=120 nentries=0 etime=0
    ...
    [18/Apr/2014:10:46:48 +0200] conn=5 op=5 DEL dn="cn=new_account1,cn=staged user,dc=example,dc=com"
    [18/Apr/2014:10:46:48 +0200] conn=5 op=5 RESULT err=66 tag=107 nentries=0 etime=0 csn=5350e66f000000010000
    [18/Apr/2014:10:46:49 +0200] conn=5 op=6 MOD dn="cn=new_account19,cn=staged user,dc=example,dc=com"
    [18/Apr/2014:10:46:50 +0200] conn=5 op=6 RESULT err=0 tag=103 nentries=0 etime=1 csn=5350e66f000100010000
    [18/Apr/2014:10:46:50 +0200] conn=5 op=7 MOD dn="cn=new_account19,cn=staged user,dc=example,dc=com"
    [18/Apr/2014:10:46:51 +0200] conn=5 op=7 RESULT err=0 tag=103 nentries=0 etime=1 csn=5350e672000000010000
    ...
    [18/Apr/2014:10:46:53 +0200] conn=5 op=10 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
    [18/Apr/2014:10:46:53 +0200] conn=5 op=10 RESULT err=0 tag=120 nentries=0 etime=0
    [18/Apr/2014:10:46:55 +0200] conn=5 op=11 MOD dn="cn=new_account19,cn=staged user,dc=example,dc=com"
    [18/Apr/2014:10:46:57 +0200] conn=5 op=11 RESULT err=0 tag=103 nentries=0 etime=2 csn=5350e67a000000010000

So neither the DEL nor the child ADD is replayed successfully: the updates are just skipped.
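
The ordering is easier to see once the csn= values in the logs above are decoded. A small helper (389-ds CSNs are 20 hex digits: timestamp, sequence number, replica id, subsequence number):

    from datetime import datetime, timezone

    def parse_csn(csn):
        """Decode a 389-ds CSN: 8 hex digits of timestamp, then 4 each of
        sequence number, replica id and subsequence number."""
        ts = int(csn[0:8], 16)        # seconds since the epoch
        seq = int(csn[8:12], 16)      # ordering within the same second
        rid = int(csn[12:16], 16)     # replica id of the originating master
        subseq = int(csn[16:20], 16)  # sub-operation counter
        return datetime.fromtimestamp(ts, timezone.utc), seq, rid, subseq

    print(parse_csn("5350e66f000000010000"))  # the DEL, from replica id 1 (M1)
    print(parse_csn("5350e671000000020000"))  # the child ADD, from replica id 2 (M2)

The DEL carries the older CSN, so under CSN-ordered conflict resolution the later child ADD wins on M1, which is why the tombstone there is turned into a glue entry.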

On M1 the tombstone was resurrected as a tombstone+glue entry:
dn: cn=new_account1,cn=staged user,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: nsTombstone
objectClass: extensibleobject
objectClass: glue
sn: new_account1
cn: new_account1

On M2 the entry is not a tombstone:
dn: cn=new_account1,cn=staged user,dc=example,dc=com
objectClass: top
objectClass: person
sn: new_account1
cn: new_account1

The problems are:
	- the entry is different on the two servers;
	- as the child ADD is skipped, the child only exists on M2.
A search that makes this divergence visible is sketched below.
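
A quick way to spot the divergence is to search both masters for glue, tombstone, and conflict entries. A sketch using python-ldap; hostnames and credentials are placeholders, and returning tombstones relies on the filter explicitly naming nsTombstone, as it does here:

    import ldap

    def find_conflict_entries(uri, base="dc=example,dc=com",
                              binddn="cn=Directory Manager", password="secret"):
        """Return glue, tombstone and nsds5ReplConflict entries under base."""
        conn = ldap.initialize(uri)
        conn.simple_bind_s(binddn, password)
        flt = "(|(objectClass=glue)(objectClass=nsTombstone)(nsds5ReplConflict=*))"
        return conn.search_s(base, ldap.SCOPE_SUBTREE, flt,
                             ["objectClass", "nsds5ReplConflict"])

    # Comparing the two masters directly shows the inconsistency described above.
    for uri in ("ldap://m1.example.com:389", "ldap://m2.example.com:389"):
        for dn, attrs in find_conflict_entries(uri):
            print(uri, dn, attrs)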
Comment 1 Martin Kosek 2017-04-03 05:56:25 EDT
Related bug - Bug 1437887.
Comment 4 Nathan Kinder 2017-04-06 11:38:04 EDT
*** Bug 1437887 has been marked as a duplicate of this bug. ***
Comment 5 Ludwig 2017-04-07 05:05:21 EDT
*** Bug 747701 has been marked as a duplicate of this bug. ***
Comment 6 Ludwig 2017-04-07 05:08:56 EDT
*** Bug 1213787 has been marked as a duplicate of this bug. ***
Comment 7 Ludwig 2017-04-07 05:11:46 EDT
*** Bug 1395848 has been marked as a duplicate of this bug. ***
Comment 8 Ludwig 2017-04-07 05:19:38 EDT
There are several BZs for replication conflict management. Since this one is used in the 7.4 RPL, we will use it and close the others as duplicates.

For completeness, the associated upstream tickets will be referenced here:

https://pagure.io/389-ds-base/issue/49043
https://pagure.io/389-ds-base/issue/160
https://pagure.io/389-ds-base/issue/48161
