Red Hat Bugzilla – Bug 1274430
[RFE] Handling replication conflict entries
Last modified: 2018-04-10 10:16:20 EDT
This bug is created as a clone of upstream ticket: https://fedorahosted.org/389/ticket/47784

The test case is M1-M2: provision entries on M1 and wait until they are replicated to M2. Then disable the replication agreement M1<->M2. On M1, delete an entry (new_account1); on M2, add a child entry under that same entry. Finally, do some MODs on M1 (new_account19) and M2 (new_account18) on test entries.

After replication is re-enabled, both the DEL and the ADD of the child fail:

M1:
[18/Apr/2014:10:46:39 +0200] conn=3 op=32 DEL dn="cn=new_account1,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:39 +0200] conn=3 op=32 RESULT err=0 tag=107 nentries=0 etime=1 csn=5350e66f000000010000
...
[18/Apr/2014:10:46:45 +0200] conn=5 op=5 ADD dn="cn=child,cn=new_account1,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:47 +0200] conn=5 op=5 RESULT err=1 tag=105 nentries=0 etime=2 csn=5350e671000000020000

M2:
[18/Apr/2014:10:46:41 +0200] conn=3 op=18 ADD dn="cn=child,cn=new_account1,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:42 +0200] conn=3 op=18 RESULT err=0 tag=105 nentries=0 etime=1 csn=5350e671000000020000
...
[18/Apr/2014:10:46:48 +0200] conn=5 op=5 DEL dn="cn=new_account1,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:48 +0200] conn=5 op=5 RESULT err=66 tag=107 nentries=0 etime=0 csn=5350e66f000000010000

This does not break replication:

M1:
...
[18/Apr/2014:10:46:43 +0200] conn=5 op=3 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
[18/Apr/2014:10:46:44 +0200] conn=5 op=3 RESULT err=0 tag=120 nentries=0 etime=1
[18/Apr/2014:10:46:44 +0200] conn=5 op=4 SRCH base="cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config" scope=0 filter="(objectClass=*)" attrs="nsDS5ReplicaId"
[18/Apr/2014:10:46:44 +0200] conn=5 op=4 RESULT err=0 tag=101 nentries=1 etime=0
[18/Apr/2014:10:46:45 +0200] conn=5 op=5 ADD dn="cn=child,cn=new_account1,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:47 +0200] conn=5 op=5 RESULT err=1 tag=105 nentries=0 etime=2 csn=5350e671000000020000
[18/Apr/2014:10:46:47 +0200] conn=5 op=6 MOD dn="cn=new_account18,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:49 +0200] conn=5 op=6 RESULT err=0 tag=103 nentries=0 etime=2 csn=5350e672000000020000
[18/Apr/2014:10:46:52 +0200] conn=3 op=41 RESULT err=0 tag=103 nentries=0 etime=2 csn=5350e67a000000010000
[18/Apr/2014:10:46:53 +0200] conn=5 op=7 EXT oid="2.16.840.1.113730.3.5.5" name="Netscape Replication End Session"
[18/Apr/2014:10:46:53 +0200] conn=5 op=7 RESULT err=0 tag=120 nentries=0 etime=0
...

M2:
[18/Apr/2014:10:46:45 +0200] conn=5 op=3 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
[18/Apr/2014:10:46:45 +0200] conn=5 op=3 RESULT err=0 tag=120 nentries=0 etime=0
...
[18/Apr/2014:10:46:48 +0200] conn=5 op=5 DEL dn="cn=new_account1,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:48 +0200] conn=5 op=5 RESULT err=66 tag=107 nentries=0 etime=0 csn=5350e66f000000010000
[18/Apr/2014:10:46:49 +0200] conn=5 op=6 MOD dn="cn=new_account19,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:50 +0200] conn=5 op=6 RESULT err=0 tag=103 nentries=0 etime=1 csn=5350e66f000100010000
[18/Apr/2014:10:46:50 +0200] conn=5 op=7 MOD dn="cn=new_account19,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:51 +0200] conn=5 op=7 RESULT err=0 tag=103 nentries=0 etime=1 csn=5350e672000000010000
...
[18/Apr/2014:10:46:53 +0200] conn=5 op=10 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
[18/Apr/2014:10:46:53 +0200] conn=5 op=10 RESULT err=0 tag=120 nentries=0 etime=0
[18/Apr/2014:10:46:55 +0200] conn=5 op=11 MOD dn="cn=new_account19,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:57 +0200] conn=5 op=11 RESULT err=0 tag=103 nentries=0 etime=2 csn=5350e67a000000010000

So neither the DEL nor the ADD of the child is replayed: the updates are just skipped.

On M1 the tombstone was resurrected as a tombstone+glue entry:

dn: cn=new_account1,cn=staged user,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: nsTombstone
objectClass: extensibleobject
objectClass: glue
sn: new_account1
cn: new_account1

On M2 the entry is not a tombstone:

dn: cn=new_account1,cn=staged user,dc=example,dc=com
objectClass: top
objectClass: person
sn: new_account1
cn: new_account1

The problems are:
- the entry is different on the two servers
- as the ADD of the child is skipped, the child only exists on M2
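For reference, a minimal reproduction sketch of the scenario above, written with python-ldap. The host names, credentials and replication agreement DNs are placeholders (assumptions), the waits for replication convergence are omitted, and the actual regression test uses the lib389 framework rather than raw python-ldap; toggling nsds5ReplicaEnabled on the agreement is used here as one way to stop and restart replication between the two masters.

# Hedged reproduction sketch; all connection details below are placeholders.
import ldap
import ldap.modlist

SUFFIX = "dc=example,dc=com"
STAGE = "cn=staged user," + SUFFIX
# Hypothetical agreement DNs -- adjust to the real agreement names in your topology.
AGMT_M1 = "cn=to_m2,cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config"
AGMT_M2 = "cn=to_m1,cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config"

def connect(url):
    conn = ldap.initialize(url)
    conn.simple_bind_s("cn=Directory Manager", "password")  # placeholder credentials
    return conn

def set_agreement(conn, agmt_dn, enabled):
    # nsds5ReplicaEnabled toggles an existing agreement without deleting it.
    val = b"on" if enabled else b"off"
    conn.modify_s(agmt_dn, [(ldap.MOD_REPLACE, "nsds5ReplicaEnabled", [val])])

m1 = connect("ldap://m1.example.com:389")
m2 = connect("ldap://m2.example.com:389")

# 1. Provision the entry on M1 and wait until it is replicated to M2 (wait not shown).
entry_dn = "cn=new_account1," + STAGE
m1.add_s(entry_dn, ldap.modlist.addModlist(
    {"objectClass": [b"top", b"person"], "cn": [b"new_account1"], "sn": [b"new_account1"]}))

# 2. Disable the agreements in both directions.
set_agreement(m1, AGMT_M1, False)
set_agreement(m2, AGMT_M2, False)

# 3. Conflicting operations: DEL on M1, ADD of a child under the same entry on M2.
m1.delete_s(entry_dn)
m2.add_s("cn=child," + entry_dn, ldap.modlist.addModlist(
    {"objectClass": [b"top", b"person"], "cn": [b"child"], "sn": [b"child"]}))

# 4. Re-enable replication and observe how the DEL/ADD conflict is resolved.
set_agreement(m1, AGMT_M1, True)
set_agreement(m2, AGMT_M2, True)

With the behaviour reported above, the replayed ADD of the child fails on M1 with err=1 and the replayed DEL fails on M2 with err=66, leaving the two masters with diverging entries.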
Related bug - Bug 1437887.
*** Bug 1437887 has been marked as a duplicate of this bug. ***
*** Bug 747701 has been marked as a duplicate of this bug. ***
*** Bug 1213787 has been marked as a duplicate of this bug. ***
*** Bug 1395848 has been marked as a duplicate of this bug. ***
There are several bugzillas for the replication conflict management work. Since this one is used in the 7.4 RPL effort, we will use it and close the others as duplicates. For completeness, the associated upstream tickets are referenced here:
https://pagure.io/389-ds-base/issue/49043
https://pagure.io/389-ds-base/issue/160
https://pagure.io/389-ds-base/issue/48161
*** Bug 1420228 has been marked as a duplicate of this bug. ***
Test suite passes for all automated tests. Test container with complex conflicts deletion will be fixed in later versions.

[root@qeos-51 ds]# py.test -v dirsrvtests/tests/suites/replication/conflict_resolve_test.py
================ test session starts ================
platform linux -- Python 3.6.3, pytest-3.4.1, py-1.5.2, pluggy-0.6.0 -- /opt/rh/rh-python36/root/usr/bin/python3
cachedir: .pytest_cache
metadata: {'Python': '3.6.3', 'Platform': 'Linux-3.10.0-855.el7.x86_64-x86_64-with-redhat-7.5-Maipo', 'Packages': {'pytest': '3.4.1', 'py': '1.5.2', 'pluggy': '0.6.0'}, 'Plugins': {'metadata': '1.6.0', 'html': '1.16.1'}}
389-ds-base: 1.3.7.5-18.el7
nss: 3.34.0-4.el7
nspr: 4.17.0-1.el7
openldap: 2.4.44-13.el7
svrcore: 4.1.3-2.el7
rootdir: /mnt/tests/rhds/tests/upstream/ds, inifile:
plugins: metadata-1.6.0, html-1.16.1
collected 6 items

dirsrvtests/tests/suites/replication/conflict_resolve_test.py::TestTwoMasters::test_add_modrdn PASSED [ 16%]
dirsrvtests/tests/suites/replication/conflict_resolve_test.py::TestTwoMasters::test_complex_add_modify_modrdn_delete PASSED [ 33%]
dirsrvtests/tests/suites/replication/conflict_resolve_test.py::TestTwoMasters::test_memberof_groups PASSED [ 50%]
dirsrvtests/tests/suites/replication/conflict_resolve_test.py::TestTwoMasters::test_managed_entries PASSED [ 66%]
dirsrvtests/tests/suites/replication/conflict_resolve_test.py::TestTwoMasters::test_nested_entries_with_children PASSED [ 83%]
dirsrvtests/tests/suites/replication/conflict_resolve_test.py::TestThreeMasters::test_nested_entries PASSED [100%]
================ 6 passed in 509.76 seconds ================

Marking as verified.
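As a follow-up check (not part of the automated run above), any leftover conflict entries can be located by searching for the nsds5ReplConflict operational attribute. A minimal sketch with python-ldap; the host, credentials and suffix below are placeholders.

# Hedged sketch: list remaining replication conflict entries under the suffix.
import ldap

conn = ldap.initialize("ldap://m1.example.com:389")
conn.simple_bind_s("cn=Directory Manager", "password")  # placeholder credentials

# Conflict entries carry the nsds5ReplConflict attribute describing the clash.
results = conn.search_s(
    "dc=example,dc=com",
    ldap.SCOPE_SUBTREE,
    "(nsds5ReplConflict=*)",
    ["nsds5ReplConflict", "objectClass"],
)
for dn, attrs in results:
    print(dn, attrs.get("nsds5ReplConflict"))

An empty result after the test run indicates the conflicts were resolved rather than left behind as conflict entries.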
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0811