Bug 1274430 - [RFE] Handling replication conflict entries
Summary: [RFE] Handling replication conflict entries
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Assignee: Ludwig
QA Contact: Viktor Ashirov
Docs Contact: Marc Muehlfeld
URL: http://www.port389.org/docs/389ds/des...
Whiteboard:
Keywords: FutureFeature
Duplicates: 747701 772294 1213787 1395848 1420228 1437887 (view as bug list)
Depends On:
Blocks: 1113520 1399979 1420851 1467835 695797 756082 772294 1472344
 
Reported: 2015-10-22 17:21 UTC by Noriko Hosoi
Modified: 2019-03-12 14:31 UTC (History)
CC List: 14 users

Doc Text:
Directory Server no longer displays replication conflict entries in search results

Previously, if replication conflict entries existed in a replication topology, Directory Server returned them by default as part of search results. As a consequence, certain LDAP clients behaved incorrectly when the server returned such entries. With this update, the server no longer returns conflict entries in searches; they must be requested explicitly. As a result, clients work as expected.

In addition, the update improves the resolution of more complex conflict scenarios.

For further details, see https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/html/administration_guide/managing_replication-solving_common_replication_conflicts.
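
For illustration, a minimal search sketch with python-ldap (the host name, bind credentials, and suffix below are placeholders): it assumes conflict entries still carry the nsds5ReplConflict operational attribute and are now tagged with the ldapsubentry object class, so they only show up when the filter asks for them explicitly.

import ldap

# Placeholder connection details; replace with a real server and bind identity.
conn = ldap.initialize("ldap://server.example.com:389")
conn.simple_bind_s("cn=Directory Manager", "password")

# Conflict entries are skipped by ordinary searches; match them explicitly.
# Assumption: they carry objectClass ldapsubentry and the nsds5ReplConflict attribute.
conflicts = conn.search_s(
    "dc=example,dc=com",
    ldap.SCOPE_SUBTREE,
    "(&(objectClass=ldapsubentry)(nsds5ReplConflict=*))",
    ["nsds5ReplConflict", "objectClass"],
)
for dn, attrs in conflicts:
    print(dn, attrs.get("nsds5ReplConflict"))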
Clone Of:
: 1498399 (view as bug list)
Last Closed: 2018-04-10 14:15:15 UTC




External Trackers
Tracker ID:   Red Hat Product Errata RHBA-2018:0811
Priority:     None
Status:       None
Summary:      None
Last Updated: 2018-04-10 14:16 UTC

Description Noriko Hosoi 2015-10-22 17:21:58 UTC
This bug is created as a clone of upstream ticket:
https://fedorahosted.org/389/ticket/47784

The test case uses two masters, M1 and M2: provision entries on M1 and wait until they are replicated to M2.
Then disable the M1<->M2 replication agreements.
On M1, delete an entry (new_account1); on M2, add a child entry under that same entry.
Finally, perform some MODs on test entries on M1 (new_account19) and M2 (new_account18).
Re-enable replication.
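
As a rough reproduction sketch with python-ldap (host names, bind credentials, and the modified attribute are hypothetical, and the replication agreements are assumed to be disabled out of band), the conflicting operations look roughly like this:

import ldap
import ldap.modlist

def connect(uri):
    # Placeholder bind identity/password.
    conn = ldap.initialize(uri)
    conn.simple_bind_s("cn=Directory Manager", "password")
    return conn

m1 = connect("ldap://m1.example.com:389")
m2 = connect("ldap://m2.example.com:389")

parent = "cn=new_account1,cn=staged user,dc=example,dc=com"

# With the M1<->M2 agreements disabled: delete the parent entry on M1 ...
m1.delete_s(parent)

# ... and, on M2, add a child below the same parent (already deleted on M1).
child = {
    "objectClass": [b"top", b"person"],
    "cn": [b"child"],
    "sn": [b"child"],
}
m2.add_s("cn=child," + parent, ldap.modlist.addModlist(child))

# Independent MODs on each side so both masters have pending changes to replay.
m1.modify_s("cn=new_account19,cn=staged user,dc=example,dc=com",
            [(ldap.MOD_REPLACE, "description", [b"mod on M1"])])
m2.modify_s("cn=new_account18,cn=staged user,dc=example,dc=com",
            [(ldap.MOD_REPLACE, "description", [b"mod on M2"])])

# Re-enable the replication agreements afterwards and let both sides replay.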

Both operations, the DEL and the child ADD, fail when replayed:

M1:
[18/Apr/2014:10:46:39 +0200] conn=3 op=32 DEL dn="cn=new_account1,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:39 +0200] conn=3 op=32 RESULT err=0 tag=107 nentries=0 etime=1 csn=5350e66f000000010000
...
[18/Apr/2014:10:46:45 +0200] conn=5 op=5 ADD dn="cn=child,cn=new_account1,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:47 +0200] conn=5 op=5 RESULT err=1 tag=105 nentries=0 etime=2 csn=5350e671000000020000

M2:
[18/Apr/2014:10:46:41 +0200] conn=3 op=18 ADD dn="cn=child,cn=new_account1,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:42 +0200] conn=3 op=18 RESULT err=0 tag=105 nentries=0 etime=1 csn=5350e671000000020000
...
[18/Apr/2014:10:46:48 +0200] conn=5 op=5 DEL dn="cn=new_account1,cn=staged user,dc=example,dc=com"
[18/Apr/2014:10:46:48 +0200] conn=5 op=5 RESULT err=66 tag=107 nentries=0 etime=0 csn=5350e66f000000010000


This does not break replication, however:
    M1:
    ...
    [18/Apr/2014:10:46:43 +0200] conn=5 op=3 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
    [18/Apr/2014:10:46:44 +0200] conn=5 op=3 RESULT err=0 tag=120 nentries=0 etime=1
    [18/Apr/2014:10:46:44 +0200] conn=5 op=4 SRCH base="cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config" scope=0 filter="(objectClass=*)" attrs="nsDS5ReplicaId"
    [18/Apr/2014:10:46:44 +0200] conn=5 op=4 RESULT err=0 tag=101 nentries=1 etime=0
    [18/Apr/2014:10:46:45 +0200] conn=5 op=5 ADD dn="cn=child,cn=new_account1,cn=staged user,dc=example,dc=com"
    [18/Apr/2014:10:46:47 +0200] conn=5 op=5 RESULT err=1 tag=105 nentries=0 etime=2 csn=5350e671000000020000
    [18/Apr/2014:10:46:47 +0200] conn=5 op=6 MOD dn="cn=new_account18,cn=staged user,dc=example,dc=com"
    [18/Apr/2014:10:46:49 +0200] conn=5 op=6 RESULT err=0 tag=103 nentries=0 etime=2 csn=5350e672000000020000
    [18/Apr/2014:10:46:52 +0200] conn=3 op=41 RESULT err=0 tag=103 nentries=0 etime=2 csn=5350e67a000000010000
    [18/Apr/2014:10:46:53 +0200] conn=5 op=7 EXT oid="2.16.840.1.113730.3.5.5" name="Netscape Replication End Session"
    [18/Apr/2014:10:46:53 +0200] conn=5 op=7 RESULT err=0 tag=120 nentries=0 etime=0
    ... 


    M2:
    [18/Apr/2014:10:46:45 +0200] conn=5 op=3 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
    [18/Apr/2014:10:46:45 +0200] conn=5 op=3 RESULT err=0 tag=120 nentries=0 etime=0
    ...
    [18/Apr/2014:10:46:48 +0200] conn=5 op=5 DEL dn="cn=new_account1,cn=staged user,dc=example,dc=com"
    [18/Apr/2014:10:46:48 +0200] conn=5 op=5 RESULT err=66 tag=107 nentries=0 etime=0 csn=5350e66f000000010000
    [18/Apr/2014:10:46:49 +0200] conn=5 op=6 MOD dn="cn=new_account19,cn=staged user,dc=example,dc=com"
    [18/Apr/2014:10:46:50 +0200] conn=5 op=6 RESULT err=0 tag=103 nentries=0 etime=1 csn=5350e66f000100010000
    [18/Apr/2014:10:46:50 +0200] conn=5 op=7 MOD dn="cn=new_account19,cn=staged user,dc=example,dc=com"
    [18/Apr/2014:10:46:51 +0200] conn=5 op=7 RESULT err=0 tag=103 nentries=0 etime=1 csn=5350e672000000010000
    ...
    [18/Apr/2014:10:46:53 +0200] conn=5 op=10 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
    [18/Apr/2014:10:46:53 +0200] conn=5 op=10 RESULT err=0 tag=120 nentries=0 etime=0
    [18/Apr/2014:10:46:55 +0200] conn=5 op=11 MOD dn="cn=new_account19,cn=staged user,dc=example,dc=com"
    [18/Apr/2014:10:46:57 +0200] conn=5 op=11 RESULT err=0 tag=103 nentries=0 etime=2 csn=5350e67a000000010000

So neither the DEL nor the child ADD is replayed; the updates are simply skipped.

On M1, the tombstone was resurrected as a tombstone+glue entry:
dn: cn=new_account1,cn=staged user,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: nsTombstone
objectClass: extensibleobject
objectClass: glue
sn: new_account1
cn: new_account1

On M2, the entry is not a tombstone:
dn: cn=new_account1,cn=staged user,dc=example,dc=com
objectClass: top
objectClass: person
sn: new_account1
cn: new_account1

The problems are:
	- the entry is different on the two servers
	- because the child ADD was skipped, the child entry exists only on M2.
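
To see where the state diverged, one can read the entry back from both masters. A minimal check sketch with python-ldap (hosts and credentials are placeholders; the filter deliberately names nsTombstone, since tombstoned entries are normally hidden unless the filter asks for them):

import ldap

def dump_state(uri, label):
    # Placeholder bind identity/password.
    conn = ldap.initialize(uri)
    conn.simple_bind_s("cn=Directory Manager", "password")
    # Naming nsTombstone in the filter lets the tombstone+glue form (M1) be
    # returned, while the (objectClass=person) branch matches the live entry (M2).
    res = conn.search_s(
        "cn=staged user,dc=example,dc=com",
        ldap.SCOPE_SUBTREE,
        "(&(cn=new_account1)(|(objectClass=nsTombstone)(objectClass=person)))",
        ["objectClass", "cn"],
    )
    print(label)
    for dn, attrs in res:
        print(" ", dn, attrs.get("objectClass"))

dump_state("ldap://m1.example.com:389", "M1")
dump_state("ldap://m2.example.com:389", "M2")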

Comment 1 Martin Kosek 2017-04-03 09:56:25 UTC
Related bug - Bug 1437887.

Comment 4 Nathan Kinder 2017-04-06 15:38:04 UTC
*** Bug 1437887 has been marked as a duplicate of this bug. ***

Comment 5 Ludwig 2017-04-07 09:05:21 UTC
*** Bug 747701 has been marked as a duplicate of this bug. ***

Comment 6 Ludwig 2017-04-07 09:08:56 UTC
*** Bug 1213787 has been marked as a duplicate of this bug. ***

Comment 7 Ludwig 2017-04-07 09:11:46 UTC
*** Bug 1395848 has been marked as a duplicate of this bug. ***

Comment 8 Ludwig 2017-04-07 09:19:38 UTC
There are several Bugzilla entries for replication conflict management. Since this one is used in the 7.4 RPL, we will use it and close the others as duplicates.

For completeness, the associated upstream tickets will be referenced here:

https://pagure.io/389-ds-base/issue/49043
https://pagure.io/389-ds-base/issue/160
https://pagure.io/389-ds-base/issue/48161

Comment 11 Marc Sauton 2017-10-19 15:14:48 UTC
*** Bug 1420228 has been marked as a duplicate of this bug. ***

Comment 13 Simon Pichugin 2018-02-22 14:15:10 UTC
The test suite passes for all automated tests. The test for deleting a container entry with complex conflicts will be fixed in later versions.

[root@qeos-51 ds]# py.test -v dirsrvtests/tests/suites/replication/conflict_resolve_test.py
================ test session starts ================
platform linux -- Python 3.6.3, pytest-3.4.1, py-1.5.2, pluggy-0.6.0 -- /opt/rh/rh-python36/root/usr/bin/python3
cachedir: .pytest_cache
metadata: {'Python': '3.6.3', 'Platform': 'Linux-3.10.0-855.el7.x86_64-x86_64-with-redhat-7.5-Maipo', 'Packages': {'pytest': '3.4.1', 'py': '1.5.2', 'pluggy': '0.6.0'}, 'Plugins': {'metadata': '1.6.0', 'html': '1.16.1'}}
389-ds-base: 1.3.7.5-18.el7
nss: 3.34.0-4.el7
nspr: 4.17.0-1.el7
openldap: 2.4.44-13.el7
svrcore: 4.1.3-2.el7

rootdir: /mnt/tests/rhds/tests/upstream/ds, inifile:
plugins: metadata-1.6.0, html-1.16.1
collected 6 items

dirsrvtests/tests/suites/replication/conflict_resolve_test.py::TestTwoMasters::test_add_modrdn PASSED [ 16%]
dirsrvtests/tests/suites/replication/conflict_resolve_test.py::TestTwoMasters::test_complex_add_modify_modrdn_delete PASSED [ 33%]
dirsrvtests/tests/suites/replication/conflict_resolve_test.py::TestTwoMasters::test_memberof_groups PASSED [ 50%]
dirsrvtests/tests/suites/replication/conflict_resolve_test.py::TestTwoMasters::test_managed_entries PASSED [ 66%]
dirsrvtests/tests/suites/replication/conflict_resolve_test.py::TestTwoMasters::test_nested_entries_with_children PASSED [ 83%]
dirsrvtests/tests/suites/replication/conflict_resolve_test.py::TestThreeMasters::test_nested_entries PASSED [100%]

================ 6 passed in 509.76 seconds ================

Marking as verified.

Comment 16 errata-xmlrpc 2018-04-10 14:15:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0811

Comment 17 Dmitri Pal 2019-03-12 14:31:43 UTC
*** Bug 772294 has been marked as a duplicate of this bug. ***

