Bug 1118079 - Multi master replication initialization incomplete after restore of one master
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
Assigned To: Noriko Hosoi
QA Contact: Viktor Ashirov
Reported: 2014-07-09 20:30 EDT by Noriko Hosoi
Modified: 2015-03-05 04:36 EST

Fixed In Version: 389-ds-base-1.3.3.1-1.el7
Doc Type: Bug Fix
Last Closed: 2015-03-05 04:36:35 EST


External Trackers:
Red Hat Product Errata RHSA-2015:0416 (priority: normal, status: SHIPPED_LIVE) - Important: 389-ds-base security, bug fix, and enhancement update - last updated 2015-03-05 09:26:33 EST

Description Noriko Hosoi 2014-07-09 20:30:45 EDT
This bug is created as a clone of upstream ticket:
https://fedorahosted.org/389/ticket/47655

Initial configuration:
======================
Two directory server instances, DS1 and DS2, with multi-master replication properly configured and working.
Around 3 million entries in these directories.
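
For reference, such a topology can be inspected by reading the replica and agreement entries under cn=config. A minimal sketch, assuming the suffix o=iah_extranet_msa seen in the logs below and placeholder host name and bind credentials:

  # List replica and replication agreement entries on DS2 (the host name,
  # bind DN and password are assumptions, not taken from this report).
  ldapsearch -x -H ldap://ds2.example.com:389 \
      -D "cn=Directory Manager" -w password \
      -b "cn=mapping tree,cn=config" \
      "(|(objectClass=nsDS5Replica)(objectClass=nsds5ReplicationAgreement))" \
      nsDS5ReplicaId nsDS5ReplicaRoot nsDS5ReplicaHost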


Steps to reproduce:
===================
1. Back up DS2 with the db2bak.pl script (a typical invocation is sketched after the log excerpt).
See log "1.Backup_DS2.log"
[30/Dec/2013:11:12:06 +0100] - Beginning backup of 'ldbm database'
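
A db2bak.pl backup runs online as a server task. A minimal sketch matching step 1, assuming the per-instance script path for an instance named DS2 and placeholder credentials and archive directory:

  # Online backup of the DS2 instance into the given archive directory
  # (instance name, password and path are assumptions).
  /usr/lib64/dirsrv/slapd-DS2/db2bak.pl -D "cn=Directory Manager" -w password \
      -a /var/lib/dirsrv/slapd-DS2/bak/backup_20131230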

2. Restore this backup on DS2 with the bak2db command (see the sketch after the log excerpt).
See log "1.Restore_DS2.log"
[30/Dec/2013:13:03:51 +0100] - slapd shutting down - signaling operation threads
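
The shutdown message above is consistent with the offline bak2db script, which replaces the database files from the archive while slapd is down. A minimal sketch under the same instance-name and path assumptions as in step 1:

  # Offline restore of the DS2 instance from the archive taken in step 1.
  systemctl stop dirsrv@DS2
  /usr/lib64/dirsrv/slapd-DS2/bak2db /var/lib/dirsrv/slapd-DS2/bak/backup_20131230
  systemctl start dirsrv@DS2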

3. Initialize DS1 from the restored DS2 data => the initialization ends properly, but only 400 000 of the roughly 3 million entries are present in DS1 (the usual way to trigger the refresh is sketched after the log excerpt).
See logs "3.Init replication_DS*_KO.log"
[30/Dec/2013:13:18:38 +0100] NSMMReplicationPlugin - conn=11 op=3 repl="o=iah_extranet_msa": Begin total protocol
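
"Begin total protocol" marks a total (consumer) initialization, normally started on the supplier by setting nsds5BeginReplicaRefresh on the replication agreement. A minimal sketch; the agreement name cn=to_DS1, the mapping-tree DN escaping, and the bind details are assumptions:

  # init_DS1.ldif - begin a total update on the DS2 -> DS1 agreement
  dn: cn=to_DS1,cn=replica,cn=o\3Diah_extranet_msa,cn=mapping tree,cn=config
  changetype: modify
  replace: nsds5BeginReplicaRefresh
  nsds5BeginReplicaRefresh: start

  # Trigger the re-initialization from DS2 (placeholder host and credentials).
  ldapmodify -x -H ldap://ds2.example.com:389 \
      -D "cn=Directory Manager" -w password -f init_DS1.ldif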

4. Wait approximately 24 hours.
The following message appears in the DS2 error log:
[31/Dec/2013:13:24:04 +0100] - Trimmed 1 changes from the changelog
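
The trimming message comes from the changelog trimming thread; how long changelog records are kept is governed by the changelog configuration entry. A sketch of inspecting those settings (host and credentials are placeholders):

  # Show the replication changelog trimming settings on DS2.
  ldapsearch -x -H ldap://ds2.example.com:389 -D "cn=Directory Manager" -w password \
      -b "cn=changelog5,cn=config" nsslapd-changelogmaxage nsslapd-changelogdir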

5. Initialize consumer DS1 again with the DS2 data => this time the initialization ends properly and all the entries are present in DS1.
See logs "4.Init replication_DS*_KO.log"
Comment 2 Sankar Ramalingam 2015-01-13 13:24:42 EST
No issues were reported from reliab15 runs executed against the 389-ds-base-1.3.3.1-11 builds. Hence, marking the bug as Verified.
Comment 4 errata-xmlrpc 2015-03-05 04:36:35 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0416.html
