Bug 1349571 - Improve MMR replication convergence
Summary: Improve MMR replication convergence
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.3
Hardware: All
OS: Linux
Target Milestone: rc
Assignee: mreynolds
QA Contact: Viktor Ashirov
Docs Contact: Petr Bokoc
Depends On:
Blocks: 1351323 1356898
Reported: 2016-06-23 17:12 UTC by Noriko Hosoi
Modified: 2020-09-13 21:40 UTC
CC: 9 users

Fixed In Version: 389-ds-base-
Doc Type: Enhancement
Doc Text:
New attribute for configuring replica release timeout

In a multi-master replication environment where multiple masters receive updates at the same time, it was previously possible for a single master to obtain exclusive access to a replica and hold it for a very long time due to problems such as a slow network connection. During this time, other masters were blocked from accessing the same replica, which considerably slowed down the replication process. This update adds a new configuration attribute, "nsds5ReplicaReleaseTimeout", which can be used to specify a timeout in seconds. After the specified timeout period passes, the master releases the replica, allowing other masters to access it and send their updates.
Clone Of:
: 1351323 1358392 (view as bug list)
Last Closed: 2016-11-03 20:43:11 UTC
Target Upstream Version:

Attachments

System ID Private Priority Status Summary Last Updated
Github 389ds 389-ds-base issues 1786 0 None None None 2020-09-13 21:40:09 UTC
Red Hat Product Errata RHSA-2016:2594 0 normal SHIPPED_LIVE Moderate: 389-ds-base security, bug fix, and enhancement update 2016-11-03 12:11:08 UTC

Description Noriko Hosoi 2016-06-23 17:12:47 UTC
This bug is created as a clone of upstream ticket:

Replication latency, especially over a WAN, can worsen when several masters are receiving updates at the same time. What happens is that one master takes exclusive access to a replica and does not release it for a very long time. This blocks the other masters from sending their updates to that consumer, and adds to the replication latency because those updates must instead propagate back and forth through all the other masters and consumers. See the bugzilla for more detailed info.

We need a way to notify a master that it has held exclusive access to a replica for too long and that it needs to yield, so that other masters can start sending their updates to that replica.
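As a sketch of how the resulting "nsds5ReplicaReleaseTimeout" attribute (named in the Doc Text above) might be applied, the LDIF below sets a 60-second timeout on a replica entry. The suffix "dc=example,dc=com" and the timeout value are illustrative assumptions, not taken from this bug:

```ldif
# Illustrative only: set a 60-second release timeout on the replica
# configuration entry for an example suffix. Once the timeout elapses,
# a master holding exclusive access releases the replica so that other
# masters can send their pending updates.
dn: cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
changetype: modify
replace: nsds5ReplicaReleaseTimeout
nsds5ReplicaReleaseTimeout: 60
```

Such a change could be applied with ldapmodify against the server's configuration; a value of 0 would preserve the previous behavior of holding the replica until the update session completes.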

Comment 1 mreynolds 2016-06-23 18:26:06 UTC
Fixed upstream:

Design doc for new feature:


Comment 2 Noriko Hosoi 2016-06-24 18:22:40 UTC
Justification: An important customer reported the problem.
(See also https://bugzilla.redhat.com/show_bug.cgi?id=1157799)

This improvement benefits all customers who deploy Directory Server/IPA with replication.

Comment 17 errata-xmlrpc 2016-11-03 20:43:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

