Bug 1258610 - total update request must not be lost
Summary: total update request must not be lost
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Noriko Hosoi
QA Contact: Viktor Ashirov
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-08-31 18:35 UTC by Noriko Hosoi
Modified: 2020-09-13 21:30 UTC
CC: 4 users

Fixed In Version: 389-ds-base-1.3.5.2-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-03 20:35:49 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
- GitHub 389ds/389-ds-base issue 1586 (last updated 2020-09-13 21:30:34 UTC)
- Red Hat Product Errata RHSA-2016:2594 (SHIPPED_LIVE): Moderate: 389-ds-base security, bug fix, and enhancement update (2016-11-03 12:11:08 UTC)

Description Noriko Hosoi 2015-08-31 18:35:43 UTC
This bug is created as a clone of upstream ticket:
https://fedorahosted.org/389/ticket/48255

If a replication agreement is created with the autoinit flag, the total protocol is started, but if that fails the protocol switches to incremental and can loop forever comparing the generation ID. This is because the next state is set to INCREMENTAL and the result of the total run is not checked:
{{{
  /* the next state is fixed before the total run even starts */
  rp->next_state = STATE_PERFORMING_INCREMENTAL_UPDATE;
  .....
  /* the outcome of the total run is not checked */
  rp->prp_total->run(rp->prp_total);

  agmt_replica_init_done (agmt);
}}}

There is a comment in the total update code:

{{{
    rc = acquire_replica (prp, REPL_NSDS50_TOTAL_PROTOCOL_OID, NULL /* ruv */);
    /* We never retry total protocol, even in case of a transient error.
       This is because if somebody already updated the replica we don't
       want to do it again */
}}}
 
This is reasonable in some cases, but if the generation ID is still different, we know that no other server has updated the replica. Especially when a new server is added and we want to initialize it, the total update request should not get lost.
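
A minimal sketch of the shape of the fix, based on the commit message quoted later in comment 4 ("for transient failures do a few retries"); the result accessor, error constant, and retry limit below are hypothetical illustrations, not the actual 389-ds-base API:

{{{
  /* Sketch only: retry the total protocol a few times on transient
     failures instead of unconditionally falling through to the
     incremental protocol.  MAX_TOTAL_RETRIES, TRANSIENT_ERROR and
     prot_total_get_result() are hypothetical names. */
  int attempts = 0;
  int rc;

  do {
      rp->prp_total->run(rp->prp_total);
      rc = prot_total_get_result(rp->prp_total);  /* hypothetical */
  } while (rc == TRANSIENT_ERROR && ++attempts < MAX_TOTAL_RETRIES);

  if (rc == 0) {
      /* only switch to incremental once the init really succeeded */
      rp->next_state = STATE_PERFORMING_INCREMENTAL_UPDATE;
      agmt_replica_init_done(agmt);
  }
}}}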

Comment 1 Mike McCune 2016-03-28 23:12:48 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune with any questions.

Comment 4 Noriko Hosoi 2016-06-15 16:30:42 UTC
Based upon the git log, near-simultaneous total updates and incremental updates were both failing.

> Bug Description:  if the initial replica acquire fails the
>                   protocol switches to incremental update
>                   but the database has not been initialized and
>                   the incremental update will also fail continuously
> 
> Fix Description: for transient failures do a few retries.

0. Set up MMR (MasterA and MasterB)
1. Start the total update/bulk import on MasterA
2. Update entries on MasterA
Note: 2 is supposed to follow 1 immediately.

Since the result depends on the timing, repeat steps 1 and 2 several times.

If the total updates and the updates made on MasterA are successfully replicated to MasterB, we could say the fix is verified.
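
For example (a hedged illustration; the agreement DN and credentials match the setup used in comment 5 below, and nsds5ReplicaLastInitStatus/nsds5ReplicaLastInitEnd are the standard agreement status attributes), the outcome of the total update can be checked on the initiating master before comparing entry counts:

[root@ds ~]# ldapsearch -xLLL -D 'cn=Directory Manager' -w secret123 -h localhost -p 389 -b 'cn=Agreement1,cn=replicaA,cn="dc=redhat,dc=com",cn=mapping tree,cn=config' nsds5ReplicaLastInitStatus nsds5ReplicaLastInitEnd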

Comment 5 Punit Kundal 2016-07-15 12:36:48 UTC
RHEL:
RHEL 7.3 x86_64 Server

DS builds:
[root@ds ~]# rpm -qa | grep 389
389-ds-base-debuginfo-1.3.5.8-1.el7.x86_64
389-ds-base-1.3.5.10-5.el7.x86_64
389-ds-base-snmp-1.3.5.10-5.el7.x86_64
389-ds-base-libs-1.3.5.10-5.el7.x86_64

Steps Performed:

1. Created two standalone instances Master1 and Master2
 
2. Created two new suffixes on each instance as below:

[root@ds new_suffix]# ldapadd -x -D 'cn=Directory Manager' -w secret123 -h localhost -p 389 -f new_suffix.ldif
adding new entry "cn=UserData,cn=ldbm database,cn=plugins,cn=config"
 
adding new entry "cn=MyData,cn=ldbm database,cn=plugins,cn=config"
 
adding new entry "cn="dc=redhat,dc=com",cn=mapping tree,cn=config"
 
adding new entry "cn="dc=xyz,dc=com",cn=mapping tree,cn=config"
 
[root@ds new_suffix]# ldapadd -x -D 'cn=Directory Manager' -w secret123 -h localhost -p 1389 -f new_suffix.ldif
adding new entry "cn=UserData,cn=ldbm database,cn=plugins,cn=config"
 
adding new entry "cn=MyData,cn=ldbm database,cn=plugins,cn=config"
 
adding new entry "cn="dc=redhat,dc=com",cn=mapping tree,cn=config"
 
adding new entry "cn="dc=xyz,dc=com",cn=mapping tree,cn=config"
 
3. Imported 10k entries for dc=redhat,dc=com on master1

[root@ds ~]# /usr/lib64/dirsrv/slapd-master1/ldif2db -n UserData -i /var/lib/dirsrv/slapd-master1/ldif/redhat_data.ldif
importing data ...
[15/Jul/2016:15:03:02.164390053 +051800] import UserData: Import complete.  Processed 10002 entries in 4 seconds. (2500.50 entries/sec)
     
4. Imported 15k entries for dc=xyz,dc=com on master1

[root@ds ~]# /usr/lib64/dirsrv/slapd-master1/ldif2db -n MyData -i /var/lib/dirsrv/slapd-master1/ldif/xyz_data.ldif
importing data ...
[15/Jul/2016:15:04:49.187410762 +051800] import MyData: Import complete.  Processed 15002 entries in 4 seconds. (3750.50 entries/sec)
     
5. Verified that the data for both suffixes was imported properly on master1

[root@ds ~]# ldapsearch -xLLL -D 'cn=Directory Manager' -w secret123 -h localhost -p 389 -b 'dc=redhat,dc=com' | wc -l
120014
[root@ds ~]# ldapsearch -xLLL -D 'cn=Directory Manager' -w secret123 -h localhost -p 389 -b 'dc=xyz,dc=com' | wc -l
180014
 
6. Configured a 2-master MMR setup for both suffixes dc=redhat,dc=com and dc=xyz,dc=com by adding the required replication configuration entries on both masters (a sketch of the kind of entries involved follows)
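
A hedged sketch of what such entries look like for one suffix on master1 (the DN naming follows the agreement DN shown in step 8; the replica ID, bind DN, and credentials here are placeholders, not necessarily the exact values used in this test):

dn: cn=replicaA,cn="dc=redhat,dc=com",cn=mapping tree,cn=config
objectClass: top
objectClass: nsds5replica
objectClass: extensibleObject
cn: replicaA
nsds5replicaroot: dc=redhat,dc=com
nsds5replicaid: 1
nsds5replicatype: 3
nsds5flags: 1
nsds5replicabinddn: cn=replication manager,cn=config

dn: cn=Agreement1,cn=replicaA,cn="dc=redhat,dc=com",cn=mapping tree,cn=config
objectClass: top
objectClass: nsds5replicationagreement
cn: Agreement1
nsds5replicahost: localhost
nsds5replicaport: 1389
nsds5replicabinddn: cn=replication manager,cn=config
nsds5replicabindmethod: SIMPLE
nsds5replicacredentials: secret123
nsds5replicaroot: dc=redhat,dc=com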
 
7. Verified that no data has been replicated to master2 for either suffix

[root@ds ~]# ldapsearch -xLLL -D 'cn=Directory Manager' -w secret123 -h localhost -p 1389 -b 'dc=redhat,dc=com' dn | wc -l
No such object (32)
0
[root@ds ~]# ldapsearch -xLLL -D 'cn=Directory Manager' -w secret123 -h localhost -p 1389 -b 'dc=xyz,dc=com' dn | wc -l
No such object (32)
0
 
8. Initiated a total update for suffix dc=redhat,dc=com on master1

[root@ds ~]# ldapmodify -x -D 'cn=Directory Manager' -w secret123 -h localhost -p 389
dn: cn=Agreement1,cn=replicaA,cn="dc=redhat,dc=com",cn=mapping tree,cn=config
changetype: modify
replace: nsds5BeginReplicaRefresh
nsds5BeginReplicaRefresh: start
modifying entry "cn=Agreement1,cn=replicaA,cn="dc=redhat,dc=com",cn=mapping tree,cn=config"
 
9. While the total update was in progress, modified the last 11 entries under the dc=redhat,dc=com suffix, adding a mail attribute to each

[root@ds ~]# ldapmodify -x -D 'cn=Directory Manager' -w secret123 -h localhost -p 389 -f conf_files/ldif_files/redhat_mod.ldif
modifying entry "uid=tuser9990,ou=people,dc=redhat,dc=com"
 
modifying entry "uid=tuser9991,ou=people,dc=redhat,dc=com"
 
modifying entry "uid=tuser9992,ou=people,dc=redhat,dc=com"
 
modifying entry "uid=tuser9993,ou=people,dc=redhat,dc=com"
 
modifying entry "uid=tuser9994,ou=people,dc=redhat,dc=com"
 
modifying entry "uid=tuser9995,ou=people,dc=redhat,dc=com"
 
modifying entry "uid=tuser9996,ou=people,dc=redhat,dc=com"
 
modifying entry "uid=tuser9997,ou=people,dc=redhat,dc=com"
 
modifying entry "uid=tuser9998,ou=people,dc=redhat,dc=com"
 
modifying entry "uid=tuser9999,ou=people,dc=redhat,dc=com"
 
modifying entry "uid=tuser10000,ou=people,dc=redhat,dc=com"
 
10. Verified that data for dc=redhat,dc=com was consistent across both masters

[root@ds ~]# ldapsearch -xLLL -D 'cn=Directory Manager' -w secret123 -h localhost -p 389 -b 'dc=redhat,dc=com' | wc -l
120025
[root@ds ~]# ldapsearch -xLLL -D 'cn=Directory Manager' -w secret123 -h localhost -p 1389 -b 'dc=redhat,dc=com' | wc -l
120025
 
11. Performed steps 9 and 10 for suffix dc=xyz,dc=com on master1
 
12. Verified that data for dc=xyz,dc=com was consistent across both masters

[root@ds ~]# ldapsearch -xLLL -D 'cn=Directory Manager' -w secret123 -h localhost -p 389 -b 'dc=xyz,dc=com' | wc -l
180025
[root@ds ~]# ldapsearch -xLLL -D 'cn=Directory Manager' -w secret123 -h localhost -p 1389 -b 'dc=xyz,dc=com' | wc -l
180025

Comment 7 errata-xmlrpc 2016-11-03 20:35:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2594.html

