Bug 2057058 - Import may break replication because changelog starting csn may not be created
Summary: Import may break replication because changelog starting csn may not be created
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: 389-ds-base
Version: 8.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 8.7
Assignee: Pierre Rogier
QA Contact: RHDS QE
Docs Contact: Zuzana Zoubkova
URL:
Whiteboard: sync-to-jira
Depends On: 2057056
Blocks: 2057059 2113056
 
Reported: 2022-02-22 16:22 UTC by mreynolds
Modified: 2022-11-08 10:30 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
.Import from an LDIF file with replication metadata now works correctly
Previously, importing an LDIF file with replication metadata could cause the replication to fail in certain cases:
In the first case, a replication update vector (RUV) entry placed before the suffix entry in an imported LDIF file was ignored. As a consequence, the replication with the imported replica failed, because of a generation ID mismatch. This update ensures that Directory Server writes the skipped RUV entry at the end of the import.
In the second case, a changelog reinitialized after an RUV mismatch did not contain the starting change sequence numbers (CSNs). As a consequence, the replication with the imported replica failed, because of a missing CSN in the changelog. This update ensures that Directory Server creates the RUV `maxcsn` entries when reinitializing the changelog.
As a result, with this update, administrators do not have to reinitialize the replication after importing from an LDIF file that contains replication metadata.
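A minimal sketch of how the first case can be checked: after importing an LDIF that contains replication metadata, every supplier should still report the same replication generation ID in its database RUV. The sketch uses python-ldap directly; the server URIs, bind credentials and suffix are illustrative (loosely based on the log excerpts in the description below), not taken from this bug.

# Sketch only: read the {replicageneration} value from the database RUV
# tombstone on two suppliers and make sure they still match after an import.
import ldap

SUFFIX = 'dc=example,dc=com'        # illustrative suffix
RUV_FILTER = ('(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)'
              '(objectclass=nstombstone))')

def replica_generation(uri, binddn='cn=Directory Manager', password='password'):
    conn = ldap.initialize(uri)
    conn.simple_bind_s(binddn, password)          # illustrative credentials
    dn, attrs = conn.search_s(SUFFIX, ldap.SCOPE_SUBTREE, RUV_FILTER,
                              ['nsds50ruv'])[0]
    for val in attrs['nsds50ruv']:
        if val.startswith(b'{replicageneration}'):
            return val.split()[1]
    return None

# A mismatch here is the "generation ID mismatch" failure described above.
assert replica_generation('ldap://linux.home:39001') == \
       replica_generation('ldap://linux.home:39002')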
Clone Of: 2057056
Clones: 2057059 2113056
Environment:
Last Closed: 2022-11-08 09:38:12 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links
Github 389ds/389-ds-base issue 5098 (closed): Multiple issues around replication and CI test test_online_reinit_may_hang (last updated 2022-06-10 11:57:29 UTC)
Red Hat Issue Tracker IDMDS-2026 (last updated 2022-02-22 16:38:09 UTC)
Red Hat Issue Tracker IDMDS-2264 (last updated 2022-05-09 08:01:10 UTC)
Red Hat Issue Tracker RHELPLAN-113066 (last updated 2022-02-22 16:38:29 UTC)
Red Hat Product Errata RHBA-2022:7552 (last updated 2022-11-08 09:38:51 UTC)

Description mreynolds 2022-02-22 16:22:37 UTC
+++ This bug was initially created as a clone of Bug #2057056 +++

Description of problem:

Import and bulk import may break replication (the RUV may be lost, and the changelog starting csn may not be created).

Investigation of a CI test failure while working on issue 4939 (LMDB import redesign) showed that the problem was not in the lmdb import code, but that there were several issues involved:

1. The test itself. (It should not have to reinitialize the replication after the import, i.e. replication should not be broken by the import.)

2. The bdb import (the RUV entry is skipped).

3. The changelog initialization code (if the changelog is empty, either the starting csn of the database RUV should be updated, or dummy changes for the max csn(s) should be created).

4. Replication should not be broken when reimporting a master from an ldif that includes replication data.
So IMHO a step is missing in the CI test: check that replication is still working after the import and before the total update (a sketch of such a check follows the log excerpts below).
(A total update may still be needed because the hang that originally led to this bug was occurring during the total update.)

5. While running the test on supplier1 in bdb mode:

[07/Jan/2022:22:34:49.975351682 +0100] - WARN - bdb_import_foreman - import userRoot: Skipping entry "nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff,dc=example,dc=com" which has no parent, ending at line 15 of file "/home/progier/sb/i4939/tst/ci-install/var/lib/dirsrv/slapd-supplier1/ldif/supplier1.ldif"
...
[07/Jan/2022:22:34:50.647405077 +0100] - INFO - bdb_public_bdb_import_main - import userRoot: Import complete. Processed 20 entries (1 were skipped) in 1 seconds. (20.00 entries/sec)
...
[07/Jan/2022:22:34:55.111294779 +0100] - DEBUG - NSMMReplicationPlugin - changelog max RUV: {replicageneration} 61d8b1e2000000010000
[07/Jan/2022:22:34:55.114265534 +0100] - DEBUG - NSMMReplicationPlugin - changelog max RUV: {replica 1} 61d8b1e2000100010000 61d8b1e9000100010000 00000000
[07/Jan/2022:22:34:55.117366657 +0100] - DEBUG - NSMMReplicationPlugin - changelog max RUV: {replica 2} 61d8b1e6000100020000 61d8b1ea000200020000 00000000
[07/Jan/2022:22:34:55.120676337 +0100] - DEBUG - NSMMReplicationPlugin - database RUV: {replicageneration} 61d8b1ff000000010000
[07/Jan/2022:22:34:55.123860710 +0100] - DEBUG - NSMMReplicationPlugin - database RUV: {replica 1 ldap://linux.home:39001}
==> So the database RUV was recreated empty (with another generation number) by the import,
which is clearly wrong as it breaks the replication.
6. While running the test on supplier1 in lmdb mode:
[07/Jan/2022:22:35:58.666675869 +0100] - INFO - dbmdb_public_dbmdb_import_main - import userRoot: Import complete. Processed 20 entries in 1 seconds. (20.00 entries/sec)
...
[07/Jan/2022:22:36:02.388680247 +0100] - DEBUG - NSMMReplicationPlugin - changelog program - cldb_SetReplicaDB: cldb is set
[07/Jan/2022:22:36:02.430453701 +0100] - DEBUG - NSMMReplicationPlugin - changelog max RUV: {replicageneration} 61d8b229000000010000
[07/Jan/2022:22:36:02.433352533 +0100] - DEBUG - NSMMReplicationPlugin - database RUV: {replicageneration} 61d8b229000000010000
[07/Jan/2022:22:36:02.436294638 +0100] - DEBUG - NSMMReplicationPlugin - database RUV: {replica 1 ldap://linux.home:39001} 61d8b22a000000010000 61d8b231000100010000 00000000
[07/Jan/2022:22:36:02.439217513 +0100] - DEBUG - NSMMReplicationPlugin - database RUV: {replica 2 ldap://linux.home:39002} 61d8b22e000100020000 61d8b232000100020000 00000000
...
[07/Jan/2022:22:36:13.797250968 +0100] - ERR - agmt="cn=002" (linux:39002) - clcache_load_buffer - Can't locate CSN 61d8b231000100010000 in the changelog (DB rc=-12797). If replication stops, the consumer may need to be reinitialized.
[07/Jan/2022:22:36:13.800517424 +0100] - DEBUG - agmt="cn=002" (linux:39002) - clcache_load_buffer - rc=-12797
[07/Jan/2022:22:36:13.803327610 +0100] - DEBUG - NSMMReplicationPlugin - changelog program - _cl5CheckMissingCSN - the change with 61d8b231000100010000 csn was never logged because it was imported during replica initialization


==> This time the database RUV is rightly preserved, but replication fails because the supplier1 maxcsn is not in the changelog. (Either the mincsn should have the same value so the error is ignored (to be confirmed), or some dummy changelog records should exist in the changelog.)
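
The extra CI step suggested in item 4 could look roughly like the following lib389/pytest sketch. It assumes the two-supplier fixture used by regression_m2_test.py (instances exposed as topology_m2.ms["supplier1"]/["supplier2"]) and that the offline db2ldif/ldif2db helpers accept the arguments shown; the LDIF path is illustrative.

# Sketch only: re-import supplier1 from an LDIF exported with replication
# data, then check that replication still works BEFORE any total update.
from lib389._constants import DEFAULT_SUFFIX
from lib389.replica import ReplicationManager
from lib389.topologies import topology_m2   # pytest fixture

def test_repl_survives_import(topology_m2):
    s1 = topology_m2.ms["supplier1"]
    s2 = topology_m2.ms["supplier2"]
    repl = ReplicationManager(DEFAULT_SUFFIX)
    export_ldif = '/tmp/supplier1-repl.ldif'   # illustrative path

    # Offline export including replication metadata (RUV, state information).
    s1.stop()
    s1.db2ldif(bename='userRoot', suffixes=[DEFAULT_SUFFIX],
               excludeSuffixes=None, encrypt=False, repl_data=True,
               outputfile=export_ldif)
    # Re-import the same LDIF; this must not lose the RUV or the
    # changelog starting CSNs.
    s1.ldif2db(bename='userRoot', suffixes=[DEFAULT_SUFFIX],
               excludeSuffixes=None, encrypt=False, import_file=export_ldif)
    s1.start()

    # Replication must still work in both directions without reinitialization.
    repl.test_replication(s1, s2)
    repl.test_replication(s2, s1)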



Upstream ticket:

https://github.com/389ds/389-ds-base/issues/5098
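
For reference when reading the CSNs in the log excerpts above: a 389-ds CSN string is the hexadecimal timestamp followed by the sequence number, the replica ID and the sub-sequence number (8+4+4+4 hex digits). A small illustrative sketch:

# Sketch: decode a 389-ds CSN string into its components.
from datetime import datetime, timezone

def decode_csn(csn):
    ts, seq, rid, subseq = csn[:8], csn[8:12], csn[12:16], csn[16:20]
    return {
        'time': datetime.fromtimestamp(int(ts, 16), tz=timezone.utc),
        'seqnum': int(seq, 16),
        'rid': int(rid, 16),
        'subseq': int(subseq, 16),
    }

# The CSN that clcache_load_buffer cannot find above decodes to replica ID 1,
# i.e. it is supplier1's own maxcsn, which is why it is expected to be
# present in supplier1's changelog.
print(decode_csn('61d8b231000100010000'))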

Comment 8 thierry bordaz 2022-06-10 11:57:30 UTC
Fixed in 389-ds-base-1.4.3.30-3.module+el8.7.0+15600+347cafc6 => MODIFIED

Comment 12 bsmejkal 2022-07-18 07:44:44 UTC
============================================================================================================ test session starts =============================================================================================================
platform linux -- Python 3.6.8, pytest-7.0.1, pluggy-1.0.0 -- /usr/bin/python3.6
cachedir: .pytest_cache
metadata: {'Python': '3.6.8', 'Platform': 'Linux-4.18.0-402.el8.x86_64-x86_64-with-redhat-8.7-Ootpa', 'Packages': {'pytest': '7.0.1', 'py': '1.11.0', 'pluggy': '1.0.0'}, 'Plugins': {'metadata': '1.11.0', 'html': '3.1.1', 'libfaketime': '0.1.2', 'flaky': '3.7.0'}}
389-ds-base: 1.4.3.30-3.module+el8.7.0+15600+347cafc6
nss: 3.79.0-4.el8_6
nspr: 4.34.0-3.el8_6
openldap: 2.4.46-18.el8
cyrus-sasl: 2.1.27-6.el8_5
FIPS: disabled
rootdir: /mnt/tests/rhds/tests/upstream/ds/dirsrvtests, configfile: pytest.ini
plugins: metadata-1.11.0, html-3.1.1, libfaketime-0.1.2, flaky-3.7.0
collected 1 item                                                                                                                                                                                                                             

dirsrvtests/tests/suites/replication/regression_m2_test.py::test_online_reinit_may_hang PASSED                                                                                                                                         [100%]

======================================================================================================= 1 passed, 6 warnings in 55.71s =======================================================================================================

Marking as VERIFIED.

Comment 16 errata-xmlrpc 2022-11-08 09:38:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (389-ds:1.4 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:7552

