Bug 1507194 - cleanallruv could break replication if anchor csn in ruv originated in deleted replica
Summary: cleanallruv could break replication if anchor csn in ruv originated in deleted replica
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Ludwig
QA Contact: Viktor Ashirov
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-10-28 06:53 UTC by German Parente
Modified: 2021-06-10 13:23 UTC
CC List: 12 users

Fixed In Version: 389-ds-base-1.3.7.5-12
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 14:21:13 UTC
Target Upstream Version:
Embargoed:




Links
System                  ID                              Last Updated
Github                  389ds/389-ds-base issue 2472    2020-09-13 22:03:33 UTC
Github                  389ds/389-ds-base issue 2505    2020-09-13 22:04:18 UTC
Red Hat Product Errata  RHBA-2018:0811                  2018-04-10 14:22:03 UTC

Description German Parente 2017-10-28 06:53:09 UTC
Description of problem:

When cleanallruv deletes a replica, it also deletes all the changelog entries that originated in that replica; but if the RUV is still pointing to one of those changes, replication is broken.

Cleanallruv should re-calculate the anchor CSN, or avoid deleting from the changelog those updates whose CSN is still present in the RUV.
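
To illustrate the safeguard proposed above (a minimal sketch only, not 389-ds-base code; the in-memory changelog and RUV structures here are hypothetical):

# Minimal sketch of the proposed guard: when purging changelog entries that
# originated in a cleaned replica, keep any entry whose CSN the RUV still
# references, so the anchor CSN never points at a purged change.

def purge_cleaned_rid(changelog, ruv_csns, cleaned_rid):
    """changelog: iterable of (csn, rid) pairs; ruv_csns: set of CSN strings
    still referenced by the local RUV; cleaned_rid: replica id being cleaned."""
    kept = []
    for csn, rid in changelog:
        if rid == cleaned_rid and csn not in ruv_csns:
            continue  # safe to purge: from the cleaned replica, not anchored by the RUV
        kept.append((csn, rid))
    return kept

With the scenario below, the entry for CSN 59ecbf880001003e0000 would be retained (or the anchor CSN recalculated first) instead of leaving the RUV pointing at a purged change.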

For instance, we have seen this scenario:

[XX/Oct/2017:02:30:13.591580305 +0200] - ERR - agmt="cn=replica1-to-replica2" (replica1:389) - clcache_load_buffer - Can't locate CSN 59ecbf880001003e0000 in the changelog (DB rc=-30988). If replication stops, the consumer may need to be reinitialized.
[27/Oct/2017:02:30:13.592294772 +0200] - ERR - NSMMReplicationPlugin - changelog program - repl_plugin_name_cl - agmt="cn=replica1-to-replica2" (replica2:389): CSN 59ecbf880001003e0000 not found, we aren't as up to date, or we purged

3e => 62 (the replica id from the CSN, hex => decimal)

ipa-replica-manage clean-dangling-ruv 
unable to decode: {replica 62} 59ecbf880001003e0000 59ecc0090005003e0000
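
(For reference, the replica id is encoded inside the CSN string itself; a minimal decoding sketch using the standard 389-ds CSN layout of time, sequence, replica id and sub-sequence as hex fields. The helper name is ours.)

# A 389-ds CSN is 20 hex digits: change time (8) + sequence number (4)
# + originating replica id (4) + sub-sequence number (4).
from datetime import datetime, timezone

def decode_csn(csn):
    ts = int(csn[0:8], 16)        # seconds since the epoch
    seq = int(csn[8:12], 16)      # sequence number within that second
    rid = int(csn[12:16], 16)     # originating replica id
    subseq = int(csn[16:20], 16)  # sub-sequence number
    return datetime.fromtimestamp(ts, tz=timezone.utc), seq, rid, subseq

# The CSN from the errors log above: rid 0x003e == 62, the deleted replica.
print(decode_csn("59ecbf880001003e0000"))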

The cleanallruv operation was still pending, and clean-dangling-ruv has not managed to clean the corrupted RUV, but the CSN is no longer in the changelog.

The replica id 62 was already deleted.

Version-Release number of selected component (if applicable): 
389-ds-base-1.3.6.1-19.el7_4.x86_64

Comment 2 mreynolds 2017-10-30 20:04:51 UTC
How often does this occur?  Any chance there is a reproducer?

Do we have the database RUV entry from before and after the cleanallruv task was run?  
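
(For reference, a minimal python-ldap sketch of how such a before/after snapshot of the database RUV could be taken; the host, bind credentials and suffix below are placeholders, not values from this case.)

# Read the database RUV from the replication tombstone entry and print
# its nsds50ruv values; run once before and once after cleanallruv.
import ldap

RUV_FILTER = ("(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)"
              "(objectclass=nstombstone))")

conn = ldap.initialize("ldap://replica1.example.com:389")
conn.simple_bind_s("cn=Directory Manager", "password")
for dn, attrs in conn.search_s("dc=example,dc=com", ldap.SCOPE_SUBTREE,
                               RUV_FILTER, ["nsds50ruv"]):
    for ruv_element in attrs.get("nsds50ruv", []):
        print(ruv_element.decode())
conn.unbind_s()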

One comment says the cleanAllRUV task is still running.  In the logs I see the cleanAllRUV task starting, but not finishing and, more importantly, not purging the changelog.

errors:[26/Oct/2017:15:32:36.203318929 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
errors:[26/Oct/2017:15:32:36.252696137 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
errors:[26/Oct/2017:17:29:38.831210809 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
errors:[26/Oct/2017:17:29:38.839862036 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
errors:[27/Oct/2017:02:27:39.415596254 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
errors:[27/Oct/2017:02:27:39.594251800 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
errors:[27/Oct/2017:02:27:39.776221420 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
errors:[27/Oct/2017:02:27:39.908068884 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...


What I also don't see is the cleanAllRUV logging of the different phases of cleaning.  The next message should have been "Cleaning rid (62)...", but none of that logging is present in any of the SOS files.

Perhaps the cleaning task had already run to completion and its messages are not in the errors logs included in the SOS reports, but I can only comment on what is provided.  From what I'm seeing, this issue is not related to the cleanAllRUV task.

For more on troubleshooting the cleanAllRUV task please see:  

http://www.port389.org/docs/389ds/FAQ/troubleshoot-cleanallruv.html#troubleshooting-the-cleanallruv-task
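
(As a quick illustration of the check described above, a minimal sketch that pulls the task's progress messages out of an errors log; the log path is a placeholder for the instance's errors log.)

# Print CleanAllRUV task messages so it is easy to see whether the task ever
# got past "Launching cleanAllRUV thread..." to the "Cleaning rid (N)" phase.
PHASES = ("Launching cleanAllRUV thread", "Cleaning rid (")

with open("/var/log/dirsrv/slapd-INSTANCE/errors") as log:
    for line in log:
        if "CleanAllRUV Task" in line and any(p in line for p in PHASES):
            print(line.rstrip())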

Comment 8 Ludwig 2018-01-11 15:59:34 UTC
upstream tickets fixed

Comment 10 Amita Sharma 2018-02-01 11:09:21 UTC
[root@qeos-44 replication]# pytest regression_test.py::test_cleanallruv_repl
================================================================ test session starts =================================================================
platform linux -- Python 3.6.3, pytest-3.4.0, py-1.5.2, pluggy-0.6.0
metadata: {'Python': '3.6.3', 'Platform': 'Linux-3.10.0-837.el7.x86_64-x86_64-with-redhat-7.5-Maipo', 'Packages': {'pytest': '3.4.0', 'py': '1.5.2', 'pluggy': '0.6.0'}, 'Plugins': {'metadata': '1.5.1', 'html': '1.16.1'}}
389-ds-base: 1.3.7.5-14.el7
nss: 3.34.0-4.el7
nspr: 4.17.0-1.el7
openldap: 2.4.44-12.el7
svrcore: 4.1.3-2.el7

rootdir: /mnt/tests/rhds/tests/upstream/ds/dirsrvtests/tests/suites/replication, inifile:
plugins: metadata-1.5.1, html-1.16.1
collected 1 item                                                                                                                                     

regression_test.py .                                                                                                                           [100%]

============================================================= 1 passed in 164.25 seconds =============================================================
[root@qeos-44 replication]#

Comment 13 errata-xmlrpc 2018-04-10 14:21:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0811

