Bug 1507194
| Summary: | cleanallruv could break replication if anchor csn in ruv originated in deleted replica | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | German Parente <gparente> |
| Component: | 389-ds-base | Assignee: | Ludwig <lkrispen> |
| Status: | CLOSED ERRATA | QA Contact: | Viktor Ashirov <vashirov> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.4 | CC: | amsharma, atripath, bjarolim, enewland, lkrispen, moddi, mreynolds, msauton, nkinder, rmeggins, tbordaz, tmihinto |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | 389-ds-base-1.3.7.5-12 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-04-10 14:21:13 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
German Parente
2017-10-28 06:53:09 UTC
How often does this occur? Is there any chance of a reproducer? Do we have the database RUV entry from before and after the cleanAllRUV task was run? One comment says the cleanAllRUV task is still running. In the logs I see the cleanAllRUV task starting, but not finishing and, more importantly, not purging the changelog:

```
errors:[26/Oct/2017:15:32:36.203318929 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
errors:[26/Oct/2017:15:32:36.252696137 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
errors:[26/Oct/2017:17:29:38.831210809 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
errors:[26/Oct/2017:17:29:38.839862036 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
errors:[27/Oct/2017:02:27:39.415596254 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
errors:[27/Oct/2017:02:27:39.594251800 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
errors:[27/Oct/2017:02:27:39.776221420 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
errors:[27/Oct/2017:02:27:39.908068884 +0200] - INFO - NSMMReplicationPlugin - multimaster_extop_cleanruv - CleanAllRUV Task - Launching cleanAllRUV thread...
```

What I also don't see is the cleanAllRUV logging of the different cleaning phases. The next message should have been "Cleaning rid (62)...", but none of that logging is present in any of the SOS files. Perhaps the cleaning task had already fully run and its output is simply not in the errors log included in the SOS reports, but I can only comment on what is provided.
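For context, a cleanAllRUV task is launched by adding a task entry under `cn=cleanallruv,cn=tasks,cn=config`. The sketch below is illustrative, not taken from this case: the base DN `dc=example,dc=com` is a placeholder, and rid 62 is used only because it is the replica ID discussed above.

```ldif
# Hypothetical example: launch a cleanAllRUV task for replica ID 62.
# replica-base-dn is a placeholder; adapt it to the affected suffix.
dn: cn=clean 62,cn=cleanallruv,cn=tasks,cn=config
objectClass: extensibleObject
cn: clean 62
replica-base-dn: dc=example,dc=com
replica-id: 62
replica-force-cleaning: no
```

Once such an entry is added (e.g. via `ldapmodify -a`), the per-phase progress messages, starting with "Cleaning rid (62)...", should appear in the errors log; their absence here is what makes the log excerpt above inconclusive.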
From what I'm seeing, this issue is not related to the cleanAllRUV task. For more on troubleshooting the cleanAllRUV task, please see: http://www.port389.org/docs/389ds/FAQ/troubleshoot-cleanallruv.html#troubleshooting-the-cleanallruv-task

upstream tickets fixed

```
[root@qeos-44 replication]# pytest regression_test.py::test_cleanallruv_repl
================================================================ test session starts =================================================================
platform linux -- Python 3.6.3, pytest-3.4.0, py-1.5.2, pluggy-0.6.0
metadata: {'Python': '3.6.3', 'Platform': 'Linux-3.10.0-837.el7.x86_64-x86_64-with-redhat-7.5-Maipo', 'Packages': {'pytest': '3.4.0', 'py': '1.5.2', 'pluggy': '0.6.0'}, 'Plugins': {'metadata': '1.5.1', 'html': '1.16.1'}}
389-ds-base: 1.3.7.5-14.el7
nss: 3.34.0-4.el7
nspr: 4.17.0-1.el7
openldap: 2.4.44-12.el7
svrcore: 4.1.3-2.el7
rootdir: /mnt/tests/rhds/tests/upstream/ds/dirsrvtests/tests/suites/replication, inifile:
plugins: metadata-1.5.1, html-1.16.1
collected 1 item

regression_test.py .                                                                                                                          [100%]

============================================================= 1 passed in 164.25 seconds =============================================================
[root@qeos-44 replication]#
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0811