Bug 1240845 - cleanallruv should completely clean changelog
Summary: cleanallruv should completely clean changelog
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Noriko Hosoi
QA Contact: Viktor Ashirov
URL:
Whiteboard:
Depends On:
Blocks: 1133060 1260001 1270002
 
Reported: 2015-07-07 21:55 UTC by Noriko Hosoi
Modified: 2020-09-13 21:26 UTC
CC List: 6 users

Fixed In Version: 389-ds-base-1.3.4.0-6.el7
Doc Type: Bug Fix
Doc Text:
After the cleanAllRUV task finished, the change log still contained entries from the cleaned rid. Now, cleanAllRUV cleans the change log completely as expected.
Clone Of:
Clones: 1260001 1270002
Environment:
Last Closed: 2015-11-19 11:42:55 UTC
Target Upstream Version:
Embargoed:




Links
Github 389ds 389-ds-base issue 1539 (last updated 2020-09-13 21:26:51 UTC)
Red Hat Product Errata RHBA-2015:2351 (normal, SHIPPED_LIVE): 389-ds-base bug fix and enhancement update (last updated 2015-11-19 10:28:44 UTC)

Description Noriko Hosoi 2015-07-07 21:55:18 UTC
This bug is created as a clone of upstream ticket:
https://fedorahosted.org/389/ticket/48208

Currently cleanallruv triggers changelog trimming, but this potentially does not remove all the changelog entries that came from the invalid rid (i.e., the cleaned rid). There are certain conditions, such as the server crashing and the changelog being rescanned, that can cause the db RUV to get polluted with the invalid rid again. The RUV element for the invalid rid is usually also missing the URL, which causes issues for FreeIPA tools.
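
For reference, a cleanAllRUV run like the one discussed here is started by adding a task entry under cn=cleanallruv,cn=tasks,cn=config. A minimal sketch follows; the suffix, rid, and credentials are illustrative placeholders, not values taken from this bug:

# launch cleanAllRUV for rid 1232 under the dc=example,dc=com suffix (placeholder values)
ldapmodify -x -D "cn=directory manager" -W <<EOF
dn: cn=clean 1232,cn=cleanallruv,cn=tasks,cn=config
changetype: add
objectClass: extensibleObject
cn: clean 1232
replica-base-dn: dc=example,dc=com
replica-id: 1232
replica-force-cleaning: no
EOF

The task's progress is then reported in the errors log, as in the output quoted in comment 6 below.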

Comment 1 mreynolds 2015-07-08 18:12:39 UTC
Fixed upstream: 1.3.4/1.3.5

Comment 2 Viktor Ashirov 2015-07-09 20:53:26 UTC
Hi Mark,
could you please provide steps to verify? 

Thanks!

Comment 3 mreynolds 2015-07-10 11:46:32 UTC
(In reply to Viktor Ashirov from comment #2)
> Hi Mark,
> could you please provide steps to verify? 
> 
> Thanks!

Verification steps

[1]  Set up 2-way MMR: replica A (rid 1) & replica B (rid 2)
[2]  Get replication working - send an update from each side
[3]  Stop Replica B
[4]  Remove repl agreement on Replica A that points to Replica B
[5]  Run cleanallruv, and wait for it to finish
[6]  Kill replica A (kill -9 PID)
[7]  Start replica A
[8]  Run a search for the database RUV, and make sure you don't see replica B's RUV element (rid 2):

ldapsearch -xLLL -D "cn=directory manager" -W -b dc=example,dc=com \
 '(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))' nsds50ruv
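
The steps above verify the database RUV. Because this bug is specifically about the changelog being cleaned completely, the changelog itself could also be dumped and checked for CSNs from the cleaned rid, e.g. with the cl-dump utility shipped with 389-ds-base. A sketch, assuming the default port, illustrative credentials, and that cl-dump's decoded output prints a csn: line per change (a CSN embeds the originating rid as the 4 hex digits at offset 12, so rid 2 appears as 0002):

# dump the changelog, then count changes originating from rid 2 (placeholder credentials)
cl-dump -h localhost -p 389 -D "cn=directory manager" -w password -o /tmp/cl.out
grep -c 'csn: .\{12\}0002' /tmp/cl.out   # expect 0 after a complete clean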

Comment 6 Viktor Ashirov 2015-08-20 10:36:11 UTC
Build tested: 389-ds-base-1.3.4.0-13.el7.x86_64

[1]  Set up 2-way MMR: replica A (rid 1) & replica B (rid 2)
I had M1 (rid 1231) and M2 (rid 1232)
[2]  Get replication working - send an update from each side
[3]  Stop Replica B
# stop-dirsrv M2
[4]  Remove repl agreement on Replica A that points to Replica B
[5]  Run cleanallruv, and wait for it to finish
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Initiating CleanAllRUV Task... 
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Retrieving maxcsn... 
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Found maxcsn (00000000000000000000) 
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Cleaning rid (1232)... 
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Waiting to process all the updates from the deleted replica... 
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Waiting for all the replicas to be online... 
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Waiting for all the replicas to receive all the deleted replica updates... 
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Sending cleanAllRUV task to all the replicas... 
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Cleaning local ruv's... 
[20/Aug/2015:12:26:26 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Waiting for all the replicas to be cleaned... 
[20/Aug/2015:12:26:26 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Waiting for all the replicas to finish cleaning... 
[20/Aug/2015:12:26:26 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Successfully cleaned rid(1232). 

[6]  Kill replica A (kill -9 PID)
# kill -9 $(pgrep ns-slapd)

[7]  Start replica A
# start-dirsrv M1

[8]  Run a search for the database RUV, and make sure you don't see replica B's RUV element (rid 2):

# ldapsearch -H ldap://localhost:1189 -xLLL -D "cn=directory manager" -w Secret123 -b dc=example,dc=com  '(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))' nsds50ruv
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
nsds50ruv: {replicageneration} 55d5a81c000004cf0000
nsds50ruv: {replica 1231 ldap://rhel7ds.brq.redhat.com:1189} 55d5a848000104cf0
 000 55d5a84b000504cf0000


Unlike in 389-ds-base-1.3.4.0-5.el7, the search didn't return the RUV element for the second replica.
Marking as VERIFIED.

Comment 17 errata-xmlrpc 2015-11-19 11:42:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2351.html

