Bug 1240845
| Summary: | cleanallruv should completely clean changelog |
|---|---|
| Product: | Red Hat Enterprise Linux 7 |
| Component: | 389-ds-base |
| Version: | 7.0 |
| Status: | CLOSED ERRATA |
| Severity: | urgent |
| Priority: | urgent |
| Reporter: | Noriko Hosoi <nhosoi> |
| Assignee: | Noriko Hosoi <nhosoi> |
| QA Contact: | Viktor Ashirov <vashirov> |
| CC: | gparente, mreynolds, msauton, nkinder, rmeggins, tbordaz |
| Target Milestone: | rc |
| Keywords: | ZStream |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Fixed In Version: | 389-ds-base-1.3.4.0-6.el7 |
| Doc Type: | Bug Fix |
| Doc Text: | After the cleanAllRUV task finished, the change log still contained entries from the cleaned rid. Now, cleanAllRUV cleans the change log completely as expected. |
| Clones: | 1260001, 1270002 |
| Bug Blocks: | 1133060, 1260001, 1270002 |
| Last Closed: | 2015-11-19 11:42:55 UTC |
Description (Noriko Hosoi, 2015-07-07 21:55:18 UTC):

Fixed upstream: 1.3.4/1.3.5

---

Viktor Ashirov:

Hi Mark, could you please provide steps to verify? Thanks!

---

(In reply to Viktor Ashirov from comment #2)
> Hi Mark, could you please provide steps to verify?

Verification steps:

[1] Setup 2-way MMR: replica A (rid 1) & replica B (rid 2)
[2] Get replication working - send an update from each side
[3] Stop replica B
[4] Remove the replication agreement on replica A that points to replica B
[5] Run cleanallruv, and wait for it to finish
[6] Kill replica A (kill -9 PID)
[7] Start replica A
[8] Run a search for the database RUV, and make sure you don't see replica B's RUV element (rid 2):

    ldapsearch -xLLL -D "cn=directory manager" -W -b dc=example,dc=com \
      '(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))' nsds50ruv

---

Build tested: 389-ds-base-1.3.4.0-13.el7.x86_64
[1] Setup 2 way MMR: replica A(rid 1) & replica B(rid 2)
I had M1 (rid 1231) and M2 (rid 1232)
[2] Get replication working - send an update from each side
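For step [2], an update can be sent from each side with ldapmodify; the entry DN and attribute value below are illustrative assumptions, not taken from the bug:

```ldif
# Hypothetical test update (entry and value are assumptions)
dn: uid=tuser,ou=People,dc=example,dc=com
changetype: modify
replace: description
description: replication test from M1
```

Sending one such modify through each master and confirming it arrives on the other side shows replication is working before the agreement is removed.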
[3] Stop Replica B
# stop-dirsrv M2
[4] Remove repl agreement on Replica A that points to Replica B
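Step [4] amounts to deleting the agreement entry under the replica configuration. The agreement cn used here ("to_M2") is an assumed name; the actual cn depends on how the agreement was created:

```ldif
# Assumed agreement name; check cn=replica,... for the real entry first
dn: cn=to_M2,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
changetype: delete
```

This can be applied with something like `ldapmodify -x -D "cn=directory manager" -W -f remove-agmt.ldif` against M1.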
[5] Run cleanallruv, and wait for it to finish
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Initiating CleanAllRUV Task...
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Retrieving maxcsn...
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Found maxcsn (00000000000000000000)
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Cleaning rid (1232)...
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Waiting to process all the updates from the deleted replica...
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Waiting for all the replicas to be online...
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Waiting for all the replicas to receive all the deleted replica updates...
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Sending cleanAllRUV task to all the replicas...
[20/Aug/2015:12:26:25 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Cleaning local ruv's...
[20/Aug/2015:12:26:26 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Waiting for all the replicas to be cleaned...
[20/Aug/2015:12:26:26 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Waiting for all the replicas to finish cleaning...
[20/Aug/2015:12:26:26 +0200] NSMMReplicationPlugin - CleanAllRUV Task (rid 1232): Successfully cleaned rid(1232).
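The task logged above is started by adding an entry under cn=cleanallruv,cn=tasks,cn=config. A sketch using the rid from this run (the suffix matches the test setup; attribute values would need adjusting for other deployments):

```ldif
dn: cn=clean 1232,cn=cleanallruv,cn=tasks,cn=config
objectclass: extensibleObject
cn: clean 1232
replica-base-dn: dc=example,dc=com
replica-id: 1232
replica-force-cleaning: no
```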
[6] Kill replica A (kill -9 PID)
# kill -9 $(pgrep ns-slapd)
[7] Start replica A
# start-dirsrv M1
[8] Run a search for the database RUV, and make sure you don't see replica B's RUV element (rid 1232 in this setup):
# ldapsearch -H ldap://localhost:1189 -xLLL -D "cn=directory manager" -w Secret123 -b dc=example,dc=com '(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))' nsds50ruv
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
nsds50ruv: {replicageneration} 55d5a81c000004cf0000
nsds50ruv: {replica 1231 ldap://rhel7ds.brq.redhat.com:1189} 55d5a848000104cf0000 55d5a84b000504cf0000
The search did not return an RUV element for the second replica, as it did in 389-ds-base-1.3.4.0-5.el7.
Marking as VERIFIED.
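The step [8] check can be scripted. A minimal sketch that asserts the cleaned rid is absent from saved ldapsearch output; the file name `ruv.out` and the inlined sample (mirroring the result above) are illustrative:

```shell
#!/bin/sh
# Assert the cleaned rid no longer appears in the database RUV.
# ruv.out stands in for saved output of the step [8] ldapsearch;
# the sample below mirrors the verified result (rid 1232 already cleaned).
RID=1232
cat > ruv.out <<'EOF'
nsds50ruv: {replicageneration} 55d5a81c000004cf0000
nsds50ruv: {replica 1231 ldap://rhel7ds.brq.redhat.com:1189} 55d5a848000104cf0000 55d5a84b000504cf0000
EOF
if grep -q "{replica $RID " ruv.out; then
    echo "FAIL: rid $RID still present in RUV"
    exit 1
fi
echo "OK: rid $RID cleaned"
```

In a real run, the heredoc would be replaced by redirecting the ldapsearch from step [8] into `ruv.out`.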
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2351.html