Bug 830355 - [RFE] improve cleanruv functionality
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: 389-ds-base
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Rich Megginson
QA Contact: Sankar Ramalingam
Keywords: FutureFeature
Depends On:
Blocks:
Reported: 2012-06-08 17:51 EDT by Nathan Kinder
Modified: 2013-02-21 03:17 EST (History)
CC: 4 users

See Also:
Fixed In Version: 389-ds-base-1.2.11.15-7.el6
Doc Type: Release Note
Doc Text:
New CLEANALLRUV Operation: Obsolete elements in the database Replica Update Vector (RUV) can be removed with the CLEANRUV operation, which removes them from a single supplier/master. Red Hat Enterprise Linux 6.4 adds a new CLEANALLRUV operation, which removes obsolete RUV data from all replicas and needs to be run on only a single supplier/master.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-02-21 03:17:57 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Nathan Kinder 2012-06-08 17:51:38 EDT
This bug is created as a clone of upstream ticket:
https://fedorahosted.org/389/ticket/337

If you have many masters and you need to disable one, you are supposed to run the CLEANRUV task afterwards. The problem is that other masters could still be sending updates referencing the old RUV element, which then adds the RUV element back to the "just cleaned" replica.

The current workaround is to run the CLEANRUV task simultaneously on all of the replicas, which is practically impossible for very large deployments.
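With the CLEANALLRUV operation delivered by this bug, the cleanup is launched once, on a single supplier/master, by adding a task entry under cn=cleanallruv,cn=tasks,cn=config. A minimal sketch of such a task entry, assuming a hypothetical replica ID 7 and suffix dc=example,dc=com (the attribute names match the task entries quoted in the comments of this bug):

```ldif
dn: cn=clean 7,cn=cleanallruv,cn=tasks,cn=config
objectclass: extensibleObject
cn: clean 7
replica-base-dn: dc=example,dc=com
replica-id: 7
```

The server then propagates the cleanup to all other replicas, so the task does not need to be repeated on each master.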
Comment 1 RHEL Product and Program Management 2012-07-10 03:10:31 EDT
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.
Comment 2 RHEL Product and Program Management 2012-07-10 19:01:11 EDT
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development.  This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.
Comment 3 Jenny Galipeau 2012-07-18 14:19:05 EDT
See upstream ticket for implementation details
Comment 6 Sankar Ramalingam 2012-12-18 03:17:58 EST
cleanallruv acceptance tests failed for cleanallruv abort task.
                                                                  
----------------- Starting Test cleanallruv_03 -------------------------
Running - cleanallruv_03 to verify Running cleanallruv abort task before cleanallruv task is fired, run cleanallruv task when one of the replica is down and abort task before bringing the server up

dn: cn=H1Task,cn=cleanallruv,cn=tasks,cn=config
cn: H1Task
objectClass: extensibleObject
objectClass: top
replica-base-dn: dc=cleanallruv,dc=com
replica-id: 1235
nstaskcurrentitem: 8
nstasktotalitems: 1
nstasklog:: Q2xlYW5pbmcgcmlkICgxMjM1KS4uLgpXYWl0aW5nIHRvIHByb2Nlc3MgYWxsIHRoZ
 SB1cGRhdGVzIGZyb20gdGhlIGRlbGV0ZWQgcmVwbGljYS4uLgpXYWl0aW5nIGZvciBhbGwgdGhlI
 HJlcGxpY2FzIHRvIGJlIG9ubGluZS4uLgpSZXBsaWNhIG5vdCBvbmxpbmUgKGFnbXQ9ImNuPTIxO
 TIxX3RvXzIxOTA2IiAoZGVsbC1wZTI4MDAtMDE6MjE5MDYpKQpOb3QgYWxsIHJlcGxpY2FzIG9ub
 GluZSwgcmV0cnlpbmcgaW4gMTAgc2Vjb25kcy4uLgpSZXBsaWNhIG5vdCBvbmxpbmUgKGFnbXQ9I
 mNuPTIxOTIxX3RvXzIxOTA2IiAoZGVsbC1wZTI4MDAtMDE6MjE5MDYpKQpOb3QgYWxsIHJlcGxpY
 2FzIG9ubGluZSwgcmV0cnlpbmcgaW4gMjAgc2Vjb25kcy4uLgpUYXNrIGFib3J0ZWQgZm9yIHJpZ
 CgxMjM1KS4=
nstaskstatus: Task aborted for rid(1235).
nstaskexitcode: 0
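The nstasklog value in the task entry is base64-encoded (the double colon after an attribute name in LDIF marks a base64 value). It can be decoded with the standard base64 utility; the sample below re-encodes only the final status line of such a log, for illustration:

```shell
# Decode an LDIF base64 attribute value (sample: the final status line only)
echo 'VGFzayBhYm9ydGVkIGZvciByaWQoMTIzNSku' | base64 -d
# -> Task aborted for rid(1235).
```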
CleanAllRUV task with task name - H1Task for replica id - 1235 failed
Test result for CleanReplAgrmt, Checking whether cleanallruv task is completed for Replica ID-1235 on M1-21921, Actual_Result=1, Expected_Result=0
TestCase [cleanallruv03] result-> [FAIL]
Comment 7 mreynolds 2012-12-18 09:00:39 EST
The abort task has been modified, for this particular test you need to add this attr/value to the task entry:

replica-certify-all: off
Comment 8 Sankar Ramalingam 2013-01-07 06:32:24 EST
(In reply to comment #7)
> The abort task has been modified, for this particular test you need to add
> this attr/value to the task entry:
> 
> replica-certify-all: off

Mark, can you provide more information on where to add this attribute?
Comment 9 mreynolds 2013-01-07 10:24:42 EST
(In reply to comment #8)
> (In reply to comment #7)
> > The abort task has been modified, for this particular test you need to add
> > this attr/value to the task entry:
> > 
> > replica-certify-all: off
> 
> Mark, can you provide more information on where to add this attribute?

You add it to the task entry:

 dn: cn=abort 222, cn=abort cleanallruv, cn=tasks, cn=config
 objectclass: extensibleObject
 cn: abort 222
 replica-base-dn: dc=example,dc=com
 replica-id: 222
 replica-certify-all: off
Comment 10 Sankar Ramalingam 2013-01-21 04:49:28 EST
dn: cn=cleanallruv,cn=tasks,cn=config
objectClass: top
objectClass: extensibleObject
cn: cleanallruv

dn: cn=H1Task,cn=cleanallruv,cn=tasks,cn=config
cn: H1Task
objectClass: 

CleanAllRUV task with task name - H1Task for replica id - 1235 failed
Test result for CleanReplAgrmt, Checking whether the task is completed for Replica ID-1235 on M1-21921, Actual_Result=1, Expected_Result=1
TestCase [cleanallruv03] result-> [PASS]
Running cleanallruv abort task before starting H1
ldap_add: Operations error
ldap_add: additional info: Invalid value for "replica-certify-all", the value must be "yes" or "no".
adding new entry cn=H1Abort,cn=abort cleanallruv,cn=tasks,cn=config

------------------------------------

It says the value must be "yes" or "no". I am re-running the tests with "no" and will update the bug once I get the results.
Comment 11 Sankar Ramalingam 2013-01-21 05:42:28 EST
replica-certify-all: no
Successfully completed Running CleanAllRUV Abort task on M1
Sleeping for 60 secs to make sure the task is completed on all masters/consumers
Expecting the cleanallruv task  should be cleaned up
dn: cn=abort cleanallruv,cn=tasks,cn=config
objectClass: top
objectClass: 

CleanAllRUV task with task name - H1Abort for replica id - 1235 failed
Test result for CleanReplAgrmt, Checking whether cleanallruv abort task is completed for Replica ID-1235 on M1-21921, Actual_Result=1, Expected_Result=0
TestCase [cleanallruv03] result-> [FAIL]
Comment 12 Sankar Ramalingam 2013-01-21 06:14:07 EST
Running cleanallruv abort task before starting H1
adding new entry cn=H1Abort,cn=abort cleanallruv,cn=tasks,cn=config

replica-certify-all: yes
Successfully completed Running CleanAllRUV Abort task on M1
Sleeping for 60 secs to make sure the task is completed on all masters/consumers
Expecting the cleanallruv task  should be cleaned up
dn: cn=abort cleanallruv,cn=tasks,cn=config
objectClass: top
objectClass: 

CleanAllRUV task with task name - H1Abort for replica id - 1235 failed
Test result for CleanReplAgrmt, Checking whether cleanallruv abort task is completed for Replica ID-1235 on M1-21921, Actual_Result=1, Expected_Result=0
TestCase [cleanallruv03] result-> [FAIL]
Comment 13 mreynolds 2013-01-21 09:26:40 EST
How are you checking if the task is complete?  

The actual task entry does not get removed until after a server restart, so checking for the task entry under cn=config is not the correct procedure.

You should really check the error log, or the task entry itself for: "Successfully aborted task"
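In other words, completion is reported in the task entry's attributes (or in the error log), not by the entry's removal. A hypothetical search result for a successfully finished abort task would carry the status attribute like this (entry name H1Abort taken from the test output above):

```ldif
dn: cn=H1Abort,cn=abort cleanallruv,cn=tasks,cn=config
nstaskstatus: Successfully aborted task
```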
Comment 14 Jenny Galipeau 2013-01-22 10:28:14 EST
Closing the feature bug as VERIFIED. Please open a specific bug for any issue(s) you are seeing, if the issue turns out not to be a test script issue.
Comment 16 errata-xmlrpc 2013-02-21 03:17:57 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0503.html
