Bug 918708 - RUV is not getting updated for both Master and consumer
Summary: RUV is not getting updated for both Master and consumer
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Rich Megginson
QA Contact: Sankar Ramalingam
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-03-06 18:15 UTC by Nathan Kinder
Modified: 2020-09-13 20:21 UTC
CC List: 3 users

Fixed In Version: 389-ds-base-1.3.1.2-1.el7
Doc Type: Bug Fix
Doc Text:
Cause: Changing a replica from one type to another, e.g. Master -> Hub.
Consequence: The replication RUV was not updated correctly after making this change.
Fix: When the replica type is changed, the replication RUV is updated, the changelog RUV is cleaned, and the replication agreements are notified.
Result: The replication RUV reflects the correct replication settings.
Clone Of:
Environment:
Last Closed: 2014-06-13 10:18:40 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System: Github (389ds/389-ds-base), issue 532 — Last Updated: 2020-09-13 20:21:03 UTC

Description Nathan Kinder 2013-03-06 18:15:52 UTC
This bug is created as a clone of upstream ticket:
https://fedorahosted.org/389/ticket/532

Hi,

The RUV is not updated properly when changing the replica role from the Admin Server GUI. Reproducer details are below.

Reproducer:
===========
Step-1: Create two instances named INST1 and INST2.
Step-2: Register the two instances with the Admin Server.
Step-3: Assign the Hub role to INST1 from the Admin Server GUI: Configuration tab -> Replication -> userRoot -> enable replication and assign the Hub role -> Save.
Step-4: Assign the Consumer role to INST2 from the Admin Server GUI: Configuration tab -> Replication -> userRoot -> enable replication and assign the Consumer role -> Save.
Step-5: Change the role for INST1 from Hub to Master from the Admin Server GUI: Configuration tab -> Replication -> userRoot -> enable the Single Master check box and provide the appropriate replica ID -> Save.
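
For reference, the GUI role change in Step-5 corresponds to modifying the replica configuration entry under cn=config. A minimal ldapmodify LDIF sketch, assuming a hypothetical suffix dc=example,dc=com and replica ID 1 (placeholders — the suffix in this report is the dc=asiapacific,dc=hpqcorp,dc=net tree); nsDS5ReplicaType 3 marks an updatable (master) replica, 2 a read-only (hub/consumer) replica:

```ldif
# Hypothetical sketch: suffix and replica ID are placeholders, not
# values taken from this report.
dn: cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
changetype: modify
replace: nsDS5ReplicaType
nsDS5ReplicaType: 3
-
replace: nsDS5ReplicaId
nsDS5ReplicaId: 1
```

Applied with e.g. `ldapmodify -x -D "cn=Directory Manager" -W -f role.ldif`.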
 
Check the RUV

RUV:
=====
  dn: nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff,dc=asiapacific,dc=hpqcorp,dc=net
objectClass: top
objectClass: nsTombstone
objectClass: extensibleobject
nsds50ruv: {replicageneration} 50b5024c0000ffff0000
dc: asiapacific
=====

Step-6: Create a replication agreement between INST1 and INST2, then check the RUV. There is no change in the RUV here.

RUV:
=====
  dn: nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff,dc=asiapacific,dc=hpqcorp,dc=net
objectClass: top
objectClass: nsTombstone
objectClass: extensibleobject
nsds50ruv: {replicageneration} 50b5024c0000ffff0000
dc: asiapacific
=====

Step-7: Add an entry to INST1 and check the RUV.

RUV in INST1:
======
dn: nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff,dc=asiapacific,dc=hpqcorp,dc=net
objectClass: top
objectClass: nsTombstone
objectClass: extensibleobject
nsds50ruv: {replicageneration} 50b5024c0000ffff0000
nsds50ruv: {replica 65535 ldap://dirsrv12.asiapacific.hpqcorp.net:2389} 50b506150000ffff0000 50b506150000ffff0000
dc: asiapacific
nsruvReplicaLastModified: {replica 65535 ldap://dirsrv12.asiapacific.hpqcorp.net:2389} 50b50615

As you can see, the RUV in INST1, which is a supplier, still has replica ID 65535 even though a proper ID was given during the role change; and in the INST2 RUV below, where INST2 is the consumer, the max CSN is not getting updated. Could you please give your insight into this problem? Please note that it is reproducible in the latest version of 389 Directory Server, i.e. 1.2.15.


RUV in INST2:
========
dn: nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff,dc=asiapacific,dc=hpqcorp,dc=net
objectClass: top
objectClass: nsTombstone
objectClass: extensibleobject
nsds50ruv: {replicageneration} 50b5024c0000ffff0000
nsds50ruv: {replica 65535 ldap://dirsrv12.asiapacific.hpqcorp.net:2389} 50b506150000ffff0000
dc: asiapacific
nsruvReplicaLastModified: {replica 65535 ldap://dirsrv12.asiapacific.hpqcorp.net:2389} 50b50616
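
The CSN values in nsds50ruv encode the symptom directly: a 389-ds CSN string is 8 hex digits of Unix timestamp, then 4 of sequence number, 4 of replica ID, and 4 of sub-sequence number, and 0xffff (65535) is the reserved read-only replica ID. A small shell sketch (bash substring syntax) decoding the max CSN from the dumps above:

```shell
# Decode a 389-ds CSN: <timestamp(8 hex)><seqnum(4)><replica-id(4)><subseq(4)>.
csn=50b506150000ffff0000   # max CSN from the INST1 RUV above
ts=$((16#${csn:0:8}))
seq=$((16#${csn:8:4}))
rid=$((16#${csn:12:4}))
sub=$((16#${csn:16:4}))
# rid 65535 (0xffff) is the reserved read-only replica ID, which is why
# the supplier's RUV looks wrong after the Hub -> Master change.
printf 'time=%s seq=%d rid=%d subseq=%d\n' "$ts" "$seq" "$rid" "$sub"
# → time=1354040853 seq=0 rid=65535 subseq=0
```

After a correct role change, the supplier's own element in the RUV should carry the replica ID entered in the GUI, not 65535.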

Comment 1 Rich Megginson 2013-10-01 23:26:46 UTC
Moving all ON_QA bugs to MODIFIED in order to add them to the errata (bugs in the ON_QA state can't be added to an errata). When the errata is created, the bugs should be automatically moved back to ON_QA.

Comment 3 Sankar Ramalingam 2014-02-25 14:56:07 UTC
Admin console related bugs cannot be verified with the RHEL7 release, since it's not supported.

The clone of this bug will be verified in the RHEL7.1 release. For the RHEL7 release, marking the bug as verified as Sanity only.

Build tested - 389-ds-base-1.3.1.6-19.el7.x86_64

Comment 4 Ludek Smid 2014-06-13 10:18:40 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.

