Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Description of problem:
This is more a suggestion or an RFE than a bug.
In RHEL 7.3, the replication topology plugin manages the replication agreements.
At one customer we found an inconsistency: a replica had been deleted, but when listing the agreements there was still an agreement from a node to the deleted replica. This is confusing: the customer could think the replica still exists. It is no longer under "cn=masters", yet it is still listed by
ipa-replica-manage list -v hostname
because the command runs this search (from the access log):
[06/Dec/2016:12:04:51.121311405 +0300] conn=243934 op=0 BIND dn="cn=directory manager" method=128 version=3
[06/Dec/2016:12:04:51.121383022 +0300] conn=243934 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="cn=directory manager"
[06/Dec/2016:12:04:51.121914294 +0300] conn=243934 op=1 SRCH base="cn=mapping tree,cn=config" scope=2 filter="(|(&(objectClass=nsds5ReplicationAgreement)(nsDS5ReplicaRoot=dc=xxxx,dc=yyyy))(objectClass=nsDSWindowsReplicationAgreement))" attrs=ALL
[06/Dec/2016:12:04:51.122883897 +0300] conn=243934 op=1 RESULT err=0 tag=101 nentries=4 etime=0
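For illustration, the filter from that search can be reconstructed as below. This is only a sketch: the helper name is hypothetical (not actual IPA code), and the suffix "dc=xxxx,dc=yyyy" is the anonymized placeholder from the log.

```python
# Sketch only: rebuild the filter seen in the access log above.
# agreement_filter() is a hypothetical helper, not part of ipa-replica-manage.

def agreement_filter(replica_root):
    """Return the LDAP filter ipa-replica-manage appears to use when
    searching under cn=mapping tree,cn=config with scope=2 (subtree)."""
    return ("(|(&(objectClass=nsds5ReplicationAgreement)"
            "(nsDS5ReplicaRoot=" + replica_root + "))"
            "(objectClass=nsDSWindowsReplicationAgreement))")

print(agreement_filter("dc=xxxx,dc=yyyy"))
```

Note that the filter matches every agreement under the replica root, whether or not the topology plugin manages it, which is why a stale agreement to a deleted replica still shows up in the listing.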
I would propose either adding a flag showing that the agreement is managed by the topology plugin, by querying:
objectClass: ipaReplTopoManagedAgreement
ipaReplTopoManagedAgreementState: managed agreement - controlled by topology plugin
or simply not showing such agreements at all.
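Either variant of the proposal could be sketched roughly as follows. This is an assumption-laden illustration, not IPA code: the entries are mocked as (dn, attributes) pairs the way python-ldap returns them, and annotate_agreements() is a name I made up.

```python
# Hypothetical sketch of the RFE: given agreement entries as (dn, attrs)
# pairs, either tag the topology-managed ones or drop them from the listing.

MANAGED_OC = "ipaReplTopoManagedAgreement"

def annotate_agreements(entries, hide_managed=False):
    out = []
    for dn, attrs in entries:
        # objectClass matching in LDAP is case-insensitive
        managed = MANAGED_OC.lower() in (oc.lower() for oc in attrs.get("objectClass", []))
        if managed and hide_managed:
            continue  # variant (b): do not list managed agreements at all
        label = " (managed by topology plugin)" if managed else ""
        out.append(dn + label)  # variant (a): flag managed agreements
    return out

# Mock data: one topology-managed agreement, one stale manual one
sample_entries = [
    ("cn=meToreplica2", {"objectClass": ["nsds5ReplicationAgreement", MANAGED_OC]}),
    ("cn=meTodeletedreplica", {"objectClass": ["nsds5ReplicationAgreement"]}),
]

print(annotate_agreements(sample_entries))
print(annotate_agreements(sample_entries, hide_managed=True))
```

With the flag variant, the stale agreement to the deleted replica stands out as the only one without the "managed" label, which is exactly the signal that helped troubleshooting here.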
In any case, the customer's confusion turned out to be useful: it let us detect an agreement that had to be deleted manually. So the information helped with troubleshooting, but it was very confusing for the customer, who thought the node was still there.
Version-Release number of selected component (if applicable): ipa-server-4.4.0-12.el7.x86_64
How reproducible:
This customer is the one that had a lot of conflicts under cn=topology, and we had to manipulate some entries manually.
OK, closing.
Agreed on the cleanup part. But this case is more a result of a bug; it doesn't make much sense to document the same steps that the automatic method already tries to perform. Maybe the error message should be changed instead.