Description of problem:
This is more a suggestion or an RFE than a bug. In RHEL 7.3 the topology plugin manages the replication agreements. At one customer we found an inconsistency: a replica had been deleted, but when listing the agreements there was still an agreement from a node to the deleted replica. This is confusing: the customer could think the replica is still there, yet it is not under "cn=masters". It is, however, still listed by "ipa-replica-manage list -v hostname", because that command runs this query:

[06/Dec/2016:12:04:51.121311405 +0300] conn=243934 op=0 BIND dn="cn=directory manager" method=128 version=3
[06/Dec/2016:12:04:51.121383022 +0300] conn=243934 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="cn=directory manager"
[06/Dec/2016:12:04:51.121914294 +0300] conn=243934 op=1 SRCH base="cn=mapping tree,cn=config" scope=2 filter="(|(&(objectClass=nsds5ReplicationAgreement)(nsDS5ReplicaRoot=dc=xxxx,dc=yyyy))(objectClass=nsDSWindowsReplicationAgreement))" attrs=ALL
[06/Dec/2016:12:04:51.122883897 +0300] conn=243934 op=1 RESULT err=0 tag=101 nentries=4 etime=0

I would propose either adding a flag showing that the agreement is managed by the topology plugin, by querying for:

objectClass: ipaReplTopoManagedAgreement
ipaReplTopoManagedAgreementState: managed agreement - controlled by topology plugin

or simply not showing it at all. In any case, the customer's confusion turned out to be interesting: it made it possible to detect that there was an agreement that had to be deleted manually. So the information was useful for troubleshooting, but very confusing to the customer, who thought the node was still there.

Version-Release number of selected component (if applicable):
ipa-server-4.4.0-12.el7.x86_64

How reproducible:
This customer is the one that had a lot of conflicts in cn=topology, and we had to manipulate some entries manually.
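The proposed flag could look roughly like the sketch below. This is a minimal illustration in Python, not IPA code: the entry dicts are hypothetical stand-ins for the attributes returned by the "cn=mapping tree,cn=config" search, and the helper name annotate_agreements is invented for this example. The idea is just to mark agreements carrying the ipaReplTopoManagedAgreement objectClass so the listing distinguishes plugin-managed agreements from manually created (possibly stale) ones.

```python
# Minimal sketch of the proposed flag: tag each replication agreement
# entry according to whether the topology plugin manages it.
# The entries below are hypothetical examples, not real server output.

def annotate_agreements(entries):
    """Return (cn, state) pairs, flagging topology-managed agreements."""
    annotated = []
    for entry in entries:
        # objectClass comparison in LDAP is case-insensitive.
        object_classes = {oc.lower() for oc in entry.get("objectClass", [])}
        if "ipareplto" "pomanagedagreement" in object_classes:
            state = "managed agreement - controlled by topology plugin"
        else:
            # An unmanaged agreement may point at an already-deleted
            # replica and need manual cleanup.
            state = "manually created agreement"
        annotated.append((entry["cn"], state))
    return annotated


entries = [
    {"cn": "meTohost2.example.com",
     "objectClass": ["nsds5ReplicationAgreement",
                     "ipaReplTopoManagedAgreement"]},
    {"cn": "meTodeleted.example.com",
     "objectClass": ["nsds5ReplicationAgreement"]},
]

for cn, state in annotate_agreements(entries):
    print(f"{cn}: {state}")
```

With such a flag in the listing output, the stale agreement to the deleted replica would stand out as the only one not controlled by the topology plugin, which keeps the troubleshooting value without misleading the customer.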
OK, closing. We agree on the cleanup part, but this case is more the result of a bug. It doesn't make much sense to document the same steps that the automatic method already tries to perform; perhaps the error message should be changed instead.