This bug is created as a clone of upstream ticket: https://fedorahosted.org/389/ticket/47667

We should allow nsDS5ReplicaBindDN to handle group DNs. This would allow one to have a group of replication users in the replicated tree, which simplifies certain use-cases. FreeIPA uses GSSAPI for replication, so the replication users exist in the replicated tree, and each replica uses a separate replication user. The DNA plug-in is also used by IPA, which utilizes the replication bind DNs for authorization when receiving a range transfer request extended operation.

Being able to use a group DN simplifies the configuration changes that need to be made when a new replica is added. Consider the following three-master replication topology:

A <-> B <-> C

If a new master D is added that is only connected to master C, the current behavior would require updating nsDS5ReplicaBindDN on A, B, and C to allow DNA range transfers between D and all the other masters. If a group of replication users from the replicated tree is used instead, we could simply add the new replication bind DN to the group, and the change would be replicated out to the rest of the topology.
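For illustration, a minimal configuration sketch of the group-based approach (the group and member DNs here are hypothetical; the nsds5replicabinddngroup attribute name matches the fix discussed in the comments below):

# Hypothetical group of replication users, stored in the replicated tree
dn: cn=replication managers,ou=Groups,dc=example,dc=com
objectClass: top
objectClass: groupOfNames
cn: replication managers
member: uid=replica-a,ou=People,dc=example,dc=com
member: uid=replica-b,ou=People,dc=example,dc=com
member: uid=replica-c,ou=People,dc=example,dc=com

# The replica configuration entry on each master would reference the
# group once, instead of listing every user in nsDS5ReplicaBindDN
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
nsds5replicabinddngroup: cn=replication managers,ou=Groups,dc=example,dc=com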
This looks like an RFE to me. I assume we will need good test coverage for this. Please correct me if I am wrong.
Hi,

May I have a design doc link for this RFE please?

Thanks,
Ami
(In reply to Amita Sharma from comment #4)
> Hi,
>
> May I have a design doc link for this RFE please.
>
> Thanks,
> Ami

[Design memo by Ludwig]

The fix adds a new attribute to the nsDS5Replica object:

nsDS5ReplicaBindDNGroup: <dn>

When this attribute is set at startup, or when the replica object is modified, the group is expanded and its members and all members of its subgroups are added to a hash of replica bind DNs. This is in parallel to the normal hash of replica bind DNs specified using the existing attribute nsDS5ReplicaBindDN.

Since groups can change, the list of bind DNs based on groups has to be rebuilt when the specified groups change. This check and the rebuilding of the group have a performance cost and will be done only at a specified interval, which can be configured by nsDS5ReplicaBindDNGroupCheckInterval. This attribute takes the following values:

-1: no dynamic check at runtime; the admin must take care that the groups are stable, or restart the server to get changes accounted for
 0: the group DNs are rebuilt every time a bind DN is verified
 n: the rebuild is done again only if n seconds have passed since the last rebuild
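To make the memo concrete, a minimal sketch of enabling this on an existing replica (the group DN and the 60-second interval are illustrative assumptions, not recommendations):

ldapmodify -x -h localhost -p 389 -D "cn=Directory Manager" -w Secret123 << EOF
dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
changetype: modify
add: nsDS5ReplicaBindDNGroup
nsDS5ReplicaBindDNGroup: cn=replication managers,ou=Groups,dc=example,dc=com
-
add: nsDS5ReplicaBindDNGroupCheckInterval
nsDS5ReplicaBindDNGroupCheckInterval: 60
EOF

With an interval of 60, group membership changes are picked up at most once per minute during bind verification; -1 would require a restart to notice changes, and 0 re-expands the groups on every bind.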
Hello, could you provide me with more information on how this works internally (does it try to bind with every member of the group?) and perhaps another use case? I'm not sure I understand the feature well enough. A proper design document would help too.

Thanks,
Milan
The server doing the bind does not know, and does not need to know, about the group feature; it applies only on the receiving side. Before this feature was implemented, a replication agreement defined a DN to bind with, and a replica defined the DNs allowed to bind (there could have been more than one). So a server maintained, per replica, a list of bind DNs allowed to bind. What is new now is that not only single DNs can be specified, but also groups. Internally, all the individual bind DNs are collected into a list of allowed bind DNs, and this list is extended with all the members of the allowed bind DN groups; then processing is as before.
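So, going back to the topology from the description, admitting a new master D would reduce to a single group update on any one master (a hedged example; the DNs are the hypothetical ones sketched earlier):

ldapmodify -x -h localhost -p 389 -D "cn=Directory Manager" -w Secret123 << EOF
dn: cn=replication managers,ou=Groups,dc=example,dc=com
changetype: modify
add: member
member: uid=replica-d,ou=People,dc=example,dc=com
EOF

Because the group lives in the replicated tree, the change propagates to A, B, and C, and each server re-expands the group into its allowed bind DN list according to its nsDS5ReplicaBindDNGroupCheckInterval.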
Thanks for the reply, Ludwig. You are correct, case 4 is working fine:

dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
objectClass: top
objectClass: nsds5replica
objectClass: extensibleobject
cn: replica
nsDS5ReplicaRoot: dc=example,dc=com
nsDS5ReplicaId: 1
nsDS5ReplicaType: 3
nsDS5Flags: 1
nsds5replicabinddngroup: cn=QA Managers,ou=Groups,dc=example,dc=com
creatorsName: cn=directory manager
modifiersName: cn=Multimaster Replication Plugin,cn=plugins,cn=config
createTimestamp: 20150129114708Z
modifyTimestamp: 20150129132259Z
nsState:: AQAAAAAAAAAWNMpUAAAAAAAAAAAAAAAAAQAAAAAAAAADAAAAAAAAAA==
nsDS5ReplicaName: 870dcf04-a7ac11e4-bb039697-342b433a
nsds5replicabinddngroupcheckinterval: 0
numSubordinates: 2

[root@dhcp201-126 export]# ldapadd -x -h localhost -p 30100 -D "cn=Directory Manager" -w Secret123 << EOF
> dn: uid=ami1,dc=example,dc=com
> cn: ams1
> sn: ams1
> givenname: ams1
> objectclass: top
> objectclass: person
> objectclass: organizationalPerson
> objectclass: inetOrgPerson
> uid: ami1
> mail: ams1@example.com
> userpassword: Secret123
> EOF
adding new entry "uid=ami1,dc=example,dc=com"

[root@dhcp201-126 export]# ldapsearch -x -h localhost -p 30102 -D "cn=Directory Manager" -w Secret123 -b "uid=ami1,dc=example,dc=com"
# extended LDIF
#
# LDAPv3
# base <uid=ami1,dc=example,dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# ami1, example.com
dn: uid=ami1,dc=example,dc=com
cn: ams1
sn: ams1
givenName: ams1
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
uid: ami1
mail: ams1@example.com
userPassword:: e1NTSEF9SXcwakk2SWYxNC9sdDBIbkJkc0RYem81TE9SbmI2RzhzREJRanc9PQ=

No error message. So basic functionality is tested and VERIFIED. I will log bugs for the other issues.

Thanks,
Ami
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0416.html