Bug 641245

Summary: Winsync synchronization may fail on some objects.
Product: [Retired] 389
Component: Sync Service
Version: 1.2.10
Hardware: All
OS: Linux
Status: CLOSED WORKSFORME
Severity: medium
Priority: low
Reporter: serejka
Assignee: Rich Megginson <rmeggins>
QA Contact: Chandrasekar Kannan <ckannan>
CC: benl, edewata, nhosoi, nkinder, rmeggins, shaines
Doc Type: Bug Fix
Last Closed: 2013-03-04 23:05:46 UTC

Description serejka 2010-10-08 07:08:56 UTC
Description of problem:
When the sync scope covers many AD objects with lots of OUs and nested groups, a full resync may fail to add some groups to DS. The DS replication plugin will not create a local copy of a group if the group has members that have not yet been synchronized, so you need to run several full resync tasks.

The problem appears only when you create a sync agreement like this:

dn: cn=1,cn=replica,cn=dc\3Dcompute\2Cdc\3Dmd\2Cdc\3Dmeteorf\2Cdc\3Dru,cn=mapp
 ing tree,cn=config
objectClass: top
objectClass: nsDSWindowsReplicationAgreement
description: 1
cn: 1
nsds7WindowsReplicaSubtree: ou=compute,dc=example,dc=com
nsds7DirectoryReplicaSubtree: cn=compute, dc=compute,dc=example,dc=com
nsds7NewWinUserSyncEnabled: on
nsds7NewWinGroupSyncEnabled: on
nsds7WindowsDomain: md.meteorf.ru
nsDS5ReplicaRoot: dc=compute,dc=md,dc=meteorf,dc=ru
nsDS5ReplicaHost: 10.1.11.12
nsDS5ReplicaPort: 389
nsDS5ReplicaBindDN: cn=dssync,cn=Users,dc=example,dc=com
nsDS5ReplicaBindMethod: SIMPLE
nsDS5ReplicaCredentials: 
creatorsName: cn=admin
modifiersName: cn=Multimaster Replication Plugin,cn=plugins,cn=config
createTimestamp: 20101007233232Z
modifyTimestamp: 20101008064232Z

At first glance the agreement looks fine, but there is an issue:
nsds7DirectoryReplicaSubtree: cn=compute, dc=compute,dc=example,dc=com
contains a space between "cn=compute," and "dc=compute,dc=example,dc=com".
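
For comparison, the normalized value without the stray space would be:
nsds7DirectoryReplicaSubtree: cn=compute,dc=compute,dc=example,dc=com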

When a full resync starts, the following code then generates wrong DNs for the "uniqueMember" values and does not create the local copy:

#ldap/servers/plugins/replication/windows_protocol_util.c
============================================================================
static int
map_entry_dn_inbound(Slapi_Entry *e, Slapi_DN **dn, const Repl_Agmt *ra)
{
.......
        if (NULL == new_dn)
        {
                char *new_dn_string = NULL;
                slapi_log_error(SLAPI_LOG_REPL, repl_plugin_name, "%s: map_entry_dn_inbound: creating user:%s\n", agmt_get_long_name(ra), username);
                if (username)
                {
                        const char *suffix = slapi_sdn_get_dn(windows_private_get_directory_subtree(ra));
                        char *container_str = NULL;

                        container_str = extract_container(slapi_entry_get_sdn_const(e), windows_private_get_windows_subtree(ra));
                        /* Local DNs for users and groups are different */
                        if (is_user)
                        {
                                new_dn_string = PR_smprintf("uid=%s,%s%s", username, container_str, suffix);
.....
                        } else
                        {
                                new_dn_string = PR_smprintf("cn=%s,%s%s", username, container_str, suffix);
=============================================================================
this is because:
const char *suffix = slapi_sdn_get_dn(windows_private_get_directory_subtree(ra));
returns the subtree DN exactly as it was configured:
"cn=compute, dc=compute,dc=example,dc=com"
with the embedded space, instead of the normalized, space-free
"cn=compute,dc=compute,dc=example,dc=com"
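
To make this concrete, here is a small standalone program (not plugin code: plain snprintf stands in for NSPR's PR_smprintf, and the container and suffix values are the ones this agreement produces) showing how the spaced suffix ends up in the fabricated DN:
============================================================================
#include <stdio.h>

int main(void)
{
        /* Values as map_entry_dn_inbound() would see them here */
        const char *username      = "user1";
        const char *container_str = "OU=HMC,"; /* from extract_container() */
        /* slapi_sdn_get_dn() hands back the suffix with the space intact */
        const char *suffix        = "cn=compute, dc=compute,dc=example,dc=com";
        char new_dn[256];

        /* Same format string the plugin uses for users: "uid=%s,%s%s" */
        snprintf(new_dn, sizeof(new_dn), "uid=%s,%s%s",
                 username, container_str, suffix);

        /* Prints: uid=user1,OU=HMC,cn=compute, dc=compute,dc=example,dc=com
         * i.e. exactly the invalid uniqueMember value seen in the log. */
        printf("%s\n", new_dn);
        return 0;
}
============================================================================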

As a result, you may see the following in the error log:
==========================================================================
Windows sync entry: Adding new local entry dn: cn=grp1,cn=compute,dc=compute,dc=example,dc=com
objectClass: top
objectClass: groupofuniquenames
objectClass: ntGroup
objectClass: posixGroup
ntGroupDeleteGroup: true
cn: grp1
description:: MTAzMSDQk9Cc0KYgINCQ0KHQntCe0Jgg0YHRg9GJ0LXRgdGC0LLRg9GO0YnQsNG
 PINGB0LjRgdGC0LXQvNCw
uniqueMember: uid=user1,OU=HMC,cn=compute, dc=compute,dc=example,dc=com
uniqueMember: uid=user2,OU=HMC,cn=compute, dc=compute,dc=example,dc=com
uniqueMember: uid=user3,OU=HMC,cn=compute,dc=compute,dc=example,dc=com
uniqueMember: uid=user4,OU=HMC,cn=compute, dc=compute,dc=example,dc=com
uniqueMember: uid=user5,OU=HMC,cn=compute, dc=compute,dc=example,dc=com
ntUserDomainId: grp1
ntGroupType: -2147483646
gidNumber: 1031
ntUniqueId: 42be53fedc54554bb4252d19f017549d


=> send_ldap_result 21::uniqueMember: value #0 invalid per syntax
uniqueMember: value #1 invalid per syntax
uniqueMember: value #3 invalid per syntax
uniqueMember: value #4 invalid per syntax
============================================================================
The entries:
uniqueMember: uid=user1,OU=HMC,cn=compute, dc=compute,dc=example,dc=com
uniqueMember: uid=user2,OU=HMC,cn=compute, dc=compute,dc=example,dc=com
uniqueMember: uid=user4,OU=HMC,cn=compute, dc=compute,dc=example,dc=com
uniqueMember: uid=user5,OU=HMC,cn=compute, dc=compute,dc=example,dc=com
contain a space between "cn=compute" and "dc=compute......", which is why the server rejects them (result 21 is LDAP_INVALID_SYNTAX). These member entries have not yet been created in the directory server, so the plugin fabricates their DNs via "map_entry_dn_inbound".

The entry:
uniqueMember: uid=user3,OU=HMC,cn=compute,dc=compute,dc=example,dc=com
is correct, because that object has already been created in the DS tree and "map_entry_dn_inbound" obtains its DN from DS itself.

Sorry that I do not provide a patch for this: I am not a DS architect, and I don't know whether it should be fixed in "map_entry_dn_inbound", in "slapi_sdn_get_dn", or somewhere else.
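
That said, one possibility (an untested sketch on my side, assuming slapi_sdn_get_ndn returns the normalized DN as the slapi API describes) would be to use the normalized form of the subtree when fabricating DNs in "map_entry_dn_inbound":
============================================================================
/* Untested sketch, not a committed fix: take the normalized DN of the
 * configured subtree so that stray whitespace in
 * nsds7DirectoryReplicaSubtree cannot leak into the DNs fabricated
 * for not-yet-synced members. */
const char *suffix = slapi_sdn_get_ndn(windows_private_get_directory_subtree(ra));
============================================================================
Alternatively, the agreement value could be normalized once when the agreement is read from cn=config.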

Comment 1 Martin Kosek 2012-01-04 13:30:32 UTC
Upstream ticket:
https://fedorahosted.org/389/ticket/79

Comment 2 Nathan Kinder 2013-03-04 23:05:46 UTC
The upstream ticket mentioned in comment#1 was closed as WORKSFORME. Closing this bug with the same status. If more details can be provided to reproduce, please reopen the upstream TRAC ticket.