Bug 699460 - windows sync can lose old multi-valued attribute values when a new value is added
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: 389-ds-base
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Rich Megginson
QA Contact:
URL:
Whiteboard:
Depends On: 695779
Blocks: 699458 701557
 
Reported: 2011-04-25 18:12 UTC by Nathan Kinder
Modified: 2011-09-16 21:33 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 695779
Environment:
Last Closed: 2011-05-06 18:20:35 UTC
Target Upstream Version:
Embargoed:


Attachments

Description Nathan Kinder 2011-04-25 18:12:02 UTC
+++ This bug was initially created as a clone of Bug #695779 +++

Description of problem:
A group is sync'd between AD and 389.  Members come over fine.  Certain LDAP clients, when adding a new "uniqueMember" in 389, cause all old members to be immediately deleted on AD (and shortly after deleted from 389 when the dirsync runs).

Some LDAP clients are "dumb" and simply delete the attr and re-add it with all the new values (rather than adding only the ones that are missing).  windows_map_mods_for_replay calls mod_already_made on each mod that is about to be sync'd.  The "dumb" clients have a mod list like [ "delete": "member", "add": "member": [ $existingDN1, $existingDN2, ..., $newDN ] ].  mod_already_made prunes out the adds that already exist in AD, leaving only the new value.  Now we have: [ "delete": "member", "add": "member": [ $newDN ] ].
If I do an ldapmodify by hand and add only the new values (changetype: modify, add: uniqueMember), it works fine.
windows_map_mods_for_replay needs logic to not prune "add" values for an attribute that was just deleted.
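A minimal, self-contained sketch of the failure mode described above. The function names prune_mods and apply_mods are illustrative stand-ins for mod_already_made and the replay logic, not the actual server code:

```python
# Hypothetical simulation of the pruning bug: each mod is examined in
# isolation, so "add" values already present in AD are dropped even when
# the preceding "delete" mod will wipe the attribute first.

def prune_mods(ad_values, mods):
    """Mimic mod_already_made: drop 'add' values already in AD,
    looking at each mod independently (the buggy behavior)."""
    pruned = []
    for op, attr, values in mods:
        if op == "add":
            values = [v for v in values if v not in ad_values]
        pruned.append((op, attr, values))
    return pruned

def apply_mods(values, mods):
    """Replay a mod list against the current AD attribute values."""
    current = set(values)
    for op, attr, vals in mods:
        if op == "delete" and vals is None:
            current.clear()          # delete the whole attribute
        elif op == "add":
            current.update(vals)
    return current

ad_members = ["uid=a,dc=example,dc=com", "uid=b,dc=example,dc=com"]
new_member = "uid=new,dc=example,dc=com"

# "Dumb" client: delete the attribute, then re-add everything plus the
# new value, instead of adding only the missing value.
mods = [("delete", "member", None),
        ("add", "member", ad_members + [new_member])]

pruned = prune_mods(ad_members, mods)
result = apply_mods(ad_members, pruned)
# After pruning, only the new value survives the replay; the old
# members are lost, matching the data loss described in this bug.
```

Replaying the unpruned mod list would have produced the correct membership; the pruning step is what turns the delete/re-add pair into a destructive replace.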


Version-Release number of selected component (if applicable):
1.2.7.5


How reproducible:
Use any python ldap client (e.g. luma) to add a uniqueMember to a group that is sync'd between AD and 389.  Watch the immediate removal of all old members from the group in AD (with only the new remaining) and the eventual removal of all old group members from 389 as well (when the dirsync hits).


Actual results:
Old members (values) are deleted.


Expected results:
Old values are preserved.


Additional info:
See /usr/lib64/python2.6/site-packages/ldap/modlist.py (python-ldap), modifyModlist.  Right near the end it does:

      if replace_attr_value:
        modlist.append((ldap.MOD_DELETE,attrtype,None))
        modlist.append((ldap.MOD_ADD,attrtype,new_value))
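For reference, a simplified stand-in for what modifyModlist emits in that replace_attr_value case. The numeric constants match python-ldap's MOD_ADD/MOD_DELETE values, but the function itself is a hypothetical reduction, not the real modlist.py code:

```python
# Sketch of the delete-then-re-add modlist a "dumb" client produces for a
# multi-valued attribute. MOD_ADD/MOD_DELETE match the ldap module's values.

MOD_ADD, MOD_DELETE = 0, 1

def simple_modify_modlist(old_values, new_values, attrtype="uniqueMember"):
    """Replace-style diff: delete the whole attribute, re-add all values."""
    modlist = []
    if set(old_values) != set(new_values):
        modlist.append((MOD_DELETE, attrtype, None))
        modlist.append((MOD_ADD, attrtype, list(new_values)))
    return modlist

old = ["uid=a,dc=example,dc=com"]
new = old + ["uid=new,dc=example,dc=com"]
mods = simple_modify_modlist(old, new)
# mods now deletes uniqueMember entirely, then re-adds both the old and
# the new value -- the pattern that trips up windows_map_mods_for_replay.
```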

--- Additional comment from nkinder on 2011-04-21 17:24:21 EDT ---

For changes that are sent from DS->AD, we keep a changelog of operations to replay.  What we are dealing with here is essentially 2 mods in one modify operation.  Here's what the operation looks like in LDIF update format:

===========================================
dn: cn=group,dc=example,dc=com
changetype: modify
delete: uniquemember
-
add: uniquemember
uniquemember: uid=user,dc=example,dc=com
===========================================

This says to delete all uniquemember values, then add a single value, which is equivalent to a replace modify operation. This operation should be replayed as-is to AD, but we try to skip sending mods that have already been made (to prevent looping mods). This per-mod processing to skip values already in AD causes problems, since it is done before any of the mods are sent. If the first mod makes a change that affects the results of a second mod in the same modify operation, the results can be incorrect.

It seems like we need to process through all of the mods in a single modify operation to determine what exactly needs to be sent to AD.  We may need to create some sort of "resulting entry" in memory to show what the AD entry should look like to determine what mods to make.
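A rough sketch of that "resulting entry" idea, assuming simple set-valued attributes. All names here are hypothetical illustrations, not the eventual patch:

```python
# Project the whole modify operation onto an in-memory copy of the AD
# entry, then derive the mods to send from the before/after difference,
# instead of filtering the client's mods one at a time.

def apply_modify(entry, mods):
    """Apply a full mod list to a copy of the entry (attr -> set of values)."""
    result = {attr: set(vals) for attr, vals in entry.items()}
    for op, attr, vals in mods:
        if op == "delete":
            if vals is None:
                result.pop(attr, None)   # delete the whole attribute
            else:
                result.get(attr, set()).difference_update(vals)
        elif op == "add":
            result.setdefault(attr, set()).update(vals)
        elif op == "replace":
            result[attr] = set(vals or [])
    return result

def mods_to_send(ad_entry, mods):
    """Diff the projected entry against AD to get the minimal change set."""
    target = apply_modify(ad_entry, mods)
    out = []
    for attr in set(ad_entry) | set(target):
        before = ad_entry.get(attr, set())
        after = target.get(attr, set())
        to_add = sorted(after - before)
        to_del = sorted(before - after)
        if to_add:
            out.append(("add", attr, to_add))
        if to_del:
            out.append(("delete", attr, to_del))
    return out

ad = {"uniquemember": {"uid=a,dc=example,dc=com"}}
mods = [("delete", "uniquemember", None),
        ("add", "uniquemember",
         ["uid=a,dc=example,dc=com", "uid=new,dc=example,dc=com"])]
sends = mods_to_send(ad, mods)
# Only the genuinely new value needs to go to AD; uid=a is untouched,
# so no looping mods are generated and no old members are deleted.
```

Because the diff is computed against the entry after all mods are applied, a later "add" is no longer pruned based on state that an earlier "delete" in the same operation was about to change.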

--- Additional comment from nkinder on 2011-04-21 18:48:20 EDT ---

Created attachment 494021 [details]
Patch

--- Additional comment from rmeggins on 2011-04-21 18:59:21 EDT ---

Comment on attachment 494021 [details]
Patch

Ok - but note that this changes the server code and the slapi api too.  You could do it without any of those changes by doing 
LDAPMod *modary[2] = {slapi_mod_get_ldapmod_byref(smod), NULL};
slapi_entry_apply_mods(ad_entry, modary);

--- Additional comment from nkinder on 2011-04-25 10:58:11 EDT ---

(In reply to comment #3)
> Comment on attachment 494021 [details]
> Patch
> 
> Ok - but note that this changes the server code and the slapi api too.  You
> could do it without any of those changes by doing 
> LDAPMod *modary[2] = {slapi_mod_get_ldapmod_byref(smod), NULL};
> slapi_entry_apply_mods(ad_entry, modary);

Yes, I added slapi_entry_apply_mod(), as it seemed useful to expose via slapi. If you feel we should not expose that function, I can use slapi_entry_apply_mods() instead.

--- Additional comment from rmeggins on 2011-04-25 12:05:29 EDT ---

(In reply to comment #4)
> (In reply to comment #3)
> > Comment on attachment 494021 [details]
> > Patch
> > 
> > Ok - but note that this changes the server code and the slapi api too.  You
> > could do it without any of those changes by doing 
> > LDAPMod *modary[2] = {slapi_mod_get_ldapmod_byref(smod), NULL};
> > slapi_entry_apply_mods(ad_entry, modary);
> 
> Yes, I added slapi_entry_apply_mod(), as it seemed useful to expose via slapi. 
> If you feel we should not expose that function, I can use
> slapi_entry_apply_mods() instead.

I guess since we have to release 389-ds-base too, we can go ahead and do this.

--- Additional comment from nkinder on 2011-04-25 13:47:42 EDT ---

Pushed to master.  Thanks to Rich for his review!

Counting objects: 19, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (10/10), done.
Writing objects: 100% (10/10), 1.70 KiB, done.
Total 10 (delta 8), reused 0 (delta 0)
To ssh://git.fedorahosted.org/git/389/ds.git
   11c8bf1..fb7aee0  master -> master

--- Additional comment from nkinder on 2011-04-25 13:58:18 EDT ---

Pushed to 389-ds-base-1.2.8 branch.

Counting objects: 19, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (10/10), done.
Writing objects: 100% (10/10), 1.69 KiB, done.
Total 10 (delta 8), reused 0 (delta 0)
Auto packing the repository for optimum performance.
To ssh://git.fedorahosted.org/git/389/ds.git
   c51c77b..2be27d3  128-local -> 389-ds-base-1.2.8

Comment 3 Chandrasekar Kannan 2011-04-26 17:17:57 UTC
qa_ack+, as QE tests for winsync and replication will start soon and we can cover these.

Comment 5 Amita Sharma 2011-05-04 07:21:37 UTC
Bug verified successfully with the steps below:

1. Created a group in AD and added a few members to it.
2. Checked that they replicated to RHDS.
3. Added a new member to that group from the DS console.
4. Checked AD; the change was reflected properly without deleting the old members of the group.

Marking the bug as VERIFIED.

Comment 7 Chandrasekar Kannan 2011-09-16 21:33:44 UTC
ds-replication is no longer a component of RHEL; folding back to 389-ds-base.

