Bug 1186352 - Schema Compatibility plugin cache is not cleared during DS re-initialization
Summary: Schema Compatibility plugin cache is not cleared during DS re-initialization
Status: ASSIGNED
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: slapi-nis
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
Assignee: Alexander Bokovoy
QA Contact: Namita Soman
URL:
Whiteboard:
Keywords:
Depends On:
Blocks: 1205796
 
Reported: 2015-01-27 13:49 UTC by Kaleem
Modified: 2019-04-28 08:54 UTC
CC: 6 users

When you restore an Identity Management (IdM) server from backup and re-initialize the restored data to other replicas, the Schema Compatibility plug-in can still maintain a cache of the old data from before performing the restore and re-initialization. Consequently, the replicas might behave unexpectedly. For example, if you attempt to add a user that was originally added after performing the backup, and thus removed during the restore and re-initialization steps, the operation might fail with an error, because the Schema Compatibility cache contains a conflicting user entry. To work around this problem, restart the IdM replicas after re-initializing them from the master server. This clears the Schema Compatibility cache and ensures that the replicas behave as expected in the described situation.
Clone Of:
Last Closed:


Attachments (Terms of Use)
contains error_log and dse.ldif (130.00 KB, application/x-tar)
2015-01-27 15:05 UTC, Kaleem

Description Kaleem 2015-01-27 13:49:50 UTC
Description of problem:
A user (testuser4) that existed on the Replica before the re-initialize from the restored Master cannot be added again. The same scenario works on the restored Master.

Version-Release number of selected component (if applicable):
ipa-server-4.1.0-16.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
(1) Install the Master.
(2) Install the Replica.
(3) Add a user (testuser1) on the Master.
(4) Add a user (testuser2) on the Replica.
(5) Take a backup on the Master.
(6) Add a user (testuser3) on the Master.
(7) Add a user (testuser4) on the Replica.
(8) Restore the Master from the backup taken in step (5).
(9) Verify that users testuser1 and testuser2 exist on the Master.
(10) Re-initialize the Replica to fetch the restored data.
(11) Verify that users testuser1 and testuser2 exist on the Replica too.
(12) Try to add user testuser3 again on the Master: added successfully.
(13) Try to add user testuser4 again on the Replica: fails.

Actual results:

[root@dhcp207-58 ~]# ipa user-find
---------------
3 users matched
---------------
  User login: admin
  Last name: Administrator
  Home directory: /home/admin
  Login shell: /bin/bash
  UID: 139000000
  GID: 139000000
  Account disabled: False
  Password: True
  Kerberos keys available: True

  User login: testuser1
  First name: testuser1
  Last name: testuser1
  Home directory: /home/testuser1
  Login shell: /bin/sh
  Email address: testuser1@testrelm.test
  UID: 139000003
  GID: 139000003
  Account disabled: False
  Password: True
  Kerberos keys available: True

  User login: testuser2
  First name: testuser2
  Last name: testuser2
  Home directory: /home/testuser2
  Login shell: /bin/sh
  Email address: testuser2@testrelm.test
  UID: 139100500
  GID: 139100500
  Account disabled: False
  Password: True
  Kerberos keys available: True
----------------------------
Number of entries returned 3
----------------------------
[root@dhcp207-58 ~]# echo dummy123@ipa.com | ipa user-add testuser4 --first 'testuser4' --last 'testuser4' --password
ipa: ERROR: user with name "testuser4" already exists
[root@dhcp207-58 ~]#

Expected results:

User (testuser4) should have been added.


Additional info:
(1) After restarting the ipa service, the user can be added.
(2) The following is shown in /var/log/dirsrv/slapd-TESTRELM-TEST/errors:

  [27/Jan/2015:15:26:13 +051800] - SLAPI_PLUGIN_BE_TXN_PRE_ADD_FN plugin failed: 19

Comment 2 Ludwig 2015-01-27 14:31:49 UTC
The error 19 returned by the plugin is LDAP_CONSTRAINT_VIOLATION, which could be returned by the uid uniqueness plugin.

Could you upload the error log and the config file dse.ldif?

Comment 3 Kaleem 2015-01-27 15:05:49 UTC
Created attachment 984715 [details]
contains error_log and dse.ldif

Comment 4 thierry bordaz 2015-01-27 16:40:47 UTC
I reproduced the problem and wonder whether it is related to some schema-compat caching.

Test case: on the Master, run ipa-backup and ipa-restore <backup_dir>;
on the Replica, run ipa-replica-manage re-initialize --from <master_host>.

After the re-init on Replica:
# testuser4, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser4,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser3, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser3,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser2, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser2,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser1, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser1,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# admin, users, compat, idm.lab.bos.redhat.com
dn: uid=admin,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# admin, users, accounts, idm.lab.bos.redhat.com
dn: uid=admin,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# sudo, sysaccounts, etc, idm.lab.bos.redhat.com
dn: uid=sudo,cn=sysaccounts,cn=etc,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser1, users, accounts, idm.lab.bos.redhat.com
dn: uid=testuser1,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser2, users, accounts, idm.lab.bos.redhat.com
dn: uid=testuser2,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com



On master:
# testuser2, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser2,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser1, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser1,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# admin, users, compat, idm.lab.bos.redhat.com
dn: uid=admin,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# admin, users, accounts, idm.lab.bos.redhat.com
dn: uid=admin,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# sudo, sysaccounts, etc, idm.lab.bos.redhat.com
dn: uid=sudo,cn=sysaccounts,cn=etc,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser1, users, accounts, idm.lab.bos.redhat.com
dn: uid=testuser1,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser2, users, accounts, idm.lab.bos.redhat.com
dn: uid=testuser2,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com


It looks like the online init does not clear some schema-compat entries.

The error 19 (constraint violation) is returned by schema-compat:

[26/Jan/2015:23:38:41 -0500] schema-compat-plugin - searching from "dc=idm,dc=lab,dc=bos,dc=redhat,dc=com" for "(&(objectClass=posixAccount)(uid=testuser3))" with scope 2 (sub)
[26/Jan/2015:23:38:41 -0500] schema-compat-plugin - search matched uid=testuser3,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com
[26/Jan/2015:23:38:41 -0500] NSUniqueAttr - SEARCH entry dn=uid=testuser3,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com
[26/Jan/2015:23:38:41 -0500] NSUniqueAttr - SEARCH complete result=19
[26/Jan/2015:23:38:41 -0500] NSUniqueAttr - SEARCH result = 19
[26/Jan/2015:23:38:41 -0500] NSUniqueAttr - ADD result 19
[26/Jan/2015:23:38:41 -0500] - SLAPI_PLUGIN_BE_TXN_PRE_ADD_FN plugin failed: 19

Comment 5 Ludwig 2015-01-27 17:00:34 UTC
The error comes from the uid uniqueness plugin, but testuser4 still being present in the compat tree should not happen.

Comment 6 Martin Kosek 2015-01-28 09:48:49 UTC
Ludwig, Thierry - thanks for the investigation. Can either of you please summarize what the problem is in the end, what its impact is, and how we can fix it? (And where: 389-DS or IPA?)

Comment 7 Alexander Bokovoy 2015-01-28 10:48:00 UTC
One idea we had is that perhaps slapi-nis needs to listen to the backend state -- whether it is online or offline. It would mean registering a handler with slapi_register_backend_state_change() and then check for SLAPI_BE_STATE_* states against subtrees we are listening to.

For each subtree check if the backend that is changing its state to offline or deleted is handling the parent of the subtree and if so, invalidate cached entries from this subtree.

For each subtree check if the backend that is changing its state to online is handling the parent of the subtree and if so, trigger populating the cache.

There probably would be details and corner cases (AD users, for example, aren't related to any backend, though their group membership is), but overall it looks like backend state handling is missing.
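The real plugin is written in C against the 389-DS slapi API (registering a handler via slapi_register_backend_state_change() and checking the SLAPI_BE_STATE_* constants), but the subtree-matching and invalidation logic proposed above can be sketched as a toy Python model. All names below (CompatSubtree, on_backend_state_change, the state constants' values) are illustrative, not the actual plugin API:

```python
# Toy model of the proposed backend-state handling for the
# Schema Compatibility cache. Illustrative only -- the real fix
# would live in the slapi-nis C plugin.

def dn_components(dn):
    """Split a DN into normalized RDN components, most specific first."""
    return [c.strip().lower() for c in dn.split(",")]

def is_under(subtree_dn, suffix_dn):
    """True if subtree_dn equals or lies below suffix_dn."""
    sub, suf = dn_components(subtree_dn), dn_components(suffix_dn)
    return len(sub) >= len(suf) and sub[len(sub) - len(suf):] == suf

class CompatSubtree:
    def __init__(self, subtree_dn):
        self.subtree_dn = subtree_dn
        self.cache = {}  # entry DN -> cached entry

    def invalidate(self):
        self.cache.clear()

# Stand-ins for the SLAPI_BE_STATE_* constants
BE_STATE_OFFLINE, BE_STATE_DELETED, BE_STATE_ONLINE = range(3)

def on_backend_state_change(suffix_dn, new_state, subtrees, repopulate):
    """For each watched subtree whose parent is handled by the backend
    changing state, invalidate its cached entries; on coming online,
    also trigger repopulating the cache."""
    for st in subtrees:
        if not is_under(st.subtree_dn, suffix_dn):
            continue  # this backend does not handle the subtree's parent
        if new_state in (BE_STATE_OFFLINE, BE_STATE_DELETED):
            st.invalidate()
        elif new_state == BE_STATE_ONLINE:
            st.invalidate()
            repopulate(st)
```

In this model, a re-initialization (backend going offline, then online with new data) would drop the stale compat entries instead of keeping them, which is exactly what the reported bug shows is missing.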

Comment 8 Petr Vobornik 2015-02-09 15:19:59 UTC
waiting for the summary mentioned in comment 6

Comment 9 Ludwig 2015-02-10 08:39:32 UTC
Sorry, I thought Alexander's update #7 would be enough.

To me the problem is:

On the Replica there is user3, and it is also in the compat tree.
The Replica is reinitialized with data not containing user3, but the user is not removed from the compat tree cache.
When trying to add user3 again on the Replica, the operation fails because the uid uniqueness check fails, since the user is still present in the compat tree.
A restart of DS fixes this, but a long-term fix would have to be in the slapi-nis plugin; see update #7.
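The failure sequence can be modeled with a small Python sketch (a toy model; the Replica class, its sets, and method names are all illustrative and not how 389-DS actually stores data):

```python
# Toy reproduction of the failure mechanism: online init replaces the
# backend data but does not clear the Schema Compatibility cache, so
# the uid-uniqueness check still sees the stale compat entry.

class Replica:
    def __init__(self):
        self.accounts = set()      # main tree (replicated backend data)
        self.compat_cache = set()  # Schema Compatibility plugin cache

    def add_user(self, uid):
        # uid-uniqueness checks both the main tree and the compat tree
        if uid in self.accounts or uid in self.compat_cache:
            raise ValueError('user with name "%s" already exists' % uid)
        self.accounts.add(uid)
        self.compat_cache.add(uid)

    def reinitialize(self, master_accounts):
        # online init replaces the backend data...
        self.accounts = set(master_accounts)
        # ...but the bug: compat_cache is NOT cleared here

    def restart(self):
        # restarting DS rebuilds the compat cache from the backend
        self.compat_cache = set(self.accounts)

replica = Replica()
replica.add_user("testuser2")
replica.add_user("testuser4")
replica.reinitialize({"testuser1", "testuser2"})  # restored data lacks testuser4
try:
    replica.add_user("testuser4")  # fails on the stale compat entry
except ValueError as e:
    print(e)  # prints: user with name "testuser4" already exists
replica.restart()
replica.add_user("testuser4")      # succeeds after the restart
```

This matches the observed behavior: the add fails with "already exists" after the re-init, and succeeds after the ipa service is restarted.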

