Bug 1186352 - Schema Compatibility plugin cache is not cleared during DS re-initialization
Summary: Schema Compatibility plugin cache is not cleared during DS re-initialization
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: slapi-nis
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
Assignee: Alexander Bokovoy
QA Contact: Namita Soman
URL:
Whiteboard:
Depends On:
Blocks: 1205796
 
Reported: 2015-01-27 13:49 UTC by Kaleem
Modified: 2023-01-30 07:36 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
When you restore an Identity Management (IdM) server from backup and re-initialize the restored data to other replicas, the Schema Compatibility plug-in can still maintain a cache of the old data from before performing the restore and re-initialization. Consequently, the replicas might behave unexpectedly. For example, if you attempt to add a user that was originally added after performing the backup, and thus removed during the restore and re-initialization steps, the operation might fail with an error, because the Schema Compatibility cache contains a conflicting user entry. To work around this problem, restart the IdM replicas after re-initializing them from the master server. This clears the Schema Compatibility cache and ensures that the replicas behave as expected in the described situation.
Clone Of:
Environment:
Last Closed: 2020-10-22 11:35:00 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
contains error_log and dse.ldif (130.00 KB, application/x-tar)
2015-01-27 15:05 UTC, Kaleem


Links
Red Hat Issue Tracker: FREEIPA-9381 (last updated 2023-01-30 07:36:58 UTC)

Description Kaleem 2015-01-27 13:49:50 UTC
Description of problem:
Unable to add a user (testuser4) that existed on the Replica before it was re-initialized from the restored Master. The same scenario works on the restored Master.

Version-Release number of selected component (if applicable):
ipa-server-4.1.0-16.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
(1) Install Master
(2) Install Replica
(3) Added a user (testuser1) on Master
(4) Added a user (testuser2) on Replica
(5) Took a backup on Master
(6) Added a user (testuser3) on Master
(7) Added a user (testuser4) on Replica
(8) Restored Master from the backup taken in step (5)
(9) Users testuser1 and testuser2 found on Master
(10) Re-initialized Replica to fetch the restored data
(11) Got users testuser1 and testuser2 on Replica too
(12) Tried to add user testuser3 again on Master; added successfully
(13) Tried to add user testuser4 again on Replica; failed
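
A command-level sketch of the steps above, assuming the standard IdM tools (host names and the backup directory are placeholders):

# on Master
ipa user-add testuser1 --first testuser1 --last testuser1   # step (3)
ipa-backup                                                  # step (5)
ipa user-add testuser3 --first testuser3 --last testuser3   # step (6)
ipa-restore <backup_dir>                                    # step (8)

# on Replica
ipa user-add testuser2 --first testuser2 --last testuser2   # step (4)
ipa user-add testuser4 --first testuser4 --last testuser4   # step (7)
ipa-replica-manage re-initialize --from <master_host>       # step (10)
ipa user-add testuser4 --first testuser4 --last testuser4   # step (13), fails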

Actual results:

[root@dhcp207-58 ~]# ipa user-find
---------------
3 users matched
---------------
  User login: admin
  Last name: Administrator
  Home directory: /home/admin
  Login shell: /bin/bash
  UID: 139000000
  GID: 139000000
  Account disabled: False
  Password: True
  Kerberos keys available: True

  User login: testuser1
  First name: testuser1
  Last name: testuser1
  Home directory: /home/testuser1
  Login shell: /bin/sh
  Email address: testuser1
  UID: 139000003
  GID: 139000003
  Account disabled: False
  Password: True
  Kerberos keys available: True

  User login: testuser2
  First name: testuser2
  Last name: testuser2
  Home directory: /home/testuser2
  Login shell: /bin/sh
  Email address: testuser2
  UID: 139100500
  GID: 139100500
  Account disabled: False
  Password: True
  Kerberos keys available: True
----------------------------
Number of entries returned 3
----------------------------
[root@dhcp207-58 ~]# echo dummy123 | ipa user-add testuser4 --first 'testuser4' --last 'testuser4' --password
ipa: ERROR: user with name "testuser4" already exists
[root@dhcp207-58 ~]#
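
At this point the stale entry can be confirmed directly in the compat tree; a sketch, with the suffix left as a placeholder and an anonymous bind assumed:

ldapsearch -x -b "cn=users,cn=compat,<suffix>" "(uid=testuser4)" dn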

Expected results:

User (testuser4) should have been added.


Additional info:
(1) After restarting the ipa service, the user can be added (see the sketch after the log excerpt below)
(2) The following is shown in /var/log/dirsrv/slapd-TESTRELM-TEST/errors:

  [27/Jan/2015:15:26:13 +051800] - SLAPI_PLUGIN_BE_TXN_PRE_ADD_FN plugin failed: 19
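
A sketch of the restart workaround from (1), assuming a standard IdM installation (ipactl restarts the whole IdM stack, including the directory server that holds the plug-in cache):

ipactl restart
echo dummy123 | ipa user-add testuser4 --first 'testuser4' --last 'testuser4' --password   # now succeeds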

Comment 2 Ludwig 2015-01-27 14:31:49 UTC
The error 19 returned by the plugin is LDAP_CONSTRAINT_VIOLATION,
which can be returned by the uid uniqueness plugin.

Could you upload the error log and the config file dse.ldif?

Comment 3 Kaleem 2015-01-27 15:05:49 UTC
Created attachment 984715 [details]
contains error_log and dse.ldif

Comment 4 thierry bordaz 2015-01-27 16:40:47 UTC
I reproduced the problem and wonder whether it is related to some schema-compat caching:

test case: on Master (ipa-backup, ipa-restore <backup_dir>)
  on Replica (ipa-replica-manage re-initialize --from <master_host>)
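
The listings below appear to be the output of a subtree search for user entries, e.g. (anonymous bind assumed for brevity):

ldapsearch -x -b "dc=idm,dc=lab,dc=bos,dc=redhat,dc=com" "(uid=*)" dn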

After the re-init on Replica:
# testuser4, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser4,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser3, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser3,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser2, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser2,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser1, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser1,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# admin, users, compat, idm.lab.bos.redhat.com
dn: uid=admin,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# admin, users, accounts, idm.lab.bos.redhat.com
dn: uid=admin,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# sudo, sysaccounts, etc, idm.lab.bos.redhat.com
dn: uid=sudo,cn=sysaccounts,cn=etc,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser1, users, accounts, idm.lab.bos.redhat.com
dn: uid=testuser1,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser2, users, accounts, idm.lab.bos.redhat.com
dn: uid=testuser2,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com



On master:
# testuser2, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser2,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser1, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser1,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# admin, users, compat, idm.lab.bos.redhat.com
dn: uid=admin,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# admin, users, accounts, idm.lab.bos.redhat.com
dn: uid=admin,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# sudo, sysaccounts, etc, idm.lab.bos.redhat.com
dn: uid=sudo,cn=sysaccounts,cn=etc,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser1, users, accounts, idm.lab.bos.redhat.com
dn: uid=testuser1,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser2, users, accounts, idm.lab.bos.redhat.com
dn: uid=testuser2,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com


It looks like the online init does not clear some schema-compat entries.

The error 19 (constraint violation) is returned by schema-compat:

[26/Jan/2015:23:38:41 -0500] schema-compat-plugin - searching from "dc=idm,dc=lab,dc=bos,dc=redhat,dc=com" for "(&(objectClass=posixAccount)(uid=testuser3))" with scope 2 (sub)
[26/Jan/2015:23:38:41 -0500] schema-compat-plugin - search matched uid=testuser3,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com
[26/Jan/2015:23:38:41 -0500] NSUniqueAttr - SEARCH entry dn=uid=testuser3,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com
[26/Jan/2015:23:38:41 -0500] NSUniqueAttr - SEARCH complete result=19
[26/Jan/2015:23:38:41 -0500] NSUniqueAttr - SEARCH result = 19
[26/Jan/2015:23:38:41 -0500] NSUniqueAttr - ADD result 19
[26/Jan/2015:23:38:41 -0500] - SLAPI_PLUGIN_BE_TXN_PRE_ADD_FN plugin failed: 19

Comment 5 Ludwig 2015-01-27 17:00:34 UTC
The error comes from the uid uniqueness plugin, but testuser4 still being present in the compat tree should not happen.

Comment 6 Martin Kosek 2015-01-28 09:48:49 UTC
Ludwig, Thierry - thanks for the investigation. Can either of you please summarize what the problem is in the end, what the impact is, and how we can fix it? (And where: 389-DS or IPA?)

Comment 7 Alexander Bokovoy 2015-01-28 10:48:00 UTC
One idea we had is that perhaps slapi-nis needs to listen to the backend state -- whether it is online or offline. It would mean registering a handler with slapi_register_backend_state_change() and then checking for SLAPI_BE_STATE_* states against the subtrees we are listening to.

For each subtree, check whether the backend that is changing its state to offline or deleted handles the parent of the subtree, and if so, invalidate the cached entries from this subtree.

For each subtree, check whether the backend that is changing its state to online handles the parent of the subtree, and if so, trigger populating the cache.

There probably would be details and corner cases (AD users, for example, aren't related to any backend, though their group membership is), but overall it looks like backend state handling is missing.

Comment 8 Petr Vobornik 2015-02-09 15:19:59 UTC
Waiting for the summary mentioned in comment 6.

Comment 9 Ludwig 2015-02-10 08:39:32 UTC
Sorry, I thought Alexander's update in comment 7 would be enough.

To me the problem is:

On the replica there is user3 and it is also in the compat tree.
The replica is re-initialized with data not containing user3, but the user is not removed from the compat tree cache.
When trying to add user3 again on the replica, the add fails because the uid uniqueness check fails, since the user is still present in the compat tree.
A restart of DS fixes this, but a long-term fix would have to be in the slapi-nis plugin; see comment 7.
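
A sketch of that short-term workaround on the replica (the dirsrv instance name is taken from the error-log path in this report; a systemd-based install is assumed, and ipactl restart would equally restart the full IdM stack):

systemctl restart dirsrv@TESTRELM-TEST.service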

Comment 20 Florence Blanc-Renaud 2020-02-14 15:42:06 UTC
Thank you for taking the time to submit this request for Red Hat Enterprise Linux 7. Unfortunately, this bug cannot be kept even as a stretch goal and was postponed to RHEL 8.

Comment 23 Petr Čech 2020-10-22 11:35:00 UTC
This BZ has been evaluated multiple times over the last several years and we assessed that it is a valuable request to keep in the backlog and address at some point in the future. Time has shown that we did not have the capacity to do so, do not have it now, and will not have it in the foreseeable future. In such a situation, keeping it in the backlog is misleading and sets the wrong expectation that we will be able to address it. Unfortunately, we will not. To reflect this, we are closing this BZ. If you disagree with the decision, please reopen it or open a new support case and create a new BZ. However, this does not guarantee that the request will not be closed during triage, as we are currently applying much more rigor to what we actually can accomplish in the foreseeable future. Contributions and collaboration in the upstream community and CentOS Stream are always welcome!
Thank you for understanding.
Red Hat Enterprise Linux Identity Management Team

