Description of problem:
Not able to add a user (testuser4) which existed on the Replica before re-initializing from the restored Master. The same scenario works on the restored Master.

Version-Release number of selected component (if applicable):
ipa-server-4.1.0-16.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
(1) Install Master
(2) Install Replica
(3) Added a user (testuser1) on Master
(4) Added a user (testuser2) on Replica
(5) Took a backup on Master
(6) Added a user (testuser3) on Master
(7) Added a user (testuser4) on Replica
(8) Restored from the backup taken in step (5)
(9) Users testuser1 and testuser2 found on Master
(10) Did a re-initialize on Replica to fetch the restored data
(11) Got users testuser1 and testuser2 on Replica too
(12) Tried to add user testuser3 again on Master: added successfully
(13) Tried to add user testuser4 again on Replica: failed

Actual results:
[root@dhcp207-58 ~]# ipa user-find
---------------
3 users matched
---------------
  User login: admin
  Last name: Administrator
  Home directory: /home/admin
  Login shell: /bin/bash
  UID: 139000000
  GID: 139000000
  Account disabled: False
  Password: True
  Kerberos keys available: True

  User login: testuser1
  First name: testuser1
  Last name: testuser1
  Home directory: /home/testuser1
  Login shell: /bin/sh
  Email address: testuser1
  UID: 139000003
  GID: 139000003
  Account disabled: False
  Password: True
  Kerberos keys available: True

  User login: testuser2
  First name: testuser2
  Last name: testuser2
  Home directory: /home/testuser2
  Login shell: /bin/sh
  Email address: testuser2
  UID: 139100500
  GID: 139100500
  Account disabled: False
  Password: True
  Kerberos keys available: True
----------------------------
Number of entries returned 3
----------------------------
[root@dhcp207-58 ~]# echo dummy123 | ipa user-add testuser4 --first 'testuser4' --last 'testuser4' --password
ipa: ERROR: user with name "testuser4" already exists
[root@dhcp207-58 ~]#

Expected results:
User (testuser4) should have been added.
Additional info:
(1) After restarting the ipa service, the user can be added.
(2) The following is shown in /var/log/dirsrv/slapd-TESTRELM-TEST/errors:
[27/Jan/2015:15:26:13 +051800] - SLAPI_PLUGIN_BE_TXN_PRE_ADD_FN plugin failed: 19
Error 19 returned by the plugin is LDAP_CONSTRAINT_VIOLATION, which could be returned by the uid uniqueness plugin. Could you upload the error log and the config file dse.ldif?
Created attachment 984715 [details] contains error_log and dse.ldif
I reproduced the problem and wonder if it is not related to some schema-compat caching. Test case:
on Master: ipa-backup; ipa-restore <backup_dir>
on Replica: ipa-replica-manage re-initialize --from <master_host>

After the re-init, on Replica:

# testuser4, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser4,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser3, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser3,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser2, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser2,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser1, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser1,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# admin, users, compat, idm.lab.bos.redhat.com
dn: uid=admin,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# admin, users, accounts, idm.lab.bos.redhat.com
dn: uid=admin,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# sudo, sysaccounts, etc, idm.lab.bos.redhat.com
dn: uid=sudo,cn=sysaccounts,cn=etc,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser1, users, accounts, idm.lab.bos.redhat.com
dn: uid=testuser1,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser2, users, accounts, idm.lab.bos.redhat.com
dn: uid=testuser2,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

On Master:

# testuser2, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser2,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser1, users, compat, idm.lab.bos.redhat.com
dn: uid=testuser1,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# admin, users, compat, idm.lab.bos.redhat.com
dn: uid=admin,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# admin, users, accounts, idm.lab.bos.redhat.com
dn: uid=admin,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# sudo, sysaccounts, etc, idm.lab.bos.redhat.com
dn: uid=sudo,cn=sysaccounts,cn=etc,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser1, users, accounts, idm.lab.bos.redhat.com
dn: uid=testuser1,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

# testuser2, users, accounts, idm.lab.bos.redhat.com
dn: uid=testuser2,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com

It looks like the online init does not clear some schema-compat entries. The error 19 (constraint violation) is returned by schema-compat:

[26/Jan/2015:23:38:41 -0500] schema-compat-plugin - searching from "dc=idm,dc=lab,dc=bos,dc=redhat,dc=com" for "(&(objectClass=posixAccount)(uid=testuser3))" with scope 2 (sub)
[26/Jan/2015:23:38:41 -0500] schema-compat-plugin - search matched uid=testuser3,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com
[26/Jan/2015:23:38:41 -0500] NSUniqueAttr - SEARCH entry dn=uid=testuser3,cn=users,cn=compat,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com
[26/Jan/2015:23:38:41 -0500] NSUniqueAttr - SEARCH complete result=19
[26/Jan/2015:23:38:41 -0500] NSUniqueAttr - SEARCH result = 19
[26/Jan/2015:23:38:41 -0500] NSUniqueAttr - ADD result 19
[26/Jan/2015:23:38:41 -0500] - SLAPI_PLUGIN_BE_TXN_PRE_ADD_FN plugin failed: 19
The error comes from the uid uniqueness plugin, but testuser4 still being present in the compat tree should not happen.
Ludwig, Thierry - thanks for the investigation. Can either of you please summarize what the problem is in the end, what the impact is, and how we can fix it? (And where - 389-DS or IPA?)
One idea we had is that perhaps slapi-nis needs to listen to backend state changes -- whether a backend is online or offline. It would mean registering a handler with slapi_register_backend_state_change() and then checking the SLAPI_BE_STATE_* states against the subtrees we are listening to. For each subtree, check whether the backend changing its state to offline or deleted handles the parent of the subtree; if so, invalidate the cached entries from this subtree. Likewise, for each subtree, check whether the backend changing its state to online handles the parent of the subtree; if so, trigger repopulating the cache. There would probably be details and corner cases (AD users, for example, are not related to any backend, though their group membership is), but overall it looks like backend state handling is missing.
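A rough sketch of such a handler, for illustration only: slapi_register_backend_state_change(), the slapi_backend_state_change_fnptr callback shape, and the SLAPI_BE_STATE_* constants are real 389-DS slapi API from slapi-plugin.h, but the compat_* helpers are hypothetical names standing in for slapi-nis internals (subtree configuration and cache maintenance), so this is a sketch of the idea, not actual plugin code:

```
/* Sketch of the proposed backend-state handler. The slapi_* calls and
 * SLAPI_BE_STATE_* constants are real 389-DS API (slapi-plugin.h); the
 * compat_* functions are hypothetical slapi-nis internals. */
#include <slapi-plugin.h>

static void
compat_backend_state_change(void *handle, char *be_name,
                            int old_state, int new_state)
{
    Slapi_Backend *be = slapi_be_select_by_instance_name(be_name);
    if (be == NULL) {
        return;
    }
    const Slapi_DN *be_suffix = slapi_be_getsuffix(be, 0);
    if (be_suffix == NULL) {
        return;
    }

    /* Check every subtree the plugin serves against the backend
     * that is changing state. */
    for (size_t i = 0; i < compat_subtree_count(); i++) {
        const Slapi_DN *src = compat_subtree_source_sdn(i);

        /* Skip subtrees whose source data is not held by this backend. */
        if (!slapi_sdn_issuffix(src, be_suffix)) {
            continue;
        }
        if (new_state == SLAPI_BE_STATE_OFFLINE ||
            new_state == SLAPI_BE_STATE_DELETE) {
            /* Backend going away: drop the cached entries so stale
             * users (like testuser4 above) no longer match. */
            compat_invalidate_subtree(i);
        } else if (new_state == SLAPI_BE_STATE_ON) {
            /* Backend back online (e.g. after re-init): rebuild the
             * cache from the backend's current contents. */
            compat_repopulate_subtree(i);
        }
    }
}

/* Registered once during plugin start-up: */
static int
compat_plugin_start(Slapi_PBlock *pb)
{
    slapi_register_backend_state_change(NULL, compat_backend_state_change);
    return 0;
}
```

With something like this in place, an online re-initialization (which takes the backend offline and brings it back) would flush and rebuild the compat cache, instead of requiring a full DS restart.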
waiting for the summary mentioned in comment 6
Sorry, I thought Alexander's update #7 would be enough. To me the problem is: on the replica there is user3 and it is also in the compat tree. The replica is reinitialized with data not containing user3, but the user is not removed from the compat tree cache. When trying to add user3 again on the replica, it fails because the uid uniqueness check fails, since the user is still there in the compat tree. A restart of DS fixes this, but a long-term fix would have to be in the slapi-nis plugin; see update #7.
Thank you for taking your time and submitting this request for Red Hat Enterprise Linux 7. Unfortunately, this bug cannot be kept even as a stretch goal and was postponed to RHEL8.
This BZ has been evaluated multiple times over the last several years, and we assessed that it is a valuable request to keep in the backlog and address at some point in the future. Time showed that we did not have such capacity, nor do we have it now, nor will we have it in the foreseeable future. In such a situation, keeping it in the backlog is misleading and sets the wrong expectation that we will be able to address it. Unfortunately, we will not. To reflect this, we are closing this BZ.

If you disagree with the decision, please reopen or open a new support case and create a new BZ. However, this does not guarantee that the request will not be closed during triage, as we are currently applying much more rigor to what we actually can accomplish in the foreseeable future. Contributions and collaboration in the upstream community and CentOS Stream are always welcome!

Thank you for understanding.
Red Hat Enterprise Linux Identity Management Team