Bug 1680245
| Summary: | Error in memberof-plugin causes failures in ipa group-mod | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Lukas Slebodnik <lslebodn> |
| Component: | 389-ds-base | Assignee: | mreynolds |
| Status: | CLOSED ERRATA | QA Contact: | RHDS QE <ds-qe-bugs> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.7 | CC: | aadhikar, dpal, jomurphy, lkrispen, mkosek, mreynolds, nkinder, pasik, rmeggins, spichugi, tbordaz, vashirov |
| Target Milestone: | rc | Keywords: | Regression, TestBlocker |
| Target Release: | 7.7 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | 389-ds-base-1.3.9.1-3.el7 | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-08-06 12:59:17 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |

Doc Text:

Cause: When there are heavily nested group memberships and a modrdn operation is performed, it is possible that the entry is already a member of a group.
Consequence: The modrdn operation is incorrectly rejected.
Fix: Do not reject the operation if the entry is already a member of the group the memberOf plugin is trying to update.
Result: Modrdn operations are not rejected if the memberOf plugin has already applied the update.
Description
Lukas Slebodnik
2019-02-23 10:56:19 UTC
This issue is being addressed in:

https://pagure.io/389-ds-base/issue/50236 --> this is the cause of error 1, which then triggered ticket 50238
https://pagure.io/389-ds-base/issue/50238 --> this is the cause of error 32 following the modrdn failure

Both of these have fixes that are currently under review...

Fails for me in a different place:

:: [ 00:18:16 ] :: [ BEGIN ] :: Verify expected error message for renaming a user private group. :: actually running 'verifyErrorMsg "ipa group-mod --rename=new_group1 supercr1" "ipa: ERROR: Server is unwilling to perform: Renaming a managed entry is not allowed. It needs to be manually unlinked first."'
:: [ 00:18:16 ] :: [ LOG ] :: Executing: ipa group-mod --rename=new_group1 supercr1
ipa: ERROR: Server is unwilling to perform: Renaming a managed entry is not allowed. It needs to be manually unlinked first.
:: [ 00:18:17 ] :: [ LOG ] :: "ipa group-mod --rename=new_group1 supercr1" failed as expected.
:: [ 00:18:17 ] :: [ FAIL ] :: ERROR: Message not as expected.
GOT: ipa: ERROR: This entry already exists
EXP: ipa: ERROR: Server is unwilling to perform: Renaming a managed entry is not allowed. It needs to be manually unlinked first.
:: [ 00:18:17 ] :: [ FAIL ] :: Verify expected error message for renaming a user private group.
(Expected 0, got 1)

error:

[16/Mar/2019:00:18:28.419544603 +0200] - ERR - conn=407 op=1 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21
[16/Mar/2019:00:18:34.486770286 +0200] - ERR - conn=413 op=1 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21
[16/Mar/2019:00:18:42.119650205 +0200] - ERR - conn=421 op=1 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21
[16/Mar/2019:00:18:48.927058123 +0200] - ERR - conn=428 op=1 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21
[16/Mar/2019:00:18:56.472345265 +0200] - ERR - conn=436 op=3 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21
[16/Mar/2019:00:19:42.229239329 +0200] - ERR - conn=488 op=1 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21
[16/Mar/2019:00:19:46.719156919 +0200] - ERR - conn=492 op=1 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21

access:

[16/Mar/2019:00:18:16.197612220 +0200] conn=394 fd=111 slot=111 connection from 10.35.59.6 to 10.35.59.6
[16/Mar/2019:00:18:16.201108077 +0200] conn=394 op=0 BIND dn="" method=sasl version=3 mech=GSS-SPNEGO
[16/Mar/2019:00:18:16.209145954 +0200] conn=394 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0008282899 dn="uid=admin,cn=users,cn=accounts,dc=testrelm,dc=test"
[16/Mar/2019:00:18:16.211594317 +0200] conn=394 op=1 SRCH base="cn=ipaconfig,cn=etc,dc=testrelm,dc=test" scope=0 filter="(objectClass=*)" attrs=ALL
[16/Mar/2019:00:18:16.212509482 +0200] conn=394 op=1 RESULT err=0 tag=101 nentries=1 etime=0.0001004267
[16/Mar/2019:00:18:16.213875178 +0200] conn=394 op=2 SRCH base="cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test" scope=0 filter="(objectClass=*)" attrs="memberofindirect cn ipaExternalMember memberOf * memberindirect member gidNumber description aci"
[16/Mar/2019:00:18:16.214267342 +0200] conn=394 op=2 RESULT err=0 tag=101 nentries=1 etime=0.0000479146
[16/Mar/2019:00:18:16.215067304 +0200] conn=394 op=3 SRCH base="dc=testrelm,dc=test" scope=2 filter="(&(member=*)(memberOf=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test))" attrs="member"
[16/Mar/2019:00:18:16.215305377 +0200] conn=394 op=3 RESULT err=0 tag=101 nentries=0 etime=0.0000318126 notes=P pr_idx=0 pr_cookie=-1
[16/Mar/2019:00:18:16.216022001 +0200] conn=394 op=4 SRCH base="dc=testrelm,dc=test" scope=2 filter="(|(member=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test)(memberUser=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test)(memberHost=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test))" attrs=""
[16/Mar/2019:00:18:16.216234124 +0200] conn=394 op=4 RESULT err=0 tag=101 nentries=0 etime=0.0000296314 notes=P pr_idx=0 pr_cookie=-1
[16/Mar/2019:00:18:16.218123422 +0200] conn=394 op=5 UNBIND
[16/Mar/2019:00:18:16.218140959 +0200] conn=394 op=5 fd=111 closed - U1
[16/Mar/2019:00:18:16.967493062 +0200] conn=395 fd=111 slot=111 connection from 10.35.59.6 to 10.35.59.6
[16/Mar/2019:00:18:16.970847585 +0200] conn=395 op=0 BIND dn="" method=sasl version=3 mech=GSS-SPNEGO
[16/Mar/2019:00:18:16.978669232 +0200] conn=395 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0008083337 dn="uid=admin,cn=users,cn=accounts,dc=testrelm,dc=test"
[16/Mar/2019:00:18:16.983850679 +0200] conn=395 op=1 MODRDN dn="cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test" newrdn="cn=new_group1" newsuperior="(null)"
[16/Mar/2019:00:18:16.985220682 +0200] conn=395 op=1 RESULT err=53 tag=109 nentries=0 etime=0.0001457309
[16/Mar/2019:00:18:16.986742577 +0200] conn=395 op=2 UNBIND
[16/Mar/2019:00:18:16.986759148 +0200] conn=395 op=2 fd=111 closed - U1
[16/Mar/2019:00:18:17.646398833 +0200] conn=396 fd=111 slot=111 connection from 10.35.59.6 to 10.35.59.6
[16/Mar/2019:00:18:17.649866189 +0200] conn=396 op=0 BIND dn="" method=sasl version=3 mech=GSS-SPNEGO
[16/Mar/2019:00:18:17.658261445 +0200] conn=396 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0008666498 dn="uid=admin,cn=users,cn=accounts,dc=testrelm,dc=test"
[16/Mar/2019:00:18:17.663309849 +0200] conn=396 op=1 MODRDN dn="cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test" newrdn="cn=new_group1" newsuperior="(null)"
[16/Mar/2019:00:18:17.664447318 +0200] conn=396 op=1 RESULT err=68 tag=109 nentries=0 etime=0.0001222326
[16/Mar/2019:00:18:17.665908047 +0200] conn=396 op=2 UNBIND
[16/Mar/2019:00:18:17.665925476 +0200] conn=396 op=2 fd=111 closed - U1
[16/Mar/2019:00:18:18.373950198 +0200] conn=397 fd=111 slot=111 connection from 10.35.59.6 to 10.35.59.6
[16/Mar/2019:00:18:18.377231476 +0200] conn=397 op=0 BIND dn="" method=sasl version=3 mech=GSS-SPNEGO
[16/Mar/2019:00:18:18.387028607 +0200] conn=397 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0010049090 dn="uid=admin,cn=users,cn=accounts,dc=testrelm,dc=test"
[16/Mar/2019:00:18:18.389113987 +0200] conn=397 op=1 SRCH base="cn=ipaconfig,cn=etc,dc=testrelm,dc=test" scope=0 filter="(objectClass=*)" attrs=ALL
[16/Mar/2019:00:18:18.390057169 +0200] conn=397 op=1 RESULT err=0 tag=101 nentries=1 etime=0.0001033036
[16/Mar/2019:00:18:18.391470173 +0200] conn=397 op=2 SRCH base="cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test" scope=0 filter="(objectClass=*)" attrs="memberofindirect cn ipaExternalMember memberOf * memberindirect member gidNumber description aci"
[16/Mar/2019:00:18:18.391887440 +0200] conn=397 op=2 RESULT err=0 tag=101 nentries=1 etime=0.0000519436
[16/Mar/2019:00:18:18.392701207 +0200] conn=397 op=3 SRCH base="dc=testrelm,dc=test" scope=2 filter="(&(member=*)(memberOf=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test))" attrs="member"
[16/Mar/2019:00:18:18.392962301 +0200] conn=397 op=3 RESULT err=0 tag=101 nentries=0 etime=0.0000355271 notes=P pr_idx=0 pr_cookie=-1
[16/Mar/2019:00:18:18.393685237 +0200] conn=397 op=4 SRCH base="dc=testrelm,dc=test" scope=2 filter="(|(member=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test)(memberUser=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test)(memberHost=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test))" attrs=""
[16/Mar/2019:00:18:18.393917036 +0200] conn=397 op=4 RESULT err=0 tag=101 nentries=0 etime=0.0000331677 notes=P pr_idx=0 pr_cookie=-1
[16/Mar/2019:00:18:18.395908231 +0200] conn=397 op=5 UNBIND
[16/Mar/2019:00:18:18.395925660 +0200] conn=397 op=5 fd=111 closed - U1

Well, there are two issues.

[16/Mar/2019:00:18:16.983850679 +0200] conn=395 op=1 MODRDN dn="cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test" newrdn="cn=new_group1" newsuperior="(null)"
[16/Mar/2019:00:18:16.985220682 +0200] conn=395 op=1 RESULT err=53 tag=109 nentries=0 etime=0.0001457309

The above kind of seems expected, though I need to look into it more. But this below:

[16/Mar/2019:00:18:17.663309849 +0200] conn=396 op=1 MODRDN dn="cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test" newrdn="cn=new_group1" newsuperior="(null)"
[16/Mar/2019:00:18:17.664447318 +0200] conn=396 op=1 RESULT err=68 tag=109 nentries=0 etime=0.0001222326

This looks like entry cache corruption, which is exactly what we thought we just "fixed". Is there a system I can look at where this is reproducible?

The test passed for me after removing the patch 0006-Ticket-50260-backend-txn-plugins-can-corrupt-entry-c.patch from the spec file.

Lukas, if you could set up a fresh install (with the fix for 50260 in place) and send me the exact command to reproduce the error, that would be great!!!

Thanks,
Mark

Hi Lukas,

Actually, it looks like I can reproduce the issue now in my CI test!! One quick question: the error 53 (unwilling to perform) is the expected error message, correct? The problem is that on the second attempt the error changes to ALREADY_EXISTS?

Thanks again,
Mark

(In reply to mreynolds from comment #14)
> Hi Lukas,
>
> Actually, it looks like I can reproduce the issue now in my CI test!! One quick question: the error 53 (unwilling to perform) is the expected error message, correct? The problem is that on the second attempt the error changes to ALREADY_EXISTS?

I am fine with the workaround I found (removing 0006-Ticket-50260-backend-txn-plugins-can-corrupt-entry-c.patch).

I will let QE prepare the reproducer mentioned in comment 2.

Just a note: there is another bug, BZ1689585, which makes it more difficult to reproduce on el7.7. They might need rhel7.6 + 389-ds from rhel7.7.

(In reply to Lukas Slebodnik from comment #15)
> I am fine with the workaround I found (removing 0006-Ticket-50260-backend-txn-plugins-can-corrupt-entry-c.patch).

Well, I found the root cause of the last failure. The fix for 50260 was previously not checking all the hashtables in the cache. I have fixed this, and it is now under review...

> I will let QE prepare the reproducer mentioned in comment 2.
>
> Just a note: there is another bug, BZ1689585, which makes it more difficult to reproduce on el7.7. They might need rhel7.6 + 389-ds from rhel7.7.

I was planning on adding this fix to the next 7.6 batch update, but I'm not sure what is going on with the other bug (BZ1689585) - there is not enough info to know whether it is related to this or not (it looks like a CS issue, but I could be wrong).

============================================================================ test session starts ============================================================================
platform linux -- Python 3.6.3, pytest-4.4.0, py-1.8.0, pluggy-0.9.0 -- /opt/rh/rh-python36/root/usr/bin/python3
cachedir: .pytest_cache
metadata: {'Python': '3.6.3', 'Platform': 'Linux-3.10.0-1031.el7.x86_64-x86_64-with-redhat-7.7-Maipo', 'Packages': {'pytest': '4.4.0', 'py': '1.8.0', 'pluggy': '0.9.0'}, 'Plugins': {'metadata': '1.8.0', 'html': '1.20.0'}}
389-ds-base: 1.3.9.1-7.el7
nss: 3.43.0-6.el7
nspr: 4.21.0-1.el7
openldap: 2.4.44-21.el7_6
cyrus-sasl: 2.1.26-23.el7
FIPS: disabled
rootdir: /root
plugins: metadata-1.8.0, html-1.20.0
collected 6 items
.......
================================================================== 6 passed in 314.89 seconds ==================================================================
Marking it as VERIFIED.
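The root cause Mark identified earlier (the 50260 fix was not checking all the hashtables in the entry cache, so an aborted modrdn could leave a stale entry behind and a retried rename would then fail with err=68, ALREADY_EXISTS) can be modeled with a small sketch. This is purely illustrative Python, not the actual 389-ds-base C cache code; the class, method names, and two-hashtable layout are invented to demonstrate the failure mode, and only the result code 68 (LDAP entryAlreadyExists) is standard.

```python
ALREADY_EXISTS = 68  # LDAP resultCode entryAlreadyExists

class EntryCache:
    """Toy entry cache keyed by two hashtables: DN and entry ID."""

    def __init__(self):
        self.by_dn = {}
        self.by_id = {}

    def add(self, dn, entry_id):
        if dn in self.by_dn:
            return ALREADY_EXISTS  # stale or genuine duplicate
        self.by_dn[dn] = entry_id
        self.by_id[entry_id] = dn
        return 0

    def remove_buggy(self, dn):
        # Pre-fix behaviour: cleanup misses one hashtable, leaving a
        # stale DN mapping behind after the aborted operation.
        entry_id = self.by_dn[dn]
        del self.by_id[entry_id]

    def remove_fixed(self, dn):
        # Post-fix behaviour: every hashtable is checked and cleaned.
        entry_id = self.by_dn.pop(dn)
        self.by_id.pop(entry_id, None)
```

With `remove_buggy`, a rejected rename of cn=supercr1 to cn=new_group1 leaves cn=new_group1 stuck in the DN table, so the retry gets ALREADY_EXISTS even though the entry was never really renamed; `remove_fixed` makes the retry behave consistently.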
(In reply to Akshay Adhikari from comment #17)

The upstream test code is in: 389-ds-base/dirsrvtests/tests/suites/memberof_plugin/regression_test.py

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2152