Bug 1680245 - Error in memberof-plugin causes failures in ipa group-mod
Summary: Error in memberof-plugin causes failures in ipa group-mod
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 7.7
Assignee: mreynolds
QA Contact: RHDS QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-23 10:56 UTC by Lukas Slebodnik
Modified: 2020-09-13 22:18 UTC
CC List: 12 users

Fixed In Version: 389-ds-base-1.3.9.1-3.el7
Doc Type: Bug Fix
Doc Text:
Cause: When there are already heavily nested group memberships and you perform a modrdn, it is possible that the entry was already a member of a group.
Consequence: The modrdn operation is incorrectly rejected.
Fix: Do not reject the operation if the entry is already a member of the group the memberOf plugin is trying to update.
Result: Modrdn operations are not rejected if the memberOf plugin has already applied the update.
Clone Of:
Environment:
Last Closed: 2019-08-06 12:59:17 UTC
Target Upstream Version:
Embargoed:




Links
Github 389ds 389-ds-base issue 3295 (last updated 2020-09-13 22:18:41 UTC)
Github 389ds 389-ds-base issue 3297 (last updated 2020-09-13 22:18:48 UTC)
Red Hat Product Errata RHBA-2019:2152 (last updated 2019-08-06 12:59:29 UTC)

Description Lukas Slebodnik 2019-02-23 10:56:19 UTC
Description of problem:
There are failures when calling ipa group-mod after a recent upgrade of 389-ds-base.

Version-Release number of selected component (if applicable):
sh$ rpm -q 389-ds-base
389-ds-base-1.3.9.1-1.el7.x86_64

How reproducible:
Deterministic


Additional info:
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
::   Rename a Child group when groups are Nested
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

:: [ 12:35:29 ] :: [   LOG    ] :: Executing: ipa group-mod --rename=animalkingdompark animalkingdom
:: [ 12:35:29 ] :: [  BEGIN   ] :: Renaming animalkingdom to animalkingdompark :: actually running 'ipa group-mod --rename=animalkingdompark animalkingdom'
ipa: ERROR: Operations error: 
:: [ 12:35:29 ] :: [   FAIL   ] :: Renaming animalkingdom to animalkingdompark (Expected 0, got 1)
:: [ 12:35:29 ] :: [  BEGIN   ] ::  Renamed group should now be returned by group-find command. :: actually running 'findGroup animalkingdompark'
----------------
0 groups matched
----------------
----------------------------
Number of entries returned 0
----------------------------
:: [ 12:35:30 ] :: [   LOG    ] :: WARNING: Failed to find group.
:: [ 12:35:30 ] :: [   FAIL   ] ::  Renamed group should now be returned by group-find command. (Expected 0, got 1)
Member groups: epcot, animalkingdompark Indirect Member groups: japan, germany, dinasaurs, fish

And journald contains the following errors:
Feb 23 12:35:24 kvm-01-guest15.testrelm.test ns-slapd[25475]: [23/Feb/2019:12:35:24.979071525 +0200] - ERR - conn=488 op=1 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21
Feb 23 12:35:26 kvm-01-guest15.testrelm.test ns-slapd[25475]: GSSAPI server step 1
Feb 23 12:35:26 kvm-01-guest15.testrelm.test ns-slapd[25475]: GSSAPI server step 1
Feb 23 12:35:27 kvm-01-guest15.testrelm.test ns-slapd[25475]: GSSAPI server step 1
Feb 23 12:35:29 kvm-01-guest15.testrelm.test ns-slapd[25475]: GSSAPI server step 1
Feb 23 12:35:29 kvm-01-guest15.testrelm.test ns-slapd[25475]: [23/Feb/2019:12:35:29.713299225 +0200] - ERR - memberof-plugin - memberof_postop_modrdn - Update failed for (cn=animalkingdom,cn=groups,cn=accounts,dc=testrelm,dc=test), error (1)
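
For readers without the test harness, here is a minimal reproducer sketch; the exact nesting is an assumption based on the group names visible in the log above:

kinit admin
ipa group-add epcot
ipa group-add animalkingdom
ipa group-add japan
ipa group-add germany
# nest the groups so the memberOf plugin has fixups to apply on a rename
ipa group-add-member animalkingdom --groups=japan,germany
ipa group-add-member epcot --groups=animalkingdom
# with 389-ds-base-1.3.9.1-1.el7, the rename then fails:
ipa group-mod --rename=animalkingdompark animalkingdom
# ipa: ERROR: Operations error: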

Comment 5 mreynolds 2019-02-23 12:14:47 UTC
This issue is being addressed in:

https://pagure.io/389-ds-base/issue/50236   --> this is the cause of error 1, which then triggered ticket 50238

https://pagure.io/389-ds-base/issue/50238   --> this is the cause of error 32 following the modrdn failure

Both of these have fixes that are currently under review...

Comment 10 Lukas Slebodnik 2019-03-15 23:20:54 UTC
Fails for me in a different place:

:: [ 00:18:16 ] :: [  BEGIN   ] :: Verify expected error message for renaming a user private group. :: actually running 'verifyErrorMsg "ipa group-mod --rename=new_group1 supercr1" "ipa: ERROR: Server is unwilling to perform: Renaming a managed entry is not allowed. It needs to be manually unlinked first."'
:: [ 00:18:16 ] :: [   LOG    ] :: Executing: ipa group-mod --rename=new_group1 supercr1
ipa: ERROR: Server is unwilling to perform: Renaming a managed entry is not allowed. It needs to be manually unlinked first.
:: [ 00:18:17 ] :: [   LOG    ] :: "ipa group-mod --rename=new_group1 supercr1" failed as expected.
:: [ 00:18:17 ] :: [   FAIL   ] :: ERROR: Message not as expected. GOT: ipa: ERROR: This entry already exists  EXP: ipa: ERROR: Server is unwilling to perform: Renaming a managed entry is not allowed. It needs to be manually unlinked first. 
:: [ 00:18:17 ] :: [   FAIL   ] :: Verify expected error message for renaming a user private group. (Expected 0, got 1)

error:
[16/Mar/2019:00:18:28.419544603 +0200] - ERR - conn=407 op=1 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21
[16/Mar/2019:00:18:34.486770286 +0200] - ERR - conn=413 op=1 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21
[16/Mar/2019:00:18:42.119650205 +0200] - ERR - conn=421 op=1 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21
[16/Mar/2019:00:18:48.927058123 +0200] - ERR - conn=428 op=1 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21
[16/Mar/2019:00:18:56.472345265 +0200] - ERR - conn=436 op=3 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21
[16/Mar/2019:00:19:42.229239329 +0200] - ERR - conn=488 op=1 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21
[16/Mar/2019:00:19:46.719156919 +0200] - ERR - conn=492 op=1 - urp_fixup_add_cenotaph - failed to add cenotaph, err= 21


access:
[16/Mar/2019:00:18:16.197612220 +0200] conn=394 fd=111 slot=111 connection from 10.35.59.6 to 10.35.59.6
[16/Mar/2019:00:18:16.201108077 +0200] conn=394 op=0 BIND dn="" method=sasl version=3 mech=GSS-SPNEGO
[16/Mar/2019:00:18:16.209145954 +0200] conn=394 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0008282899 dn="uid=admin,cn=users,cn=accounts,dc=testrelm,dc=test"
[16/Mar/2019:00:18:16.211594317 +0200] conn=394 op=1 SRCH base="cn=ipaconfig,cn=etc,dc=testrelm,dc=test" scope=0 filter="(objectClass=*)" attrs=ALL
[16/Mar/2019:00:18:16.212509482 +0200] conn=394 op=1 RESULT err=0 tag=101 nentries=1 etime=0.0001004267
[16/Mar/2019:00:18:16.213875178 +0200] conn=394 op=2 SRCH base="cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test" scope=0 filter="(objectClass=*)" attrs="memberofindirect cn ipaExternalMember memberOf * memberindirect member gidNumber description aci"
[16/Mar/2019:00:18:16.214267342 +0200] conn=394 op=2 RESULT err=0 tag=101 nentries=1 etime=0.0000479146
[16/Mar/2019:00:18:16.215067304 +0200] conn=394 op=3 SRCH base="dc=testrelm,dc=test" scope=2 filter="(&(member=*)(memberOf=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test))" attrs="member"
[16/Mar/2019:00:18:16.215305377 +0200] conn=394 op=3 RESULT err=0 tag=101 nentries=0 etime=0.0000318126 notes=P pr_idx=0 pr_cookie=-1
[16/Mar/2019:00:18:16.216022001 +0200] conn=394 op=4 SRCH base="dc=testrelm,dc=test" scope=2 filter="(|(member=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test)(memberUser=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test)(memberHost=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test))" attrs=""
[16/Mar/2019:00:18:16.216234124 +0200] conn=394 op=4 RESULT err=0 tag=101 nentries=0 etime=0.0000296314 notes=P pr_idx=0 pr_cookie=-1
[16/Mar/2019:00:18:16.218123422 +0200] conn=394 op=5 UNBIND
[16/Mar/2019:00:18:16.218140959 +0200] conn=394 op=5 fd=111 closed - U1
[16/Mar/2019:00:18:16.967493062 +0200] conn=395 fd=111 slot=111 connection from 10.35.59.6 to 10.35.59.6
[16/Mar/2019:00:18:16.970847585 +0200] conn=395 op=0 BIND dn="" method=sasl version=3 mech=GSS-SPNEGO
[16/Mar/2019:00:18:16.978669232 +0200] conn=395 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0008083337 dn="uid=admin,cn=users,cn=accounts,dc=testrelm,dc=test"
[16/Mar/2019:00:18:16.983850679 +0200] conn=395 op=1 MODRDN dn="cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test" newrdn="cn=new_group1" newsuperior="(null)"
[16/Mar/2019:00:18:16.985220682 +0200] conn=395 op=1 RESULT err=53 tag=109 nentries=0 etime=0.0001457309
[16/Mar/2019:00:18:16.986742577 +0200] conn=395 op=2 UNBIND
[16/Mar/2019:00:18:16.986759148 +0200] conn=395 op=2 fd=111 closed - U1
[16/Mar/2019:00:18:17.646398833 +0200] conn=396 fd=111 slot=111 connection from 10.35.59.6 to 10.35.59.6
[16/Mar/2019:00:18:17.649866189 +0200] conn=396 op=0 BIND dn="" method=sasl version=3 mech=GSS-SPNEGO
[16/Mar/2019:00:18:17.658261445 +0200] conn=396 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0008666498 dn="uid=admin,cn=users,cn=accounts,dc=testrelm,dc=test"
[16/Mar/2019:00:18:17.663309849 +0200] conn=396 op=1 MODRDN dn="cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test" newrdn="cn=new_group1" newsuperior="(null)"
[16/Mar/2019:00:18:17.664447318 +0200] conn=396 op=1 RESULT err=68 tag=109 nentries=0 etime=0.0001222326
[16/Mar/2019:00:18:17.665908047 +0200] conn=396 op=2 UNBIND
[16/Mar/2019:00:18:17.665925476 +0200] conn=396 op=2 fd=111 closed - U1
[16/Mar/2019:00:18:18.373950198 +0200] conn=397 fd=111 slot=111 connection from 10.35.59.6 to 10.35.59.6
[16/Mar/2019:00:18:18.377231476 +0200] conn=397 op=0 BIND dn="" method=sasl version=3 mech=GSS-SPNEGO
[16/Mar/2019:00:18:18.387028607 +0200] conn=397 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0010049090 dn="uid=admin,cn=users,cn=accounts,dc=testrelm,dc=test"
[16/Mar/2019:00:18:18.389113987 +0200] conn=397 op=1 SRCH base="cn=ipaconfig,cn=etc,dc=testrelm,dc=test" scope=0 filter="(objectClass=*)" attrs=ALL
[16/Mar/2019:00:18:18.390057169 +0200] conn=397 op=1 RESULT err=0 tag=101 nentries=1 etime=0.0001033036
[16/Mar/2019:00:18:18.391470173 +0200] conn=397 op=2 SRCH base="cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test" scope=0 filter="(objectClass=*)" attrs="memberofindirect cn ipaExternalMember memberOf * memberindirect member gidNumber description aci"
[16/Mar/2019:00:18:18.391887440 +0200] conn=397 op=2 RESULT err=0 tag=101 nentries=1 etime=0.0000519436
[16/Mar/2019:00:18:18.392701207 +0200] conn=397 op=3 SRCH base="dc=testrelm,dc=test" scope=2 filter="(&(member=*)(memberOf=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test))" attrs="member"
[16/Mar/2019:00:18:18.392962301 +0200] conn=397 op=3 RESULT err=0 tag=101 nentries=0 etime=0.0000355271 notes=P pr_idx=0 pr_cookie=-1
[16/Mar/2019:00:18:18.393685237 +0200] conn=397 op=4 SRCH base="dc=testrelm,dc=test" scope=2 filter="(|(member=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test)(memberUser=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test)(memberHost=cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test))" attrs=""
[16/Mar/2019:00:18:18.393917036 +0200] conn=397 op=4 RESULT err=0 tag=101 nentries=0 etime=0.0000331677 notes=P pr_idx=0 pr_cookie=-1
[16/Mar/2019:00:18:18.395908231 +0200] conn=397 op=5 UNBIND
[16/Mar/2019:00:18:18.395925660 +0200] conn=397 op=5 fd=111 closed - U1
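
For reference, the failing step reduces to repeating a single command; a sketch, assuming the supercr1 user private group from the logs above:

ipa group-mod --rename=new_group1 supercr1
# expected on every attempt:
#   ipa: ERROR: Server is unwilling to perform: Renaming a managed entry is
#   not allowed. It needs to be manually unlinked first.
# observed on the second attempt instead:
#   ipa: ERROR: This entry already exists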

Comment 11 mreynolds 2019-03-16 02:49:19 UTC
Well, there are two issues:


[16/Mar/2019:00:18:16.983850679 +0200] conn=395 op=1 MODRDN
dn="cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test"
newrdn="cn=new_group1" newsuperior="(null)"
[16/Mar/2019:00:18:16.985220682 +0200] conn=395 op=1 RESULT err=53 tag=109
nentries=0 etime=0.0001457309

The result above kind of seems expected, though I need to look into it more. But this below:

[16/Mar/2019:00:18:17.663309849 +0200] conn=396 op=1 MODRDN
dn="cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test"
newrdn="cn=new_group1" newsuperior="(null)"
[16/Mar/2019:00:18:17.664447318 +0200] conn=396 op=1 RESULT err=68 tag=109
nentries=0 etime=0.0001222326

This looks like entry cache corruption, which is exactly what we thought we just "fixed".
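
In other words, replaying the MODRDN from the access log twice shows the progression; a sketch with the OpenLDAP client tools, where the bind details are assumptions (the test bound as admin via GSS-SPNEGO):

ldapmodrdn -Y GSSAPI "cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test" "cn=new_group1"
# 1st attempt: err=53 (unwillingToPerform), the Managed Entries plugin
# rejecting the rename - expected
ldapmodrdn -Y GSSAPI "cn=supercr1,cn=groups,cn=accounts,dc=testrelm,dc=test" "cn=new_group1"
# 2nd attempt: err=68 (alreadyExists), suggesting the rejected rename left a
# stale copy of the entry in the cache - the corruption described above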

Is there a system I can look at where this is reproducible?

Comment 12 Lukas Slebodnik 2019-03-16 17:21:08 UTC
The test passed for me after removing the patch
0006-Ticket-50260-backend-txn-plugins-can-corrupt-entry-c.patch
from the spec file.

Comment 13 mreynolds 2019-03-17 12:38:56 UTC
Lukas,

If you could set up a fresh install (with the fix for 50260 in place) and send me the exact command to reproduce the error, that would be great!

Thanks,
Mark

Comment 14 mreynolds 2019-03-17 13:07:03 UTC
Hi Lukas,

Actually, it looks like I can reproduce the issue now in my CI test! One quick question: error 53 (unwilling to perform) is the expected error message, correct? And the problem is that on the second attempt the error changes to ALREADY_EXISTS?

Thanks again,
Mark

Comment 15 Lukas Slebodnik 2019-03-18 09:13:37 UTC
(In reply to mreynolds from comment #14)
> Hi Lukas,
> 
> Actually, it looks like I can reproduce the issue now in my CI test! One
> quick question: error 53 (unwilling to perform) is the expected error
> message, correct? And the problem is that on the second attempt the error
> changes to ALREADY_EXISTS?
> 

I am fine with the workaround I found (removing 0006-Ticket-50260-backend-txn-plugins-can-corrupt-entry-c.patch).

I'll let QE prepare the reproducer mentioned in comment 2.

Just a note: there is another bug, BZ1689585, which makes it more difficult to reproduce on el7.7.
They might need RHEL 7.6 + 389-ds from RHEL 7.7.

Comment 16 mreynolds 2019-03-18 13:50:16 UTC
(In reply to Lukas Slebodnik from comment #15)
> I am fine with the workaround I found (removing
> 0006-Ticket-50260-backend-txn-plugins-can-corrupt-entry-c.patch)

Well, I found the root cause of the last failure. The fix for 50260 previously did not check all the hashtables in the cache. I have fixed this, and it is now under review...

> 
> I'll let QE prepare the reproducer mentioned in comment 2.
> 
> Just a note: there is another bug, BZ1689585, which makes it more difficult
> to reproduce on el7.7.
> They might need RHEL 7.6 + 389-ds from RHEL 7.7.

I was planning on adding this fix to the next 7.6 batch update, but I'm not sure what is going on with the other bug (BZ1689585) - there's not enough info to know whether it's related to this or not (it looks like a CS issue, but I could be wrong).

Comment 17 Akshay Adhikari 2019-05-22 09:13:24 UTC
============================================================================ test session starts ============================================================================
platform linux -- Python 3.6.3, pytest-4.4.0, py-1.8.0, pluggy-0.9.0 -- /opt/rh/rh-python36/root/usr/bin/python3
cachedir: .pytest_cache
metadata: {'Python': '3.6.3', 'Platform': 'Linux-3.10.0-1031.el7.x86_64-x86_64-with-redhat-7.7-Maipo', 'Packages': {'pytest': '4.4.0', 'py': '1.8.0', 'pluggy': '0.9.0'}, 'Plugins': {'metadata': '1.8.0', 'html': '1.20.0'}}
389-ds-base: 1.3.9.1-7.el7
nss: 3.43.0-6.el7
nspr: 4.21.0-1.el7
openldap: 2.4.44-21.el7_6
cyrus-sasl: 2.1.26-23.el7
FIPS: disabled
rootdir: /root
plugins: metadata-1.8.0, html-1.20.0
collected 6 items                                                                                                                                                           

.......

================================================================== 6 passed in 314.89 seconds ==================================================================

Marking it as VERIFIED.

Comment 18 Akshay Adhikari 2019-05-22 09:18:56 UTC
(In reply to Akshay Adhikari from comment #17)

The upstream test code is in: 389-ds-base/dirsrvtests/tests/suites/memberof_plugin/regression_test.py
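
To re-run that suite locally, a sketch assuming a 389-ds-base source checkout with lib389 and pytest installed:

py.test -v dirsrvtests/tests/suites/memberof_plugin/regression_test.py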

Comment 21 errata-xmlrpc 2019-08-06 12:59:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2152

