As I already asked in the support case, we need the following information to proceed:
- core file
- log files
There is a sosreport in the case, but unfortunately it was generated with the default (silent) debug_level. We keep a document upstream, https://docs.pagure.org/SSSD.sssd/users/reporting_bugs.html, that might be of some help.
Hi,
the core dump looks similar to the one from https://bugzilla.redhat.com/show_bug.cgi?id=1734040.
I guess it is not strictly 'realm permit' which triggers the issue, but the restart triggered by 'realm permit' to re-read the changed permission list. Can you ask the customer if there is a segfault after just calling 'systemctl restart sssd' as well?
Additionally I wonder if the customer can try to add
ldap_search_base = dc=_location_,dc=company,dc=com
to each [domain/...] section in their sssd.conf where _location_ is replaced by the first domain component.
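For example, with a placeholder location 'amer', the resulting section might look like this (the domain name and base DN here are illustrative only):

```ini
[domain/amer.company.com]
ldap_search_base = dc=amer,dc=company,dc=com
```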
bye,
Sumit
Hi, jfyi I sent the following comment to the mailing list:
Hi,
it would be good to see some before and after debug logs.
If ldap_sasl_authid is not set SSSD tries to determine it from the
keytab with a priority as given in the sssd-ldap man page:
hostname@REALM
netbiosname$@REALM
host/hostname@REALM
*$@REALM
host/*@REALM
host/*
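As a rough illustration of that priority order (a hypothetical sketch for this discussion, not SSSD's actual matching code), the selection could be modeled like this:

```python
import fnmatch

def select_sasl_authid(keytab_principals, hostname, netbiosname, realm):
    """Hypothetical sketch of the ldap_sasl_authid fallback order
    described in the sssd-ldap man page; NOT SSSD's implementation."""
    patterns = [
        f"{hostname}@{realm}",        # hostname@REALM
        f"{netbiosname}$@{realm}",    # netbiosname$@REALM
        f"host/{hostname}@{realm}",   # host/hostname@REALM
        f"*$@{realm}",                # *$@REALM
        f"host/*@{realm}",            # host/*@REALM
        "host/*",                     # host/*
    ]
    for pattern in patterns:
        for principal in keytab_principals:
            if fnmatch.fnmatchcase(principal, pattern):
                return principal
    return None

# A keytab issued for AMER.COMPANY.COM: in a domain section whose realm
# is EMEA.COMPANY.COM, none of the '@REALM' patterns can match, so only
# the final catch-all 'host/*' pattern fires.
keytab = ["host/amerhost1.amer.company.com@AMER.COMPANY.COM"]
print(select_sasl_authid(keytab, "emeahost1.emea.company.com",
                         "EMEAHOST1", "EMEA.COMPANY.COM"))
```

This illustrates why, for the non-AMER domains, the principal picked from the keytab can be one that belongs to the wrong realm.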
For a domain other than AMER.COMPANY.COM, all patterns with '@REALM' would
not match, since the realm in the keytab will be AMER.COMPANY.COM. The
last entry would match 'host/amerhost1.COM', but maybe there
is another entry earlier in the keytab which matches first? The
logs would show which principal was selected with ldap_sasl_authid set.
What is a bit puzzling is that by default
'host/amerhost1.COM' is a service principal, and AD does not
allow service principals for authentication. So I assume that you either
added 'host/amerhost1.COM' to the userPrincipalName
attribute of the host object or configured AD to allow service
principals for authentication.
The second thing which is puzzling: if the wrong principal was chosen
for authentication, authentication should just fail and the backend should
switch into offline mode.
And finally, according to the case you've opened, the crash happened in
the process which handles the AMER.COMPANY.COM domain and not in one of
the others which might have chosen a wrong principal.
So, if you can attach to the case the logs with 'debug_level=9' in all
[domain/...] sections of sssd.conf, once with ldap_sasl_authid set and
once without, it might help to understand why SSSD fails without
ldap_sasl_authid set.
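A minimal sketch of the requested logging change, assuming a domain section named 'amer.company.com' (the name is a placeholder; the resulting logs are written under /var/log/sssd/):

```ini
[domain/amer.company.com]
debug_level = 9
```

The same debug_level line would go into every [domain/...] section, followed by a restart of sssd.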
bye,
Sumit