Bug 1725168 - sssd-proxy crashes resolving groups with no members
Summary: sssd-proxy crashes resolving groups with no members
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: sssd
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: SSSD Maintainers
QA Contact: sssd-qe
URL:
Whiteboard:
Depends On: 1724717
Blocks:
 
Reported: 2019-06-28 14:58 UTC by Jakub Hrozek
Modified: 2020-06-24 10:57 UTC
CC List: 14 users

Fixed In Version: sssd-2.2.0-3.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1724717
Environment:
Last Closed: 2019-11-05 22:34:25 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links:
  Github SSSD sssd issue 5006 (closed): Logins fail after upgrade to 2.2.0; last updated 2020-08-31 09:34:18 UTC
  Red Hat Product Errata RHSA-2019:3651; last updated 2019-11-05 22:34:44 UTC

Description Jakub Hrozek 2019-06-28 14:58:40 UTC
+++ This bug was initially created as a clone of Bug #1724717 +++

I've been using the configuration I'm on for several Fedora releases on several machines. After 2.2.0 I can't log into the console as my usual user, but I can as root.

I'm using sssd_krb5 for auth and sssd_proxy for local users, with idmapd.conf for consistent uid/gid.

Reverting to 2.1.0 fixes the issue.

Let me know what I can provide to help troubleshoot.
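As a sketch, the downgrade workaround on a dnf-based system (assuming the 2.1.0 build is still available in the repos) is:

    # roll the sssd packages back to the last known-good version
    dnf downgrade sssd-2.1.0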

--- Additional comment from Sumit Bose on 2019-06-27 16:19:03 UTC ---

Hi,

please try

    sss_debuglevel 9

then try to log in as the user, collect the logs from /var/log/sssd after the attempt, attach them to the ticket, and finally

    sss_debuglevel 0

bye,
Sumit
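As a sketch, the whole collection sequence above in one shell session (the log paths are the SSSD defaults; the archive name is just an example):

    # raise the SSSD debug level on the fly
    sss_debuglevel 9

    # ...reproduce the failing login attempt...

    # bundle the logs for attaching to the ticket
    tar czf sssd-logs.tar.gz /var/log/sssd/*.log

    # restore the default debug level
    sss_debuglevel 0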

--- Additional comment from Gwyn Ciesla on 2019-06-27 16:35:55 UTC ---

I set the debug level, updated to 2.2.0, logged out, and was able to log back in successfully. It's been intermittent, but at least I have the debug level set so I can get the logs next time it fails. I'll let you know.

--- Additional comment from Gwyn Ciesla on 2019-06-28 13:18 UTC ---

It happened this morning. I'm attaching the logs. I then had to downgrade sssd again and remove /var/lib/sssd/cache* to log in.

--- Additional comment from Jakub Hrozek on 2019-06-28 14:05:18 UTC ---

From the logs it looks like SSSD is crashing quite often, e.g.:

(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [sysdb_store_user] (0x0400): User "zabbix@bamboo" has been stored
(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [get_initgr_groups_process] (0x0200): The initgroups call returned 'NOTFOUND'. Assume the user is only member of its primary group (973)
(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [get_initgr_groups_process] (0x0100): User [zabbix] appears to be member of 1 groups
(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [get_gr_gid] (0x0400): Searching group by gid (973)
(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [handle_getgr_result] (0x0200): Group found: (zabbix, 973)
(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [ldb] (0x4000): Added timed event "ldb_kv_callback": 0x55b4eaefd1c0

(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [ldb] (0x4000): Added timed event "ldb_kv_timeout": 0x55b4eaefd290

(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [ldb] (0x4000): Running timer event 0x55b4eaefd1c0 "ldb_kv_callback"

(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [ldb] (0x4000): Destroying timer event 0x55b4eaefd290 "ldb_kv_timeout"

(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [ldb] (0x4000): Destroying timer event 0x55b4eaefd1c0 "ldb_kv_callback"

---> here

(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [server_setup] (0x0400): CONFDB: /var/lib/sss/db/config.ldb

--> server_setup() is already a new instance.

Do you have e.g. abrt or systemd-coredump set up so you could share a core file? Could you share your sssd.conf (feel free to sanitize hostnames etc.)?
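If systemd-coredump caught the crash, a sketch for extracting the core with coredumpctl (part of systemd; the output file name is arbitrary):

    # list recent core dumps from the sssd backend
    coredumpctl list sssd_be

    # write the most recent matching core to a file for attachment
    coredumpctl dump sssd_be -o sssd_be.core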

--- Additional comment from Gwyn Ciesla on 2019-06-28 14:18 UTC ---



--- Additional comment from Gwyn Ciesla on 2019-06-28 14:18 UTC ---



--- Additional comment from Gwyn Ciesla on 2019-06-28 14:19:13 UTC ---

Let me know if these help.

--- Additional comment from Lukas Slebodnik on 2019-06-28 14:49:09 UTC ---

The regression in proxy provider is fixed in https://pagure.io/SSSD/sssd/pull-request/4036

--- Additional comment from Jakub Hrozek on 2019-06-28 14:52:22 UTC ---

Upstream ticket:
https://pagure.io/SSSD/sssd/issue/4037

--- Additional comment from Jakub Hrozek on 2019-06-28 14:54:23 UTC ---

* master: e1b678c0cce73494d986610920b03956c1dbb62a

Comment 1 Jakub Hrozek 2019-06-28 14:59:44 UTC
* master: e1b678c0cce73494d986610920b03956c1dbb62a

Comment 2 Jakub Hrozek 2019-06-28 15:03:20 UTC
To reproduce, set up a domain like this:

id_provider = proxy
proxy_lib_name = files
enumerate = true
ignore_group_members = False
debug_level=9

then run "id $user" for a user from passwd, sssd_be will crash resolving the primary group of the user.

Comment 4 Niranjan Mallapadi Raghavender 2019-08-02 13:50:05 UTC

Reproducer:
===========

Version:
libsss_nss_idmap-2.2.0-1.el8.x86_64
sssd-winbind-idmap-2.2.0-1.el8.x86_64
sssd-nfs-idmap-2.2.0-1.el8.x86_64
sssd-krb5-common-2.2.0-1.el8.x86_64
sssd-ipa-2.2.0-1.el8.x86_64
sssd-tools-2.2.0-1.el8.x86_64
sssd-polkit-rules-2.2.0-1.el8.x86_64
libsss_idmap-2.2.0-1.el8.x86_64
python3-sssdconfig-2.2.0-1.el8.noarch
sssd-libwbclient-2.2.0-1.el8.x86_64
libsss_autofs-2.2.0-1.el8.x86_64
sssd-common-2.2.0-1.el8.x86_64
sssd-common-pac-2.2.0-1.el8.x86_64
sssd-ad-2.2.0-1.el8.x86_64
sssd-krb5-2.2.0-1.el8.x86_64
python3-sss-2.2.0-1.el8.x86_64
sssd-2.2.0-1.el8.x86_64
sssd-kcm-2.2.0-1.el8.x86_64
sssd-proxy-2.2.0-1.el8.x86_64
libsss_certmap-2.2.0-1.el8.x86_64
sssd-client-2.2.0-1.el8.x86_64
libsss_sudo-2.2.0-1.el8.x86_64
sssd-dbus-2.2.0-1.el8.x86_64
sssd-ldap-2.2.0-1.el8.x86_64
libsss_simpleifp-2.2.0-1.el8.x86_64

1. Set up a Kerberos server
2. Add a user test1 to the Kerberos database
3. Create a local user test1 in /etc/passwd

4. Configure sssd.conf as below:

$ cat /etc/sssd/sssd.conf


[sssd]
services = nss, pam
domains = LOCAL

[nss]
homedir_substring = /home

[domain/LOCAL]
id_provider = proxy
proxy_lib_name = files
enumerate = true
ignore_group_members = False
debug_level=9
cache_credentials = True
auth_provider = krb5
krb5_server = ci-vm-10-0-144-177.hosted.upshift.rdu2.redhat.com
krb5_realm = EXAMPLE.TEST
krb5_validate = true


5. Restart sssd
6. Issue the command "id test1"

sssd_be crashes:
Aug 02 09:43:31 ci-vm-10-0-144-177.hosted.upshift.rdu2.redhat.com sssd[be[LOCAL]][26786]: Starting up
Aug 02 09:43:31 ci-vm-10-0-144-177.hosted.upshift.rdu2.redhat.com systemd-coredump[26785]: Process 26779 (sssd_be) of user 0 dumped core.
                                                                                           
                                                                                           Stack trace of thread 26779:
                                                                                           #0  0x00007fb4f8519f5b save_group (libsss_proxy.so)
                                                                                           #1  0x00007fb4f851a9f3 get_gr_gid.isra.4 (libsss_proxy.so)
                                                                                           #2  0x00007fb4f851b742 proxy_account_info_handler_send (libsss_proxy.so)
                                                                                           #3  0x00005643f54c6e6c dp_req_send (sssd_be)
                                                                                           #4  0x00005643f54c9b6e dp_get_account_info_send (sssd_be)
                                                                                           #5  0x00007fb50dfb3a52 _sbus_sss_invoke_in_uusss_out_qus_step (libsss_iface.so)
                                                                                           #6  0x00007fb50d95a279 tevent_common_invoke_timer_handler (libtevent.so.0)
                                                                                           #7  0x00007fb50d95a41e tevent_common_loop_timer_delay (libtevent.so.0)
                                                                                           #8  0x00007fb50d95b959 epoll_event_loop_once (libtevent.so.0)
                                                                                           #9  0x00007fb50d95985b std_event_loop_once (libtevent.so.0)
                                                                                           #10 0x00007fb50d954a55 _tevent_loop_once (libtevent.so.0)
                                                                                           #11 0x00007fb50d954cfb tevent_common_loop_wait (libtevent.so.0)
                                                                                           #12 0x00007fb50d9597eb std_event_loop_wait (libtevent.so.0)
                                                                                           #13 0x00007fb5108e7ec7 server_loop (libsss_util.so)
                                                                                           #14 0x00005643f54b93bb main (sssd_be)
                                                                                           #15 0x00007fb50ce2c873 __libc_start_main (libc.so.6)
                                                                                           #16 0x00005643f54b957e _start (sssd_be)



7. Update sssd to 2.2.0-5

[root@ci-vm-10-0-144-177 packages]# rpm -qa | grep sss
libsss_nss_idmap-2.2.0-1.el8.x86_64
sssd-winbind-idmap-2.2.0-1.el8.x86_64
sssd-nfs-idmap-2.2.0-1.el8.x86_64
sssd-client-2.2.0-5.el8.x86_64
sssd-dbus-2.2.0-5.el8.x86_64
python3-sss-2.2.0-5.el8.x86_64
sssd-2.2.0-5.el8.x86_64
sssd-polkit-rules-2.2.0-5.el8.x86_64
sssd-libwbclient-2.2.0-1.el8.x86_64
libsss_autofs-2.2.0-1.el8.x86_64
python3-sssdconfig-2.2.0-5.el8.noarch
sssd-common-2.2.0-5.el8.x86_64
sssd-common-pac-2.2.0-5.el8.x86_64
sssd-ad-2.2.0-5.el8.x86_64
sssd-ldap-2.2.0-5.el8.x86_64
sssd-proxy-2.2.0-5.el8.x86_64
sssd-ipa-2.2.0-5.el8.x86_64
sssd-tools-2.2.0-5.el8.x86_64
sssd-kcm-2.2.0-5.el8.x86_64
libsss_certmap-2.2.0-1.el8.x86_64
libsss_sudo-2.2.0-1.el8.x86_64
libsss_idmap-2.2.0-5.el8.x86_64
sssd-krb5-common-2.2.0-5.el8.x86_64
sssd-krb5-2.2.0-5.el8.x86_64
libsss_simpleifp-2.2.0-5.el8.x86_64
[root@ci-vm-10-0-144-177 packages]# 

8. Run "id test1"
[root@ci-vm-10-0-144-177 packages]# ps -ef | grep sssd
root     27738     1  0 09:47 ?        00:00:00 /usr/sbin/sssd -i --logger=files
root     27740 27738  0 09:47 ?        00:00:00 /usr/libexec/sssd/sssd_be --domain LOCAL --uid 0 --gid 0 --logger=files
root     27741 27738  0 09:47 ?        00:00:00 /usr/libexec/sssd/sssd_nss --uid 0 --gid 0 --logger=files
root     27742 27738  0 09:47 ?        00:00:00 /usr/libexec/sssd/sssd_pam --uid 0 --gid 0 --logger=files
root     27746 24466  0 09:47 pts/1    00:00:00 grep --color=auto sssd
[root@ci-vm-10-0-144-177 packages]#  

9. sssd doesn't crash when "id test1" is run. (A sketch for getting a readable backtrace out of the core from step 6 follows.)
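For completeness, a sketch for turning the core captured in step 6 into a line-level backtrace (assuming debuginfo repositories are enabled; 26779 is the crashed PID from the journal output above):

    # install matching debug symbols (dnf debuginfo-install comes with dnf-plugins-core)
    dnf debuginfo-install sssd

    # open the captured core under gdb
    coredumpctl gdb 26779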

Comment 6 errata-xmlrpc 2019-11-05 22:34:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:3651

