Bug 1724717 - sssd-proxy crashes resolving groups with no members
Summary: sssd-proxy crashes resolving groups with no members
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: sssd
Version: 30
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Jakub Hrozek
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks: 1725168
 
Reported: 2019-06-27 15:43 UTC by Gwyn Ciesla
Modified: 2020-05-02 19:10 UTC
CC: 9 users

Fixed In Version: sssd-2.2.0-3.fc30 sssd-2.2.0-3.fc29
Clone Of:
Cloned to: 1725168
Environment:
Last Closed: 2019-07-12 00:58:48 UTC
Type: Bug
Embargoed:


Attachments
sssd logs (501.25 KB, application/gzip), 2019-06-28 13:18 UTC, Gwyn Ciesla
Core dump (578.14 KB, application/x-lz4), 2019-06-28 14:18 UTC, Gwyn Ciesla
Sanitized sssd.conf (307 bytes, text/plain), 2019-06-28 14:18 UTC, Gwyn Ciesla


Links
Github SSSD sssd issues 5006 (closed): Logins fail after upgrade to 2.2.0, last updated 2020-07-15 13:53:13 UTC

Description Gwyn Ciesla 2019-06-27 15:43:46 UTC
I've been using the configuration I'm on for several Fedora releases on several machines. After 2.2.0 I can't log into the console as my usual user, but I can as root.

I'm using sssd_krb5 for auth, and sssd_proxy to map to local users with idmapd.conf for consistent uid/gid.

Reverting to 2.1.0 fixes the issue.

Let me know what I can provide to help troubleshoot.

Comment 1 Sumit Bose 2019-06-27 16:19:03 UTC
Hi,

please run

    sss_debuglevel 9

then try to log in as the user, collect the logs from /var/log/sssd after the attempt, attach them to this ticket, and finally run

    sss_debuglevel 0

bye,
Sumit
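
[Editor's note: equivalent to the sss_debuglevel command above, verbosity can also be raised persistently via the debug_level option in sssd.conf; the [domain/BAMBOO] section name below is taken from the logs later in this report and may not match every setup. Unlike sss_debuglevel, which applies on the fly, this takes effect after restarting sssd.]

```ini
[domain/BAMBOO]
# 9 enables trace-level logging; remember to lower it again afterwards
debug_level = 9
```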

Comment 2 Gwyn Ciesla 2019-06-27 16:35:55 UTC
I set the debuglevel, updated to 2.2.0, and logged out, and was able to log back in successfully. It's been intermittent, but at least I have the debuglevel set so I can get the logs next time it fails. I'll let you know.

Comment 3 Gwyn Ciesla 2019-06-28 13:18:19 UTC
Created attachment 1585634 [details]
sssd logs

It happened this morning. I'm attaching the logs. I then had to downgrade sssd again and remove /var/lib/sssd/cache* to log in.

Comment 4 Jakub Hrozek 2019-06-28 14:05:18 UTC
From the logs it looks like SSSD is crashing quite often, e.g.:

(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [sysdb_store_user] (0x0400): User "zabbix@bamboo" has been stored
(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [get_initgr_groups_process] (0x0200): The initgroups call returned 'NOTFOUND'. Assume the user is only member of its primary group (973)
(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [get_initgr_groups_process] (0x0100): User [zabbix] appears to be member of 1 groups
(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [get_gr_gid] (0x0400): Searching group by gid (973)
(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [handle_getgr_result] (0x0200): Group found: (zabbix, 973)
(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [ldb] (0x4000): Added timed event "ldb_kv_callback": 0x55b4eaefd1c0
(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [ldb] (0x4000): Added timed event "ldb_kv_timeout": 0x55b4eaefd290
(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [ldb] (0x4000): Running timer event 0x55b4eaefd1c0 "ldb_kv_callback"
(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [ldb] (0x4000): Destroying timer event 0x55b4eaefd290 "ldb_kv_timeout"
(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [ldb] (0x4000): Destroying timer event 0x55b4eaefd1c0 "ldb_kv_callback"

---> here

(Fri Jun 28 08:02:54 2019) [sssd[be[BAMBOO]]] [server_setup] (0x0400): CONFDB: /var/lib/sss/db/config.ldb

---> this server_setup() line is already from a new instance, i.e. the backend restarted.

Do you have e.g. abrt or systemd-coredump so you could share a core file? Could you also share your sssd.conf (feel free to sanitize hostnames etc.)?
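
[Editor's note: the restart pattern pointed out above can be checked mechanically, since every server_setup line marks a freshly started sssd_be process; counting them gives the number of (re)starts. A minimal self-contained sketch, using a synthetic stand-in for the attached logs:]

```shell
# Count backend (re)starts: each "server_setup" line is a fresh sssd_be start.
# The log contents below are illustrative, not taken from the attachments.
log=$(mktemp)
cat > "$log" <<'EOF'
[sssd[be[BAMBOO]]] [ldb] (0x4000): Destroying timer event 0x55b4eaefd1c0 "ldb_kv_callback"
[sssd[be[BAMBOO]]] [server_setup] (0x0400): CONFDB: /var/lib/sss/db/config.ldb
[sssd[be[BAMBOO]]] [sysdb_store_user] (0x0400): User "zabbix@bamboo" has been stored
[sssd[be[BAMBOO]]] [server_setup] (0x0400): CONFDB: /var/lib/sss/db/config.ldb
EOF
grep -c 'server_setup' "$log"   # 2 starts, so at least one restart (likely a crash)
rm -f "$log"
```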

Comment 5 Gwyn Ciesla 2019-06-28 14:18:33 UTC
Created attachment 1585662 [details]
Core dump

Comment 6 Gwyn Ciesla 2019-06-28 14:18:55 UTC
Created attachment 1585663 [details]
Sanitized sssd.conf

Comment 7 Gwyn Ciesla 2019-06-28 14:19:13 UTC
Let me know if these help.

Comment 8 Lukas Slebodnik 2019-06-28 14:49:09 UTC
The regression in proxy provider is fixed in https://pagure.io/SSSD/sssd/pull-request/4036

Comment 9 Jakub Hrozek 2019-06-28 14:52:22 UTC
Upstream ticket:
https://pagure.io/SSSD/sssd/issue/4037

Comment 10 Jakub Hrozek 2019-06-28 14:54:23 UTC
* master: e1b678c0cce73494d986610920b03956c1dbb62a

Comment 11 Jakub Hrozek 2019-06-28 15:19:22 UTC
You can try a test build from here:
https://copr.fedorainfracloud.org/coprs/jhrozek/sssd-proxycrash/

Comment 12 Gwyn Ciesla 2019-06-28 15:27:51 UTC
Thank you, I've installed it. I'll let you know how it goes.

Comment 13 Marc Dionne 2019-07-05 13:51:09 UTC
(In reply to Jakub Hrozek from comment #11)
> You can try a test build from here:
> https://copr.fedorainfracloud.org/coprs/jhrozek/sssd-proxycrash/

The packages from your copr fix the crashes I've been seeing.

I have a similar config; before the fix, trying to log in with a krb5 password was causing a crash in /usr/libexec/sssd/sssd_be with this call stack, which looks like the same issue:

  Stack trace of thread 18272:
  #0  0x00007f56f6a177d4 n/a (libsss_proxy.so)
  #1  0x00007f56f6a181c6 n/a (libsss_proxy.so)
  #2  0x00007f56f6a1aa33 proxy_account_info_handler_send (libsss_proxy.so)
  #3  0x00005563a757573a dp_req_send (sssd_be)
  #4  0x00005563a75783fe dp_get_account_info_send (sssd_be)
  #5  0x00007f5704debc44 n/a (libsss_iface.so)
  #6  0x00007f5704d92e3d tevent_common_invoke_timer_handler (libtevent.so.0)
  #7  0x00007f5704d92fe0 tevent_common_loop_timer_delay (libtevent.so.0)
  #8  0x00007f5704d941ac n/a (libtevent.so.0)
  #9  0x00007f5704d9241b n/a (libtevent.so.0)
  #10 0x00007f5704d8d538 _tevent_loop_once (libtevent.so.0)
  #11 0x00007f5704d8d7db tevent_common_loop_wait (libtevent.so.0)
  #12 0x00007f5704d923ab n/a (libtevent.so.0)
  #13 0x00007f5704ea1737 server_loop (libsss_util.so)
  #14 0x00005563a7567c62 main (sssd_be)
  #15 0x00007f5704bd1f33 __libc_start_main (libc.so.6)
  #16 0x00005563a7567e1e _start (sssd_be)

Can we expect to see this fix in the fedora package in the near future?

Thanks,
Marc

Comment 14 Fedora Update System 2019-07-05 17:33:01 UTC
FEDORA-2019-bc337e43c1 has been submitted as an update to Fedora 30. https://bodhi.fedoraproject.org/updates/FEDORA-2019-bc337e43c1

Comment 15 Fedora Update System 2019-07-06 04:18:53 UTC
sssd-2.2.0-3.fc30 has been pushed to the Fedora 30 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2019-bc337e43c1

Comment 16 Fedora Update System 2019-07-06 06:41:29 UTC
sssd-2.2.0-3.fc29 has been pushed to the Fedora 29 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2019-cf0463b5e1

Comment 17 Fedora Update System 2019-07-12 00:58:48 UTC
sssd-2.2.0-3.fc30 has been pushed to the Fedora 30 stable repository. If problems still persist, please make note of it in this bug report.

Comment 18 Fedora Update System 2019-07-23 02:34:49 UTC
sssd-2.2.0-3.fc29 has been pushed to the Fedora 29 stable repository. If problems still persist, please make note of it in this bug report.

