Bug 1283769 - sssd-nss segfault on restart
Summary: sssd-nss segfault on restart
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: sssd
Version: 6.7
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: SSSD Maintainers
QA Contact: Namita Soman
URL:
Whiteboard:
Duplicates: 1504121
Depends On:
Blocks:
 
Reported: 2015-11-19 19:06 UTC by Orion Poplawski
Modified: 2021-03-11 14:26 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-13 21:24:48 UTC
Target Upstream Version:
Embargoed:




Links
GitHub SSSD sssd issue 3927 (closed): sssd-nss segfault on restart, last updated 2021-01-31 11:18:32 UTC

Description Orion Poplawski 2015-11-19 19:06:59 UTC
Description of problem:

My system is very busy copying a large (3+ TB) file from one SATA disk to a USB disk.  sssd is being affected:

Nov 19 09:27:24 saga sssd: Killing service [default], not responding to pings!
Nov 19 09:28:06 saga sssd[be[default]]: Shutting down
Nov 19 09:28:06 saga sssd[be[default]]: Starting up
Nov 19 10:19:06 saga sssd: Killing service [default], not responding to pings!
Nov 19 10:20:06 saga sssd: [default][10121] is not responding to SIGTERM. Sending SIGKILL.
Nov 19 10:20:06 saga sssd[be[default]]: Starting up
Nov 19 10:20:11 saga kernel: sssd_nss[16928]: segfault at 94 ip 00007f29e6a34fbc sp 00007ffd00ed24a0 error 4 in libdbus-1.so.3.4.0[7f29e6a10000+40000]
Nov 19 10:20:12 saga abrt[344]: Saved core dump of pid 16928 (/usr/libexec/sssd/sssd_nss) to /var/spool/abrt/ccpp-2015-11-19-10:20:11-16928 (1462272 bytes)
Nov 19 10:20:12 saga sssd[nss]: Starting up
Nov 19 10:41:06 saga sssd: Killing service [default], not responding to pings!
Nov 19 10:41:42 saga sssd: Killing service [nss], not responding to pings!
Nov 19 10:42:06 saga sssd: [default][341] is not responding to SIGTERM. Sending SIGKILL.
Nov 19 10:42:06 saga sssd[be[default]]: Starting up
Nov 19 10:42:12 saga sssd[nss]: Shutting down
Nov 19 10:42:12 saga sssd[nss]: Starting up
Nov 19 11:07:32 saga sssd: Killing service [nss], not responding to pings!
Nov 19 11:09:12 saga sssd: [nss][9468] is not responding to SIGTERM. Sending SIGKILL.
Nov 19 11:09:12 saga sssd[nss]: Starting up

# cat sssd.log
(Thu Nov 19 10:20:06 2015) [sssd] [mt_svc_sigkill] (0x0010): [default][10121] is not responding to SIGTERM. Sending SIGKILL.
(Thu Nov 19 10:42:06 2015) [sssd] [mt_svc_sigkill] (0x0010): [default][341] is not responding to SIGTERM. Sending SIGKILL.
(Thu Nov 19 11:08:32 2015) [sssd] [mt_svc_sigkill] (0x0010): [nss][9468] is not responding to SIGTERM. Sending SIGKILL.

# cat sssd_nss.log
(Thu Nov 19 11:09:47 2015) [sssd[nss]] [dp_id_callback] (0x0010): The Monitor returned an error [org.freedesktop.DBus.Error.NoReply]
(Thu Nov 19 11:09:47 2015) [sssd[nss]] [id_callback] (0x0010): The Monitor returned an error [org.freedesktop.DBus.Error.NoReply]

Program terminated with signal 11, Segmentation fault.
#0  dbus_watch_handle (watch=0x90, flags=2) at dbus-watch.c:650
650       if (watch->fd < 0 || watch->flags == 0)
(gdb) bt
#0  dbus_watch_handle (watch=0x90, flags=2) at dbus-watch.c:650
#1  0x00007f29e70bbdbc in sbus_watch_handler (ev=<value optimized out>,
    fde=<value optimized out>, flags=<value optimized out>, data=<value optimized out>)
    at src/sbus/sssd_dbus_common.c:94
#2  0x00007f29e39e7ebe in epoll_event_loop (ev=<value optimized out>,
    location=<value optimized out>) at ../tevent_epoll.c:736
#3  epoll_event_loop_once (ev=<value optimized out>, location=<value optimized out>)
    at ../tevent_epoll.c:931
#4  0x00007f29e39e62e6 in std_event_loop_once (ev=0xe1f3e0,
    location=0x7f29e70dae80 "src/util/server.c:668") at ../tevent_standard.c:112
#5  0x00007f29e39e249d in _tevent_loop_once (ev=0xe1f3e0,
    location=0x7f29e70dae80 "src/util/server.c:668") at ../tevent.c:530
#6  0x00007f29e39e251b in tevent_common_loop_wait (ev=0xe1f3e0,
    location=0x7f29e70dae80 "src/util/server.c:668") at ../tevent.c:634
#7  0x00007f29e39e6256 in std_event_loop_wait (ev=0xe1f3e0,
    location=0x7f29e70dae80 "src/util/server.c:668") at ../tevent_standard.c:138
#8  0x00007f29e70c28d3 in server_loop (main_ctx=0xe20750) at src/util/server.c:668
#9  0x0000000000405fc8 in main (argc=6, argv=<value optimized out>)
    at src/responder/nss/nsssrv.c:610
(gdb) print watch
$1 = (DBusWatch *) 0x90
(gdb) print *watch
Cannot access memory at address 0x90
(gdb) up
#1  0x00007f29e70bbdbc in sbus_watch_handler (ev=<value optimized out>,
    fde=<value optimized out>, flags=<value optimized out>, data=<value optimized out>)
    at src/sbus/sssd_dbus_common.c:94
94                  dbus_watch_handle(watch->dbus_write_watch, DBUS_WATCH_WRITABLE);
(gdb) print watch
$2 = (struct sbus_watch_ctx *) 0xe32a40
(gdb) print *watch
$3 = {prev = 0x0, next = 0x0, conn = 0xe299f0, fde = 0xe2c300, fd = 13, dbus_read_watch = 0x0,
  dbus_write_watch = 0x90}

Version-Release number of selected component (if applicable):
sssd-1.12.4-47.el6_7.4.x86_64

Comment 2 Jakub Hrozek 2015-11-26 16:35:22 UTC
Upstream ticket:
https://fedorahosted.org/sssd/ticket/2886

Comment 3 Lukas Slebodnik 2015-11-27 15:28:14 UTC
We can try to reproduce it, but it might be a difficult task.

It looks like a use-after-free due to asynchronous operations.  Could you try to reproduce it with valgrind?

Add the following line to the "[nss]" section:
command = valgrind -v --log-file=/var/log/sssd/valgrind_nss_%p.log /usr/libexec/sssd/sssd_nss --uid 0 --gid 0 --debug-to-files


You will need to have the valgrind package installed and SELinux in permissive mode. The crash might not be caught by abrt because valgrind handles the crash itself, but you should be able to see errors in the valgrind output at /var/log/sssd/valgrind_nss_*.log.
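
For reference, a minimal sketch of the whole setup. The sssd.conf path, the package/service commands, and the grep pattern below are assumptions for a default RHEL 6 install; the valgrind command line is the one given above:

# /etc/sssd/sssd.conf (sketch): run the NSS responder under valgrind
[nss]
command = valgrind -v --log-file=/var/log/sssd/valgrind_nss_%p.log /usr/libexec/sssd/sssd_nss --uid 0 --gid 0 --debug-to-files

# install valgrind, switch SELinux to permissive mode, and restart sssd
yum install valgrind
setenforce 0
service sssd restart

# after the next crash, check the valgrind log for errors such as
# "Invalid read" / "Invalid write" (a typical use-after-free signature)
grep -A 5 "Invalid" /var/log/sssd/valgrind_nss_*.log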

Comment 4 Orion Poplawski 2015-12-02 18:09:02 UTC
Okay, I have this running.  It may be very hard for me to reproduce as well, as I've only seen this once so far.  I'll let you know if I find anything.

Comment 6 Jakub Hrozek 2016-08-10 12:32:24 UTC
At the moment, I'm giving a Conditional NAK to this bug.

The reasons are that a) there is no reliable reproducer available, and b) there is a workaround: either move the cache to tmpfs or increase the 'timeout' option in the [domain] section.
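
For illustration, a sketch of that workaround, assuming the default /etc/sssd/sssd.conf path and the domain name "default" seen in the logs above; the timeout value and the tmpfs mount options are only examples:

# /etc/sssd/sssd.conf (sketch)
[domain/default]
# raise the heartbeat timeout so the monitor does not kill a busy responder
# (30 is an example value; the default is 10 seconds)
timeout = 30

# alternatively, keep the SSSD cache on tmpfs, e.g. with an /etc/fstab entry
# for the default cache directory (path and size are assumptions here):
# tmpfs  /var/lib/sss/db  tmpfs  size=300M,mode=0700  0 0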

In addition, more recent releases, like RHEL-7.3 or newer, changed the memory hierarchy of requests, which should prevent crashes like this in the future.

Comment 9 Orion Poplawski 2017-07-17 16:07:28 UTC
Looks like I'm seeing this occasionally on EL7.3 with 1.14.0-43.el7_3.18 as well, on a busy VM guest.  Will try increasing the timeout.

Comment 10 Jakub Hrozek 2017-11-13 21:24:48 UTC
Because RHEL-6 has moved to Production Phase 3, only urgent fixes are permitted from now on.

And given that this bug should be solved in RHEL-7 (*) and that there is a workaround of increasing the timeout, I'm going to close this bug report as WONTFIX.

Thank you for filing the bug nonetheless.

(*) I see that comment #9 indicates otherwise, but then I would prefer to track the RHEL-7 issue separately.

Comment 11 Jakub Hrozek 2017-11-13 21:26:12 UTC
*** Bug 1504121 has been marked as a duplicate of this bug. ***

