Bug 1579703 - crash in nss_protocol_fill_netgrent. sssd_nss[19234]: segfault at 80 ip 000055612688c2a0 sp 00007ffddf9b9cd0 error 4 in sssd_nss[55612687e000+39000] [rhel-7.5.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: sssd
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: SSSD Maintainers
QA Contact: sssd-qe
URL:
Whiteboard:
Depends On: 1538555
Blocks:
 
Reported: 2018-05-18 07:24 UTC by Oneata Mircea Teodor
Modified: 2021-09-09 14:07 UTC
CC List: 20 users

Fixed In Version: sssd-1.16.0-19.el7_5.2
Doc Type: Bug Fix
Doc Text:
When the lifetime of a cached netgroup representation expires, the sssd_nss module calls a free function on it. Administrators can also expire netgroups manually with the sss_cache utility. Previously, after such a manual expiration, SSSD called the free function again when the lifetime expired; the function was therefore called twice, resulting in a double-free memory error. With this update, the free function is no longer called when the administrator runs sss_cache. Instead, the netgroup is only removed from the list of known netgroups and is freed from memory later, when its lifetime expires. As a result, the double-free error no longer occurs.
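The fixed control flow can be sketched with a toy shell model: a bash associative array stands in for sssd's hash table of known netgroups, and all names here are illustrative, not actual sssd internals. Manual expiration only drops the entry from the table; the single free happens later at lifetime expiry, so it can never run twice.

```shell
# Toy model of the fix (illustrative names only, not sssd code).
declare -A netgroup_table   # stands in for the hash table of known netgroups
declare -A freed_count      # counts how often each entry is "freed"

add_netgroup()    { netgroup_table[$1]=cached; freed_count[$1]=0; }

# Lifetime expiry: the one place an entry is freed and removed.
free_netgroup()   { freed_count[$1]=$(( ${freed_count[$1]} + 1 )); unset "netgroup_table[$1]"; }

# Fixed manual expiry (sss_cache -E): remove from the table only, no free.
expire_manually() { unset "netgroup_table[$1]"; }

add_netgroup Testqe1
expire_manually Testqe1      # administrator runs sss_cache -E
free_netgroup Testqe1        # later lifetime expiry: the only free
echo "frees=${freed_count[Testqe1]}"   # prints frees=1; the old code path freed twice
```

In the pre-fix behavior, the manual expiry path would have called the free function as well, driving the count to 2, which is the double free the patch eliminates.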
Clone Of: 1538555
Environment:
Last Closed: 2018-06-26 16:49:19 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github SSSD sssd issues 4698 0 None closed Make nss netgroup requests more robust 2020-07-26 17:10:02 UTC
Github SSSD sssd issues 4740 0 None closed nss_clear_netgroup_hash_table(): only remove entries from the hash table, do not free them 2020-07-26 17:10:01 UTC
Red Hat Product Errata RHBA-2018:1986 0 None None None 2018-06-26 16:49:53 UTC

Description Oneata Mircea Teodor 2018-05-18 07:24:48 UTC
This bug has been copied from bug #1538555 and has been proposed to be backported to 7.5 z-stream (EUS).

Comment 3 Amith 2018-06-07 07:55:41 UTC
Verified the bug on SSSD Version: sssd-1.16.0-19.el7_5.5.x86_64

Steps followed during verification:

1. Reproduce the bug by installing an older SSSD version, sssd-1.16.0-19.el7.x86_64, on the client system.

2. Add a large number of netgroups, around 10000, to your 389-ds LDAP server.
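For step 2, the netgroup entries can be generated as LDIF and fed to ldapadd. This is a sketch assuming the standard nisNetgroup schema, the dc=example,dc=com base from the sssd.conf in step 3, and an ou=Netgroups container; the container name, bind DN, and server URL are placeholders to adjust for your directory.

```shell
# Emit LDIF for netgroups Testqe1..TestqeN. The ou=Netgroups container
# and dc=example,dc=com base DN are assumptions; adjust to your layout.
gen_netgroup_ldif() {
    local count=$1
    for i in $(seq 1 "$count"); do
        printf 'dn: cn=Testqe%s,ou=Netgroups,dc=example,dc=com\n' "$i"
        printf 'objectClass: top\nobjectClass: nisNetgroup\n'
        printf 'cn: Testqe%s\n' "$i"
        printf 'nisNetgroupTriple: (host%s,user%s,example.com)\n\n' "$i" "$i"
    done
}

# Load the entries (bind DN and ldaps URL are placeholders):
# gen_netgroup_ldif 10000 | ldapadd -x -H ldaps://SERVER -D "cn=Directory Manager" -W
```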

3. Configure sssd.conf as follows:
[sssd]
services = nss, pam
config_file_version = 2
reconnection_retries = 5
sbus_timeout = 30
domains = LDAP
debug_level = 1

[nss]
filter_users = root
filter_groups = root
debug_level = 9

[pam]
reconnection_retries = 5
offline_credentials_expiration = 0
offline_failed_login_attempts = 0
offline_failed_login_delay = 5
debug_level = 1

[sudo]
[autofs]
[ssh]

[domain/LDAP]
id_provider = ldap
auth_provider = ldap
chpass_provider = ldap
cache_credentials = true
enumerate = false
ldap_schema = rfc2307
ldap_uri = ldaps://SERVER
ldap_search_base = dc=example,dc=com
ldap_id_use_start_tls = true
ldap_tls_cacertdir = /etc/openldap/certs
ldap_tls_reqcert = allow
debug_level = 1

4. Execute continuous netgroup lookups in one terminal. The following test script runs the lookups in the background:
function lookup1()
{
    for i in {1..3000}; do
        getent netgroup "Testqe$i"
        sleep 1
    done
}

function lookup2()
{
    for i in {3001..6000}; do
        getent netgroup "Testqe$i"
        sleep 1
    done
}
lookup1 &
lookup2 &
 
5. In another terminal, run "sss_cache -E" and monitor the PID of sssd_nss. With the old build, sssd_nss should crash and be restarted; you can observe the restart by checking the PID in a loop. The following test script does exactly that:
function chk_crash()
{
    NSS_PR1=$(pidof sssd_nss)
    for i in {1..100}; do
        echo "Test attempt number: $i"
        sss_cache -E
        sleep 3
        NSS_PR2=$(pidof sssd_nss)
        # String comparison with quoting, so an empty pidof result
        # (process currently down) does not break the test itself.
        if [ "$NSS_PR1" = "$NSS_PR2" ]; then
            echo "Pid of sssd_nss is $NSS_PR1, test works fine."
        else
            echo "Initial sssd_nss pid was $NSS_PR1, now it is $NSS_PR2. sssd_nss restarted, test failed."
            exit 1
        fi
    done
}
chk_crash

6. Install the latest build and repeat steps 4 and 5. With the fixed build, there are no issues with the nss process; I ran the loop for 100 iterations and found sssd_nss stable.

Comment 8 errata-xmlrpc 2018-06-26 16:49:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1986

