Bug 709260 - sssd is constantly trying to find local user in configured domain
Summary: sssd is constantly trying to find local user in configured domain
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: sssd
Version: 6.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Assignee: Stephen Gallagher
QA Contact: Chandrasekar Kannan
URL:
Whiteboard:
Depends On:
Blocks: 709342 709352
 
Reported: 2011-05-31 08:32 UTC by Kemot1000
Modified: 2015-01-04 23:48 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-05-31 11:57:07 UTC
Target Upstream Version:
Embargoed:



Description Kemot1000 2011-05-31 08:32:19 UTC
Description of problem:
I noticed when I ran sssd in debug mode that it is constantly trying to look up one of my local users in the domain. Every few seconds I get these entries:
sssd -i -d 9


(Tue May 31 09:53:04 2011) [sssd[nss]] [sss_dp_get_reply] (4): Got reply (0, 0, Success) from Data Provider
(Tue May 31 09:53:04 2011) [sssd[nss]] [sss_ncache_check_str] (8): Checking negative cache for [NCE/USER/DOMAINNAME/locallinuxuser]
(Tue May 31 09:53:04 2011) [sssd[nss]] [nss_cmd_initgroups_search] (4): Requesting info for [locallinuxuser@DOMAINNAME]
(Tue May 31 09:53:04 2011) [sssd[nss]] [ldb] (9): tevent: Added timed event "ltdb_callback": 0x13f1500

(Tue May 31 09:53:04 2011) [sssd[nss]] [ldb] (9): tevent: Added timed event "ltdb_timeout": 0x13f0e50

(Tue May 31 09:53:04 2011) [sssd[nss]] [ldb] (9): tevent: Destroying timer event 0x13f0e50 "ltdb_timeout"

(Tue May 31 09:53:04 2011) [sssd[nss]] [ldb] (9): tevent: Ending timer event 0x13f1500 "ltdb_callback"

(Tue May 31 09:53:04 2011) [sssd[nss]] [sss_ncache_set_str] (6): Adding [NCE/USER/DOMAINNAME/locallinuxuser] to negative cache
(Tue May 31 09:53:04 2011) [sssd[nss]] [nss_cmd_initgroups_search] (2): No results for initgroups call
(Tue May 31 09:53:04 2011) [sssd[nss]] [client_recv] (5): Client disconnected!
(Tue May 31 09:53:09 2011) [sssd] [service_check_alive] (4): Checking service DOMAINNAME(18349) is still alive
(Tue May 31 09:53:09 2011) [sssd] [service_send_ping] (4): Pinging DOMAINNAME
(Tue May 31 09:53:09 2011) [sssd] [sbus_add_timeout] (8): 0x1453e30
(Tue May 31 09:53:09 2011) [sssd] [service_check_alive] (4): Checking service nss(18350) is still alive
(Tue May 31 09:53:09 2011) [sssd] [service_send_ping] (4): Pinging nss
(Tue May 31 09:53:09 2011) [sssd] [sbus_add_timeout] (8): 0x14543a0
(Tue May 31 09:53:09 2011) [sssd] [service_check_alive] (4): Checking service pam(18351) is still alive
(Tue May 31 09:53:09 2011) [sssd] [service_send_ping] (4): Pinging pam
(Tue May 31 09:53:09 2011) [sssd] [sbus_add_timeout] (8): 0x1451670
(Tue May 31 09:53:09 2011) [sssd[be[DOMAINNAME]]] [sbus_dispatch] (9): dbus conn: 1A87DC0
(Tue May 31 09:53:09 2011) [sssd[be[DOMAINNAME]]] [sbus_dispatch] (9): Dispatching.
(Tue May 31 09:53:09 2011) [sssd[be[DOMAINNAME]]] [sbus_message_handler] (9): Received SBUS method [ping]
(Tue May 31 09:53:09 2011) [sssd[nss]] [sbus_dispatch] (9): dbus conn: 13F1860
(Tue May 31 09:53:09 2011) [sssd[nss]] [sbus_dispatch] (9): Dispatching.
(Tue May 31 09:53:09 2011) [sssd[nss]] [sbus_message_handler] (9): Received SBUS method [ping]
(Tue May 31 09:53:09 2011) [sssd[pam]] [sbus_dispatch] (9): dbus conn: 1D75D80
(Tue May 31 09:53:09 2011) [sssd[pam]] [sbus_dispatch] (9): Dispatching.
(Tue May 31 09:53:09 2011) [sssd[pam]] [sbus_message_handler] (9): Received SBUS method [ping]
(Tue May 31 09:53:09 2011) [sssd] [sbus_remove_timeout] (8): 0x1453e30
(Tue May 31 09:53:09 2011) [sssd] [sbus_dispatch] (9): dbus conn: 1452EA0
(Tue May 31 09:53:09 2011) [sssd] [sbus_dispatch] (9): Dispatching.
(Tue May 31 09:53:09 2011) [sssd] [ping_check] (4): Service DOMAINNAME replied to ping
(Tue May 31 09:53:09 2011) [sssd] [sbus_remove_timeout] (8): 0x14543a0
(Tue May 31 09:53:09 2011) [sssd] [sbus_dispatch] (9): dbus conn: 1454740
(Tue May 31 09:53:09 2011) [sssd] [sbus_dispatch] (9): Dispatching.
(Tue May 31 09:53:09 2011) [sssd] [ping_check] (4): Service nss replied to ping
(Tue May 31 09:53:09 2011) [sssd] [sbus_remove_timeout] (8): 0x1451670
(Tue May 31 09:53:09 2011) [sssd] [sbus_dispatch] (9): dbus conn: 14544D0
(Tue May 31 09:53:09 2011) [sssd] [sbus_dispatch] (9): Dispatching.
(Tue May 31 09:53:09 2011) [sssd] [ping_check] (4): Service pam replied to ping
(Tue May 31 09:53:14 2011) [sssd[nss]] [get_client_cred] (9): Client creds: euid[0] egid[0] pid[19878].
(Tue May 31 09:53:14 2011) [sssd[nss]] [accept_fd_handler] (4): Client connected!
(Tue May 31 09:53:14 2011) [sssd[nss]] [sss_cmd_get_version] (5): Received client version [1].
(Tue May 31 09:53:14 2011) [sssd[nss]] [sss_cmd_get_version] (5): Offered version [1].
(Tue May 31 09:53:14 2011) [sssd[nss]] [nss_cmd_initgroups] (4): Requesting info for [locallinuxuser] from [<ALL>]
(Tue May 31 09:53:14 2011) [sssd[nss]] [sss_ncache_check_str] (8): Checking negative cache for [NCE/USER/DOMAINNAME/locallinuxuser]
(Tue May 31 09:53:14 2011) [sssd[nss]] [nss_cmd_initgroups_search] (2): User [DOMAINNAME] does not exist in [locallinuxuser]! (negative cache)
(Tue May 31 09:53:14 2011) [sssd[nss]] [nss_cmd_initgroups_search] (2): No matching domain found for [locallinuxuser], fail!
(Tue May 31 09:53:14 2011) [sssd[nss]] [client_recv] (5): Client disconnected!

First of all, this is a strange entry:

(Tue May 31 09:53:14 2011) [sssd[nss]] [nss_cmd_initgroups_search] (2): User [DOMAINNAME] does not exist in [locallinuxuser]! (negative cache)

Shouldn't it be:

(Tue May 31 09:53:14 2011) [sssd[nss]] [nss_cmd_initgroups_search] (2): User [locallinuxuser] does not exist in [DOMAINNAME]! (negative cache)


Why is sssd trying to look up the user in the domain even though it is a local user?
I used minimum_uid with pam_krb5.so in my previous configuration, but I no longer use that module since, as I understand it, it is not needed, and pam_sss.so doesn't have a similar option. I set min_id = 1000 in sssd.conf, but it doesn't seem to be working.


Version-Release number of selected component (if applicable):
sssd-client-1.5.1-34.el6.x86_64
sssd-1.5.1-34.el6.x86_64
krb5-libs-1.9-9.el6.x86_64


How reproducible:
Set up a local account with useradd.
Run sssd in interactive debug mode.

Additional info:
[root@localhost ~]# cat /etc/passwd |grep locallinuxuser 
locallinuxuser:x:500:101::/home/locallinuxuser:/bin/bash

sssd.conf
[domain/DOMAINNAME]
description = Domain
enumerate = true
min_id = 1000
#debug_level = 9

id_provider = ldap
#auth_provider = ldap
auth_provider = krb5
ldap_uri = ldap://activedirectoryserver1.example.com,ldap://activedirectoryserver2.example.com
ldap_schema = rfc2307bis
ldap_user_search_base = DC=example,DC=com
ldap_group_search_base = DC=example,DC=com
ldap_default_bind_dn = CN=binduser,OU=Users,DC=example,DC=com
ldap_default_authtok_type = password
ldap_default_authtok = PASSWORD
ldap_user_object_class = user
ldap_user_name = sAMAccountName
ldap_user_uid_number = uidNumber
ldap_user_gid_number = gidNumber
ldap_user_home_directory = unixHomeDirectory
ldap_user_shell = loginShell
ldap_user_principal = userPrincipalName
ldap_user_gecos = cn

ldap_group_object_class = group
ldap_group_name = name
ldap_group_member = member
ldap_group_gid_number = gidNumber
ldap_force_upper_case_realm = True

#krb5
krb5_server = 192.168.0.2
krb5_realm = EXAMPLE.COM
krb5_validate = true
krb5_keytab = /etc/krb5.keytab
krb5_store_password_if_offline = true
cache_credentials = true

Comment 2 Stephen Gallagher 2011-05-31 11:57:07 UTC
This is not a bug in SSSD. The design of initgroups() in glibc requires that ALL name-service plugins be checked whenever initgroups() is requested for any user. Unlike the getpwuid()/getpwnam() routines, which stop as soon as one plugin returns the requested user, initgroups() will always search through all of them.
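The fan-out described above is observable from any glibc client. Below is a minimal Python sketch of the two call patterns, using root as a stand-in for a local user such as locallinuxuser (Python's os.getgrouplist wraps glibc's getgrouplist(), which walks the NSS modules the same way initgroups() does):

```python
import os
import pwd

# getpwnam() stops at the first NSS module that knows the user...
user = "root"  # stand-in for a local user such as locallinuxuser
pw = pwd.getpwnam(user)

# ...but getgrouplist()/initgroups() always consults every configured
# module (files, sss, ...) for supplementary groups, which is why sssd
# still sees a domain lookup even for a purely local user.
groups = os.getgrouplist(user, pw.pw_gid)
print(groups)  # includes the user's primary gid
```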

If this is causing undue stress on your LDAP server, you can add

filter_users = locallinuxuser

to sssd.conf, and the user will then be permanently stored in the negative cache.
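For reference, a sketch of where the suggested option would go; filter_users belongs in the [nss] section of sssd.conf, which the reporter's excerpt below does not yet show (the values beyond root are illustrative):

```ini
[nss]
# Users listed here are never looked up in remote domains; they are
# added to the negative cache permanently at startup.
filter_users = root, locallinuxuser
```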

Comment 3 Kemot1000 2011-05-31 12:22:23 UTC
Hi, 

Thanks for your response. I added localinuxuser to sssd.conf (filter_users = root,locallinuxuser). 
Stopped sssd, then removed everything from the /var/lib/sss/db folder just in case. Started it again and I still get:
(Tue May 31 14:14:47 2011) [sssd[nss]] [sss_ncache_check_str] (8): Checking negative cache for [NCE/USER/DOMAINNAME/localinuxuser]

(Tue May 31 14:14:47 2011) [sssd[nss]] [nss_cmd_initgroups_search] (2): User [DOMAINNAME] does not exist in [localinuxuser]! (negative cache)

Also this is strange:
User [DOMAINNAME] does not exist in [localinuxuser]


Part of sssd -i -d 9 output:
(Tue May 31 14:14:47 2011) [sssd[nss]] [get_client_cred] (9): Client creds: euid[0] egid[0] pid[16717].
(Tue May 31 14:14:47 2011) [sssd[nss]] [accept_fd_handler] (4): Client connected!
(Tue May 31 14:14:47 2011) [sssd[nss]] [sss_cmd_get_version] (5): Received client version [1].
(Tue May 31 14:14:47 2011) [sssd[nss]] [sss_cmd_get_version] (5): Offered version [1].
(Tue May 31 14:14:47 2011) [sssd[nss]] [nss_cmd_initgroups] (4): Requesting info for [localinuxuser] from [<ALL>]
(Tue May 31 14:14:47 2011) [sssd[nss]] [sss_ncache_check_str] (8): Checking negative cache for [NCE/USER/DOMAINNAME/localinuxuser]
(Tue May 31 14:14:47 2011) [sssd[nss]] [nss_cmd_initgroups_search] (2): User [DOMAINNAME] does not exist in [localinuxuser]! (negative cache)
(Tue May 31 14:14:47 2011) [sssd[nss]] [nss_cmd_initgroups_search] (2): No matching domain found for [localinuxuser], fail!
(Tue May 31 14:14:47 2011) [sssd[nss]] [client_recv] (5): Client disconnected!

Comment 4 Stephen Gallagher 2011-05-31 12:38:57 UTC
(In reply to comment #3)
> Hi, 
> 
> Thanks for your response. I added localinuxuser to sssd.conf (filter_users =
> root,locallinuxuser). 
> Stopped sssd then removed everything from /var/lib/sss/db folder just in case.
> Started again and I still get:
> (Tue May 31 14:14:47 2011) [sssd[nss]] [sss_ncache_check_str] (8): Checking
> negative cache for [NCE/USER/DOMAINNAME/localinuxuser]
> 
> (Tue May 31 14:14:47 2011) [sssd[nss]] [nss_cmd_initgroups_search] (2): User
> [DOMAINNAME] does not exist in [localinuxuser]! (negative cache)
> 

Yes, this is expected. We still get the request; this line is just the notification that the user was found in the negative cache, which means the lookup won't continue on to LDAP (it gets rejected locally).

This is the best that we can do, given the way that glibc functions.
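The negative-cache behavior described here can be sketched as a small TTL map. This is an illustration of the idea, not sssd's actual implementation; the key format is copied from the logs, permanent entries correspond to filter_users, and the 15-second default is illustrative (sssd's actual timeout is configurable via entry_negative_timeout):

```python
import time

class NegativeCache:
    """Remember "not found" answers for a while so that repeated lookups
    are rejected locally instead of going out to LDAP."""

    def __init__(self, ttl=15):
        self.ttl = ttl
        self.entries = {}  # key -> expiry timestamp (0 = permanent)

    def add(self, key, permanent=False):
        self.entries[key] = 0 if permanent else time.monotonic() + self.ttl

    def check(self, key):
        exp = self.entries.get(key)
        if exp is None:
            return False           # not cached: the backend will be asked
        if exp and exp < time.monotonic():
            del self.entries[key]  # entry expired
            return False
        return True                # cached negative hit: reject locally

nc = NegativeCache()
nc.add("NCE/USER/DOMAINNAME/locallinuxuser", permanent=True)  # filter_users
print(nc.check("NCE/USER/DOMAINNAME/locallinuxuser"))
```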

> Also this is strange:
> User [DOMAINNAME] does not exist in [localinuxuser]
> 

Whoops, that's a bug in the log message. It's harmless, though. We just reversed the two variables being printed. The processing is occurring properly.

Comment 5 Stephen Gallagher 2011-05-31 12:43:15 UTC
I opened https://bugzilla.redhat.com/show_bug.cgi?id=709342 to fix the typo.

Comment 6 Kemot1000 2011-05-31 12:59:54 UTC
Thanks for your help

