
Bug 997013

Summary: sssd not allowed to exec "/usr/libexec/sssd/sssd_be" if selinux set to "enforcing"
Product: Red Hat Enterprise Linux 6
Component: sssd
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Status: CLOSED DUPLICATE
Severity: unspecified
Priority: unspecified
Target Milestone: rc
Target Release: ---
Reporter: Thomas Schweikle <tschweikle>
Assignee: Jakub Hrozek <jhrozek>
QA Contact: Kaushik Banerjee <kbanerje>
CC: grajaiya, jgalipea, lslebodn, mkosek, mzidek, okos, pbrezina, tschweikle
Doc Type: Bug Fix
Type: Bug
Last Closed: 2013-08-19 17:02:09 UTC

Description Thomas Schweikle 2013-08-14 13:26:34 UTC
Description of problem:
If SELinux is enabled and sssd is used to connect to an LDAP server, sssd fails to connect to the server with these messages:
(Wed Aug 14 14:59:37 2013) [sssd[be[LDAP]]] [be_process_init] (0x0020): No selinux module provided for [LDAP] !!
(Wed Aug 14 14:59:37 2013) [sssd[be[LDAP]]] [be_process_init] (0x0020): No host info module provided for [LDAP] !!
(Wed Aug 14 14:59:37 2013) [sssd[be[LDAP]]] [be_process_init] (0x0020): Subdomains are not supported for [LDAP] !!
(Wed Aug 14 14:59:47 2013) [sssd[be[LDAP]]] [sdap_async_sys_connect_send] (0x0020): connect failed [13][Permission denied].
(Wed Aug 14 14:59:47 2013) [sssd[be[LDAP]]] [sss_ldap_init_sys_connect_done] (0x0020): sdap_async_sys_connect request failed.
(Wed Aug 14 14:59:47 2013) [sssd[be[LDAP]]] [sdap_sys_connect_done] (0x0020): sdap_async_connect_call request failed.
(Wed Aug 14 14:59:47 2013) [sssd[be[LDAP]]] [fo_resolve_service_send] (0x0020): No available servers for service 'LDAP'
(Wed Aug 14 14:59:47 2013) [sssd[be[LDAP]]] [sdap_id_op_connect_done] (0x0020): Failed to connect, going offline (5 [Input/output error])

As soon as you set SELinux to permissive or disabled it works. SELinux prints the following into its logs (set to "permissive"):
type=SYSCALL msg=audit(1376485134.531:760): arch=c000003e syscall=42 success=no exit=-13 a0=15 a1=19a6c80 a2=80 a3=0 items=0 ppid=11287 pid=11288 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sssd_be" exe="/usr/libexec/sssd/sssd_be" subj=unconfined_u:system_r:sssd_t:s0 key=(null)
type=AVC msg=audit(1376485187.816:761): avc:  denied  { name_connect } for  pid=11306 comm="sssd_be" dest=7389 scontext=unconfined_u:system_r:sssd_t:s0 tcontext=system_u:object_r:port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1376485187.816:761): arch=c000003e syscall=42 success=no exit=-13 a0=15 a1=856bf0 a2=80 a3=0 items=0 ppid=11305 pid=11306 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sssd_be" exe="/usr/libexec/sssd/sssd_be" subj=unconfined_u:system_r:sssd_t:s0 key=(null)
type=AVC msg=audit(1376485487.816:768): avc:  denied  { name_connect } for  pid=11306 comm="sssd_be" dest=7389 scontext=unconfined_u:system_r:sssd_t:s0 tcontext=system_u:object_r:port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1376485487.816:768): arch=c000003e syscall=42 success=no exit=-13 a0=14 a1=856bf0 a2=80 a3=0 items=0 ppid=11305 pid=11306 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sssd_be" exe="/usr/libexec/sssd/sssd_be" subj=unconfined_u:system_r:sssd_t:s0 key=(null)
type=AVC msg=audit(1376485577.406:4): avc:  denied  { name_connect } for  pid=976 comm="sssd_be" dest=7389 scontext=system_u:system_r:sssd_t:s0 tcontext=system_u:object_r:port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1376485577.406:4): arch=c000003e syscall=42 success=no exit=-115 a0=15 a1=f8e1d0 a2=80 a3=0 items=0 ppid=975 pid=976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sssd_be" exe="/usr/libexec/sssd/sssd_be" subj=system_u:system_r:sssd_t:s0 key=(null)
type=AVC msg=audit(1376486484.496:23): avc:  denied  { name_connect } for  pid=976 comm="sssd_be" dest=7389 scontext=system_u:system_r:sssd_t:s0 tcontext=system_u:object_r:port_t:s0 tclass=tcp_socket
type=SYSCALL msg=audit(1376486484.496:23): arch=c000003e syscall=42 success=no exit=-115 a0=15 a1=f8e1d0 a2=80 a3=0 items=0 ppid=975 pid=976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="sssd_be" exe="/usr/libexec/sssd/sssd_be" subj=system_u:system_r:sssd_t:s0 key=(null)

Looks a lot like "/usr/libexec/sssd/sssd_be" isn't allowed to execute with SELinux set to "enforcing".
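Worth noting: the AVC records above show a `name_connect` denial to TCP port 7389 (labeled generic `port_t`), not an exec failure. A hedged sketch of how one might pull the denied port out of an AVC line follows; the `semanage` call is shown only as a comment, since it needs root on an SELinux-enabled host, and labeling the port is a workaround rather than the fix that was ultimately identified in this bug.

```shell
# Extract the denied destination port from an AVC record like the ones above.
avc='type=AVC msg=audit(1376485187.816:761): avc:  denied  { name_connect } for  pid=11306 comm="sssd_be" dest=7389 scontext=unconfined_u:system_r:sssd_t:s0 tcontext=system_u:object_r:port_t:s0 tclass=tcp_socket'
port=$(printf '%s\n' "$avc" | sed -n 's/.*dest=\([0-9]*\).*/\1/p')
echo "denied port: $port"
# On a real host one could then label that port for LDAP traffic (needs root):
#   semanage port -a -t ldap_port_t -p tcp "$port"
```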

Version-Release number of selected component (if applicable):
selinux-policy-3.7.19-195.0.1.el6_4.12.noarch
selinux-policy-targeted-3.7.19-195.0.1.el6_4.12.noarch
libselinux-utils-2.0.94-5.3.el6_4.1.x86_64
sssd-tools-1.9.2-82.7.el6_4.x86_64
libselinux-2.0.94-5.3.el6_4.1.x86_64
sssd-client-1.9.2-82.7.el6_4.x86_64
sssd-1.9.2-82.7.el6_4.x86_64

How reproducible:
Install EL 6.4, upgrade to the latest available packages, configure sssd to use LDAP, enable SELinux (set to "enforcing"), then run "getent passwd". You'll see only local users. Set SELinux to "permissive" or "disabled", restart, and run "getent passwd" again. You'll now get the whole list of users available via LDAP.

Steps to Reproduce:
1. Install EL 6.4 and upgrade to the latest available packages
2. Configure sssd to use LDAP and set SELinux to "enforcing"
3. Run "getent passwd"

Actual results:
Only local users are seen, because sssd fails to connect to the LDAP server.

Expected results:
The whole list of local and remote users is available.

Additional info:
This bug existed in 2011. Now it is back. Have a look at:
https://bugzilla.redhat.com/show_bug.cgi?id=746665
https://bugzilla.redhat.com/show_bug.cgi?id=746265
http://rhn.redhat.com/errata/RHBA-2011-1511.html

Comment 2 Michal Zidek 2013-08-14 14:48:09 UTC
Hello,

what does the output of ls -Z look like? It should be like this:
$ ls -Z /usr/sbin/sssd 
-rwxr-xr-x. root root system_u:object_r:sssd_exec_t:s0 /usr/sbin/sssd

$ ls -Z /usr/libexec/sssd/sssd*
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd/sssd_autofs
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd/sssd_be
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd/sssd_nss
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd/sssd_pac
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd/sssd_pam
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd/sssd_ssh
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd/sssd_sudo

If it does not look like the above, run restorecon:
$ restorecon /usr/sbin/sssd
$ restorecon /usr/libexec/sssd/sssd*


Michal
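Michal's check can be condensed with `matchpathcon -V`, which prints "<path> verified." for files whose on-disk label matches the policy, so any other output marks a file to relabel. A small sketch against canned, hypothetical output (real use needs an SELinux host, so the live commands appear only as comments):

```shell
# Hypothetical matchpathcon -V output: one verified file, one mislabeled one.
check_output='/usr/libexec/sssd/sssd_be verified.
/usr/sbin/sssd has context system_u:object_r:bin_t:s0, should be system_u:object_r:sssd_exec_t:s0'
# Keep only the files whose label differs from the policy default:
printf '%s\n' "$check_output" | grep -v 'verified\.$'
# On a real host:
#   matchpathcon -V /usr/sbin/sssd /usr/libexec/sssd/sssd*
#   restorecon -Rv /usr/libexec/sssd /usr/sbin/sssd
```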

Comment 3 Thomas Schweikle 2013-08-16 08:02:56 UTC
# ls -Z /usr/libexec/sssd/sssd*
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd/sssd_autofs
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd/sssd_be
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd/sssd_nss
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd/sssd_pac
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd/sssd_pam
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd/sssd_ssh
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd/sssd_sudo

looks the same.

Comment 4 Thomas Schweikle 2013-08-16 08:08:55 UTC
# ls -Zd /usr/libexec/sssd
drwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd

And the same after restorecon:
# ls -Zd /usr/libexec/sssd
drwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd
# restorecon /usr/sbin/sssd
# restorecon /usr/libexec/sssd/sssd*
# ls -Zd /usr/libexec/sssd
drwxr-xr-x. root root system_u:object_r:bin_t:s0       /usr/libexec/sssd
# ls -Z /usr/libexec/sssd/sssd*
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       sssd_autofs
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       sssd_be
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       sssd_nss
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       sssd_pac
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       sssd_pam
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       sssd_ssh
-rwxr-xr-x. root root system_u:object_r:bin_t:s0       sssd_sudo

restorecon does not seem to change anything.

Comment 5 Thomas Schweikle 2013-08-16 08:10:51 UTC
# ls -Z /usr/sbin/sssd
-rwxr-xr-x. root root system_u:object_r:sssd_exec_t:s0 /usr/sbin/sssd

Comment 6 Thomas Schweikle 2013-08-16 08:11:18 UTC
And after restorecon

# ls -Z /usr/sbin/sssd
-rwxr-xr-x. root root system_u:object_r:sssd_exec_t:s0 /usr/sbin/sssd

Comment 7 Ondrej Kos 2013-08-16 08:50:40 UTC
Hi Thomas,

We are still unable to reproduce this issue. Could you please do the following?

Look into /var/log/messages and find the corresponding SELinux alerts.
Each one has an issue ID; run 'sealert -l $ID' for each and post the output.
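The steps above can be sketched as follows; the setroubleshoot log line is a hypothetical example of the format (ID included), since `sealert` needs setroubleshoot-server installed on the affected host:

```shell
# Hypothetical /var/log/messages entry as written by setroubleshoot:
line='Aug 16 09:00:01 host setroubleshoot: SELinux is preventing /usr/libexec/sssd/sssd_be from name_connect access. For complete SELinux messages. run sealert -l 8c123456-aaaa-bbbb-cccc-123456789abc'
# Extract the alert ID that sealert expects:
id=$(printf '%s\n' "$line" | sed -n 's/.*sealert -l \([0-9a-f-]*\).*/\1/p')
echo "$id"
# On the affected host: sealert -l "$id"
```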

Comment 8 Jakub Hrozek 2013-08-19 09:02:57 UTC
What is the output of:
matchpathcon /usr/sbin/sssd

?

Comment 9 Thomas Schweikle 2013-08-19 12:10:05 UTC
# matchpathcon /usr/sbin/sssd
/usr/sbin/sssd  system_u:object_r:sssd_exec_t:s0

Comment 10 Jakub Hrozek 2013-08-19 12:15:53 UTC
That matches the policy, do you still see failures and AVC denials after restorecon?

Comment 11 Thomas Schweikle 2013-08-19 12:26:06 UTC
Looks as if it is OK now.

Comment 12 Jakub Hrozek 2013-08-19 12:41:39 UTC
(In reply to Thomas Schweikle from comment #11)
> Looks as if it is OK now.

Then I think this might be yet another case of the yum ordering issue; maybe selinux-policy was updated before sssd? You could check in /var/log/yum.log.
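The yum.log check can be sketched like this; the excerpt below is hypothetical (timestamps invented, package versions taken from the report above), since the real log lives on the affected host:

```shell
# Hypothetical /var/log/yum.log excerpt illustrating the suspected ordering:
sample='Aug 14 10:01:02 Updated: selinux-policy-3.7.19-195.0.1.el6_4.12.noarch
Aug 14 10:01:05 Updated: selinux-policy-targeted-3.7.19-195.0.1.el6_4.12.noarch
Aug 14 10:02:10 Updated: sssd-1.9.2-82.7.el6_4.x86_64'
# Show the relative update order of the two packages:
printf '%s\n' "$sample" | grep -E 'selinux-policy|sssd'
# On the affected host: grep -E 'selinux-policy|sssd' /var/log/yum.log
```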

Comment 13 Jakub Hrozek 2013-08-19 12:43:21 UTC
Sorry, submitted the comment too early..

You can see a (rather long) discussion about a similar problem in #924044, especially https://bugzilla.redhat.com/show_bug.cgi?id=924044#c38

We can't reproduce the bug on our end, so I'm inclined to close it as WORKSFORME.

Comment 14 Thomas Schweikle 2013-08-19 14:16:29 UTC
I searched for such a thing on the fly and found it:

If packages for both selinux-policy and sssd are available, selinux-policy is upgraded first, then sssd. It looks like the policy is not reloaded as expected after the upgrade, so sssd keeps running under the old policy ...

Comment 15 Jakub Hrozek 2013-08-19 17:02:09 UTC
(In reply to Thomas Schweikle from comment #14)
> On the fly searched for such a thing. Found it:
> 
> if there are packages for selinux and sssd available selinux is upgraded
> first, then sssd. Looks like it doesn't reload the policy as expected after
> upgrading sssd using the old policy ...

Yes, that's the same reason. Thanks for confirming it. It's going to be fixed in RHEL7, but as far as I know, there are no plans to fix the problem in RHEL6.

*** This bug has been marked as a duplicate of bug 924044 ***