Bug 1664448 - unable to ssh as IPA user with SELinux user map
Summary: unable to ssh as IPA user with SELinux user map
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: selinux-policy
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: Lukas Vrabec
QA Contact: Milos Malik
URL:
Whiteboard:
Depends On:
Blocks: 1652753 1672685 1778780
 
Reported: 2019-01-08 20:35 UTC by Scott Poore
Modified: 2021-10-07 11:47 UTC (History)
16 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1672685 (view as bug list)
Environment:
Last Closed: 2019-06-14 01:07:26 UTC
Type: Bug
Target Upstream Version:
Embargoed:
spoore: needinfo+
rule-engine: mirror+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-12291 0 None None None 2021-10-07 11:47:02 UTC

Description Scott Poore 2019-01-08 20:35:39 UTC
Description of problem:

When using SELinux User Mapping in IPA, I am unable to log in with SSH.


Here I can see the selinuxusermap rule that sets mapping for user seuser1:

[root@host-8-253-79 ~]# ipa selinuxusermap-show seuser1 --all
  dn: ipaUniqueID=9091795c-1369-11e9-acf5-fa163e329909,cn=usermap,cn=selinux,dc=testrelm,dc=test
  Rule name: seuser1
  SELinux User: user_u:s0
  Enabled: TRUE
  Users: seuser1
  Hosts: host-8-253-63.testrelm.test, host-8-253-79.testrelm.test
  ipauniqueid: 9091795c-1369-11e9-acf5-fa163e329909
  objectclass: ipaassociation, ipaselinuxusermap

I can log in as seuser2, which has no mapping:

[root@host-8-253-79 ~]# ssh -l seuser2 $(hostname) 'id -Z'
Password: 
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

But I get a completely different result with the user with a mapping:

[root@host-8-253-79 ~]# date; sleep 1; ssh -l seuser1 $(hostname) 'id -Z'
Tue Jan  8 15:16:32 EST 2019
Password: 
Instance used by: https://platform-stg-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ipa-rhel-8.0.0-candidate-runtest-selinux-usermap-func/93/
Activate the web console with: systemctl enable --now cockpit.socket

Last failed login: Tue Jan  8 14:02:34 EST 2019 from 172.16.169.81 on ssh:notty
There was 1 failed login attempt since the last successful login.

[root@host-8-253-79 ~]# 

And this is what I see in the journal:

Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[17905]: pam_selinux(systemd-user:session): Unable to get valid context for seuser1
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[17905]: pam_selinux(systemd-user:session): conversation failed
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[17905]: pam_unix(systemd-user:session): session opened for user seuser1 by (uid=0)
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[17905]: PAM failed: Cannot make/remove an entry for the specified session
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[17905]: user: Failed to set up PAM session: Operation not permitted
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[17905]: user: Failed at step PAM spawning /usr/lib/systemd/systemd: Operation not permitted
-- Subject: Process /usr/lib/systemd/systemd could not be executed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- 
-- The process /usr/lib/systemd/systemd could not be executed and failed.
-- 
-- The error number returned by this process is 1.
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[1]: user: Failed with result 'protocol'.
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[1]: Failed to start User Manager for UID 1303600013.
-- Subject: Unit user has failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- 
-- Unit user has failed.
-- 
-- The result is RESULT.
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[1]: user-runtime-dir: Unit not needed anymore. Stopping.
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[1]: Stopping /run/user/1303600013 mount wrapper...
-- Subject: Unit user-runtime-dir has begun shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- 
-- Unit user-runtime-dir has begun shutting down.
Jan 08 15:16:36 host-8-253-79.testrelm.test sshd[17897]: pam_systemd(sshd:session): Failed to create session: Start job for unit user failed with 'failed'
Jan 08 15:16:36 host-8-253-79.testrelm.test sshd[17897]: pam_unix(sshd:session): session opened for user seuser1 by (uid=0)
Jan 08 15:16:36 host-8-253-79.testrelm.test sshd[17897]: debug1: auth_restrict_session: restricting session
Jan 08 15:16:36 host-8-253-79.testrelm.test sshd[17897]: debug1: auth_activate_options: setting new authentication options
Jan 08 15:16:36 host-8-253-79.testrelm.test sshd[17897]: error: PAM: pam_open_session(): System error


Version-Release number of selected component (if applicable):
systemd-239-10.el8.x86_64
selinux-policy-3.14.1-49.el8.noarch
sssd-2.0.0-32.el8.x86_64
openssh-server-7.8p1-4.el8.x86_64
pam-1.3.1-4.el8.x86_64


How reproducible:
Unknown, but it is occurring frequently in some of the IPA tests.

Steps to Reproduce:
1.  Set up SELinux User Mapping (in IPA) for a user
2.  ssh -l testuser testhost
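Step 1 can be sketched as follows. This is a hypothetical example, not the exact commands the test suite runs: the rule, user, and host names are placeholders, and the commands assume a working IPA deployment with admin credentials (kinit admin first), so it cannot run standalone.

```shell
# Create an SELinux user map rule in IPA, then attach a user and a host to it.
ipa selinuxusermap-add seuser1 --selinuxuser='user_u:s0'
ipa selinuxusermap-add-user seuser1 --users=seuser1
ipa selinuxusermap-add-host seuser1 --hosts=testhost.example.test

# Step 2: log in over SSH as the mapped user and check the assigned context.
ssh -l seuser1 testhost.example.test 'id -Z'
```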

Actual results:
Fails for the user with a mapping.
Passes for the user without a mapping.

Expected results:
SSH login succeeds for all users, the system properly handles the SELinux User Mapping, and id -Z after login shows the expected context.

Additional info:
Unfortunately, in this particular case I'm not seeing any AVCs to list with this:

[root@host-8-253-79 ~]# ausearch -m avc -ts recent
<no matches>

Comment 1 Scott Poore 2019-01-08 20:36:35 UTC
I forgot to mention that I could ssh when SELinux is in Permissive mode:

[root@host-8-253-79 ~]# setenforce 0
[root@host-8-253-79 ~]# ssh -l seuser1 $(hostname) 'id -Z'
Password: 
user_u:user_r:user_t:s0

Comment 2 Scott Poore 2019-01-08 21:27:32 UTC
Here's the ausearch output with dontaudit rules suppressed:

[root@host-8-253-79 ~]# semodule -DB
libsemanage.add_user: user user1 not in password file
libsemanage.add_user: user user2 not in password file
libsemanage.add_user: user user3_1 not in password file
libsemanage.add_user: user user3_2 not in password file

[root@host-8-253-79 ~]# getenforce
Enforcing

[root@host-8-253-79 ~]# ausearch -m avc -ts recent
<no matches>

[root@host-8-253-79 ~]# ssh -l seuser1 $(hostname) 'id -Z'
Password: 

Activate the web console with: systemctl enable --now cockpit.socket

[root@host-8-253-79 ~]# ausearch -m avc -ts recent
----
time->Tue Jan  8 16:25:10 2019
type=PROCTITLE msg=audit(1546982710.858:46689): proctitle=2F7573722F6C6962657865632F737373642F73656C696E75785F6368696C64002D2D64656275672D6D6963726F7365636F6E64733D30002D2D64656275672D74696D657374616D70733D31002D2D64656275672D66643D3239002D2D64656275672D6C6576656C3D307866376630
type=SYSCALL msg=audit(1546982710.858:46689): arch=c000003e syscall=59 success=yes exit=0 a0=7fc01d04ee30 a1=55eac3b58600 a2=55eac3afcce0 a3=55eac3b58600 items=0 ppid=17613 pid=18586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="selinux_child" exe="/usr/libexec/sssd/selinux_child" subj=system_u:system_r:sssd_selinux_manager_t:s0 key=(null)
type=AVC msg=audit(1546982710.858:46689): avc:  denied  { siginh } for  pid=18586 comm="selinux_child" scontext=system_u:system_r:sssd_t:s0 tcontext=system_u:system_r:sssd_selinux_manager_t:s0 tclass=process permissive=0
type=AVC msg=audit(1546982710.858:46689): avc:  denied  { rlimitinh } for  pid=18586 comm="selinux_child" scontext=system_u:system_r:sssd_t:s0 tcontext=system_u:system_r:sssd_selinux_manager_t:s0 tclass=process permissive=0
type=AVC msg=audit(1546982710.858:46689): avc:  denied  { noatsecure } for  pid=18586 comm="sssd_be" scontext=system_u:system_r:sssd_t:s0 tcontext=system_u:system_r:sssd_selinux_manager_t:s0 tclass=process permissive=0
----

time->Tue Jan  8 16:25:10 2019
type=PROCTITLE msg=audit(1546982710.913:46697): proctitle=2F7573722F6C6962657865632F737373642F73656C696E75785F6368696C64002D2D64656275672D6D6963726F7365636F6E64733D30002D2D64656275672D74696D657374616D70733D31002D2D64656275672D66643D3239002D2D64656275672D6C6576656C3D307866376630
type=SYSCALL msg=audit(1546982710.913:46697): arch=c000003e syscall=59 success=yes exit=0 a0=7fc01d04ee30 a1=55eac3b34780 a2=55eac3afcce0 a3=55eac3b34780 items=0 ppid=17613 pid=18591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="selinux_child" exe="/usr/libexec/sssd/selinux_child" subj=system_u:system_r:sssd_selinux_manager_t:s0 key=(null)
type=AVC msg=audit(1546982710.913:46697): avc:  denied  { siginh } for  pid=18591 comm="selinux_child" scontext=system_u:system_r:sssd_t:s0 tcontext=system_u:system_r:sssd_selinux_manager_t:s0 tclass=process permissive=0
type=AVC msg=audit(1546982710.913:46697): avc:  denied  { rlimitinh } for  pid=18591 comm="selinux_child" scontext=system_u:system_r:sssd_t:s0 tcontext=system_u:system_r:sssd_selinux_manager_t:s0 tclass=process permissive=0
type=AVC msg=audit(1546982710.913:46697): avc:  denied  { noatsecure } for  pid=18591 comm="sssd_be" scontext=system_u:system_r:sssd_t:s0 tcontext=system_u:system_r:sssd_selinux_manager_t:s0 tclass=process permissive=0
[root@host-8-253-79 ~]#
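As an aside, the proctitle= field in these audit records is hex-encoded; it can be decoded to confirm which command triggered the records. A minimal sketch, using only the executable-path portion of the records above (the full value also contains the NUL-separated argv):

```shell
# Audit PROCTITLE values are hex-encoded; decode with xxd (assumed installed).
hex='2F7573722F6C6962657865632F737373642F73656C696E75785F6368696C64'
printf '%s' "$hex" | xxd -r -p
# → /usr/libexec/sssd/selinux_child
```

This matches the exe= field in the SYSCALL records, confirming the denials come from sssd's selinux_child helper.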

Comment 3 Scott Poore 2019-01-08 21:35:25 UTC
Also noting:

[root@host-8-253-79 ~]# semanage login -l

Login Name           SELinux User         MLS/MCS Range        Service

__default__          unconfined_u         s0-s0:c0.c1023       *
root                 unconfined_u         s0-s0:c0.c1023       *
seuser1              user_u               s0                   *
seuser2              unconfined_u         s0-s0:c0.c1023       *
user1                unconfined_u         s0-s0:c0.c1023       *
user2                unconfined_u         s0-s0:c0.c1023       *
user3_1              staff_u              s0-s0:c0.c1023       *
user3_2              unconfined_u         s0-s0:c0.c1023       *
user4_1              user_u               s0                   *
[root@host-8-253-79 ~]# 
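The mapping can also be checked programmatically by filtering that table for one login. A minimal sketch (map_for is a hypothetical helper; the input is fed here from a literal copy of a few rows above, so the sketch runs without semanage):

```shell
# Print the SELinux user mapped to a given login,
# given `semanage login -l` output on stdin.
map_for() { awk -v u="$1" '$1 == u { print $2 }'; }

map_for seuser1 <<'EOF'
__default__          unconfined_u         s0-s0:c0.c1023       *
seuser1              user_u               s0                   *
seuser2              unconfined_u         s0-s0:c0.c1023       *
EOF
# → user_u
```

On a live system the input would come from `semanage login -l` itself.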


Going to move this to sssd for now, since I just noticed the selinux_child messages in the ausearch output with dontaudit rules disabled. I had missed those earlier today, but maybe they can tell me where to pull more info to debug this.

Comment 5 Jakub Hrozek 2019-01-08 22:12:39 UTC
Does the login pass in permissive mode?

Maybe it's too late for me to read access logs, but this...

Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[17905]: pam_selinux(systemd-user:session): Unable to get valid context for seuser1
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[17905]: pam_selinux(systemd-user:session): conversation failed

...shows that the systemd-user service (not the sshd service) which is executed by the systemd user instance failed. Nonetheless, reading the pam_selinux code, the compute_exec_context() should still return PAM_SUCCESS..

Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[17905]: pam_unix(systemd-user:session): session opened for user seuser1 by (uid=0)

and indeed the systemd-user session phase succeeded

Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[17905]: PAM failed: Cannot make/remove an entry for the specified session
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[17905]: user: Failed to set up PAM session: Operation not permitted
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[17905]: user: Failed at step PAM spawning /usr/lib/systemd/systemd: Operation not permitted

OK, this sounds bad. I wonder if systemd has some issues opening the session with the restricted SELinux user_u?

-- Subject: Process /usr/lib/systemd/systemd could not be executed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- 
-- The process /usr/lib/systemd/systemd could not be executed and failed.
-- 
-- The error number returned by this process is 1.
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[1]: user: Failed with result 'protocol'.
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[1]: Failed to start User Manager for UID 1303600013.
-- Subject: Unit user has failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- 
-- Unit user has failed.
-- 
-- The result is RESULT.
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[1]: user-runtime-dir: Unit not needed anymore. Stopping.
Jan 08 15:16:36 host-8-253-79.testrelm.test systemd[1]: Stopping /run/user/1303600013 mount wrapper...
-- Subject: Unit user-runtime-dir has begun shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- 
-- Unit user-runtime-dir has begun shutting down.
Jan 08 15:16:36 host-8-253-79.testrelm.test sshd[17897]: pam_systemd(sshd:session): Failed to create session: Start job for unit user failed with 'failed'
Jan 08 15:16:36 host-8-253-79.testrelm.test sshd[17897]: pam_unix(sshd:session): session opened for user seuser1 by (uid=0)

..nonetheless the session is opened here..

Jan 08 15:16:36 host-8-253-79.testrelm.test sshd[17897]: debug1: auth_restrict_session: restricting session
Jan 08 15:16:36 host-8-253-79.testrelm.test sshd[17897]: debug1: auth_activate_options: setting new authentication options
Jan 08 15:16:36 host-8-253-79.testrelm.test sshd[17897]: error: PAM: pam_open_session(): System error

...so currently I don't have a good idea what exact step fails here..

Comment 6 Lukas Vrabec 2019-01-09 09:01:25 UTC
Hi Scott, 

Could you reproduce it again and attach output of command:

# ausearch -m AVC -m USER_AVC -m SELINUX_ERR -ts today 

Thanks,
Lukas.

Comment 7 Jakub Hrozek 2019-01-09 09:30:21 UTC
(In reply to Lukas Vrabec from comment #6)
> Hi Scott, 
> 
> Could you reproduce it again and attach output of command:
> 
> # ausearch -m AVC -m USER_AVC -m SELINUX_ERR -ts today 
> 
> Thanks,
> Lukas.

Thank you for looking into this. The bug got reassigned to sssd in the meantime; do you think there is a legitimate selinux issue?

Nonetheless, it looks like the host got reprovisioned in the meantime; I was able to log in, but the sssd configuration was gone.

Comment 8 Scott Poore 2019-01-09 16:33:09 UTC
I'm having some issues with the test hosts at the moment, so it's been difficult today to get a live system to work with. I was able to reproduce briefly and grab all of /var/log; I'll upload shortly.

Lukas, FYI, this was all I saw with the ausearch you requested:

[root@host-8-241-175 ~]# ausearch -m AVC -m USER_AVC -m SELINUX_ERR -ts today
----
time->Wed Jan  9 10:53:12 2019
type=USER_AVC msg=audit(1547049192.418:1015): pid=610 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc:  received policyload notice (seqno=2)  exe="/usr/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?'
----
time->Wed Jan  9 10:53:12 2019
type=USER_AVC msg=audit(1547049192.530:1022): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received policyload notice (seqno=2)  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'
----
time->Wed Jan  9 11:11:53 2019
type=USER_AVC msg=audit(1547050313.960:1199): pid=21682 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc:  received policyload notice (seqno=3)  exe="/usr/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?'
----
time->Wed Jan  9 11:11:54 2019
type=USER_AVC msg=audit(1547050314.054:1206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received policyload notice (seqno=3)  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'

Comment 10 Scott Poore 2019-01-10 02:57:33 UTC
I don't know if it's related, but we are also seeing some strange failures with su in other tests, where the first failure shows an error:

su: cannot open session: System error

When I check journal or /var/log/secure, I see this:

Jan  9 21:30:40 rhel-8 su[16824]: pam_systemd(su-l:session): pam-systemd initializing
Jan  9 21:30:40 rhel-8 su[16824]: pam_systemd(su-l:session): Asking logind to create session: uid=56678 pid=16824 service=su-l type=unspecified class=background desktop= seat= vtnr=0 tty= display= remote=no remote_user=root remote_host=
Jan  9 21:30:40 rhel-8 su[16824]: pam_systemd(su-l:session): Session limits: memory_max=n/a tasks_max=n/a cpu_weight=n/a io_weight=n/a
Jan  9 21:30:40 rhel-8 su[16824]: pam_systemd(su-l:session): Failed to create session: No such file or directory
Jan  9 21:30:40 rhel-8 su[16824]: pam_succeed_if(su-l:session): 'service' resolves to 'su-l'
Jan  9 21:30:40 rhel-8 su[16824]: pam_succeed_if(su-l:session): requirement "service in crond" not met by user "root"
Jan  9 21:30:40 rhel-8 su[16824]: pam_unix(su-l:session): session opened for user testuser1 by (uid=0)

Could this be related or should I open a new/separate bug for that?

Comment 11 Sumit Bose 2019-01-10 06:18:06 UTC
(In reply to Scott Poore from comment #10)
> I don't know if it's related but, we also are seeing some strange failures
> with su in other tests where the first failure shows an error:
> 
> su: cannot open session: System error
> 
> When I check journal or /var/log/secure, I see this:
> 
> Jan  9 21:30:40 rhel-8 su[16824]: pam_systemd(su-l:session): pam-systemd
> initializing
> Jan  9 21:30:40 rhel-8 su[16824]: pam_systemd(su-l:session): Asking logind
> to create session: uid=56678 pid=16824 service=su-l type=unspecified
> class=background desktop= seat= vtnr=0 tty= display= remote=no
> remote_user=root remote_host=
> Jan  9 21:30:40 rhel-8 su[16824]: pam_systemd(su-l:session): Session limits:
> memory_max=n/a tasks_max=n/a cpu_weight=n/a io_weight=n/a
> Jan  9 21:30:40 rhel-8 su[16824]: pam_systemd(su-l:session): Failed to
> create session: No such file or directory
> Jan  9 21:30:40 rhel-8 su[16824]: pam_succeed_if(su-l:session): 'service'
> resolves to 'su-l'
> Jan  9 21:30:40 rhel-8 su[16824]: pam_succeed_if(su-l:session): requirement
> "service in crond" not met by user "root"
> Jan  9 21:30:40 rhel-8 su[16824]: pam_unix(su-l:session): session opened for
> user testuser1 by (uid=0)
> 
> Could this be related or should I open a new/separate bug for that?

Hi Scott,

this sounds a bit like https://bugzilla.redhat.com/show_bug.cgi?id=1657295 . Is nsswitch.conf modified in your tests somehow? Can you try to restart the systemd-logind service before running the tests?

HTH

bye,
Sumit

Comment 12 Scott Poore 2019-01-10 12:44:33 UTC
Well, I'm running authselect to enable mkhomedir. I don't see any manual changes to nsswitch.conf in most of the code, but I do in a couple of random places where the failure from comment #10 occurred. I manually tried restarting systemd-logind on a failing host showing the issue reported here, with no luck:

[root@vm-idm-022 ~]# systemctl restart systemd-logind
[root@vm-idm-022 ~]# !ssh
ssh -l user4_1 $(hostname)
**  **  **  **  **  **  **  **  **  **  **  **  **  **  **  **  **  **
         This System is part of the Red Hat Test System.              
                                                                      
      Please do not use this system for individual unit testing.      
                                                                      
      RHTS Test information:                                          
                         HOSTNAME=vm-idm-022.lab.eng.pnq.redhat.com                           
                            JOBID=3281881                              
                         RECIPEID=6379693                           
                       LAB_SERVER=                         
                    RESULT_SERVER=LEGACY                      
                           DISTRO=RHEL-8.0-20190109.n.0                             
**  **  **  **  **  **  **  **  **  **  **  **  **  **  **  **  **  **
Last login: Thu Jan 10 05:46:27 2019 from 10.65.206.156
Connection to vm-idm-022.testrelm.test closed.

Comment 13 Jakub Hrozek 2019-01-10 14:48:12 UTC
So I was only able to reproduce the issue as long as /etc/pam.d/systemd-user contains:
  session  required pam_selinux.so nottys open debug
If I switch the configuration so that the module is optional, I'm able to log in. Because all SSSD operations (everything from/to pam_sss and nss_sss) work fine, I'll reassign the bug to PAM to get the PAM maintainer's opinion.
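For reference, the two variants being compared would look like this in /etc/pam.d/systemd-user (a sketch of the relevant line only; the rest of the stack is unchanged):

```
# Failing configuration: a pam_selinux error aborts session setup,
# so pam_open_session() for systemd-user returns an error.
session  required   pam_selinux.so nottys open debug

# Working configuration: a pam_selinux failure is ignored
# and the session is opened anyway.
session  optional   pam_selinux.so nottys open debug
```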

I'm afraid the pam_selinux debugging is not helpful to me, but maybe Tomas would know better?

I only see this in journal:
Jan 10 18:55:58 vm-idm-022.testrelm.test sshd[19705]: error: PAM: pam_open_session(): System error
Jan 10 18:55:58 vm-idm-022.testrelm.test sshd[19705]: debug1: auth_activate_options: setting new authentication options
Jan 10 18:55:58 vm-idm-022.testrelm.test sshd[19705]: debug1: auth_restrict_session: restricting session
Jan 10 18:55:58 vm-idm-022.testrelm.test sshd[19705]: pam_unix(sshd:session): session opened for user user4_1 by (uid=0)
Jan 10 18:55:58 vm-idm-022.testrelm.test sshd[19705]: pam_succeed_if(sshd:session): requirement "service in crond" not met by user "root"
Jan 10 18:55:58 vm-idm-022.testrelm.test sshd[19705]: pam_succeed_if(sshd:session): 'service' resolves to 'sshd'
Jan 10 18:55:58 vm-idm-022.testrelm.test sshd[19705]: pam_systemd(sshd:session): Failed to create session: Start job for unit user failed with 'failed'
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[1]: Stopping /run/user/1636000011 mount wrapper...
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[1]: user-runtime-dir: Unit not needed anymore. Stopping.
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[1]: Failed to start User Manager for UID 1636000011.
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[1]: user: Failed with result 'protocol'.
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[19713]: user: Failed at step PAM spawning /usr/lib/systemd/systemd: Operation not permitted
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[19713]: user: Failed to set up PAM session: Operation not permitted
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[19713]: PAM failed: Cannot make/remove an entry for the specified session
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[19713]: pam_unix(systemd-user:session): session opened for user user4_1 by (uid=0)
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[19713]: pam_succeed_if(systemd-user:session): requirement "service in crond" not met by user "root"
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[19713]: pam_succeed_if(systemd-user:session): 'service' resolves to 'systemd-user'
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[19713]: pam_selinux(systemd-user:session): conversation failed
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[19713]: pam_selinux(systemd-user:session): Unable to get valid context for user4_1
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[19713]: pam_selinux(systemd-user:session): Username= user4_1 SELinux User= user_u Level= s0
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[19713]: pam_selinux(systemd-user:session): Open Session
Jan 10 18:55:58 vm-idm-022.testrelm.test systemd[19713]: pam_selinux(systemd-user:session): Open Session

Comment 14 Tomas Mraz 2019-01-10 15:04:13 UTC
pam_selinux uses regular selinux library calls to obtain the right default context, etc. If that does not work for IPA seuser mappings, then it rightly cannot work.

I do not think this is a bug in pam_selinux. At worst it is an unimplemented RFE; however, I do not think pam_selinux should be made to explicitly query LDAP or SSSD or something. Shouldn't that be done transparently at the SELinux library level or so? Alternatively, someone could write a pam_sssd_selinux module which would do this and replace pam_selinux.

Comment 15 Scott Poore 2019-01-10 18:17:07 UTC
Tomas,

Are there any other debug options/settings we can use to get more information back from PAM (or pam_selinux) that may help us troubleshoot this issue better?

This was working until the end of November and the tests started failing in December.

I'm not seeing much from the audit log around the time of a failure:


# ausearch -m avc -m user_avc -m selinux_err -ts 02:52 -te 02:53
----
time->Thu Jan 10 02:52:34 2019
type=USER_AVC msg=audit(1547068954.069:355016): pid=720 uid=81 auid=4294967295 ses=4294967295 subj=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 msg='avc:  received policyload notice (seqno=6)  exe="/usr/bin/dbus-daemon" sauid=81 hostname=? addr=? terminal=?'
----
time->Thu Jan 10 02:52:34 2019
type=USER_AVC msg=audit(1547068954.209:355023): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received policyload notice (seqno=6)  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'

Comment 16 Scott Poore 2019-01-10 18:36:23 UTC
Also note that after putting the pam line back to required, it seems to still be working; I am not seeing the failure any more. I'm not sure whether that helps or confuses things further.

Comment 17 Jakub Hrozek 2019-01-10 20:31:13 UTC
First, I'm sorry if I wasn't clear earlier -- by reassigning the bug to PAM, I didn't mean to throw the BZ at you to fix something. I'm just not really sure yet where the problem is; so far my investigation showed that flipping pam_selinux from required to optional helps, so I wanted to get your attention. Perhaps a needinfo would have been more appropriate; sorry again if reassigning the bug came across as rude.

(In reply to Tomas Mraz from comment #14)
> pam_selinux uses regular selinux library calls to obtain the right default
> context, etc. If that does not work for IPA seuser mappings then it
> rightfully cannot work.
> 

Sure, and SSSD writes the mappings for the IPA users into the "selinux database" (sorry if that's not the right term). See e.g. comment #3; it clearly shows that the mappings are written to the db. At that point, pam_selinux should be using them, no?

On the current system, despite having the context written to the db:
$ semanage login -l

Login Name           SELinux User         MLS/MCS Range        Service

__default__          unconfined_u         s0-s0:c0.c1023       *
root                 unconfined_u         s0-s0:c0.c1023       *
user1                staff_u              s0-s0:c0.c1023       *
user2                unconfined_u         s0-s0:c0.c1023       *
user4_1              user_u               s0                   *

Login as user "user4_1" is prevented by pam_selinux:
Jan 11 01:57:52 vm-idm-022.testrelm.test systemd[22643]: pam_selinux(systemd-user:session): Username= user4_1 SELinux User= user_u Level= s0
Jan 11 01:57:52 vm-idm-022.testrelm.test systemd[22643]: pam_selinux(systemd-user:session): Unable to get valid context for user4_1
Jan 11 01:57:52 vm-idm-022.testrelm.test systemd[22643]: pam_selinux(systemd-user:session): conversation failed

I'm at a loss as to how to debug the problem further; do you have any ideas?

Comment 18 Tomas Mraz 2019-01-11 07:33:22 UTC
Could it be some change in how libselinux reads or caches the data from the files as written by sssd?

pam_selinux was not touched recently so I suppose the change would be in the libraries it calls or in the selinux policy.

What does semanage user -l print?

Comment 19 Jakub Hrozek 2019-01-11 07:41:26 UTC
(In reply to Tomas Mraz from comment #18)
> Could it be some change in how libselinux reads or caches the data from the
> files as written by sssd?
> 
> pam_selinux was not touched recently so I suppose the change would be in the
> libraries it calls or in the selinux policy.
> 
> What does semanage user -l print?

semanage user -l

                Labeling   MLS/       MLS/                          
SELinux User    Prefix     MCS Level  MCS Range                      SELinux Roles

guest_u         user       s0         s0                             guest_r
root            user       s0         s0-s0:c0.c1023                 staff_r sysadm_r system_r unconfined_r
staff_u         user       s0         s0-s0:c0.c1023                 staff_r sysadm_r system_r unconfined_r
sysadm_u        user       s0         s0-s0:c0.c1023                 sysadm_r
system_u        user       s0         s0-s0:c0.c1023                 system_r unconfined_r
unconfined_u    user       s0         s0-s0:c0.c1023                 system_r unconfined_r
user_u          user       s0         s0                             user_r
xguest_u        user       s0         s0                             xguest_r

btw I'm on PTO for the rest of the day, so I'll only be available for answers very late or on Monday.

Perhaps Scott can answer some questions in the meantime, too.

Comment 20 Tomas Mraz 2019-01-11 07:54:42 UTC
That looks OK. Lukáš, do you have any idea what could be wrong?

Comment 21 Lukas Vrabec 2019-01-11 15:04:27 UTC
No idea; the outputs look fine to me. I know that Petr worked on some patches for sssd, so maybe he has some idea.

Comment 22 Jakub Hrozek 2019-01-14 13:07:19 UTC
btw I was able to reproduce the issue with a local user too, without any sssd involvement:

[root@vm-idm-022 ~]# useradd lcluser
[root@vm-idm-022 ~]# semanage login -l

Login Name           SELinux User         MLS/MCS Range        Service

__default__          unconfined_u         s0-s0:c0.c1023       *
root                 unconfined_u         s0-s0:c0.c1023       *
user4_1              user_u               s0                   *
[root@vm-idm-022 ~]# semanage login --add -s user_u -r s0 lcluser
[root@vm-idm-022 ~]# semanage login -l

Login Name           SELinux User         MLS/MCS Range        Service

__default__          unconfined_u         s0-s0:c0.c1023       *
lcluser              user_u               s0                   *
root                 unconfined_u         s0-s0:c0.c1023       *
user4_1              user_u               s0                   *
[root@vm-idm-022 ~]# passwd lcluser
Changing password for user lcluser.
New password: 
BAD PASSWORD: The password fails the dictionary check - it is based on a dictionary word
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@vm-idm-022 ~]# ssh lcluser@10.65.206.156
Password: 
Security Context user_u:user_r:user_t:s0 Assigned
Key Creation Context user_u:user_r:user_t:s0 Assigned
**  **  **  **  **  **  **  **  **  **  **  **  **  **  **  **  **  **
         This System is part of the Red Hat Test System.              
                                                                      
      Please do not use this system for individual unit testing.      
                                                                      
      RHTS Test information:                                          
                         HOSTNAME=vm-idm-022.lab.eng.pnq.redhat.com                           
                            JOBID=3281881                              
                         RECIPEID=6379693                           
                       LAB_SERVER=                         
                    RESULT_SERVER=LEGACY                      
                           DISTRO=RHEL-8.0-20190109.n.0                             
**  **  **  **  **  **  **  **  **  **  **  **  **  **  **  **  **  **
Connection to 10.65.206.156 closed

In the meantime, the journal says:
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd-logind[22232]: New session 208 of user lcluser.
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[2348]: pam_selinux(systemd-user:session): Open Session
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[2348]: pam_selinux(systemd-user:session): Open Session
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[2348]: pam_selinux(systemd-user:session): Username= lcluser SELinux User= user_u Level= s0
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[2348]: pam_selinux(systemd-user:session): Unable to get valid context for lcluser
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[2348]: pam_selinux(systemd-user:session): conversation failed
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[2348]: pam_succeed_if(systemd-user:session): 'service' resolves to 'systemd-user'
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[2348]: pam_succeed_if(systemd-user:session): requirement "service in crond" not met by user "root"
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[2348]: pam_unix(systemd-user:session): session opened for user lcluser by (uid=0)
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[2348]: PAM failed: Cannot make/remove an entry for the specified session
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[2348]: user: Failed to set up PAM session: Operation not permitted
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[2348]: user: Failed at step PAM spawning /usr/lib/systemd/systemd: Operation not permitted
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[1]: user: Failed with result 'protocol'.
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[1]: Failed to start User Manager for UID 1001.
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[1]: user-runtime-dir: Unit not needed anymore. Stopping.
Jan 14 18:34:05 vm-idm-022.testrelm.test systemd[1]: Stopping /run/user/1001 mount wrapper...
Jan 14 18:34:05 vm-idm-022.testrelm.test sshd[2340]: pam_systemd(sshd:session): Failed to create session: Start job for unit user failed with 'failed'

Since Tomas said in comment #18 that pam_selinux is unlikely to be the culprit, does it make sense to reassign the bug to selinux?

Comment 23 Tomas Mraz 2019-01-14 13:37:48 UTC
Let's reassign to selinux-policy then for further investigation.

Comment 26 Tomas Mraz 2019-01-14 15:38:22 UTC
Given that this is reproducible with a local user without sssd involvement, it looks like the problem lies somewhere other than sssd?

Comment 27 Lukas Vrabec 2019-01-16 10:47:19 UTC
I was also able to reproduce it, but in Enforcing mode I was able to log in; I just saw the PAM issues in the logs. There is some bug for sure, but somewhere else, not in selinux-policy.

Comment 28 Jakub Hrozek 2019-01-16 22:17:04 UTC
(In reply to Lukas Vrabec from comment #27)
> I was also able to reproduce it, but in Enforcing mode I was able to log in;
> I just saw the PAM issues in the logs. There is some bug for sure, but
> somewhere else, not in selinux-policy.

Would it help if we gave you access to a machine where this bug can be reproduced?

Comment 29 Lukas Vrabec 2019-01-17 09:38:56 UTC
Yes, let's try it.

Comment 33 Lukas Vrabec 2019-01-17 12:42:21 UTC
@Milos,

Could you please look at this bug? Can you reproduce it?

Thanks,
Lukas.

Comment 74 Milos Malik 2019-01-28 14:17:00 UTC
The following SELinux denial is D-Bus related and appears in the logs of various automated TCs, but I believe that neither IPA nor the user map is involved:
----
type=PROCTITLE msg=audit(01/25/2019 09:11:44.422:1073760) : proctitle=/usr/lib/systemd/systemd --user 
type=SYSCALL msg=audit(01/25/2019 09:11:44.422:1073760) : arch=x86_64 syscall=accept4 success=no exit=EACCES(Permission denied) a0=0xb a1=0x0 a2=0x0 a3=0x80800 items=0 ppid=1 pid=20257 auid=user uid=user gid=user18028 euid=user suid=user fsuid=user egid=user18028 sgid=user18028 fsgid=user18028 tty=(none) ses=119 comm=systemd exe=/usr/lib/systemd/systemd subj=staff_u:staff_r:staff_t:s0-s0:c0.c1023 key=(null) 
type=AVC msg=audit(01/25/2019 09:11:44.422:1073760) : avc:  denied  { accept } for  pid=20257 comm=systemd path=/run/user/1001/bus scontext=staff_u:staff_r:staff_t:s0-s0:c0.c1023 tcontext=staff_u:staff_r:staff_dbusd_t:s0-s0:c0.c1023 tclass=unix_stream_socket permissive=0 
----
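The key fields of the AVC record above can be pulled apart with standard shell tools. This is just an illustrative sketch (the record string below is copied from the denial in this comment, shortened to the relevant fields): it shows that the source domain being denied is the staff_t user domain, and the target is the staff_dbusd_t label on the session-bus socket, i.e. systemd --user running as staff_t is refused accept() on the user's D-Bus socket.

```shell
# AVC fields copied from the denial above (shortened for illustration)
avc='avc: denied { accept } for pid=20257 comm=systemd path=/run/user/1001/bus scontext=staff_u:staff_r:staff_t:s0-s0:c0.c1023 tcontext=staff_u:staff_r:staff_dbusd_t:s0-s0:c0.c1023 tclass=unix_stream_socket permissive=0'

# Extract the type (third colon-separated field) from the source and
# target security contexts.
src=$(printf '%s\n' "$avc" | grep -o 'scontext=[^ ]*' | cut -d= -f2 | cut -d: -f3)
tgt=$(printf '%s\n' "$avc" | grep -o 'tcontext=[^ ]*' | cut -d= -f2 | cut -d: -f3)

# prints: denied: staff_t -> staff_dbusd_t
printf 'denied: %s -> %s\n' "$src" "$tgt"
```

On a machine exhibiting the denial, the same information can be gathered with `ausearch -m AVC` and summarised with `audit2allow`, which would propose the missing `accept` permission as a local policy module; whether allowing it is the right fix is exactly the question for selinux-policy.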
