Bug 1263350
Summary: | systemd user sessions running with the wrong context | |
---|---|---|---
Product: | [Fedora] Fedora | Reporter: | Miroslav Grepl <mgrepl>
Component: | selinux-policy | Assignee: | Lukas Vrabec <lvrabec>
Status: | CLOSED ERRATA | QA Contact: | Fedora Extras Quality Assurance <extras-qa>
Severity: | high | Docs Contact: |
Priority: | unspecified | |
Version: | 24 | CC: | dominick.grift, dwalsh, extras-qa, johannbg, jsynacek, lnykryn, lvrabec, mgrepl, msekleta, plautrba, rlpowell, s, systemd-maint, zbyszek
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | 1262933 | Environment: |
Last Closed: | 2016-12-06 17:03:37 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1262933 | |
Bug Blocks: | | |
Description
Miroslav Grepl
2015-09-15 15:34:08 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 24 development cycle. Changing version to '24'. More information and the reason for this action are here: https://fedoraproject.org/wiki/Fedora_Program_Management/HouseKeeping/Fedora24#Rawhide_Rebase

This package has changed ownership in the Fedora Package Database. Reassigning to the new owner of this component.

I can't tell if this is related or not, but systemd --user is utterly failing for me on both my F24 and Rawhide systems unless SELinux is not enforcing. If I do this:

    sudo systemctl restart user

I get:

    type=AVC msg=audit(1480842972.704:1769): avc: denied { write } for pid=10222 comm="systemd" name="user" dev="cgroup" ino=371 scontext=staff_u:staff_r:staff_t:s0-s0:c0.c1023 tcontext=system_u:object_r:cgroup_t:s0 tclass=dir permissive=1
    type=AVC msg=audit(1480842972.704:1770): avc: denied { remove_name } for pid=10222 comm="systemd" name="dbus.socket" dev="cgroup" ino=499 scontext=staff_u:staff_r:staff_t:s0-s0:c0.c1023 tcontext=system_u:object_r:cgroup_t:s0 tclass=dir permissive=1
    type=AVC msg=audit(1480842972.704:1771): avc: denied { rmdir } for pid=10222 comm="systemd" name="dbus.socket" dev="cgroup" ino=499 scontext=staff_u:staff_r:staff_t:s0-s0:c0.c1023 tcontext=system_u:object_r:cgroup_t:s0 tclass=dir permissive=1
    type=AVC msg=audit(1480842972.704:1772): avc: denied { add_name } for pid=10222 comm="systemd" name="systemd-exit.service" scontext=staff_u:staff_r:staff_t:s0-s0:c0.c1023 tcontext=system_u:object_r:cgroup_t:s0 tclass=dir permissive=1
    type=AVC msg=audit(1480842972.704:1773): avc: denied { create } for pid=10222 comm="systemd" name="systemd-exit.service" scontext=staff_u:staff_r:staff_t:s0-s0:c0.c1023 tcontext=staff_u:object_r:cgroup_t:s0 tclass=dir permissive=1
    type=AVC msg=audit(1480842972.814:1779): avc: denied { ioctl } for pid=10521 comm="systemd" path="socket:[79663]" dev="sockfs" ino=79663 ioctlcmd=0x5401 scontext=staff_u:staff_r:staff_t:s0-s0:c0.c1023 tcontext=system_u:system_r:init_t:s0 tclass=unix_stream_socket permissive=1
    type=AVC msg=audit(1480842972.849:1780): avc: denied { write } for pid=10521 comm="systemd" name="user" dev="cgroup" ino=371 scontext=staff_u:staff_r:staff_t:s0-s0:c0.c1023 tcontext=system_u:object_r:cgroup_t:s0 tclass=dir permissive=1
    type=AVC msg=audit(1480842972.849:1781): avc: denied { add_name } for pid=10521 comm="systemd" name="dbus.socket" scontext=staff_u:staff_r:staff_t:s0-s0:c0.c1023 tcontext=system_u:object_r:cgroup_t:s0 tclass=dir permissive=1
    type=AVC msg=audit(1480842972.849:1782): avc: denied { create } for pid=10521 comm="systemd" name="dbus.socket" scontext=staff_u:staff_r:staff_t:s0-s0:c0.c1023 tcontext=staff_u:object_r:cgroup_t:s0 tclass=dir permissive=1

Not having user-level systemd is pretty bad, so please let me know if I should open another ticket, or anything else I can do to help.

This is another bug. You opened one issue for this. We'll discuss it there.
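[Editorial note, not part of the original report:] Every denial quoted above has the same shape: the source context (`scontext`) is the user's `staff_u:staff_r:staff_t` domain, while the target (`tcontext`) is a cgroup directory or init socket, which matches the summary that the systemd user instance is running in the wrong context. As an illustration only, a minimal Python sketch (assumptions: the `parse_avc` helper name and the field regex are ours, not anything from the bug) can pull the `scontext`/`tcontext` pair out of a raw AVC line to make the mismatch easy to scan:

```python
import re

# Match key=value fields of an audit AVC record; values may be
# bare tokens (pid=10222) or quoted strings (comm="systemd").
AVC_FIELD = re.compile(r'(\w+)=("[^"]*"|\S+)')

def parse_avc(line: str) -> dict:
    """Return the key=value fields of an AVC denial line as a dict."""
    fields = dict(AVC_FIELD.findall(line))
    # Strip the surrounding quotes from quoted values.
    return {k: v.strip('"') for k, v in fields.items()}

# Sample line copied verbatim from the report above.
sample = ('type=AVC msg=audit(1480842972.704:1769): avc: denied { write } '
          'for pid=10222 comm="systemd" name="user" dev="cgroup" ino=371 '
          'scontext=staff_u:staff_r:staff_t:s0-s0:c0.c1023 '
          'tcontext=system_u:object_r:cgroup_t:s0 tclass=dir permissive=1')

fields = parse_avc(sample)
print(fields['scontext'])  # staff_u:staff_r:staff_t:s0-s0:c0.c1023
print(fields['tcontext'])  # system_u:object_r:cgroup_t:s0
print(fields['tclass'])    # dir
```

Run over the whole log, this shows that all nine denials originate from the same `staff_t` source domain, i.e. the user's systemd instance never transitioned out of the login shell's context.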