Bug 997357
Summary: | Selinux blocks cluster startup (service cman start) | | |
---|---|---|---|
Product: | Red Hat Enterprise Linux 6 | Reporter: | Jaroslav Kortus <jkortus> |
Component: | selinux-policy | Assignee: | Miroslav Grepl <mgrepl> |
Status: | CLOSED DUPLICATE | QA Contact: | BaseOS QE Security Team <qe-baseos-security> |
Severity: | high | Docs Contact: | |
Priority: | high | | |
Version: | 6.5 | CC: | ccaulfie, cfeist, cmarthal, dwalsh, fdinitto, mmalik |
Target Milestone: | rc | Keywords: | Regression |
Target Release: | --- | | |
Hardware: | Unspecified | | |
OS: | Linux | | |
Whiteboard: | | | |
Fixed In Version: | | Doc Type: | Bug Fix |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2013-09-17 15:36:02 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description
Jaroslav Kortus
2013-08-15 09:24:17 UTC
Those should be labeled differently. Does matchpathcon or restorecon on those paths change anything?

```
## reboot+autorelabel
[root@virt-132 ~]# restorecon -nvR /var
[root@virt-132 ~]# setenforce 0
[root@virt-132 ~]# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Tuning DLM kernel config...                             [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
[root@virt-132 ~]# restorecon -nvR /var
restorecon reset /var/run/cman_admin context unconfined_u:object_r:var_run_t:s0->unconfined_u:object_r:cluster_var_run_t:s0
restorecon reset /var/run/cman_client context unconfined_u:object_r:var_run_t:s0->unconfined_u:object_r:cluster_var_run_t:s0
[root@virt-132 ~]#
```

That looks like the process that created /var/run/cman_* was not running with a context. Are these files/directories created in an init script?

(In reply to Daniel Walsh from comment #3)
> That looks like the process that created /var/run/cman_* was not running
> with a context. Are these files/directories created in an init script?

I believe /var/run/cman_admin is created during a cman join request, which is issued in the init script. What should we be doing differently to make sure files created from the init script have a context? Adding Fabio & Chrissie, since they may be able to answer this question better than me.

(In reply to Daniel Walsh from comment #3)
> That looks like the process that created /var/run/cman_* was not running
> with a context. Are these files/directories created in an init script?

They are created by cman_tool, either executed manually or by the init script. It looks like the regression was introduced between last week and this week. Can't be more specific, as we were doing testing Friday and it was working.

Well, it would probably be best that a tool run by the user which creates the content should make sure it is labeled correctly? Or the admin needs to do this.

In RHEL7 we can add a file trans rule for this, as well as have systemd-tmpfiles create it with the right label.

In RHEL6, aren't these directories in the rpm payload?

Looking into this further, it looks like corosync or corosync-notifyd were run by an unconfined user directly rather than through the init script, I would guess. Which is why the sockets were created with the wrong context.

(In reply to Daniel Walsh from comment #6)
> Well it would probably be best that a tool run by the user which creates the
> content should make sure it is labeled correctly? or the admin needs to do
> this.

This has never been a problem before, so why is it becoming a necessity now? I am against adding SELinux-specific bits to the cman init script, and it was never a requirement before.

> In RHEL7 we can add a file trans rule for this. As well as have
> systemd-tmpfiles create it with the right label.

There is no cman in RHEL7, only corosync.

> In RHEL6 aren't these directories in the rpm payload?

Those are not directories; those are 2 sockets that haven't changed since RHEL4.

(In reply to Daniel Walsh from comment #7)
> Looking into this further, it looks like corosync or corosync-notifyd were
> run by an unconfined user directly rather than through the init script, I
> would guess. Which is why the sockets were created with the wrong context.

Nope, in RHEL6 corosync is executed via the cman init script. This behaviour has never changed. cman is a plugin for corosync, and the cman init script will execute corosync for the HA use case.

We added some changes in RHEL6.5 for all cluster services. There was a broken build which could cause these issues. Any chance to do a fresh install with the latest builds and see if you are able to reproduce it?
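The RHEL7 approach mentioned above (a file transition rule plus systemd-tmpfiles) could be sketched roughly as follows. This is an illustrative sketch only, assuming refpolicy-style interfaces and the cluster_t/cluster_var_run_t types shown later in this report; it is not the actual change shipped for this bug.

```
# Hypothetical policy module fragment (refpolicy style): sockets that a
# cluster_t process creates directly under /var/run would automatically
# get the cluster runtime label instead of the generic var_run_t.
files_pid_filetrans(cluster_t, cluster_var_run_t, sock_file)
```

With systemd-tmpfiles, a `z` entry relabels an existing path back to the policy default at boot:

```
# Hypothetical tmpfiles.d fragment, e.g. /etc/tmpfiles.d/cluster.conf
# z = adjust ownership/mode/SELinux label of an existing entry
z /var/run/cman_admin  - - - -
z /var/run/cman_client - - - -
```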
Thank you.

Sure, just point me to the RPMs :)

*** Bug 1000624 has been marked as a duplicate of this bug. ***

Any chance to find out how /var/run/cman_* is created? If it is created in an init script, then the restorecon is needed.

(In reply to Miroslav Grepl from comment #16)
> Any chance to find out how /var/run/cman_* is created? If it is created in
> an init script, then the restorecon is needed.

As written above in comment #8, those are 2 sockets. The corosync daemon, started via cman_tool or via the init script (which in turn calls cman_tool), will create those sockets.

Ok. Thank you. I see a bug.

Could you pls re-test it with https://brewweb.devel.redhat.com/buildinfo?buildID=292125

No more denials with the selinux-policy-3.7.19-213.el6 policy; service contexts are as expected:

```
unconfined_u:system_r:cluster_t:s0       2407 ?  SLsl  0:16 corosync -f
unconfined_u:system_r:fenced_t:s0        2462 ?  Ssl   0:29 fenced
unconfined_u:system_r:dlm_controld_t:s0  2477 ?  Ssl   0:00 dlm_controld
unconfined_u:system_r:gfs_controld_t:s0  2537 ?  Ssl   0:00 gfs_controld
```

Files are also labeled correctly:

```
# ll -Z /var/run/cman*
srw-------. root root unconfined_u:object_r:cluster_var_run_t:s0 /var/run/cman_admin
srw-rw----. root root unconfined_u:object_r:cluster_var_run_t:s0 /var/run/cman_client
-rw-r--r--. root root unconfined_u:object_r:initrc_var_run_t:s0  /var/run/cman.pid
```

and the pacemaker bits:

```
unconfined_u:system_r:cluster_t:s0 root 7795 0.1 0.2  80608  3020 pts/0 S  18:01 0:00 pacemakerd
unconfined_u:system_r:cluster_t:s0 189  7801 0.0 0.9  93480 10068 ?     Ss 18:01 0:00 \_ /usr/libexec/pacemaker/cib
unconfined_u:system_r:cluster_t:s0 root 7802 0.0 0.3  94364  4052 ?     Ss 18:01 0:00 \_ /usr/libexec/pacemaker/stonithd
unconfined_u:system_r:cluster_t:s0 root 7803 0.0 0.3  76072  3172 ?     Ss 18:01 0:00 \_ /usr/libexec/pacemaker/lrmd
unconfined_u:system_r:cluster_t:s0 189  7804 0.0 0.3  89620  3400 ?     Ss 18:01 0:00 \_ /usr/libexec/pacemaker/attrd
unconfined_u:system_r:cluster_t:s0 189  7805 0.0 0.2  81168  2648 ?     Ss 18:01 0:00 \_ /usr/libexec/pacemaker/pengine
unconfined_u:system_r:cluster_t:s0 root 7806 0.0 0.4 106604  4324 ?     Ss 18:01 0:00 \_ /usr/libexec/pacemaker/crmd
```

So far so good :)

Flags are missing here.

*** This bug has been marked as a duplicate of bug 915151 ***
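The verification above comes down to checking the type field (the third component) of each file's `user:role:type:level` security context. A minimal sketch of that check, using the contexts reported by `ll -Z` in this report (the helper name and dictionaries are illustrative, not part of any cluster tooling):

```python
# Minimal sketch: extract the SELinux type from a user:role:type:level
# context string and compare it against the expected type per file.
def selinux_type(context: str) -> str:
    """Return the type field (third component) of an SELinux context."""
    user, role, setype, level = context.split(":", 3)
    return setype

# Expected labels, per the selinux-policy fix verified above.
expected = {
    "/var/run/cman_admin": "cluster_var_run_t",
    "/var/run/cman_client": "cluster_var_run_t",
    "/var/run/cman.pid": "initrc_var_run_t",
}

# Contexts as reported by `ll -Z /var/run/cman*` in the comment above.
observed = {
    "/var/run/cman_admin": "unconfined_u:object_r:cluster_var_run_t:s0",
    "/var/run/cman_client": "unconfined_u:object_r:cluster_var_run_t:s0",
    "/var/run/cman.pid": "unconfined_u:object_r:initrc_var_run_t:s0",
}

for path, ctx in observed.items():
    assert selinux_type(ctx) == expected[path], path
print("all labels match policy expectations")
```

Before the fix, the two sockets carried the generic `var_run_t` type, which is exactly what the earlier `restorecon -nvR /var` output flagged.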