Bug 997357 - Selinux blocks cluster startup (service cman start)
Summary: Selinux blocks cluster startup (service cman start)
Keywords:
Status: CLOSED DUPLICATE of bug 915151
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: selinux-policy
Version: 6.5
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Miroslav Grepl
QA Contact: BaseOS QE Security Team
URL:
Whiteboard:
Duplicates: 1000624
Depends On:
Blocks:
 
Reported: 2013-08-15 09:24 UTC by Jaroslav Kortus
Modified: 2013-10-21 12:07 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-09-17 15:36:02 UTC
Target Upstream Version:
Embargoed:



Description Jaroslav Kortus 2013-08-15 09:24:17 UTC
Description of problem:
SELinux blocks cluster startup (service cman start). This is a regression from RHEL6.4.

Version-Release number of selected component (if applicable):
selinux-policy-3.7.19-211.el6.noarch


How reproducible:
always

Steps to Reproduce:
1. set up a cluster (cluster.conf)
2. service cman start

Actual results:
# service cman start
Starting cluster: 
   Checking if cluster has been disabled at boot... [  OK  ]
   Checking Network Manager... [  OK  ]
   Global setup... [  OK  ]
   Loading kernel modules... [  OK  ]
   Mounting configfs... [  OK  ]
   Starting cman... [  OK  ]
   Waiting for quorum... [  OK  ]
   Starting fenced... [  OK  ]
   Starting dlm_controld... [  OK  ]
   Tuning DLM kernel config... [  OK  ]
   Starting gfs_controld... [  OK  ]
   Unfencing self... fence_node: cannot connect to cman
[FAILED]
Stopping cluster: 
   Leaving fence domain... [  OK  ]
   Stopping gfs_controld... [  OK  ]
   Stopping dlm_controld... [  OK  ]
   Stopping fenced... [  OK  ]
   Stopping cman... [  OK  ]
   Waiting for corosync to shutdown:[  OK  ]
   Unloading kernel modules... [  OK  ]
   Unmounting configfs... [  OK  ]


Expected results:
successful startup

Additional info:
It seems that it's caused by mislabeled files in /var/run:
# ll -Z /var/run/cman_*
srw-------. root root unconfined_u:object_r:var_run_t:s0 /var/run/cman_admin
srw-rw----. root root unconfined_u:object_r:var_run_t:s0 /var/run/cman_client

In RHEL6.4 they are:
srw-------. root root unconfined_u:object_r:corosync_var_run_t:s0 /var/run/cman_admin
srw-rw----. root root unconfined_u:object_r:corosync_var_run_t:s0 /var/run/cman_client
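One way to confirm the mislabel is the cause (mirroring what comment #2 does below) is to run permissive, start the cluster, then ask restorecon what it would change. A minimal sketch; -n reports without modifying anything:

# setenforce 0
# service cman start
# restorecon -nvR /var/run
# setenforce 1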


Denials dump:
----
time->Thu Aug 15 11:23:03 2013
type=SYSCALL msg=audit(1376558583.741:113): arch=c000003e syscall=42 success=yes exit=0 a0=6 a1=7fffb8a95e00 a2=6e a3=7fffb8a95b80 items=0 ppid=1 pid=3947 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=1 comm="dlm_controld" exe="/usr/sbin/dlm_controld" subj=unconfined_u:system_r:dlm_controld_t:s0 key=(null)
type=AVC msg=audit(1376558583.741:113): avc:  denied  { write } for  pid=3947 comm="dlm_controld" name="cman_admin" dev=dm-0 ino=18847 scontext=unconfined_u:system_r:dlm_controld_t:s0 tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
----
time->Thu Aug 15 11:23:03 2013
type=SYSCALL msg=audit(1376558583.586:112): arch=c000003e syscall=42 success=yes exit=0 a0=8 a1=7fffe6771df0 a2=6e a3=7fffe6771b70 items=0 ppid=1 pid=3930 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=1 comm="fenced" exe="/usr/sbin/fenced" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1376558583.586:112): avc:  denied  { write } for  pid=3930 comm="fenced" name="cman_admin" dev=dm-0 ino=18847 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
----
time->Thu Aug 15 11:23:04 2013
type=SYSCALL msg=audit(1376558584.244:114): arch=c000003e syscall=42 success=yes exit=0 a0=6 a1=7fff772b6530 a2=6e a3=7fff772b62b0 items=0 ppid=1 pid=4003 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=1 comm="gfs_controld" exe="/usr/sbin/gfs_controld" subj=unconfined_u:system_r:gfs_controld_t:s0 key=(null)
type=AVC msg=audit(1376558584.244:114): avc:  denied  { write } for  pid=4003 comm="gfs_controld" name="cman_admin" dev=dm-0 ino=18847 scontext=unconfined_u:system_r:gfs_controld_t:s0 tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
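For reference, denials like the ones above can be pulled from the audit log and summarized with the standard tools (a sketch; the audit2allow output is meant for inspection here, not for loading as a local module):

# ausearch -m avc -ts recent | audit2allow
# ausearch -m avc -c fenced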

Comment 1 Daniel Walsh 2013-08-15 19:00:35 UTC
Those should be labeled differently. Does matchpathcon or restorecon on those paths change anything?
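For reference, that check might look like this (a minimal sketch; matchpathcon prints the label the policy expects, and restorecon -n reports what it would change without touching anything):

# matchpathcon /var/run/cman_admin /var/run/cman_client
# restorecon -nv /var/run/cman_admin /var/run/cman_client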

Comment 2 Jaroslav Kortus 2013-08-16 08:57:59 UTC
## reboot+autorelabel
[root@virt-132 ~]# restorecon -nvR /var
[root@virt-132 ~]# setenforce 0
[root@virt-132 ~]# service cman start
Starting cluster: 
   Checking if cluster has been disabled at boot... [  OK  ]
   Checking Network Manager... [  OK  ]
   Global setup... [  OK  ]
   Loading kernel modules... [  OK  ]
   Mounting configfs... [  OK  ]
   Starting cman... [  OK  ]
   Waiting for quorum... [  OK  ]
   Starting fenced... [  OK  ]
   Starting dlm_controld... [  OK  ]
   Tuning DLM kernel config... [  OK  ]
   Starting gfs_controld... [  OK  ]
   Unfencing self... [  OK  ]
   Joining fence domain... [  OK  ]
[root@virt-132 ~]# restorecon -nvR /var
restorecon reset /var/run/cman_admin context unconfined_u:object_r:var_run_t:s0->unconfined_u:object_r:cluster_var_run_t:s0
restorecon reset /var/run/cman_client context unconfined_u:object_r:var_run_t:s0->unconfined_u:object_r:cluster_var_run_t:s0
[root@virt-132 ~]#

Comment 3 Daniel Walsh 2013-08-16 17:39:23 UTC
That looks like the process that created /var/run/cman_* was not running with a context.  Are these files/directories created in an init script?

Comment 4 Chris Feist 2013-08-16 18:47:09 UTC
(In reply to Daniel Walsh from comment #3)
> That looks like the process that created /var/run/cman_* was not running
> with a context.  Are these files/directories created in an init script?

I believe /var/run/cman_admin is created during a cman join request, which is issued in the init script. What should we be doing differently to make sure files created from the init script get the right context?

Adding Fabio & Chrissie, since they may be able to answer this question better than me.

Comment 5 Fabio Massimo Di Nitto 2013-08-17 05:10:39 UTC
(In reply to Daniel Walsh from comment #3)
> That looks like the process that created /var/run/cman_* was not running
> with a context.  Are these files/directories created in an init script?

They are created by cman_tool, either executed manually or by the init script. It looks like the regression was introduced between last week and this week. Can't be more specific, as we were testing on Friday and it was working.

Comment 6 Daniel Walsh 2013-08-17 10:58:30 UTC
Well, it would probably be best for a tool run by the user that creates the content to make sure it is labeled correctly, or the admin needs to do this.

In RHEL7 we can add a file trans rule for this, as well as have systemd-tmpfiles create it with the right label.

In RHEL6, aren't these directories in the rpm payload?
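For illustration, the RHEL7-style named file transition mentioned here might look like the following refpolicy sketch (the domain and type names are taken from the labels that show up later in this bug, and are assumptions rather than the actual policy change):

filetrans_pattern(cluster_t, var_run_t, cluster_var_run_t, sock_file, "cman_admin")
filetrans_pattern(cluster_t, var_run_t, cluster_var_run_t, sock_file, "cman_client")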

Comment 7 Daniel Walsh 2013-08-17 11:01:30 UTC
Looking into this further, it looks like corosync or corosync-notifyd was run by an unconfined user directly rather than through the init script, I would guess, which is why the sockets were created with the wrong context.

Comment 8 Fabio Massimo Di Nitto 2013-08-18 06:26:03 UTC
(In reply to Daniel Walsh from comment #6)
> Well it would probably be best that a tool run by the user which creates the
> content should make sure it is labeled correctly?  or the admin needs to do
> this.

This has never been a problem before, so why is it becoming a necessity now? I am against adding SELinux-specific steps to the cman init script; it was never a requirement before.

> 
> In RHEL7 we can add a file trans rule for this.  As well as have
> systemd-tmpfiles create it with the right label.  

There is no cman in RHEL7, only corosync.

> 
> In RHEL6 aren't these directories in the rpm payload?

Those are not directories, those are 2 sockets that haven't changed since RHEL4.

Comment 9 Fabio Massimo Di Nitto 2013-08-18 06:27:06 UTC
(In reply to Daniel Walsh from comment #7)
> Looking into this further, it looks like corosync or corosync-notifyd were
> run by an unconfined user directly rather then through the init script, I
> would guess.  Which is why the sockets were created with the wrong context.

Nope, in RHEL6 corosync is executed via the cman init script. This behaviour has never changed.

cman is a plugin for corosync, and the cman init script will execute corosync for the HA use case.

Comment 11 Miroslav Grepl 2013-08-20 08:53:37 UTC
We added some changes in RHEL6.5 for all cluster services, and there was a broken build which could cause these issues. Any chance you could do a fresh install with the latest builds and see if you are able to reproduce it?

Thank you.
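To confirm which policy build is actually installed before re-testing, a trivial check:

# rpm -q selinux-policy selinux-policy-targeted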

Comment 12 Jaroslav Kortus 2013-08-20 09:33:21 UTC
sure, just point me to the RPMs :)

Comment 15 Miroslav Grepl 2013-08-27 08:13:04 UTC
*** Bug 1000624 has been marked as a duplicate of this bug. ***

Comment 16 Miroslav Grepl 2013-08-27 08:27:28 UTC
Any chance you could find out how /var/run/cman_* is created? If it is created in an init script, then a restorecon is needed.
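If the init-script route were taken, the restorecon mentioned here would amount to a single line after the sockets are created (a sketch only; comment #8 argues against putting SELinux-specific steps in the cman init script):

restorecon /var/run/cman_admin /var/run/cman_client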

Comment 17 Fabio Massimo Di Nitto 2013-08-27 08:34:35 UTC
(In reply to Miroslav Grepl from comment #16)
> Any chance to find out how /var/run/cman_* is created? If it is created in
> an init script, then the restorecon is needed.

As written above in comment #8, those are 2 sockets.

The corosync daemon, started via cman_tool or via the init script (which in turn calls cman_tool), will create those sockets.
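A quick way to confirm which domain corosync actually ends up in, whichever way it was started (same kind of output as comment #21 below):

# ps -eZ | grep corosync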

Comment 19 Miroslav Grepl 2013-08-27 11:28:10 UTC
Ok. Thank you. I see a bug.

Comment 20 Miroslav Grepl 2013-08-27 12:45:40 UTC
Could you please re-test it with

https://brewweb.devel.redhat.com/buildinfo?buildID=292125

Comment 21 Jaroslav Kortus 2013-08-27 15:59:10 UTC
No more denials with the selinux-policy-3.7.19-213.el6 policy; service contexts are as expected:

unconfined_u:system_r:cluster_t:s0 2407 ?      SLsl   0:16 corosync -f
unconfined_u:system_r:fenced_t:s0 2462 ?       Ssl    0:29 fenced
unconfined_u:system_r:dlm_controld_t:s0 2477 ? Ssl    0:00 dlm_controld
unconfined_u:system_r:gfs_controld_t:s0 2537 ? Ssl    0:00 gfs_controld

Files are also labeled correctly:
# ll -Z /var/run/cman*
srw-------. root root unconfined_u:object_r:cluster_var_run_t:s0 /var/run/cman_admin
srw-rw----. root root unconfined_u:object_r:cluster_var_run_t:s0 /var/run/cman_client
-rw-r--r--. root root unconfined_u:object_r:initrc_var_run_t:s0 /var/run/cman.pid

Comment 22 Jaroslav Kortus 2013-08-27 16:03:07 UTC
and pacemaker bits:
unconfined_u:system_r:cluster_t:s0 root   7795  0.1  0.2  80608  3020 pts/0    S    18:01   0:00 pacemakerd
unconfined_u:system_r:cluster_t:s0 189    7801  0.0  0.9  93480 10068 ?        Ss   18:01   0:00  \_ /usr/libexec/pacemaker/cib
unconfined_u:system_r:cluster_t:s0 root   7802  0.0  0.3  94364  4052 ?        Ss   18:01   0:00  \_ /usr/libexec/pacemaker/stonithd
unconfined_u:system_r:cluster_t:s0 root   7803  0.0  0.3  76072  3172 ?        Ss   18:01   0:00  \_ /usr/libexec/pacemaker/lrmd
unconfined_u:system_r:cluster_t:s0 189    7804  0.0  0.3  89620  3400 ?        Ss   18:01   0:00  \_ /usr/libexec/pacemaker/attrd
unconfined_u:system_r:cluster_t:s0 189    7805  0.0  0.2  81168  2648 ?        Ss   18:01   0:00  \_ /usr/libexec/pacemaker/pengine
unconfined_u:system_r:cluster_t:s0 root   7806  0.0  0.4 106604  4324 ?        Ss   18:01   0:00  \_ /usr/libexec/pacemaker/crmd

so far so good :)

Comment 23 Miroslav Grepl 2013-09-06 10:28:44 UTC
Flags are missing here.

Comment 24 Miroslav Grepl 2013-09-17 15:36:02 UTC

*** This bug has been marked as a duplicate of bug 915151 ***

