Bug 619893 - luci generates selinux avcs
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: selinux-policy
Version: 6.0
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assigned To: Miroslav Grepl
QA Contact: Cluster QE
Depends On:
Blocks: 616643 619918 620576
Reported: 2010-07-30 15:39 EDT by Paul Kennedy
Modified: 2015-04-19 20:48 EDT
CC: 13 users

See Also:
Fixed In Version: selinux-policy-3.7.19-37.el6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 619918 620576
Environment:
Last Closed: 2010-11-15 09:48:14 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Description Paul Kennedy 2010-07-30 15:39:11 EDT
Description of problem:
Running luci appears to cause a problem with corosync, resulting in a stale PID file.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Run 'service cman start' at each cluster node.
2. Verify that the cluster software has started at each node.
3. Run 'service cman status' at each node to verify that the cluster is running.
4. Start luci. Make sure that luci is configured to manage the cluster.
5. Run 'service cman status' at each node to verify that the cluster is still running.
  
Actual results:
# service cman status
Found stale pid file

Expected results:
# service cman status
cluster is running.

Additional info:

From /var/log/messages:

Jul 30 08:24:58 doc-04 corosync[1898]:   [MAIN  ] Completed service synchronization, ready to provide service.
Jul 30 08:25:00 doc-04 fenced[1953]: fenced 3.0.12 started
Jul 30 08:25:00 doc-04 dlm_controld[1974]: dlm_controld 3.0.12 started
Jul 30 08:25:00 doc-04 gfs_controld[2025]: gfs_controld 3.0.12 started
Jul 30 08:42:54 doc-04 modclusterd: startup succeeded
Jul 30 08:42:54 doc-04 kernel: corosync[2257]: segfault at 8 ip 00000036f460d6f0 sp 00007f6fe35cdb58 error 4 in libpthread-2.12.so[36f4600000+17000]
Jul 30 08:42:55 doc-04 abrt[2258]: saved core dump of pid 1898 (/usr/sbin/corosync) to /var/spool/abrt/ccpp-1280497375-1898.new/coredump (56344576 bytes)
Jul 30 08:42:55 doc-04 abrtd: Directory 'ccpp-1280497375-1898' creation detected
Jul 30 08:42:55 doc-04 dlm_controld[1974]: cluster is down, exiting
Jul 30 08:42:55 doc-04 dlm_controld[1974]: daemon cpg_dispatch error 2
Jul 30 08:42:55 doc-04 gfs_controld[2025]: cluster is down, exiting
Jul 30 08:42:55 doc-04 gfs_controld[2025]: daemon cpg_dispatch error 2
Jul 30 08:42:55 doc-04 fenced[1953]: cluster is down, exiting
Jul 30 08:42:55 doc-04 fenced[1953]: daemon cpg_dispatch error 2
Jul 30 08:42:55 doc-04 fenced[1953]: cpg_dispatch error 2
Jul 30 08:42:55 doc-04 abrtd: Crash is in database already (dup of /var/spool/abrt/ccpp-1280331751-30293)
Jul 30 08:42:55 doc-04 abrtd: Deleting crash ccpp-1280497375-1898 (dup of ccpp-1280331751-30293), sending dbus signal
Jul 30 08:42:57 doc-04 kernel: dlm: closing connection to node 3
Jul 30 08:42:57 doc-04 kernel: dlm: closing connection to node 2
Jul 30 08:42:57 doc-04 kernel: dlm: closing connection to node 1
Comment 1 Paul Kennedy 2010-07-30 15:42:32 EDT
By the way, this is with the 722.0 build.
Comment 3 RHEL Product and Program Management 2010-07-30 16:07:34 EDT
This issue has been proposed when we are only considering blocker
issues in the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **
Comment 4 Steven Dake 2010-07-30 16:53:57 EDT
Turning off SELinux stops the core dump.
Comment 5 Steven Dake 2010-07-30 17:04:02 EDT
avcs:
type=1400 audit(1280503340.368:54): avc:  denied  { name_connect } for  pid=1876 comm="modclusterd" scontext=system_u:system_r:ricci_modclusterd_t:s0 tcontext=system_u:object_r:port_t:s0 tclass=tcp_socket
type=1400 audit(1280503340.368:55): avc:  denied  { module_request } for  pid=1876 comm="modclusterd" kmod="net-pf-3" scontext=system_u:system_r:ricci_modclusterd_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=system
Comment 6 Steven Dake 2010-07-30 17:08:35 EDT
selinux-policy-targeted-3.7.19-32.el6.noarch
libselinux-2.0.94-1.el6.x86_64
selinux-policy-3.7.19-32.el6.noarch
libselinux-utils-2.0.94-1.el6.x86_64
Comment 7 Steven Dake 2010-07-30 17:12:57 EDT
This is a cman init script issue.  I believe cman is behaving as expected when the corosync process is no longer running (i.e., it has failed via core dump).

Reassigning to Fabio for further validation of those assumptions, since the component is the cman init script.
Comment 8 Steven Dake 2010-07-30 17:13:33 EDT
Ignore comment #7; it was meant for a different BZ...
Comment 9 Miroslav Grepl 2010-08-02 09:22:00 EDT
(In reply to comment #5)
> avcs:
> type=1400 audit(1280503340.368:54): avc:  denied  { name_connect } for 
> pid=1876 comm="modclusterd" scontext=system_u:system_r:ricci_modclusterd_t:s0
> tcontext=system_u:object_r:port_t:s0 tclass=tcp_socket

To which port is ricci-modclusterd connecting? Could you add the full AVC message?
Comment 10 Chris Feist 2010-08-02 17:43:24 EDT
I see the following avc's with corosync:


Aug  2 16:36:12 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "read write" access on control_buffer-umdEhg. For complete SELinux messages. run sealert -l 5650187a-ff57-4530-b194-b544aed24296
Aug  2 16:36:13 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "unlink" access on control_buffer-umdEhg. For complete SELinux messages. run sealert -l bc80a856-7af6-4105-adf4-e4ac8d04a29e
Aug  2 16:36:13 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "read write" access on request_buffer-abnIKe. For complete SELinux messages. run sealert -l 5650187a-ff57-4530-b194-b544aed24296
Aug  2 16:36:13 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "unlink" access on request_buffer-abnIKe. For complete SELinux messages. run sealert -l bc80a856-7af6-4105-adf4-e4ac8d04a29e
Aug  2 16:36:13 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "read write" access on response_buffer-KRU0dd. For complete SELinux messages. run sealert -l 5650187a-ff57-4530-b194-b544aed24296
Aug  2 16:36:13 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "unlink" access on response_buffer-KRU0dd. For complete SELinux messages. run sealert -l bc80a856-7af6-4105-adf4-e4ac8d04a29e
Aug  2 16:36:13 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "read write" access on dispatch_buffer-jWnyHb. For complete SELinux messages. run sealert -l 5650187a-ff57-4530-b194-b544aed24296
Aug  2 16:36:13 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "unlink" access on dispatch_buffer-jWnyHb. For complete SELinux messages. run sealert -l bc80a856-7af6-4105-adf4-e4ac8d04a29e

[root@ask-03 ~]# sealert -l 5650187a-ff57-4530-b194-b544aed24296

Summary:

SELinux is preventing /usr/sbin/corosync "read write" access on
dispatch_buffer-jWnyHb.

Detailed Description:

SELinux denied access requested by corosync. It is not expected that this access
is required by corosync and this access may signal an intrusion attempt. It is
also possible that the specific version or configuration of the application is
causing it to require additional access.

Allowing Access:

You can generate a local policy module to allow this access - see FAQ
(http://docs.fedoraproject.org/selinux-faq-fc5/#id2961385) Please file a bug
report.

Additional Information:

Source Context                unconfined_u:system_r:corosync_t:s0
Target Context                unconfined_u:object_r:initrc_state_t:s0
Target Objects                dispatch_buffer-jWnyHb [ file ]
Source                        corosync
Source Path                   /usr/sbin/corosync
Port                          <Unknown>
Host                          ask-03
Source RPM Packages           corosync-1.2.3-13.el6
Target RPM Packages           
Policy RPM                    selinux-policy-3.7.19-32.el6
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Plugin Name                   catchall
Host Name                     ask-03
Platform                      Linux ask-03 2.6.32-44.1.el6.x86_64 #1 SMP Wed Jul
                              14 18:51:29 EDT 2010 x86_64 x86_64
Alert Count                   4
First Seen                    Mon Aug  2 16:36:06 2010
Last Seen                     Mon Aug  2 16:36:06 2010
Local ID                      5650187a-ff57-4530-b194-b544aed24296
Line Numbers                  

Raw Audit Messages            

node=ask-03 type=AVC msg=audit(1280784966.136:69): avc:  denied  { read write } for  pid=2844 comm="corosync" name="dispatch_buffer-jWnyHb" dev=tmpfs ino=20078 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file

node=ask-03 type=SYSCALL msg=audit(1280784966.136:69): arch=c000003e syscall=2 success=no exit=-13 a0=2045d88 a1=2 a2=180 a3=100 items=0 ppid=1 pid=2844 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=3 comm="corosync" exe="/usr/sbin/corosync" subj=unconfined_u:system_r:corosync_t:s0 key=(null)

[root@ask-03 ~]# sealert -l bc80a856-7af6-4105-adf4-e4ac8d04a29e

Summary:

SELinux is preventing /usr/sbin/corosync "unlink" access on
dispatch_buffer-jWnyHb.

Detailed Description:

SELinux denied access requested by corosync. It is not expected that this access
is required by corosync and this access may signal an intrusion attempt. It is
also possible that the specific version or configuration of the application is
causing it to require additional access.

Allowing Access:

You can generate a local policy module to allow this access - see FAQ
(http://docs.fedoraproject.org/selinux-faq-fc5/#id2961385) Please file a bug
report.

Additional Information:

Source Context                unconfined_u:system_r:corosync_t:s0
Target Context                unconfined_u:object_r:initrc_state_t:s0
Target Objects                dispatch_buffer-jWnyHb [ file ]
Source                        corosync
Source Path                   /usr/sbin/corosync
Port                          <Unknown>
Host                          ask-03
Source RPM Packages           corosync-1.2.3-13.el6
Target RPM Packages           
Policy RPM                    selinux-policy-3.7.19-32.el6
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Plugin Name                   catchall
Host Name                     ask-03
Platform                      Linux ask-03 2.6.32-44.1.el6.x86_64 #1 SMP Wed Jul
                              14 18:51:29 EDT 2010 x86_64 x86_64
Alert Count                   4
First Seen                    Mon Aug  2 16:36:06 2010
Last Seen                     Mon Aug  2 16:36:06 2010
Local ID                      bc80a856-7af6-4105-adf4-e4ac8d04a29e
Line Numbers                  

Raw Audit Messages            

node=ask-03 type=AVC msg=audit(1280784966.136:70): avc:  denied  { unlink } for  pid=2844 comm="corosync" name="dispatch_buffer-jWnyHb" dev=tmpfs ino=20078 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file

node=ask-03 type=SYSCALL msg=audit(1280784966.136:70): arch=c000003e syscall=87 success=no exit=-13 a0=2045d88 a1=2 a2=d a3=100 items=0 ppid=1 pid=2844 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=3 comm="corosync" exe="/usr/sbin/corosync" subj=unconfined_u:system_r:corosync_t:s0 key=(null)
Comment 11 Miroslav Grepl 2010-08-03 06:13:01 EDT
The problem is that we don't have a policy for luci, and I guess luci creates 'dispatch_buffer-jWnyHb' with the initrc_state_t context.

Could you add your output of the following command:

ps -eZ | grep initrc
Comment 12 Miroslav Grepl 2010-08-03 08:34:26 EDT
my output:

# ps -eZ | grep initrc
staff_u:system_r:initrc_t:s0     7510 ?        00:00:00 paster

from luci init script:

/usr/bin/paster serve --daemon --user "$LUCI_USER" --group "$LUCI_GROUP" "$LUCI_CONFIG_FILE" --log-file="$LUCI_PASTER_LOG" --pid-file="$LUCI_PID_FILE"

# rpm -qf /usr/bin/paster 
python-paste-script-1.7.3-4.el6.noarch


Dan,
I would imagine a new policy for paster and new types for luci files.

For example:

---

type paster_t;
type paster_exec_t;
init_daemon_domain(paster_t, paster_exec_t)

type luci_var_lib_t;
files_type(luci_var_lib_t)

type luci_conf_t;
files_config_file(luci_conf_t)

---


What do you think?
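[Editor's note: type declarations like those sketched in comment 12 are normally paired with file-context entries in a .fc file so that restorecon and package installation apply the labels. The paths below are hypothetical, purely illustrative, and not taken from any shipped policy:

```
# luci.fc -- hypothetical file-context entries pairing with the
# types sketched above (paths are assumptions, not the real policy)
/usr/bin/paster        --    gen_context(system_u:object_r:paster_exec_t,s0)
/var/lib/luci(/.*)?          gen_context(system_u:object_r:luci_var_lib_t,s0)
/etc/sysconfig/luci    --    gen_context(system_u:object_r:luci_conf_t,s0)
```
]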
Comment 13 Daniel Walsh 2010-08-03 10:04:02 EDT
We seem to be having an explosion of types.  Can any of this fit under an existing type?

I have no idea what paster is doing.
Comment 14 Miroslav Grepl 2010-08-03 10:24:11 EDT
I have just created a local paster policy. I think it would be a good idea to test this policy in a real cluster configuration. We will see the AVC messages and then we can choose a type for paster.
Comment 15 Chris Feist 2010-08-03 10:25:58 EDT
(In reply to comment #12)
> my output:
> 
> # ps -eZ | grep initrc
> staff_u:system_r:initrc_t:s0     7510 ?        00:00:00 paster
> 
> from luci init script:
> 
> /usr/bin/paster serve --daemon --user "$LUCI_USER" --group "$LUCI_GROUP"
> "$LUCI_CONFIG_FILE" --log-file="$LUCI_PASTER_LOG" --pid-file="$LUCI_PID_FILE"
> 
> # rpm -qf /usr/bin/paster 
> python-paste-script-1.7.3-4.el6.noarch
> 
> 
> Dan,
> I would imagine a new policy for paster and new types for luci files
> 
> For example:
> 
> ---
> 
> type paster_t;
> type paster_exec_t;
> init_daemon_domain(paster_t, paster_exec_t)
> 
> type luci_var_lib_t;
> files_type(luci_var_lib_t)
> 
> type luci_conf_t;
> files_config_file(luci_conf_t)
> 
> ---
> 
> 
> What do you think?    

Luci isn't creating those files; it isn't even running on the node.  I believe corosync (or some other corosync process) is creating them.
Comment 16 Miroslav Grepl 2010-08-03 10:40:21 EDT
Chris,
could you try to execute

ps -eZ | grep initrc

on the node.
Comment 17 Chris Feist 2010-08-03 14:46:15 EDT
There are no results with this query. If I do a 'service cman start' and, while it's starting, run 'ps -eZ', I get the following lines:


unconfined_u:system_r:initrc_t:s0 16578 pts/2  00:00:00 cman
unconfined_u:system_r:initrc_t:s0 17071 pts/2  00:00:00 cman
unconfined_u:system_r:initrc_t:s0 17072 pts/2  00:00:00 fence_tool

(But then the init script fails and 'ps -eZ | grep initrc' returns nothing.)
Comment 18 Daniel Walsh 2010-08-03 15:43:40 EDT
Do any of these tools create files in /dev/shm?
Comment 19 Miroslav Grepl 2010-08-04 05:02:23 EDT
Chris,
could you also try to execute

# chcon -t fenced_exec_t /usr/sbin/fence_tool

at each cluster node and test it.
Comment 20 Miroslav Grepl 2010-08-04 07:58:28 EDT
Chris,
are you trying to manage the cluster node through luci? I mean, for example, reboot node, join cluster, leave cluster...

I am pretty sure we will need to add some additional rules for ricci policy. I am playing with it.
Comment 21 Miroslav Grepl 2010-08-04 08:31:29 EDT
> I am pretty sure we will need to add some additional rules for ricci policy. I
> am playing with it.    

At least we should add


kernel_read_system_state(ricci_t)

# ricci can restart node
optional_policy(`
        shutdown_domtrans(ricci_t)
')
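[Editor's note: rules like these would typically be packaged as a local policy module built against the selinux-policy-devel headers. A minimal sketch of what that module might look like (the module name "myricci" is hypothetical):

```
# myricci.te -- hypothetical local module wrapping the rules from
# comment 21; built refpolicy-style with the selinux-policy-devel Makefile
policy_module(myricci, 1.0)

require {
    type ricci_t;
}

# let ricci read system state from /proc
kernel_read_system_state(ricci_t)

# ricci can restart the node
optional_policy(`
        shutdown_domtrans(ricci_t)
')
```
]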
Comment 22 Steven Dake 2010-08-04 13:49:57 EDT
Chris,

When luci retrieves cluster state, it communicates with corosync in some way.  This AVC results in a corosync segfault.  I couldn't find the AVCs or the app that accesses corosync.
Comment 23 Chris Feist 2010-08-04 14:31:53 EDT
Luci communicates with ricci, which does all the work on behalf of luci.

If I attempt to startup a cluster without running ricci, I still get avc
denials in corosync.


type=1400 audit(1280946664.270:12): avc:  denied  { read write } for  pid=11346
comm="corosync" name="control_buffer-7e3ueN" dev=tmpfs ino=21878
scontext=unconfined_u:system_r:corosync_t:s0
tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
type=1400 audit(1280946664.292:13): avc:  denied  { open } for  pid=11346
comm="corosync" name="control_buffer-7e3ueN" dev=tmpfs ino=21878
scontext=unconfined_u:system_r:corosync_t:s0
tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
type=1400 audit(1280946664.313:14): avc:  denied  { unlink } for  pid=11346
comm="corosync" name="control_buffer-7e3ueN" dev=tmpfs ino=21878
scontext=unconfined_u:system_r:corosync_t:s0 tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file

There is definitely an issue with corosync here (as well as a different issue with ricci).
Comment 24 Chris Feist 2010-08-04 14:36:48 EDT
(In reply to comment #20)
> Chris,
> are you trying to manage cluster-node thru the luci? I mean for example, reboot
> node, join cluster, leave cluster...
> 
> I am pretty sure we will need to add some additional rules for ricci policy. I
> am playing with it.    

yes, ricci does reboot nodes and run other commands.  Once you've got the policy ready for me to test, I'll try it out and see if we get any other AVC denials.
Comment 25 Miroslav Grepl 2010-08-05 02:25:40 EDT
(In reply to comment #23)
> Luci communicates with ricci, which does all the work on behalf of luci.
> 
> If I attempt to startup a cluster without running ricci, I still get avc
> denials in corosync.
> 
> 
> type=1400 audit(1280946664.270:12): avc:  denied  { read write } for  pid=11346
> comm="corosync" name="control_buffer-7e3ueN" dev=tmpfs ino=21878
> scontext=unconfined_u:system_r:corosync_t:s0
> tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
> type=1400 audit(1280946664.292:13): avc:  denied  { open } for  pid=11346
> comm="corosync" name="control_buffer-7e3ueN" dev=tmpfs ino=21878
> scontext=unconfined_u:system_r:corosync_t:s0
> tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
> type=1400 audit(1280946664.313:14): avc:  denied  { unlink } for  pid=11346
> comm="corosync" name="control_buffer-7e3ueN" dev=tmpfs ino=21878
> scontext=unconfined_u:system_r:corosync_t:s0
> tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
> 
> There is definitely an issue with corosync here (as well as a different issue
> with ricci).    

Did you try to execute

# chcon -t fenced_exec_t /usr/sbin/fence_tool
Comment 26 Chris Feist 2010-08-05 12:37:39 EDT
type=AVC msg=audit(1281026165.193:1887): avc:  denied  { connectto } for  pid=14208 comm="fence_tool" path=0066656E6365645F736F636B scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:system_r:fenced_t:s0 tclass=unix_stream_socket
type=SYSCALL msg=audit(1281026165.193:1887): arch=c000003e syscall=42 success=yes exit=128 a0=3 a1=7fffcfeb68f0 a2=e a3=7fffcfeb68f3 items=0 ppid=14207 pid=14208 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=307 comm="fence_tool" exe="/usr/sbin/fence_tool" subj=unconfined_u:system_r:fenced_t:s0 key=(null)

I get these messages when I start cman (under permissive) after the chcon command.
Comment 27 Miroslav Grepl 2010-08-06 06:28:34 EDT
So the AVC messages (from comment 23) are gone.

If you execute

# grep fenced /var/log/audit/audit.log | audit2allow -M myfenced
# semodule -i myfenced

does it work?
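[Editor's note: for the connectto denial in comment 26, the module that audit2allow would generate should look roughly like the following. This is a sketch of the expected output, not the actual generated file:

```
# myfenced.te -- approximately what 'audit2allow -M myfenced' would emit
# for the comment 26 AVC (sketch; the real generated file may differ)
module myfenced 1.0;

require {
        type fenced_t;
        class unix_stream_socket connectto;
}

# allow fence_tool (running in fenced_t) to connect to fenced's socket
allow fenced_t self:unix_stream_socket connectto;
```
]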
Comment 28 Miroslav Grepl 2010-08-06 08:25:22 EDT
Nate, Jaroslav,

could you try to do some tests with the following change

chcon -t fenced_exec_t /usr/sbin/fence_tool
Comment 29 Miroslav Grepl 2010-08-06 09:23:05 EDT
(In reply to comment #24)
> (In reply to comment #20)
> > Chris,
> > are you trying to manage cluster-node thru the luci? I mean for example, reboot
> > node, join cluster, leave cluster...
> > 
> > I am pretty sure we will need to add some additional rules for ricci policy. I
> > am playing with it.    
> 
> yes, ricci does reboot nodes and run other commands.  Once you've got the
> policy ready for me to test, I'll try it out and see if we get any other avc
> denials.    

Fixes for luci and ricci were added to selinux-policy-3.7.19-36.el6.noarch. The packages are available from brew. Could you test it? Thanks.
Comment 30 Chris Feist 2010-08-06 15:41:18 EDT
I get these denials now:

when starting a cluster:

type=AVC msg=audit(1281123375.183:17294): avc:  denied  { read write } for  pid=1921 comm="corosync" name="control_buffer-lbJkUT" dev=tmpfs ino=14790 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
type=AVC msg=audit(1281123375.183:17294): avc:  denied  { open } for  pid=1921 comm="corosync" name="control_buffer-lbJkUT" dev=tmpfs ino=14790 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
type=AVC msg=audit(1281123375.183:17295): avc:  denied  { unlink } for  pid=1921 comm="corosync" name="control_buffer-lbJkUT" dev=tmpfs ino=14790 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file

when rebooting the node:


type=AVC msg=audit(1281123434.822:17296): avc:  denied  { execute } for  pid=2786 comm="reboot" name="shutdown" dev=dm-0 ino=392528 scontext=unconfined_u:system_r:ricci_t:s0 tcontext=system_u:object_r:shutdown_exec_t:s0 tclass=file
type=AVC msg=audit(1281123434.822:17296): avc:  denied  { read open } for  pid=2786 comm="reboot" name="shutdown" dev=dm-0 ino=392528 scontext=unconfined_u:system_r:ricci_t:s0 tcontext=system_u:object_r:shutdown_exec_t:s0 tclass=file
type=AVC msg=audit(1281123434.822:17296): avc:  denied  { execute_no_trans } for  pid=2786 comm="reboot" path="/sbin/shutdown" dev=dm-0 ino=392528 scontext=unconfined_u:system_r:ricci_t:s0 tcontext=system_u:object_r:shutdown_exec_t:s0 tclass=file
type=AVC msg=audit(1281123434.828:17297): avc:  denied  { write } for  pid=2786 comm="shutdown" name="wtmp" dev=dm-0 ino=262231 scontext=unconfined_u:system_r:ricci_t:s0 tcontext=system_u:object_r:wtmp_t:s0 tclass=file
Comment 31 Miroslav Grepl 2010-08-09 06:44:18 EDT
(In reply to comment #30)
> 
> when rebooting the node:
> 
> 
> type=AVC msg=audit(1281123434.822:17296): avc:  denied  { execute } for 
> pid=2786 comm="reboot" name="shutdown" dev=dm-0 ino=392528
> scontext=unconfined_u:system_r:ricci_t:s0
> tcontext=system_u:object_r:shutdown_exec_t:s0 tclass=file
> type=AVC msg=audit(1281123434.822:17296): avc:  denied  { read open } for 
> pid=2786 comm="reboot" name="shutdown" dev=dm-0 ino=392528
> scontext=unconfined_u:system_r:ricci_t:s0
> tcontext=system_u:object_r:shutdown_exec_t:s0 tclass=file
> type=AVC msg=audit(1281123434.822:17296): avc:  denied  { execute_no_trans }
> for  pid=2786 comm="reboot" path="/sbin/shutdown" dev=dm-0 ino=392528
> scontext=unconfined_u:system_r:ricci_t:s0
> tcontext=system_u:object_r:shutdown_exec_t:s0 tclass=file
> type=AVC msg=audit(1281123434.828:17297): avc:  denied  { write } for  pid=2786
> comm="shutdown" name="wtmp" dev=dm-0 ino=262231
> scontext=unconfined_u:system_r:ricci_t:s0 tcontext=system_u:object_r:wtmp_t:s0
> tclass=file    

I will fix it.
Comment 32 Miroslav Grepl 2010-08-09 06:48:52 EDT
(In reply to comment #30)
> I get these denials now:
> 
> when starting a cluster:
> 
> type=AVC msg=audit(1281123375.183:17294): avc:  denied  { read write } for 
> pid=1921 comm="corosync" name="control_buffer-lbJkUT" dev=tmpfs ino=14790
> scontext=unconfined_u:system_r:corosync_t:s0
> tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
> type=AVC msg=audit(1281123375.183:17294): avc:  denied  { open } for  pid=1921
> comm="corosync" name="control_buffer-lbJkUT" dev=tmpfs ino=14790
> scontext=unconfined_u:system_r:corosync_t:s0
> tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
> type=AVC msg=audit(1281123375.183:17295): avc:  denied  { unlink } for 
> pid=1921 comm="corosync" name="control_buffer-lbJkUT" dev=tmpfs ino=14790
> scontext=unconfined_u:system_r:corosync_t:s0
> tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
> 

Are you getting these AVC messages with fence_tool labelled as fenced_exec_t?

# ls -Z /usr/sbin/fence_tool
Comment 33 Chris Feist 2010-08-09 17:15:41 EDT
I just ran the 'chcon -t fenced_exec_t /usr/sbin/fence_tool' command and that does make the corosync error messages go away.

However, when rebooting a node I still get this avc:


type=AVC msg=audit(1281388449.815:3160): avc:  denied  { write } for  pid=20004 comm="shutdown" name="wtmp" dev=dm-0 ino=262231 scontext=unconfined_u:system_r:ricci_t:s0 tcontext=system_u:object_r:wtmp_t:s0 tclass=file
type=SYSCALL msg=audit(1281388449.815:3160): arch=c000003e syscall=2 success=yes exit=4 a0=40b042 a1=1 a2=2 a3=8 items=0 ppid=19953 pid=20004 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=1 comm="shutdown" exe="/sbin/shutdown" subj=unconfined_u:system_r:ricci_t:s0 key=(null)
Comment 34 Miroslav Grepl 2010-08-10 03:11:54 EDT
(In reply to comment #33)
> I just ran the 'chcon -t fenced_exec_t /usr/sbin/fence_tool' command and that
> does make the corosync error messages go away.

Ok. Thanks.

> 
> However, when rebooting a node I still get this avc:
> 
> 
> type=AVC msg=audit(1281388449.815:3160): avc:  denied  { write } for  pid=20004
> comm="shutdown" name="wtmp" dev=dm-0 ino=262231
> scontext=unconfined_u:system_r:ricci_t:s0 tcontext=system_u:object_r:wtmp_t:s0
> tclass=file
> type=SYSCALL msg=audit(1281388449.815:3160): arch=c000003e syscall=2
> success=yes exit=4 a0=40b042 a1=1 a2=2 a3=8 items=0 ppid=19953 pid=20004 auid=0
> uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=1
> comm="shutdown" exe="/sbin/shutdown" subj=unconfined_u:system_r:ricci_t:s0
> key=(null)    

It will be fixed in selinux-policy-3.7.19-37.el6.
Comment 35 Miroslav Grepl 2010-08-10 14:15:00 EDT
Fixed in selinux-policy-3.7.19-37.el6.noarch
Comment 36 Chris Feist 2010-08-10 15:27:07 EDT
This appears to have fixed the avc denials.  The only question I have now is where is the best place for the 'chcon -t fenced_exec_t /usr/sbin/fence_tool'?  Should that be done in the rpm during installation?  Or is there a place where selinux will do that?
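[Editor's note: a persistent alternative to a one-off chcon is a file-context rule, either shipped in the policy package's .fc file or added locally. The following is an untested sketch, not taken from the shipped policy:

```
# In a policy package, the label would ship as a file-contexts entry:
#   /usr/sbin/fence_tool  --  gen_context(system_u:object_r:fenced_exec_t,s0)
#
# Locally, an admin could make the same label persistent with:
#   semanage fcontext -a -t fenced_exec_t /usr/sbin/fence_tool
#   restorecon -v /usr/sbin/fence_tool
```
]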
Comment 37 Chris Feist 2010-08-10 15:53:03 EDT
Never mind, I see that installing the new policy updates the context for fence_tool.  We should be good to go.
Comment 40 releng-rhel@redhat.com 2010-11-15 09:48:14 EST
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.
