Bug 619893 - luci generates selinux avcs
Summary: luci generates selinux avcs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: selinux-policy
Version: 6.0
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Miroslav Grepl
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: 616643 619918 620576
 
Reported: 2010-07-30 19:39 UTC by Paul Kennedy
Modified: 2015-04-20 00:48 UTC
CC List: 13 users

Fixed In Version: selinux-policy-3.7.19-37.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 619918 620576 (view as bug list)
Environment:
Last Closed: 2010-11-15 14:48:14 UTC
Target Upstream Version:
Embargoed:



Description Paul Kennedy 2010-07-30 19:39:11 UTC
Description of problem:
Running luci appears to cause a problem with corosync, resulting in a stale pid file.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Run 'service cman start' at each cluster node.
2. Verify that the cluster software has started at each node.
3. Run 'service cman status' at each node to verify that the cluster is running.
4. Start luci. Make sure that luci is configured to manage the cluster.
5. Run 'service cman status' at each node to verify that the cluster is running (consolidated as a shell sketch below).
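
The same flow, consolidated as a shell sketch (assuming three hypothetical nodes, node1 through node3, reachable over ssh as root, with luci on the local management host):

# Start the cluster stack on every node (hostnames are placeholders).
for n in node1 node2 node3; do
    ssh root@"$n" 'service cman start'
done

# Confirm the stack is up on every node.
for n in node1 node2 node3; do
    echo "== $n =="
    ssh root@"$n" 'service cman status'
done

# Start luci on the management host, then re-check each node.
service luci start
for n in node1 node2 node3; do
    ssh root@"$n" 'service cman status'
done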
  
Actual results:
# service cman status
Found stale pid file

Expected results:
# service cman status
cluster is running.

Additional info:

From /var/log/messages:

Jul 30 08:24:58 doc-04 corosync[1898]:   [MAIN  ] Completed service synchronization, ready to provide service.
Jul 30 08:25:00 doc-04 fenced[1953]: fenced 3.0.12 started
Jul 30 08:25:00 doc-04 dlm_controld[1974]: dlm_controld 3.0.12 started
Jul 30 08:25:00 doc-04 gfs_controld[2025]: gfs_controld 3.0.12 started
Jul 30 08:42:54 doc-04 modclusterd: startup succeeded
Jul 30 08:42:54 doc-04 kernel: corosync[2257]: segfault at 8 ip 00000036f460d6f0 sp 00007f6fe35cdb58 error 4 in libpthread-2.12.so[36f4600000+17000]
Jul 30 08:42:55 doc-04 abrt[2258]: saved core dump of pid 1898 (/usr/sbin/corosync) to /var/spool/abrt/ccpp-1280497375-1898.new/coredump (56344576 bytes)
Jul 30 08:42:55 doc-04 abrtd: Directory 'ccpp-1280497375-1898' creation detected
Jul 30 08:42:55 doc-04 dlm_controld[1974]: cluster is down, exiting
Jul 30 08:42:55 doc-04 dlm_controld[1974]: daemon cpg_dispatch error 2
Jul 30 08:42:55 doc-04 gfs_controld[2025]: cluster is down, exiting
Jul 30 08:42:55 doc-04 gfs_controld[2025]: daemon cpg_dispatch error 2
Jul 30 08:42:55 doc-04 fenced[1953]: cluster is down, exiting
Jul 30 08:42:55 doc-04 fenced[1953]: daemon cpg_dispatch error 2
Jul 30 08:42:55 doc-04 fenced[1953]: cpg_dispatch error 2
Jul 30 08:42:55 doc-04 abrtd: Crash is in database already (dup of /var/spool/abrt/ccpp-1280331751-30293)
Jul 30 08:42:55 doc-04 abrtd: Deleting crash ccpp-1280497375-1898 (dup of ccpp-1280331751-30293), sending dbus signal
Jul 30 08:42:57 doc-04 kernel: dlm: closing connection to node 3
Jul 30 08:42:57 doc-04 kernel: dlm: closing connection to node 2
Jul 30 08:42:57 doc-04 kernel: dlm: closing connection to node 1

Comment 1 Paul Kennedy 2010-07-30 19:42:32 UTC
By the way, this is with the 722.0 build.

Comment 3 RHEL Program Management 2010-07-30 20:07:34 UTC
This issue has been proposed when we are only considering blocker
issues in the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **

Comment 4 Steven Dake 2010-07-30 20:53:57 UTC
Turning off SELinux stops the core dump.

Comment 5 Steven Dake 2010-07-30 21:04:02 UTC
avcs:
type=1400 audit(1280503340.368:54): avc:  denied  { name_connect } for  pid=1876 comm="modclusterd" scontext=system_u:system_r:ricci_modclusterd_t:s0 tcontext=system_u:object_r:port_t:s0 tclass=tcp_socket
type=1400 audit(1280503340.368:55): avc:  denied  { module_request } for  pid=1876 comm="modclusterd" kmod="net-pf-3" scontext=system_u:system_r:ricci_modclusterd_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=system

Comment 6 Steven Dake 2010-07-30 21:08:35 UTC
selinux-policy-targeted-3.7.19-32.el6.noarch
libselinux-2.0.94-1.el6.x86_64
selinux-policy-3.7.19-32.el6.noarch
libselinux-utils-2.0.94-1.el6.x86_64

Comment 7 Steven Dake 2010-07-30 21:12:57 UTC
This is a cman init script issue.  I believe cman is behaving as expected when the corosync process is no longer running (i.e., it has failed via coredump).

Reassigning to Fabio for further validation of those assumptions, since the component is the cman init script.

Comment 8 Steven Dake 2010-07-30 21:13:33 UTC
ignore comment #7, was meant for a different bz...

Comment 9 Miroslav Grepl 2010-08-02 13:22:00 UTC
(In reply to comment #5)
> avcs:
> type=1400 audit(1280503340.368:54): avc:  denied  { name_connect } for 
> pid=1876 comm="modclusterd" scontext=system_u:system_r:ricci_modclusterd_t:s0
> tcontext=system_u:object_r:port_t:s0 tclass=tcp_socket

To which port is ricci-modclusterd connecting? Could you add the full AVC message?
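
One way to capture the complete records is with ausearch (a sketch, assuming auditd is running and logging to /var/log/audit/audit.log; when it is, the paired SYSCALL/SOCKADDR records usually carry the destination address and port):

# Full, interpreted AVC events for modclusterd:
ausearch -m avc -c modclusterd -i

# Or dump everything since boot for manual review:
ausearch -m avc,syscall -ts boot | less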

Comment 10 Chris Feist 2010-08-02 21:43:24 UTC
I see the following avc's with corosync:


Aug  2 16:36:12 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "read write" access on control_buffer-umdEhg. For complete SELinux messages. run sealert -l 5650187a-ff57-4530-b194-b544aed24296
Aug  2 16:36:13 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "unlink" access on control_buffer-umdEhg. For complete SELinux messages. run sealert -l bc80a856-7af6-4105-adf4-e4ac8d04a29e
Aug  2 16:36:13 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "read write" access on request_buffer-abnIKe. For complete SELinux messages. run sealert -l 5650187a-ff57-4530-b194-b544aed24296
Aug  2 16:36:13 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "unlink" access on request_buffer-abnIKe. For complete SELinux messages. run sealert -l bc80a856-7af6-4105-adf4-e4ac8d04a29e
Aug  2 16:36:13 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "read write" access on response_buffer-KRU0dd. For complete SELinux messages. run sealert -l 5650187a-ff57-4530-b194-b544aed24296
Aug  2 16:36:13 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "unlink" access on response_buffer-KRU0dd. For complete SELinux messages. run sealert -l bc80a856-7af6-4105-adf4-e4ac8d04a29e
Aug  2 16:36:13 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "read write" access on dispatch_buffer-jWnyHb. For complete SELinux messages. run sealert -l 5650187a-ff57-4530-b194-b544aed24296
Aug  2 16:36:13 ask-03 setroubleshoot: SELinux is preventing /usr/sbin/corosync "unlink" access on dispatch_buffer-jWnyHb. For complete SELinux messages. run sealert -l bc80a856-7af6-4105-adf4-e4ac8d04a29e

[root@ask-03 ~]# sealert -l 5650187a-ff57-4530-b194-b544aed24296

Summary:

SELinux is preventing /usr/sbin/corosync "read write" access on
dispatch_buffer-jWnyHb.

Detailed Description:

SELinux denied access requested by corosync. It is not expected that this access
is required by corosync and this access may signal an intrusion attempt. It is
also possible that the specific version or configuration of the application is
causing it to require additional access.

Allowing Access:

You can generate a local policy module to allow this access - see FAQ
(http://docs.fedoraproject.org/selinux-faq-fc5/#id2961385) Please file a bug
report.

Additional Information:

Source Context                unconfined_u:system_r:corosync_t:s0
Target Context                unconfined_u:object_r:initrc_state_t:s0
Target Objects                dispatch_buffer-jWnyHb [ file ]
Source                        corosync
Source Path                   /usr/sbin/corosync
Port                          <Unknown>
Host                          ask-03
Source RPM Packages           corosync-1.2.3-13.el6
Target RPM Packages           
Policy RPM                    selinux-policy-3.7.19-32.el6
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Plugin Name                   catchall
Host Name                     ask-03
Platform                      Linux ask-03 2.6.32-44.1.el6.x86_64 #1 SMP Wed Jul
                              14 18:51:29 EDT 2010 x86_64 x86_64
Alert Count                   4
First Seen                    Mon Aug  2 16:36:06 2010
Last Seen                     Mon Aug  2 16:36:06 2010
Local ID                      5650187a-ff57-4530-b194-b544aed24296
Line Numbers                  

Raw Audit Messages            

node=ask-03 type=AVC msg=audit(1280784966.136:69): avc:  denied  { read write } for  pid=2844 comm="corosync" name="dispatch_buffer-jWnyHb" dev=tmpfs ino=20078 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file

node=ask-03 type=SYSCALL msg=audit(1280784966.136:69): arch=c000003e syscall=2 success=no exit=-13 a0=2045d88 a1=2 a2=180 a3=100 items=0 ppid=1 pid=2844 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=3 comm="corosync" exe="/usr/sbin/corosync" subj=unconfined_u:system_r:corosync_t:s0 key=(null)

[root@ask-03 ~]# sealert -l bc80a856-7af6-4105-adf4-e4ac8d04a29e

Summary:

SELinux is preventing /usr/sbin/corosync "unlink" access on
dispatch_buffer-jWnyHb.

Detailed Description:

SELinux denied access requested by corosync. It is not expected that this access
is required by corosync and this access may signal an intrusion attempt. It is
also possible that the specific version or configuration of the application is
causing it to require additional access.

Allowing Access:

You can generate a local policy module to allow this access - see FAQ
(http://docs.fedoraproject.org/selinux-faq-fc5/#id2961385) Please file a bug
report.

Additional Information:

Source Context                unconfined_u:system_r:corosync_t:s0
Target Context                unconfined_u:object_r:initrc_state_t:s0
Target Objects                dispatch_buffer-jWnyHb [ file ]
Source                        corosync
Source Path                   /usr/sbin/corosync
Port                          <Unknown>
Host                          ask-03
Source RPM Packages           corosync-1.2.3-13.el6
Target RPM Packages           
Policy RPM                    selinux-policy-3.7.19-32.el6
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Plugin Name                   catchall
Host Name                     ask-03
Platform                      Linux ask-03 2.6.32-44.1.el6.x86_64 #1 SMP Wed Jul
                              14 18:51:29 EDT 2010 x86_64 x86_64
Alert Count                   4
First Seen                    Mon Aug  2 16:36:06 2010
Last Seen                     Mon Aug  2 16:36:06 2010
Local ID                      bc80a856-7af6-4105-adf4-e4ac8d04a29e
Line Numbers                  

Raw Audit Messages            

node=ask-03 type=AVC msg=audit(1280784966.136:70): avc:  denied  { unlink } for  pid=2844 comm="corosync" name="dispatch_buffer-jWnyHb" dev=tmpfs ino=20078 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file

node=ask-03 type=SYSCALL msg=audit(1280784966.136:70): arch=c000003e syscall=87 success=no exit=-13 a0=2045d88 a1=2 a2=d a3=100 items=0 ppid=1 pid=2844 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=3 comm="corosync" exe="/usr/sbin/corosync" subj=unconfined_u:system_r:corosync_t:s0 key=(null)

Comment 11 Miroslav Grepl 2010-08-03 10:13:01 UTC
The problem is we don't have a policy for luci, and I guess luci creates 'dispatch_buffer-jWnyHb' with the initrc_state_t context.

Could you add your output of the following command:

ps -eZ | grep initrc

Comment 12 Miroslav Grepl 2010-08-03 12:34:26 UTC
my output:

# ps -eZ | grep initrc
staff_u:system_r:initrc_t:s0     7510 ?        00:00:00 paster

from luci init script:

/usr/bin/paster serve --daemon --user "$LUCI_USER" --group "$LUCI_GROUP" "$LUCI_CONFIG_FILE" --log-file="$LUCI_PASTER_LOG" --pid-file="$LUCI_PID_FILE"

# rpm -qf /usr/bin/paster 
python-paste-script-1.7.3-4.el6.noarch


Dan,
I would imagine a new policy for paster and new types for luci files

For example:

---

type paster_t;
type paster_exec_t;
init_daemon_domain(paster_t, paster_exec_t)

type luci_var_lib_t;
files_type(luci_var_lib_t)

type luci_conf_t;
files_config_file(luci_conf_t)

---


What do you think?
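
A sketch of how these proposed declarations could be packaged and loaded as a local test module (the module name "mypaster" and the use of the selinux-policy-devel Makefile are illustrative, not part of the eventual distro policy):

cat > mypaster.te << 'EOF'
policy_module(mypaster, 0.1)

type paster_t;
type paster_exec_t;
init_daemon_domain(paster_t, paster_exec_t)

type luci_var_lib_t;
files_type(luci_var_lib_t)

type luci_conf_t;
files_config_file(luci_conf_t)
EOF

# Build with the refpolicy devel Makefile, load it, and label the binary for testing:
make -f /usr/share/selinux/devel/Makefile mypaster.pp
semodule -i mypaster.pp
chcon -t paster_exec_t /usr/bin/paster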

Comment 13 Daniel Walsh 2010-08-03 14:04:02 UTC
We seem to be having an explosion of types.  Can any of this fit under an existing type?

I have no idea what paster is doing.

Comment 14 Miroslav Grepl 2010-08-03 14:24:11 UTC
I have just created a local paster policy. I think it would be a good idea to test this policy in a real cluster configuration. We will see the AVC messages and then we can choose a type for paster.

Comment 15 Chris Feist 2010-08-03 14:25:58 UTC
(In reply to comment #12)
> my output:
> 
> # ps -eZ | grep initrc
> staff_u:system_r:initrc_t:s0     7510 ?        00:00:00 paster
> 
> from luci init script:
> 
> /usr/bin/paster serve --daemon --user "$LUCI_USER" --group "$LUCI_GROUP"
> "$LUCI_CONFIG_FILE" --log-file="$LUCI_PASTER_LOG" --pid-file="$LUCI_PID_FILE"
> 
> # rpm -qf /usr/bin/paster 
> python-paste-script-1.7.3-4.el6.noarch
> 
> 
> Dan,
> I would imagine a new policy for paster and new types for luci files
> 
> For example:
> 
> ---
> 
> type paster_t;
> type paster_exec_t;
> init_daemon_domain(paster_t, paster_exec_t)
> 
> type luci_var_lib_t;
> files_type(luci_var_lib_t)
> 
> type luci_conf_t;
> files_config_file(luci_conf_t)
> 
> ---
> 
> 
> What do you think?    

Luci isn't creating those files; it isn't even running on the node.  I believe corosync (or some other corosync process) is creating them.

Comment 16 Miroslav Grepl 2010-08-03 14:40:21 UTC
Chris,
could you try to execute

ps -eZ | grep initrc

on the node.

Comment 17 Chris Feist 2010-08-03 18:46:15 UTC
There are no results with this query. If I do a 'service cman start' and run 'ps -eZ' while it's starting, I get the following lines:


unconfined_u:system_r:initrc_t:s0 16578 pts/2  00:00:00 cman
unconfined_u:system_r:initrc_t:s0 17071 pts/2  00:00:00 cman
unconfined_u:system_r:initrc_t:s0 17072 pts/2  00:00:00 fence_tool

(But then the init script fails and 'ps -eZ | grep initrc' returns nothing.)

Comment 18 Daniel Walsh 2010-08-03 19:43:40 UTC
Do any of these tools create files in /dev/shm?

Comment 19 Miroslav Grepl 2010-08-04 09:02:23 UTC
Chris,
could you also try to execute

# chcon -t fenced_exec_t /usr/sbin/fence_tool

at each cluster node and test it.

Comment 20 Miroslav Grepl 2010-08-04 11:58:28 UTC
Chris,
are you trying to manage cluster-node thru the luci? I mean for example, reboot node, join cluster, leave cluster...

I am pretty sure we will need to add some additional rules for ricci policy. I am playing with it.

Comment 21 Miroslav Grepl 2010-08-04 12:31:29 UTC
> I am pretty sure we will need to add some additional rules for ricci policy. I
> am playing with it.    

At least we should add


kernel_read_system_state(ricci_t)

# ricci can restart node
optional_policy(`
        shutdown_domtrans(ricci_t)
')

Comment 22 Steven Dake 2010-08-04 17:49:57 UTC
Chris,

When luci retrieves cluster state, it communicates with corosync in some way.  This avc results in a corosync segfault.  I couldn't find the avcs or the app that accesses corosync.

Comment 23 Chris Feist 2010-08-04 18:31:53 UTC
Luci communicates with ricci, which does all the work on behalf of luci.

If I attempt to startup a cluster without running ricci, I still get avc
denials in corosync.


type=1400 audit(1280946664.270:12): avc:  denied  { read write } for  pid=11346
comm="corosync" name="control_buffer-7e3ueN" dev=tmpfs ino=21878
scontext=unconfined_u:system_r:corosync_t:s0
tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
type=1400 audit(1280946664.292:13): avc:  denied  { open } for  pid=11346
comm="corosync" name="control_buffer-7e3ueN" dev=tmpfs ino=21878
scontext=unconfined_u:system_r:corosync_t:s0
tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
type=1400 audit(1280946664.313:14): avc:  denied  { unlink } for  pid=11346
comm="corosync" name="control_buffer-7e3ueN" dev=tmpfs ino=21878
scontext=unconfined_u:system_r:corosync_t:s0 tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file

There is definitely an issue with corosync here (as well as a different issue with ricci).

Comment 24 Chris Feist 2010-08-04 18:36:48 UTC
(In reply to comment #20)
> Chris,
> are you trying to manage cluster-node thru the luci? I mean for example, reboot
> node, join cluster, leave cluster...
> 
> I am pretty sure we will need to add some additional rules for ricci policy. I
> am playing with it.    

yes, ricci does reboot nodes and run other commands.  Once you've got the policy ready for me to test, I'll try it out and see if we get any other avc denials.

Comment 25 Miroslav Grepl 2010-08-05 06:25:40 UTC
(In reply to comment #23)
> Luci communicates with ricci, which does all the work on behalf of luci.
> 
> If I attempt to startup a cluster without running ricci, I still get avc
> denials in corosync.
> 
> 
> type=1400 audit(1280946664.270:12): avc:  denied  { read write } for  pid=11346
> comm="corosync" name="control_buffer-7e3ueN" dev=tmpfs ino=21878
> scontext=unconfined_u:system_r:corosync_t:s0
> tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
> type=1400 audit(1280946664.292:13): avc:  denied  { open } for  pid=11346
> comm="corosync" name="control_buffer-7e3ueN" dev=tmpfs ino=21878
> scontext=unconfined_u:system_r:corosync_t:s0
> tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
> type=1400 audit(1280946664.313:14): avc:  denied  { unlink } for  pid=11346
> comm="corosync" name="control_buffer-7e3ueN" dev=tmpfs ino=21878
> scontext=unconfined_u:system_r:corosync_t:s0
> tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
> 
> There is definitely an issue with corosync here (as well as a different issue
> with ricci).    

Did you try to execute

# chcon -t fenced_exec_t /usr/sbin/fence_tool

Comment 26 Chris Feist 2010-08-05 16:37:39 UTC
type=AVC msg=audit(1281026165.193:1887): avc:  denied  { connectto } for  pid=14208 comm="fence_tool" path=0066656E6365645F736F636B scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:system_r:fenced_t:s0 tclass=unix_stream_socket
type=SYSCALL msg=audit(1281026165.193:1887): arch=c000003e syscall=42 success=yes exit=128 a0=3 a1=7fffcfeb68f0 a2=e a3=7fffcfeb68f3 items=0 ppid=14207 pid=14208 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=307 comm="fence_tool" exe="/usr/sbin/fence_tool" subj=unconfined_u:system_r:fenced_t:s0 key=(null)

I get these messages when I start cman (under permissive) after the chcon command.

Comment 27 Miroslav Grepl 2010-08-06 10:28:34 UTC
So the AVC messages (from comment 23) are gone.

If you execute

# grep fenced /var/log/audit/audit.log | audit2allow -M myfenced
# semodule -i myfenced

does it work?

Comment 28 Miroslav Grepl 2010-08-06 12:25:22 UTC
Nate, Jaroslav,

could you try to do some tests with the following change

chcon -t fenced_exec_t /usr/sbin/fence_tool

Comment 29 Miroslav Grepl 2010-08-06 13:23:05 UTC
(In reply to comment #24)
> (In reply to comment #20)
> > Chris,
> > are you trying to manage cluster-node thru the luci? I mean for example, reboot
> > node, join cluster, leave cluster...
> > 
> > I am pretty sure we will need to add some additional rules for ricci policy. I
> > am playing with it.    
> 
> yes, ricci does reboot nodes and run other commands.  Once you've got the
> policy ready for me to test, I'll try it out and see if we get any other avc
> denials.    

Fixes for luci and ricci were added to selinux-policy-3.7.19-36.el6.noarch. The packages are available from brew. Could you test it? Thanks.

Comment 30 Chris Feist 2010-08-06 19:41:18 UTC
I get these denials now:

when starting a cluster:

type=AVC msg=audit(1281123375.183:17294): avc:  denied  { read write } for  pid=1921 comm="corosync" name="control_buffer-lbJkUT" dev=tmpfs ino=14790 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
type=AVC msg=audit(1281123375.183:17294): avc:  denied  { open } for  pid=1921 comm="corosync" name="control_buffer-lbJkUT" dev=tmpfs ino=14790 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
type=AVC msg=audit(1281123375.183:17295): avc:  denied  { unlink } for  pid=1921 comm="corosync" name="control_buffer-lbJkUT" dev=tmpfs ino=14790 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file

when rebooting the node:


type=AVC msg=audit(1281123434.822:17296): avc:  denied  { execute } for  pid=2786 comm="reboot" name="shutdown" dev=dm-0 ino=392528 scontext=unconfined_u:system_r:ricci_t:s0 tcontext=system_u:object_r:shutdown_exec_t:s0 tclass=file
type=AVC msg=audit(1281123434.822:17296): avc:  denied  { read open } for  pid=2786 comm="reboot" name="shutdown" dev=dm-0 ino=392528 scontext=unconfined_u:system_r:ricci_t:s0 tcontext=system_u:object_r:shutdown_exec_t:s0 tclass=file
type=AVC msg=audit(1281123434.822:17296): avc:  denied  { execute_no_trans } for  pid=2786 comm="reboot" path="/sbin/shutdown" dev=dm-0 ino=392528 scontext=unconfined_u:system_r:ricci_t:s0 tcontext=system_u:object_r:shutdown_exec_t:s0 tclass=file
type=AVC msg=audit(1281123434.828:17297): avc:  denied  { write } for  pid=2786 comm="shutdown" name="wtmp" dev=dm-0 ino=262231 scontext=unconfined_u:system_r:ricci_t:s0 tcontext=system_u:object_r:wtmp_t:s0 tclass=file
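
As an interim workaround sketch while the policy fix is pending, these ricci denials could be turned into a local module with audit2allow (the module name "myricci" is arbitrary):

grep ricci_t /var/log/audit/audit.log | audit2allow -M myricci
semodule -i myricci.pp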

Comment 31 Miroslav Grepl 2010-08-09 10:44:18 UTC
(In reply to comment #30)
> 
> when rebooting the node:
> 
> 
> type=AVC msg=audit(1281123434.822:17296): avc:  denied  { execute } for 
> pid=2786 comm="reboot" name="shutdown" dev=dm-0 ino=392528
> scontext=unconfined_u:system_r:ricci_t:s0
> tcontext=system_u:object_r:shutdown_exec_t:s0 tclass=file
> type=AVC msg=audit(1281123434.822:17296): avc:  denied  { read open } for 
> pid=2786 comm="reboot" name="shutdown" dev=dm-0 ino=392528
> scontext=unconfined_u:system_r:ricci_t:s0
> tcontext=system_u:object_r:shutdown_exec_t:s0 tclass=file
> type=AVC msg=audit(1281123434.822:17296): avc:  denied  { execute_no_trans }
> for  pid=2786 comm="reboot" path="/sbin/shutdown" dev=dm-0 ino=392528
> scontext=unconfined_u:system_r:ricci_t:s0
> tcontext=system_u:object_r:shutdown_exec_t:s0 tclass=file
> type=AVC msg=audit(1281123434.828:17297): avc:  denied  { write } for  pid=2786
> comm="shutdown" name="wtmp" dev=dm-0 ino=262231
> scontext=unconfined_u:system_r:ricci_t:s0 tcontext=system_u:object_r:wtmp_t:s0
> tclass=file    

I will fix it.

Comment 32 Miroslav Grepl 2010-08-09 10:48:52 UTC
(In reply to comment #30)
> I get these denials now:
> 
> when starting a cluster:
> 
> type=AVC msg=audit(1281123375.183:17294): avc:  denied  { read write } for 
> pid=1921 comm="corosync" name="control_buffer-lbJkUT" dev=tmpfs ino=14790
> scontext=unconfined_u:system_r:corosync_t:s0
> tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
> type=AVC msg=audit(1281123375.183:17294): avc:  denied  { open } for  pid=1921
> comm="corosync" name="control_buffer-lbJkUT" dev=tmpfs ino=14790
> scontext=unconfined_u:system_r:corosync_t:s0
> tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
> type=AVC msg=audit(1281123375.183:17295): avc:  denied  { unlink } for 
> pid=1921 comm="corosync" name="control_buffer-lbJkUT" dev=tmpfs ino=14790
> scontext=unconfined_u:system_r:corosync_t:s0
> tcontext=unconfined_u:object_r:initrc_state_t:s0 tclass=file
> 

Are you getting these AVC messages with fence_tool labelled as fenced_exec_t?

# ls -Z /usr/sbin/fence_tool

Comment 33 Chris Feist 2010-08-09 21:15:41 UTC
I just ran the 'chcon -t fenced_exec_t /usr/sbin/fence_tool' command and that does make the corosync error messages go away.

However, when rebooting a node I still get this avc:


type=AVC msg=audit(1281388449.815:3160): avc:  denied  { write } for  pid=20004 comm="shutdown" name="wtmp" dev=dm-0 ino=262231 scontext=unconfined_u:system_r:ricci_t:s0 tcontext=system_u:object_r:wtmp_t:s0 tclass=file
type=SYSCALL msg=audit(1281388449.815:3160): arch=c000003e syscall=2 success=yes exit=4 a0=40b042 a1=1 a2=2 a3=8 items=0 ppid=19953 pid=20004 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=1 comm="shutdown" exe="/sbin/shutdown" subj=unconfined_u:system_r:ricci_t:s0 key=(null)

Comment 34 Miroslav Grepl 2010-08-10 07:11:54 UTC
(In reply to comment #33)
> I just ran the 'chcon -t fenced_exec_t /usr/sbin/fence_tool' command and that
> does make the corosync error messages go away.

Ok. Thanks.

> 
> However, when rebooting a node I still get this avc:
> 
> 
> type=AVC msg=audit(1281388449.815:3160): avc:  denied  { write } for  pid=20004
> comm="shutdown" name="wtmp" dev=dm-0 ino=262231
> scontext=unconfined_u:system_r:ricci_t:s0 tcontext=system_u:object_r:wtmp_t:s0
> tclass=file
> type=SYSCALL msg=audit(1281388449.815:3160): arch=c000003e syscall=2
> success=yes exit=4 a0=40b042 a1=1 a2=2 a3=8 items=0 ppid=19953 pid=20004 auid=0
> uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=1
> comm="shutdown" exe="/sbin/shutdown" subj=unconfined_u:system_r:ricci_t:s0
> key=(null)    

It will be fixed in selinux-policy-3.7.19-37.el6.

Comment 35 Miroslav Grepl 2010-08-10 18:15:00 UTC
Fixed in selinux-policy-3.7.19-37.el6.noarch

Comment 36 Chris Feist 2010-08-10 19:27:07 UTC
This appears to have fixed the avc denials.  The only question I have now is where the best place is for the 'chcon -t fenced_exec_t /usr/sbin/fence_tool'.  Should that be done in the rpm during installation?  Or is there a place where selinux will do that?

Comment 37 Chris Feist 2010-08-10 19:53:03 UTC
Never mind, I see that installing the new policy updates the context for fence_tool.  We should be good to go.
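
A quick sketch of how to confirm that the label now comes from the shipped policy rather than a manual chcon:

# What the policy's file-context database says the label should be:
matchpathcon /usr/sbin/fence_tool

# Reset the on-disk label from the policy and show the result:
restorecon -v /usr/sbin/fence_tool
ls -Z /usr/sbin/fence_tool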

Comment 40 releng-rhel@redhat.com 2010-11-15 14:48:14 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.

