Bug 915151 - Please create (working) policy for pacemaker
Summary: Please create (working) policy for pacemaker
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: selinux-policy
Version: 6.4
Hardware: All
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Miroslav Grepl
QA Contact: Michal Trunecka
URL:
Whiteboard:
Duplicates: 814334 838259 869136 978578 984253 997357 (view as bug list)
Depends On: 801493
Blocks: 768522 848613 875794 985442 987355 1003613 1021635
 
Reported: 2013-02-25 05:31 UTC by Andrew Beekhof
Modified: 2018-12-04 14:35 UTC (History)
19 users

Fixed In Version: selinux-policy-3.7.19-226.el6
Doc Type: Bug Fix
Doc Text:
Previously, the pacemaker resource manager did not have its own SELinux policy defined and therefore started in the initrc_t domain. With this update, the incorrect context has been fixed and proper permissions have been set for pacemaker, thus fixing the bug.
Clone Of: 801493
Clones: 1003613 (view as bug list)
Environment:
Last Closed: 2013-11-21 10:16:47 UTC
Target Upstream Version:


Attachments
Updated module (2.24 KB, application/x-gzip)
2013-02-26 11:27 UTC, Vladislav Bogdanov
no flags Details
avc denials with .213 (50.53 KB, text/plain)
2013-09-05 10:58 UTC, michal novacek
no flags Details
avc from node1 (32.37 KB, text/plain)
2013-10-22 11:13 UTC, Fabio Massimo Di Nitto
no flags Details
avc from node2 (31.22 KB, text/plain)
2013-10-22 11:14 UTC, Fabio Massimo Di Nitto
no flags Details
new avc with shadow config in audit.rules for node1 (8.28 KB, text/plain)
2013-10-22 11:34 UTC, Fabio Massimo Di Nitto
no flags Details
new avc with shadow config in audit.rules for node2 (9.92 KB, text/plain)
2013-10-22 11:35 UTC, Fabio Massimo Di Nitto
no flags Details
new avc with shadow config in audit.rules for node1 with first 227 build (9.65 KB, text/plain)
2013-10-22 12:33 UTC, Fabio Massimo Di Nitto
no flags Details
new avc with shadow config in audit.rules for node2 with first 227 build (4.96 KB, text/plain)
2013-10-22 12:33 UTC, Fabio Massimo Di Nitto
no flags Details
pcmk avc (13.89 KB, text/plain)
2013-10-22 14:25 UTC, Fabio Massimo Di Nitto
no flags Details
pcmk avc with third 227 build (12.21 KB, text/plain)
2013-10-22 15:09 UTC, Fabio Massimo Di Nitto
no flags Details
latest avc with rgmanger (227 third build) (1.97 KB, text/plain)
2013-10-22 20:01 UTC, Fabio Massimo Di Nitto
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 127483 0 None None None 2018-12-04 14:35:38 UTC
Red Hat Knowledge Base (Solution) 404043 0 None None None 2018-12-02 16:48:45 UTC
Red Hat Product Errata RHBA-2013:1598 0 normal SHIPPED_LIVE selinux-policy bug fix and enhancement update 2013-11-20 21:39:24 UTC

Comment 1 Andrew Beekhof 2013-02-25 05:32:49 UTC
In particular, there is feedback in comment #17 of the original bug.

Comment 4 Miroslav Grepl 2013-02-25 09:28:13 UTC
We have pacemaker running as a permissive domain, as we do for new domains.

Could you attach all of the AVC messages you are getting? Thank you.

Comment 6 Jaroslav Kortus 2013-02-25 10:12:29 UTC
I'd like to add that the policy is working and the setup is not broken. You can, for instance, get a working apache service running in enforcing mode.

It could probably be improved so that the different daemons get their own domains and the managed services transition to their respective domains.

My snip from 6.4beta (selinux-policy-3.7.19-195.el6.noarch):
system_u:system_r:pacemaker_t:s0 3202 ?        S      0:00 pacemakerd
system_u:system_r:pacemaker_t:s0 3209 ?        Ss     0:00  \_ /usr/libexec/pacemaker/stonithd
system_u:system_r:pacemaker_t:s0 3210 ?        Ss     0:00  \_ /usr/libexec/pacemaker/lrmd
system_u:system_r:pacemaker_t:s0 3213 ?        Ss     0:00  \_ /usr/libexec/pacemaker/crmd
system_u:system_r:pacemaker_t:s0 5321 ?        Ss     0:00 /usr/sbin/httpd -DSTATUS -f /etc/httpd/conf/httpd.conf

AFAIK the pacemaker_t domain is a very permissive one (same as rgmanager_t), so the only risk is being insufficiently confined rather than something not working.

I'd also be interested in the denials caught.

Comment 8 Vladislav Bogdanov 2013-02-25 15:10:02 UTC
Pacemaker's lrmd is more like an init than an ordinary daemon: it may start almost anything. Why should it start all of that in one weak domain rather than use the well-confined ones?
I'd expect my apache (samba, vsftpd, etc.) to run in the same context regardless of how it is started, and that context should be the most secure one.

Another point is that in some circumstances pacemaker may manage services started by the init sequence, e.g. dlm_controld.

Comment 9 Miroslav Grepl 2013-02-25 16:30:28 UTC
Well, pacemaker is more like rgmanager from my point of view. I will make some changes to get this working, but I am really thinking about merging it with rgmanager. All this cluster stuff ends up as powerful "services".

Comment 10 Jaroslav Kortus 2013-02-25 16:43:35 UTC
Vladislav, you are absolutely right, and I believe we'll get to that point. To begin with, we intended to create a new domain for pacemaker and improve it later.

Comment 12 Vladislav Bogdanov 2013-02-25 18:40:08 UTC
Jaroslav, why not take what I attached to the original bug, polish it a bit (if needed, since I did not test it with cman) and just use it? IMHO it is already a huge step forward.

Comment 13 Jaroslav Kortus 2013-02-25 18:53:26 UTC
That's Miroslav's call to make, as he's maintaining the package. I'm sure he will take a look, as it sounds very reasonable to start with it :).

Comment 14 Miroslav Grepl 2013-02-26 09:21:01 UTC
Vladislav,
some of the rules are OK, but for some of them I need to see the AVC messages to understand what's going on.

Comment 15 Vladislav Bogdanov 2013-02-26 09:31:59 UTC
Miroslav, can you please list the rules in question?
I will explain.

Comment 16 Miroslav Grepl 2013-02-26 09:46:37 UTC
I am interested in

# crmd gathers metadata from OCF scripts
allow pacemaker_t initrc_exec_t:dir list_dir_perms;
init_getattr_script_files(pacemaker_t)
init_read_script_files(pacemaker_t)
init_exec_script_files(pacemaker_t)


I understand we will need to add what we have for rgmanager

init_domtrans_script(rgmanager_t)
init_initrc_domain(rgmanager_t)

Just is there a service running as initrc_t then?

Comment 17 Vladislav Bogdanov 2013-02-26 11:19:30 UTC
Miroslav, can you please look at the first message in the thread (the link was provided in the original bug too): http://permalink.gmane.org/gmane.linux.highavailability.pacemaker/15817

It contains one more policy module, which is not strictly related to pacemaker but is necessary to run a next-generation (corosync2-based) cluster with all the common components (dlm_controld, gfs_controld, etc.) on EL6. Some parts are backported from the Fedora policy, and some were added by me (esp. the gfs_controld part, which no longer exists in modern Fedora but is necessary to run GFS2 on EL6, so I ported it to corosync2).

Among other things, it contains the line

/usr/lib/ocf(/.*)?                      gen_context(system_u:object_r:initrc_exec_t,s0)

which should be included in some other suitable module.

(In reply to comment #16)
> I am interested about
> 
> # crmd gathers metadata from OCF scripts
> allow pacemaker_t initrc_exec_t:dir list_dir_perms;
> init_getattr_script_files(pacemaker_t)
> init_read_script_files(pacemaker_t)
> init_exec_script_files(pacemaker_t)

crmd (the Cluster Resource Manager daemon, part of pacemaker) scans the relevant directories in /usr/lib/ocf/resource.d, runs every executable file found there with the parameter 'metadata', and fills its internal database with the data obtained. It does not need a transition to the initrc_t domain, because the 'metadata' action is mostly a no-op except that it prints some XML to stdout. crmd just needs to stat, read and exec those files (together with the libraries found in /usr/lib/ocf/lib). Andrew can correct me if I'm wrong.

Actual actions (start, stop, monitor) are performed by lrmd, which I run in the init_t domain, so only lrmd needs the domain transition.

But those OCF scripts are not the only option for pacemaker. It is much more powerful than rgmanager; e.g. it can manage old sysvinit, upstart (via dbus) and systemd services as well. Mostly everything that init (in its various flavours) does.
In addition, some OCF scripts (e.g. nfs-server) are just "cluster-enabler" wrappers for sysvinit scripts (found in /etc/rc.d/init.d).
That's why I decided it is much simpler to just move lrmd to a well-defined and well-maintained domain which suits its needs almost 100%, and to allow the rest of pacemaker to communicate with that domain.
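To illustrate the metadata scan, here is a toy stand-in (not a real OCF agent; real agents live under /usr/lib/ocf/resource.d, and the action name follows the OCF 'meta-data' convention):

```shell
# Toy model of the scan crmd performs: run every executable in the
# agent directory with the single argument 'meta-data', a read-only
# action that just prints an XML description to stdout.
dir=$(mktemp -d)
cat > "$dir/Dummy" <<'EOF'
#!/bin/sh
case "$1" in
  meta-data) echo '<resource-agent name="Dummy"/>' ;;
  *) exit 1 ;;   # start/stop/monitor are lrmd's business, not crmd's
esac
EOF
chmod +x "$dir/Dummy"

# crmd-style enumeration: stat + exec, no domain transition needed
xml=$(for agent in "$dir"/*; do [ -x "$agent" ] && "$agent" meta-data; done)
echo "$xml"
rm -rf "$dir"
```

Nothing here touches system state, which is exactly the point: the metadata pass only needs read/exec access, not a transition into the service's own domain.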

I hope Andrew can provide more information on pacemaker internals if needed.

> 
> 
> I understand we will need to add what we have for rgmanager
> 
> init_domtrans_script(rgmanager_t)
> init_initrc_domain(rgmanager_t)
> 
> Just is there a service running as initrc_t then?

Can you reword the last phrase, please?

Comment 18 Miroslav Grepl 2013-02-26 11:27:15 UTC
OK, then I would label them as bin_t instead of initrc_exec_t. So

/usr/lib/ocf(/.*)?                      gen_context(system_u:object_r:bin_t,s0)
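That spec is a file-context regex: it matches the directory itself and everything below it, but not sibling paths that merely share the prefix. A quick stand-alone check of the regex form (illustration only, sample paths are hypothetical):

```shell
# The file-context spec /usr/lib/ocf(/.*)? as an anchored regex
spec='^/usr/lib/ocf(/.*)?$'
# Matches the directory and a nested agent path, but not /usr/lib/ocf2
match=$(printf '%s\n' /usr/lib/ocf /usr/lib/ocf/resource.d/heartbeat/Dummy /usr/lib/ocf2 \
  | grep -E "$spec")
echo "$match"
```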

I meant your output of

# ps -eZ |grep initrc

Comment 19 Vladislav Bogdanov 2013-02-26 11:27:54 UTC
Created attachment 702799 [details]
Updated module

Here is the updated module version.
The only difference should be the transition of stonithd (the fencing daemon implementation) to the fenced_t domain.

Comment 21 Vladislav Bogdanov 2013-02-26 11:35:03 UTC
# ps axfZ|grep [i]nitrc
#
The output is empty.

Relevant part of ps output is:
==========
system_u:system_r:pacemaker_t:s0 1584 ?        S      3:07 pacemakerd
system_u:system_r:pacemaker_t:s0 1586 ?        Ss    19:21  \_ /usr/libexec/pacemaker/cib
system_u:system_r:fenced_t:s0    1587 ?        Ss     3:13  \_ /usr/libexec/pacemaker/stonithd
system_u:system_r:init_t:s0      1588 ?        Ss     9:06  \_ /usr/libexec/pacemaker/lrmd
system_u:system_r:pacemaker_t:s0 1589 ?        Ss    19:27  \_ /usr/libexec/pacemaker/attrd
system_u:system_r:pacemaker_t:s0 1590 ?        Ss     5:56  \_ /usr/libexec/pacemaker/pengine
system_u:system_r:pacemaker_t:s0 1591 ?        Ss     4:17  \_ /usr/libexec/pacemaker/crmd
system_u:system_r:dlm_controld_t:s0 1860 ?     Ssl    0:00 dlm_controld
system_u:system_r:gfs_controld_t:s0 2029 ?     Ssl    0:00 gfs_controld
system_u:system_r:clvmd_t:s0     2043 ?        SLsl   0:04 /usr/sbin/clvmd -T30 -I corosync
=========
Please note the last three lines: pacemaker-managed processes run in their own domains, not in the initrc one.

I see the same on another cluster where I run virtual machines (in addition, the qemu processes run confined there too).

Comment 26 internet+rhbz 2013-05-21 20:54:37 UTC
Hi,

As Vladislav has pointed out, it would be nice if the resources managed by pacemaker ran in their well-known, confined domains. The update, and hence the introduction of the pacemaker module in RHEL 6.4, has broken our confined services, as pacemaker_t currently runs as a permissive domain. I must admit that we already use pacemaker in production, and we overlooked the tech-preview label.

Our plan is to adjust this in our own policy by adding the needed transitions separately, e.g. for mysqld:

> # === allow transition for mysqld ocf-ra from pacemaker_t ===  
> domtrans_pattern(pacemaker_t, mysqld_safe_exec_t, mysqld_safe_t)

We've tested this in our staging environment, and it seems to work there. Is this an acceptable workaround, or should we wait for an update (perhaps shipped with RHEL 6.5)?
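For reference, the full local module we load is roughly the following (a sketch: the module name is our own choice, and the mysqld types are assumed to come from the base policy):

```
policy_module(local_pacemaker_mysql, 1.0)

require {
	type pacemaker_t;
	type mysqld_safe_exec_t;
	type mysqld_safe_t;
}

# === allow transition for mysqld ocf-ra from pacemaker_t ===
domtrans_pattern(pacemaker_t, mysqld_safe_exec_t, mysqld_safe_t)
```

Built and loaded the usual way (make -f /usr/share/selinux/devel/Makefile, then semodule -i).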

Best regards,
Martin

Comment 27 Miroslav Grepl 2013-05-22 07:38:26 UTC
We have more fixes for pacemaker, rgmanager, corosync and aisexec in Fedora, and I would like to backport them to resolve these issues.

Comment 32 Robert Scheck 2013-07-14 13:33:12 UTC
I completely agree with Martin in comment #26; we also need some transitions.
See bug #984253 for the transitions we use as a workaround in our own policy.

Comment 33 Miroslav Grepl 2013-07-15 08:52:10 UTC
*** Bug 984253 has been marked as a duplicate of this bug. ***

Comment 34 Miroslav Grepl 2013-07-17 11:59:03 UTC
*** Bug 869136 has been marked as a duplicate of this bug. ***

Comment 35 Miroslav Grepl 2013-07-17 11:59:08 UTC
*** Bug 838259 has been marked as a duplicate of this bug. ***

Comment 36 Miroslav Grepl 2013-07-17 12:04:58 UTC
*** Bug 978578 has been marked as a duplicate of this bug. ***

Comment 38 Jaroslav Kortus 2013-07-25 09:02:24 UTC
status as of selinux-policy-3.7.19-207.el6.noarch:

1. the service transitions are still not active (httpd runs as pacemaker_t)

2. denials in fencing (this time fence_xvm):
----
time->Thu Jul 25 10:59:22 2013
type=SYSCALL msg=audit(1374742762.589:452): arch=c000003e syscall=42 success=no exit=-13 a0=5 a1=7fff9d171440 a2=6e a3=0 items=0 ppid=20213 pid=20214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="stonith_admin" exe="/usr/sbin/stonith_admin" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1374742762.589:452): avc:  denied  { connectto } for  pid=20214 comm="stonith_admin" path=0073746F6E6974682D6E6700000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:system_r:pacemaker_t:s0 tclass=unix_stream_socket
----
time->Thu Jul 25 10:59:28 2013
type=SYSCALL msg=audit(1374742768.695:456): arch=c000003e syscall=109 success=yes exit=0 a0=4f0c a1=0 a2=e9c620 a3=6 items=0 ppid=6564 pid=20236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fence_pcmk" exe="/usr/bin/perl" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1374742768.695:456): avc:  denied  { setpgid } for  pid=20236 comm="fence_pcmk" scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:system_r:fenced_t:s0 tclass=process
----
time->Thu Jul 25 10:59:28 2013
type=SYSCALL msg=audit(1374742768.702:457): arch=c000003e syscall=42 success=yes exit=0 a0=5 a1=7fff760b9530 a2=6e a3=0 items=0 ppid=20236 pid=20237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="stonith_admin" exe="/usr/sbin/stonith_admin" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1374742768.702:457): avc:  denied  { connectto } for  pid=20237 comm="stonith_admin" path=0073746F6E6974682D6E6700000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:system_r:pacemaker_t:s0 tclass=unix_stream_socket
----
time->Thu Jul 25 10:59:28 2013
type=SYSCALL msg=audit(1374742768.709:458): arch=c000003e syscall=2 success=yes exit=8 a0=7fff760b8560 a1=2 a2=180 a3=7fff760b8590 items=0 ppid=20236 pid=20237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="stonith_admin" exe="/usr/sbin/stonith_admin" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1374742768.709:458): avc:  denied  { open } for  pid=20237 comm="stonith_admin" name="qb-stonith-ng-control-6801-20237-17" dev=tmpfs ino=81545 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:object_r:tmpfs_t:s0 tclass=file
type=AVC msg=audit(1374742768.709:458): avc:  denied  { read write } for  pid=20237 comm="stonith_admin" name="qb-stonith-ng-control-6801-20237-17" dev=tmpfs ino=81545 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:object_r:tmpfs_t:s0 tclass=file
----
time->Thu Jul 25 10:59:30 2013
type=SYSCALL msg=audit(1374742770.926:459): arch=c000003e syscall=87 success=yes exit=0 a0=14b5558 a1=18 a2=0 a3=1 items=0 ppid=20236 pid=20237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="stonith_admin" exe="/usr/sbin/stonith_admin" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1374742770.926:459): avc:  denied  { unlink } for  pid=20237 comm="stonith_admin" name="qb-stonith-ng-control-6801-20237-17" dev=tmpfs ino=81545 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:object_r:tmpfs_t:s0 tclass=file

# getsebool -a | grep fence
fenced_can_network_connect --> on
fenced_can_ssh --> off
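For triage, raw records like the above can be reduced to the tuples that matter for policy writing (on a live system one would normally feed `ausearch -m avc` into audit2allow; the sed one-liner below is just an illustration, with the long path= field of the sample line trimmed):

```shell
# Summarise an AVC record as: scontext -> tcontext : class { permissions }
avc='type=AVC msg=audit(1374742762.589:452): avc:  denied  { connectto } for  pid=20214 comm="stonith_admin" scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:system_r:pacemaker_t:s0 tclass=unix_stream_socket'
summary=$(echo "$avc" | sed -n 's/.*denied  { \([^}]*\)} for .*scontext=\([^ ]*\) tcontext=\([^ ]*\) tclass=\([^ ]*\).*/\2 -> \3 : \4 { \1}/p')
echo "$summary"
```

Applied to the whole log, this makes it obvious that every denial here is fenced_t acting on pacemaker_t sockets or cluster tmpfs files.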

Comment 39 Miroslav Grepl 2013-07-25 11:08:59 UTC
Jaroslav,
selinux-policy-3.7.19-207.el6.noarch does not contain the changes. Please test with

https://brewweb.devel.redhat.com/buildinfo?buildID=282657

Comment 40 Jaroslav Kortus 2013-07-25 13:45:34 UTC
during upgrade to selinux-policy-3.7.19-209.el6.noarch:

libsepol.scope_copy_callback: rhcs: Duplicate declaration in module: type/attribute rgmanager_var_lib_t (No such file or directory).
libsemanage.semanage_link_sandbox: Link packages failed (No such file or directory).

on fence event:
----
time->Thu Jul 25 15:44:16 2013
type=SYSCALL msg=audit(1374759856.762:72): arch=c000003e syscall=42 success=yes exit=0 a0=5 a1=7fffd8bab3d0 a2=6e a3=0 items=0 ppid=3400 pid=3401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="stonith_admin" exe="/usr/sbin/stonith_admin" subj=system_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1374759856.762:72): avc:  denied  { connectto } for  pid=3401 comm="stonith_admin" path=0073746F6E6974682D6E6700000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 scontext=system_u:system_r:fenced_t:s0 tcontext=system_u:system_r:pacemaker_t:s0 tclass=unix_stream_socket
----
time->Thu Jul 25 15:44:16 2013
type=SYSCALL msg=audit(1374759856.752:71): arch=c000003e syscall=109 success=yes exit=0 a0=d48 a1=0 a2=c83620 a3=6 items=0 ppid=2915 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="fence_pcmk" exe="/usr/bin/perl" subj=system_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1374759856.752:71): avc:  denied  { setpgid } for  pid=3400 comm="fence_pcmk" scontext=system_u:system_r:fenced_t:s0 tcontext=system_u:system_r:fenced_t:s0 tclass=process
----
time->Thu Jul 25 15:44:16 2013
type=SYSCALL msg=audit(1374759856.769:73): arch=c000003e syscall=2 success=yes exit=8 a0=7fffd8baa400 a1=2 a2=180 a3=7fffd8baa430 items=0 ppid=3400 pid=3401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="stonith_admin" exe="/usr/sbin/stonith_admin" subj=system_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1374759856.769:73): avc:  denied  { open } for  pid=3401 comm="stonith_admin" name="qb-stonith-ng-control-3153-3401-17" dev=tmpfs ino=20748 scontext=system_u:system_r:fenced_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=file
type=AVC msg=audit(1374759856.769:73): avc:  denied  { read write } for  pid=3401 comm="stonith_admin" name="qb-stonith-ng-control-3153-3401-17" dev=tmpfs ino=20748 scontext=system_u:system_r:fenced_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=file
----
time->Thu Jul 25 15:44:19 2013
type=SYSCALL msg=audit(1374759859.814:74): arch=c000003e syscall=87 success=yes exit=0 a0=791558 a1=18 a2=0 a3=1 items=0 ppid=3400 pid=3401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="stonith_admin" exe="/usr/sbin/stonith_admin" subj=system_u:system_r:fenced_t:s0 key=(null)
type=AVC msg=audit(1374759859.814:74): avc:  denied  { unlink } for  pid=3401 comm="stonith_admin" name="qb-stonith-ng-control-3153-3401-17" dev=tmpfs ino=20748 scontext=system_u:system_r:fenced_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=file

Comment 41 Miroslav Grepl 2013-07-25 15:01:07 UTC
Then something is wrong with the upgrade.

Milos,
are you getting it also with the latest release?

Comment 42 Milos Malik 2013-07-29 08:07:09 UTC
No error messages seen when upgrading from -207.el6 to -209.el6:

# rpm -qa selinux\*
selinux-policy-targeted-3.7.19-207.el6.noarch
selinux-policy-mls-3.7.19-207.el6.noarch
selinux-policy-3.7.19-207.el6.noarch
selinux-policy-doc-3.7.19-207.el6.noarch
selinux-policy-minimum-3.7.19-207.el6.noarch
# rpm -Uvh http://download.devel.redhat.com/brewroot/packages/selinux-policy/3.7.19/209.el6/noarch/selinux-policy-{,doc-,minimum-,mls-,targeted-}3.7.19-209.el6.noarch.rpm
Retrieving http://download.devel.redhat.com/brewroot/packages/selinux-policy/3.7.19/209.el6/noarch/selinux-policy-3.7.19-209.el6.noarch.rpm
Retrieving http://download.devel.redhat.com/brewroot/packages/selinux-policy/3.7.19/209.el6/noarch/selinux-policy-doc-3.7.19-209.el6.noarch.rpm
Retrieving http://download.devel.redhat.com/brewroot/packages/selinux-policy/3.7.19/209.el6/noarch/selinux-policy-minimum-3.7.19-209.el6.noarch.rpm
Retrieving http://download.devel.redhat.com/brewroot/packages/selinux-policy/3.7.19/209.el6/noarch/selinux-policy-mls-3.7.19-209.el6.noarch.rpm
Retrieving http://download.devel.redhat.com/brewroot/packages/selinux-policy/3.7.19/209.el6/noarch/selinux-policy-targeted-3.7.19-209.el6.noarch.rpm
Preparing...                ########################################### [100%]
   1:selinux-policy         ########################################### [ 20%]
   2:selinux-policy-doc     ########################################### [ 40%]
   3:selinux-policy-minimum ########################################### [ 60%]
   4:selinux-policy-mls     ########################################### [ 80%]
   5:selinux-policy-targeted########################################### [100%]
#

Comment 43 Miroslav Grepl 2013-07-29 08:48:19 UTC
OK, I was finally able to get it working on cluster systems as well.

Preparing...                                                            (########################################### [100%]
   1:selinux-policy                                                     (########################################### [ 50%]
   2:selinux-policy-targeted                                            (########################################### [100%]
[root@virt-125 ~]# service pacemaker restart
Signaling Pacemaker Cluster Manager to terminate: [  OK  ]
Waiting for cluster services to unload:.[  OK  ]
Starting Pacemaker Cluster Manager: [  OK  ]
[root@virt-125 ~]# ausearch -m avc -ts recent
<no matches>
[root@virt-125 ~]# ps -eZ |grep pacema
unconfined_u:system_r:cluster_t:s0 30837 pts/1 00:00:00 pacemakerd


I needed to clean up the old installations. Now we need to start testing.

Comment 44 Miroslav Grepl 2013-08-07 11:33:25 UTC
*** Bug 814334 has been marked as a duplicate of this bug. ***

Comment 46 Fabio Massimo Di Nitto 2013-08-27 05:53:48 UTC
*** Bug 1000624 has been marked as a duplicate of this bug. ***

Comment 47 Fabio Massimo Di Nitto 2013-08-27 05:54:46 UTC
Based on feedback from bug #1000624 selinux-policy appears to be still broken.

Comment 48 Miroslav Grepl 2013-08-27 08:04:12 UTC
(In reply to Fabio Massimo Di Nitto from comment #47)
> Based on feedback from bug #1000624 selinux-policy appears to be still
> broken.

Ok, this is a different issue.

Comment 49 Jaroslav Kortus 2013-08-27 08:11:03 UTC
that's https://bugzilla.redhat.com/show_bug.cgi?id=997357 I believe.

Comment 50 Nate Straz 2013-08-27 14:58:08 UTC
While rebuilding the qarshd policy we're finding that the interfaces for rhcs_domtrans_cluster and friends are broken.

$ rpm -q selinux-policy
selinux-policy-3.7.19-213.el6.noarch
[root@gfs-node01:/usr/share/doc/qarsh-selinux-1.28]$ sh rebuild-policy.sh
make: Nothing to be done for `all'.
Avoiding: openvswitch_domtrans
Avoiding: oracleasm_initrc_domtrans
Avoiding: rhcs_domtrans_cluster
Avoiding: oracleasm_domtrans
Avoiding: antivirus_domtrans
Avoiding: rhcs_initrc_domtrans_cluster
Avoiding: puppet_domtrans_master
Saved policy rebuild logs in /tmp/qarshd-rebuild-policy.3mk2

$ cat /tmp/qarshd-rebuild-policy.3mk2/rhcs_domtrans_cluster.log
cat qarshd.te.in qarshd.te.trans > qarshd.te
Compiling targeted qarshd module
/usr/bin/checkmodule:  loading policy configuration from tmp/qarshd.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 10) to tmp/qarshd.mod
Creating targeted qarshd.pp policy package
Loading targeted modules: qarshd
libsepol.print_missing_requirements: qarshd's global requirements were not met: type/attribute cluster_exec_t (No such file or directory).
libsemanage.semanage_link_sandbox: Link packages failed (No such file or directory).
/usr/sbin/semodule:  Failed!
make: *** [tmp/loaded] Error 1
rm tmp/qarshd.mod.fc tmp/qarshd.mod

$ cat /tmp/qarshd-rebuild-policy.3mk2/rhcs_initrc_domtrans_cluster.log
cat qarshd.te.in qarshd.te.trans > qarshd.te
Compiling targeted qarshd module
/usr/bin/checkmodule:  loading policy configuration from tmp/qarshd.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 10) to tmp/qarshd.mod
Creating targeted qarshd.pp policy package
Loading targeted modules: qarshd
libsepol.print_missing_requirements: qarshd's global requirements were not met: type/attribute cluster_initrc_exec_t (No such file or directory).
libsemanage.semanage_link_sandbox: Link packages failed (No such file or directory).
/usr/sbin/semodule:  Failed!
make: *** [tmp/loaded] Error 1
rm tmp/qarshd.mod.fc tmp/qarshd.mod

Comment 53 michal novacek 2013-09-05 10:58:52 UTC
Created attachment 794188 [details]
avc denials with .213

Comment 54 michal novacek 2013-09-05 11:00:54 UTC
I checked with the latest snapshot 6.5-20130902.n.0 and selinux-policy .213: after one of the nodes is fenced, there are a lot of AVC messages.

I believe that it is the node doing the fencing that gets those messages.

Comment 55 michal novacek 2013-09-09 12:44:52 UTC
It seems to be gone in .214.

Comment 56 Miroslav Grepl 2013-09-09 13:41:53 UTC
So no more issues with .214.

Comment 57 michal novacek 2013-09-10 10:10:50 UTC
Sorry, I take back my last comment: I mixed two different problems together.

The issue still appears with selinux-policy 3.7.19-214.

Comment 58 Miroslav Grepl 2013-09-10 10:33:36 UTC
commit 7782b290811f73b158af50465e5f4cd26c711b08
Author: Miroslav Grepl <mgrepl@redhat.com>
Date:   Tue Sep 10 12:33:04 2013 +0200

    Allow setpgid and r/w cluster tmpfs for fenced_t

Comment 59 Nate Straz 2013-09-10 21:11:45 UTC
Still hitting the issues in comment #50 with selinux-policy-3.7.19-214.el6.  I don't see that selinux-policy-3.7.19-215.el6 exists.

[root@gfs-node03:/usr/share/doc/qarsh-selinux-2.00]$ rpm -q selinux-policy
selinux-policy-3.7.19-214.el6.noarch
[root@gfs-node03:/usr/share/doc/qarsh-selinux-2.00]$ sh rebuild-policy.sh
/tmp/qarshd-rebuild-policy.xA8m /usr/share/doc/qarsh-selinux-2.00
Avoiding: openvswitch_domtrans
Avoiding: oracleasm_initrc_domtrans
Avoiding: rhcs_domtrans_cluster
Avoiding: oracleasm_domtrans
Avoiding: antivirus_domtrans
Avoiding: rhcs_initrc_domtrans_cluster
Avoiding: puppet_domtrans_master
/usr/share/doc/qarsh-selinux-2.00
Saved policy rebuild logs in /tmp/qarshd-rebuild-policy.xA8m
[root@gfs-node03:/usr/share/doc/qarsh-selinux-2.00]$ cd /tmp/qarshd-rebuild-policy.xA8m
[root@gfs-node03:/tmp/qarshd-rebuild-policy.xA8m]$ cat rhcs_domtrans_cluster.log
cat qarshd.te.in qarshd.te.trans > qarshd.te
Compiling targeted qarshd module
/usr/bin/checkmodule:  loading policy configuration from tmp/qarshd.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 10) to tmp/qarshd.mod
Creating targeted qarshd.pp policy package
Loading targeted modules: qarshd
libsepol.print_missing_requirements: qarshd's global requirements were not met: type/attribute cluster_exec_t (No such file or directory).
libsemanage.semanage_link_sandbox: Link packages failed (No such file or directory).
/usr/sbin/semodule:  Failed!
make: *** [tmp/loaded] Error 1
rm tmp/qarshd.mod.fc tmp/qarshd.mod
[root@gfs-node03:/tmp/qarshd-rebuild-policy.xA8m]$ cat rhcs_initrc_domtrans_cluster.log
cat qarshd.te.in qarshd.te.trans > qarshd.te
Compiling targeted qarshd module
/usr/bin/checkmodule:  loading policy configuration from tmp/qarshd.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 10) to tmp/qarshd.mod
Creating targeted qarshd.pp policy package
Loading targeted modules: qarshd
libsepol.print_missing_requirements: qarshd's global requirements were not met: type/attribute cluster_initrc_exec_t (No such file or directory).
libsemanage.semanage_link_sandbox: Link packages failed (No such file or directory).
/usr/sbin/semodule:  Failed!
make: *** [tmp/loaded] Error 1
rm tmp/qarshd.mod.fc tmp/qarshd.mod

Comment 60 Miroslav Grepl 2013-09-11 07:47:30 UTC
I don't see these issues.

$ cat mypolicy.te 
policy_module(mypolicy,1.0)

type abc_t;
domain_type(abc_t)

optional_policy(`
	rhcs_domtrans_cluster(abc_t)
	rhcs_initrc_domtrans_cluster(abc_t)
')

# make -f /usr/share/selinux/devel/Makefile mypolicy.pp
Compiling targeted mypolicy module
/usr/bin/checkmodule:  loading policy configuration from tmp/mypolicy.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 10) to tmp/mypolicy.mod
Creating targeted mypolicy.pp policy package
rm tmp/mypolicy.mod.fc tmp/mypolicy.mod
# semodule -i mypolicy.pp 

and it works. Switching back to ON_QA.

A new build is coming today.

Comment 61 Miroslav Grepl 2013-09-17 15:36:02 UTC
*** Bug 997357 has been marked as a duplicate of this bug. ***

Comment 62 Jaroslav Kortus 2013-10-07 19:21:09 UTC
So far I've seen no AVCs with selinux-policy-3.7.19-218.el6.noarch (all daemons running in the cluster_t domain).

# ps ax -o label,euser,egroup,cmd
unconfined_u:system_r:cluster_t:s0 root  root     pacemakerd
unconfined_u:system_r:cluster_t:s0 189   root     /usr/libexec/pacemaker/cib
unconfined_u:system_r:cluster_t:s0 root  root     /usr/libexec/pacemaker/stonith
unconfined_u:system_r:cluster_t:s0 root  root     /usr/libexec/pacemaker/lrmd
unconfined_u:system_r:cluster_t:s0 189   root     /usr/libexec/pacemaker/attrd
unconfined_u:system_r:cluster_t:s0 189   root     /usr/libexec/pacemaker/pengine
unconfined_u:system_r:cluster_t:s0 root  root     /usr/libexec/pacemaker/crmd

Comment 63 Jaroslav Kortus 2013-10-17 19:13:22 UTC
As noted in https://bugzilla.redhat.com/show_bug.cgi?id=1003613, this works for me in the latest RHEL 6 snapshots.

Tested with selinux-policy-3.7.19-224.el6.noarch

Comment 64 Miroslav Grepl 2013-10-21 22:00:24 UTC
The latest fixes have been added to selinux-policy-3.7.19-226.el6.

Comment 65 Fabio Massimo Di Nitto 2013-10-22 11:13:17 UTC
Created attachment 814938 [details]
avc from node1

Comment 66 Fabio Massimo Di Nitto 2013-10-22 11:14:24 UTC
Created attachment 814939 [details]
avc from node2

Comment 67 Fabio Massimo Di Nitto 2013-10-22 11:19:30 UTC
Notes for comments #65 and #66.

[root@rhel6-node2 cluster]# rpm -q -i selinux-policy
Name        : selinux-policy               Relocations: (not relocatable)
Version     : 3.7.19                            Vendor: Red Hat, Inc.
Release     : 226.el6                       Build Date: Tue Oct 22 00:10:17 2013

Tested a cluster with qdiskd (started by the cman init script) and several virtual IP addresses, cluster fs (gfs2), clvmd, ext4 fs, nfs client, nfs server, samba server, apache and mysql.

[root@rhel6-node1 cluster]# ls -lZ /var/run/cluster
drwxr-xr-x. root  root unconfined_u:object_r:var_run_t:s0 apache
drwxr-xr-x. mysql root unconfined_u:object_r:var_run_t:s0 mysql
srw-rw----. root  root unconfined_u:object_r:cluster_var_run_t:s0 rgmanager.sk
drwxr-xr-x. root  root unconfined_u:object_r:var_run_t:s0 samba

as requested.

Even though we can see AVCs, all services appear to be operational. Sorry, I just don't know exactly what I am looking at to discern between good and bad.

selinux is of course enabled and in enforcing mode.

cluster is running cman/clvmd/rgmanager.

Comment 68 Miroslav Grepl 2013-10-22 11:34:07 UTC
(In reply to Fabio Massimo Di Nitto from comment #67)
> notes for comment #65 and #66.
> 
> [root@rhel6-node2 cluster]# rpm -q -i selinux-policy
> Name        : selinux-policy               Relocations: (not relocatable)
> Version     : 3.7.19                            Vendor: Red Hat, Inc.
> Release     : 226.el6                       Build Date: Tue Oct 22 00:10:17
> 2013
> 
> tested cluster with qdiskd (started by cman init script)
> 
> several virtual ip address, cluster fs (gfs2), clvmd, ext4 fs, nfs client,
> nfs server, samba server, apache and mysql.
> 
> [root@rhel6-node1 cluster]# ls -lZ /var/run/cluster
> drwxr-xr-x. root  root unconfined_u:object_r:var_run_t:s0 apache
> drwxr-xr-x. mysql root unconfined_u:object_r:var_run_t:s0 mysql
> srw-rw----. root  root unconfined_u:object_r:cluster_var_run_t:s0
> rgmanager.sk
> drwxr-xr-x. root  root unconfined_u:object_r:var_run_t:s0 samba
> 
> as requested.
> 

Looks OK now. The policy still needs to be updated with rules for the AVC msgs you attached.

Comment 69 Fabio Massimo Di Nitto 2013-10-22 11:34:54 UTC
Created attachment 814944 [details]
new avc with shadow config in audit.rules for node1

Comment 70 Fabio Massimo Di Nitto 2013-10-22 11:35:24 UTC
Created attachment 814945 [details]
new avc with shadow config in audit.rules for node2

Comment 71 Miroslav Grepl 2013-10-22 11:39:01 UTC
Fixes have been added; I am doing a new scratch build.

Comment 72 Fabio Massimo Di Nitto 2013-10-22 12:33:17 UTC
Created attachment 814968 [details]
new avc with shadow config in audit.rules for node1 with first 227 build

Comment 73 Fabio Massimo Di Nitto 2013-10-22 12:33:46 UTC
Created attachment 814969 [details]
new avc with shadow config in audit.rules for node2 with first 227 build

Comment 74 Fabio Massimo Di Nitto 2013-10-22 13:20:05 UTC
No AVCs with the second 227 build.

Comment 75 Fabio Massimo Di Nitto 2013-10-22 14:25:27 UTC
Created attachment 815020 [details]
pcmk avc

This is the second 227 build with cman+clvmd+pacemaker, running:

 virt-fencing   (stonith:fence_xvm):    Started rhel6-node1 
 Resource Group: mysql-group
     mysql-vip  (ocf::heartbeat:IPaddr2):       Started rhel6-node2 
     mysql-ha-lvm       (ocf::heartbeat:LVM):   Started rhel6-node2 
     mysql-fs   (ocf::heartbeat:Filesystem):    Started rhel6-node2 
     mysql-db   (ocf::heartbeat:mysql): Started rhel6-node2 
     nfs        (ocf::heartbeat:Filesystem):    Started rhel6-node2 
     webserver  (ocf::heartbeat:apache):        Started rhel6-node2 

running a virtual IP, HA-LVM, a local FS (ext4 mounted as /var/lib/mysql/), MySQL, an NFS mount and Apache.

Comment 76 Miroslav Grepl 2013-10-22 14:31:23 UTC
time->Tue Oct 22 16:20:11 2013
type=PATH msg=audit(1382451611.950:181): item=0 name="." inode=264847 dev=fd:00 mode=040750 ouid=497 ogid=494 rdev=00:00 obj=system_u:object_r:cluster_var_lib_t:s0 nametype=NORMAL
type=CWD msg=audit(1382451611.950:181):  cwd="/var/lib/pacemaker/cores"
type=SYSCALL msg=audit(1382451611.950:181): arch=c000003e syscall=4 success=yes exit=0 a0=4a3bcb a1=7fff2cd06d40 a2=7fff2cd06d40 a3=7f9242391670 items=1 ppid=1642 pid=1644 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=1 comm="mysqld_safe" exe="/bin/bash" subj=unconfined_u:system_r:mysqld_safe_t:s0 key=(null)
type=AVC msg=audit(1382451611.950:181): avc:  denied  { search } for  pid=1644 comm="mysqld_safe" name="cores" dev=dm-0 ino=264847 scontext=unconfined_u:system_r:mysqld_safe_t:s0 tcontext=system_u:object_r:cluster_var_lib_t:s0 tclass=dir


This makes sense to fix. We have this rule only for initrc domains; I will update it for all daemons.

-----

time->Tue Oct 22 16:20:11 2013
type=PATH msg=audit(1382451611.983:182): item=0 name="/var/lib/mysql/my.cnf" inode=12 dev=fd:03 mode=0100644 ouid=27 ogid=27 rdev=00:00 obj=unconfined_u:object_r:file_t:s0 nametype=NORMAL
type=CWD msg=audit(1382451611.983:182):  cwd="/var/lib/pacemaker/cores"
type=SYSCALL msg=audit(1382451611.983:182): arch=c000003e syscall=4 success=yes exit=0 a0=1941ba0 a1=7fff2cd070d0 a2=7fff2cd070d0 a3=8 items=1 ppid=1545 pid=1634 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=1 comm="mysqld_safe" exe="/bin/bash" subj=unconfined_u:system_r:mysqld_safe_t:s0 key=(null)
type=AVC msg=audit(1382451611.983:182): avc:  denied  { getattr } for  pid=1634 comm="mysqld_safe" path="/var/lib/mysql/my.cnf" dev=dm-3 ino=12 scontext=unconfined_u:system_r:mysqld_safe_t:s0 tcontext=unconfined_u:object_r:file_t:s0 tclass=file


Well, file_t means "no label", so a restorecon is needed. It looks like it was moved from another disk?
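The fix Miroslav points at is a one-shot relabel; a minimal admin sketch, assuming the mislabeled files live under /var/lib/mysql as in the AVC above:

```sh
# Inspect the current label; file_t here means the file has no label at all.
ls -Z /var/lib/mysql/my.cnf

# Relabel the tree back to the types the policy defines for these paths
# (-R recursive, -v print each change). Run as root.
restorecon -Rv /var/lib/mysql

# Afterwards the files should carry the default type for /var/lib/mysql
# (mysqld_db_t in the targeted policy).
ls -Z /var/lib/mysql/my.cnf
```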

Comment 77 Fabio Massimo Di Nitto 2013-10-22 15:09:38 UTC
Created attachment 815038 [details]
pcmk avc with third 227 build

Comment 78 Fabio Massimo Di Nitto 2013-10-22 20:01:17 UTC
Created attachment 815168 [details]
latest avc with rgmanger (227 third build)

Comment 79 Fabio Massimo Di Nitto 2013-10-23 04:49:23 UTC
Reported from the cluster list:


The heartbeat:nfsserver and heartbeat:exportfs agents appear to function correctly, but I'm seeing this denial in the logs.

type=AVC msg=audit(1382483865.449:28): avc:  denied  { write } for  pid=3855 comm="rpc.mountd" path="/tmp/tmp.l4KcZrYayN" dev=dm-0 ino=276800 scontext=unconfined_u:system_r:nfsd_t:s0 tcontext=unconfined_u:object_r:cluster_tmp_t:s0 tclass=file
type=AVC msg=audit(1382483865.449:28): avc:  denied  { write } for  pid=3855 comm="rpc.mountd" path="/tmp/tmp.l4KcZrYayN" dev=dm-0 ino=276800 scontext=unconfined_u:system_r:nfsd_t:s0 tcontext=unconfined_u:object_r:cluster_tmp_t:s0 tclass=file
type=AVC msg=audit(1382483865.784:29): avc:  denied  { write } for  pid=3910 comm="rpc.nfsd" path="/tmp/tmp.l4KcZrYayN" dev=dm-0 ino=276800 scontext=unconfined_u:system_r:nfsd_t:s0 tcontext=unconfined_u:object_r:cluster_tmp_t:s0 tclass=file
type=AVC msg=audit(1382483865.784:29): avc:  denied  { write } for  pid=3910 comm="rpc.nfsd" path="/tmp/tmp.l4KcZrYayN" dev=dm-0 ino=276800 scontext=unconfined_u:system_r:nfsd_t:s0 tcontext=unconfined_u:object_r:cluster_tmp_t:s0 tclass=file

Comment 80 Miroslav Grepl 2013-10-23 06:25:55 UTC
OK, it looks like we are not able to avoid cluster_var_run_t for PID files which are created in /var/run/cluster/<servicename>/<servicename>.pid. Also, restorecon won't help here because of

/var/run/cluster/httpd/httpd.pid	<<none>>

So I got the idea to have a boolean

daemons_enable_cluster_mode

with

optional_policy(`
 tunable_policy(`daemons_enable_cluster_mode',`
     rhcs_manage_cluster_pid_files(daemon)
     rhcs_stream_connect_cluster(daemon)
     rhcs_read_cluster_lib_files(daemon)
     rhcs_rw_inherited_cluster_tmp_files(daemon)
 ')
')

optional_policy(`
 tunable_policy(`daemons_enable_cluster_mode',`
     ccs_manage_config(daemon)
 ')
')

Comment 81 Miroslav Grepl 2013-10-23 06:31:43 UTC
And the boolean should be enabled if a cluster is installed.
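If the boolean ships under the name proposed in comment 80, checking and enabling it would be the usual getsebool/setsebool steps; a sketch (the final name and default may differ):

```sh
# Show the current value of the proposed boolean.
getsebool daemons_enable_cluster_mode

# Enable it persistently; -P writes the value into the policy store
# so it survives reboots and policy reloads.
setsebool -P daemons_enable_cluster_mode on
```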

Comment 82 Fabio Massimo Di Nitto 2013-10-23 06:57:15 UTC
(In reply to Miroslav Grepl from comment #81)
> And the boolean should be enabled if a cluster is installed.

This poses a problem at this stage of the release, though. Upgrades will break because the boolean is off, and new installs will require a manual switch.

Is it possible to automate that somehow? Can we default to on?

Comment 83 Fabio Massimo Di Nitto 2013-10-23 09:24:11 UTC
Tested with the 228 scratch build and rgmanager/pacemaker; no AVCs recorded.

Comment 84 Fabio Massimo Di Nitto 2013-10-23 12:57:37 UTC
08:56 < fabbione> mgrepl: i am satisfied with all the tests so far and the latest builds you gave to me..
08:56 < fabbione> no AVC cluster related on both rgmanager and pacemaker
08:56 < fabbione> both 6.5 and 6.4.z

Comment 85 Jaroslav Kortus 2013-10-23 18:40:53 UTC
No iSCSI issues with selinux-policy-3.7.19-228.el6.noarch.

Comment 86 Andrew Beekhof 2013-10-23 23:25:26 UTC
Problems with heartbeat:mysql

type=AVC msg=audit(1382568702.464:168): avc:  denied  { getattr } for  pid=20243 comm="mysqld" path="/var/lib/mysql" dev=0:15 ino=67040 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=AVC msg=audit(1382568702.474:169): avc:  denied  { search } for  pid=20243 comm="mysqld" name="" dev=0:15 ino=67040 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=AVC msg=audit(1382568702.475:170): avc:  denied  { write } for  pid=20243 comm="mysqld" name="" dev=0:15 ino=67040 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=AVC msg=audit(1382568702.475:170): avc:  denied  { add_name } for  pid=20243 comm="mysqld" name="corosync-host-1.lower-test" scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=AVC msg=audit(1382568702.475:170): avc:  denied  { create } for  pid=20243 comm="mysqld" name="corosync-host-1.lower-test" scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=unconfined_u:object_r:nfs_t:s0 tclass=file
type=AVC msg=audit(1382568702.475:170): avc:  denied  { open } for  pid=20243 comm="mysqld" name="corosync-host-1.lower-test" dev=0:15 ino=67242 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=file
type=AVC msg=audit(1382568702.477:171): avc:  denied  { remove_name } for  pid=20243 comm="mysqld" name="corosync-host-1.lower-test" dev=0:15 ino=67242 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=AVC msg=audit(1382568702.477:171): avc:  denied  { unlink } for  pid=20243 comm="mysqld" name="corosync-host-1.lower-test" dev=0:15 ino=67242 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=file

The agent is also complaining that the PID file (/var/run/mysql/mysqld.pid) is not present.

[root@corosync-host-1 ~]# ls -alZ /var/lib/mysql/
drwxrwxrwx. root      root      system_u:object_r:nfs_t:s0       .
drwxr-xr-x. root      root      system_u:object_r:var_lib_t:s0   ..
-rw-r--r--. root      root      system_u:object_r:nfs_t:s0       clientsharefile
-rw-rw----. mysql     mysql     system_u:object_r:nfs_t:s0       ibdata1
-rw-rw----. mysql     mysql     system_u:object_r:nfs_t:s0       ib_logfile0
-rw-rw----. mysql     mysql     system_u:object_r:nfs_t:s0       ib_logfile1
-rw-r--r--. nfsnobody nfsnobody system_u:object_r:nfs_t:s0       my.cnf
drwx------. nfsnobody nfsnobody system_u:object_r:nfs_t:s0       mysql
-rw-r-----. root      root      system_u:object_r:nfs_t:s0       .rmtab
drwx------. nfsnobody nfsnobody system_u:object_r:nfs_t:s0       test

[root@corosync-host-1 ~]# ls -alZ /var/run/mysql/
drwxr-x--x. mysql     mysql    unconfined_u:object_r:var_run_t:s0 .
drwxr-xr-x. hacluster haclient system_u:object_r:var_run_t:s0   ..

Comment 87 Andrew Beekhof 2013-10-23 23:40:02 UTC
I tracked down -228 and now I get:

type=AVC msg=audit(1382571476.454:284): avc:  denied  { open } for  pid=4302 comm="my_print_defaul" name="my.cnf" dev=0:15 ino=67263 scontext=unconfined_u:system_r:mysqld_safe_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=file
type=SYSCALL msg=audit(1382571476.454:284): arch=c000003e syscall=2 success=yes exit=3 a0=7fffc85bd940 a1=0 a2=1b6 a3=0 items=0 ppid=4273 pid=4302 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=3 comm="my_print_defaul" exe="/usr/bin/my_print_defaults" subj=unconfined_u:system_r:mysqld_safe_t:s0 key=(null)
type=AVC msg=audit(1382571476.959:285): avc:  denied  { search } for  pid=4383 comm="mysqld" name="" dev=0:15 ino=67040 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=SYSCALL msg=audit(1382571476.959:285): arch=c000003e syscall=4 success=yes exit=0 a0=7fff1a1ac020 a1=7fff1a1a8f80 a2=7fff1a1a8f80 a3=7fff1a1a8c50 items=0 ppid=4273 pid=4383 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=3 comm="mysqld" exe="/usr/libexec/mysqld" subj=unconfined_u:system_r:mysqld_t:s0 key=(null)
type=AVC msg=audit(1382571476.961:286): avc:  denied  { open } for  pid=4383 comm="mysqld" name="my.cnf" dev=0:15 ino=67263 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=file
type=SYSCALL msg=audit(1382571476.961:286): arch=c000003e syscall=2 success=yes exit=3 a0=7fff1a1ac020 a1=0 a2=1b6 a3=0 items=0 ppid=4273 pid=4383 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=3 comm="mysqld" exe="/usr/libexec/mysqld" subj=unconfined_u:system_r:mysqld_t:s0 key=(null)
type=AVC msg=audit(1382571476.963:287): avc:  denied  { getattr } for  pid=4383 comm="mysqld" path="/var/lib/mysql" dev=0:15 ino=67040 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=SYSCALL msg=audit(1382571476.963:287): arch=c000003e syscall=6 success=yes exit=0 a0=7fff1a1ab390 a1=7fff1a1ab2c0 a2=7fff1a1ab2c0 a3=7fff1a1ab110 items=0 ppid=4273 pid=4383 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=3 comm="mysqld" exe="/usr/libexec/mysqld" subj=unconfined_u:system_r:mysqld_t:s0 key=(null)
type=AVC msg=audit(1382571476.994:288): avc:  denied  { write } for  pid=4383 comm="mysqld" name="" dev=0:15 ino=67040 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=AVC msg=audit(1382571476.994:288): avc:  denied  { add_name } for  pid=4383 comm="mysqld" name="corosync-host-1.lower-test" scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=AVC msg=audit(1382571476.994:288): avc:  denied  { create } for  pid=4383 comm="mysqld" name="corosync-host-1.lower-test" scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=unconfined_u:object_r:nfs_t:s0 tclass=file
type=SYSCALL msg=audit(1382571476.994:288): arch=c000003e syscall=2 success=yes exit=3 a0=7fff1a1ac3d0 a1=42 a2=1b6 a3=7fff1a1abe90 items=0 ppid=4273 pid=4383 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=3 comm="mysqld" exe="/usr/libexec/mysqld" subj=unconfined_u:system_r:mysqld_t:s0 key=(null)
type=AVC msg=audit(1382571477.027:289): avc:  denied  { remove_name } for  pid=4383 comm="mysqld" name="corosync-host-1.lower-test" dev=0:15 ino=67271 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=AVC msg=audit(1382571477.027:289): avc:  denied  { unlink } for  pid=4383 comm="mysqld" name="corosync-host-1.lower-test" dev=0:15 ino=67271 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=file
type=SYSCALL msg=audit(1382571477.027:289): arch=c000003e syscall=87 success=yes exit=0 a0=7fff1a1ac3d0 a1=10 a2=7f72d9a40af0 a3=6c2e312d74736f68 items=0 ppid=4273 pid=4383 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=3 comm="mysqld" exe="/usr/libexec/mysqld" subj=unconfined_u:system_r:mysqld_t:s0 key=(null)
type=AVC msg=audit(1382571477.297:290): avc:  denied  { read } for  pid=4383 comm="mysqld" name="" dev=0:15 ino=67040 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=AVC msg=audit(1382571477.297:290): avc:  denied  { open } for  pid=4383 comm="mysqld" name="" dev=0:15 ino=67040 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=SYSCALL msg=audit(1382571477.297:290): arch=c000003e syscall=2 success=yes exit=10 a0=8d26d2 a1=90800 a2=ceeef0 a3=7fff1a1a6420 items=0 ppid=4273 pid=4383 auid=0 uid=27 gid=27 euid=27 suid=27 fsuid=27 egid=27 sgid=27 fsgid=27 tty=(none) ses=3 comm="mysqld" exe="/usr/libexec/mysqld" subj=unconfined_u:system_r:mysqld_t:s0 key=(null)
type=AVC msg=audit(1382571477.516:291): avc:  denied  { create } for  pid=4383 comm="mysqld" name="mysql.sock" scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=unconfined_u:object_r:nfs_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1382571477.516:291): arch=c000003e syscall=49 success=yes exit=0 a0=c a1=7fff1a1ac8d0 a2=6e a3=7fff1a1ac8cc items=0 ppid=4273 pid=4383 auid=0 uid=27 gid=27 euid=27 suid=27 fsuid=27 egid=27 sgid=27 fsgid=27 tty=(none) ses=3 comm="mysqld" exe="/usr/libexec/mysqld" subj=unconfined_u:system_r:mysqld_t:s0 key=(null)
type=AVC msg=audit(1382571477.544:292): avc:  denied  { unlink } for  pid=4383 comm="mysqld" name="mysql.sock" dev=0:15 ino=67273 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=sock_file
type=AVC msg=audit(1382571477.544:292): avc:  denied  { search } for  pid=4383 comm="mysqld" name="" dev=0:15 ino=67040 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=SYSCALL msg=audit(1382571477.544:292): arch=c000003e syscall=87 success=yes exit=0 a0=7fff1a1aec37 a1=10 a2=111f a3=111f items=0 ppid=4273 pid=4383 auid=0 uid=27 gid=27 euid=27 suid=27 fsuid=27 egid=27 sgid=27 fsgid=27 tty=(none) ses=3 comm="mysqld" exe="/usr/libexec/mysqld" subj=unconfined_u:system_r:mysqld_t:s0 key=(null)

Comment 88 Andrew Beekhof 2013-10-24 00:37:25 UTC
These are for dhcp:

[root@corosync-host-1 ~]# type=AVC msg=audit(1382574999.696:360): avc:  denied  { search } for  pid=27288 comm="dhcpd" name="" dev=0:15 ino=67040 scontext=unconfined_u:system_r:dhcpd_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=AVC msg=audit(1382574999.696:360): avc:  denied  { open } for  pid=27288 comm="dhcpd" name="dhcpd.conf" dev=0:15 ino=67699 scontext=unconfined_u:system_r:dhcpd_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=file
type=AVC msg=audit(1382574999.707:361): avc:  denied  { append } for  pid=27288 comm="dhcpd" name="dhcpd.leases" dev=dm-0 ino=136421 scontext=unconfined_u:system_r:dhcpd_t:s0 tcontext=unconfined_u:object_r:cluster_var_lib_t:s0 tclass=file
type=AVC msg=audit(1382574999.708:362): avc:  denied  { write } for  pid=27288 comm="dhcpd" name="db" dev=dm-0 ino=131799 scontext=unconfined_u:system_r:dhcpd_t:s0 tcontext=unconfined_u:object_r:cluster_var_lib_t:s0 tclass=dir
type=AVC msg=audit(1382574999.708:362): avc:  denied  { add_name } for  pid=27288 comm="dhcpd" name="dhcpd.leases.1382574999" scontext=unconfined_u:system_r:dhcpd_t:s0 tcontext=unconfined_u:object_r:cluster_var_lib_t:s0 tclass=dir
type=AVC msg=audit(1382574999.708:362): avc:  denied  { create } for  pid=27288 comm="dhcpd" name="dhcpd.leases.1382574999" scontext=unconfined_u:system_r:dhcpd_t:s0 tcontext=unconfined_u:object_r:cluster_var_lib_t:s0 tclass=file
type=AVC msg=audit(1382574999.708:362): avc:  denied  { write } for  pid=27288 comm="dhcpd" name="dhcpd.leases.1382574999" dev=dm-0 ino=136469 scontext=unconfined_u:system_r:dhcpd_t:s0 tcontext=unconfined_u:object_r:cluster_var_lib_t:s0 tclass=file
type=AVC msg=audit(1382574999.718:363): avc:  denied  { link } for  pid=27288 comm="dhcpd" name="dhcpd.leases" dev=dm-0 ino=136421 scontext=unconfined_u:system_r:dhcpd_t:s0 tcontext=unconfined_u:object_r:cluster_var_lib_t:s0 tclass=file
type=AVC msg=audit(1382574999.718:364): avc:  denied  { remove_name } for  pid=27288 comm="dhcpd" name="dhcpd.leases.1382574999" dev=dm-0 ino=136469 scontext=unconfined_u:system_r:dhcpd_t:s0 tcontext=unconfined_u:object_r:cluster_var_lib_t:s0 tclass=dir
type=AVC msg=audit(1382574999.718:364): avc:  denied  { rename } for  pid=27288 comm="dhcpd" name="dhcpd.leases.1382574999" dev=dm-0 ino=136469 scontext=unconfined_u:system_r:dhcpd_t:s0 tcontext=unconfined_u:object_r:cluster_var_lib_t:s0 tclass=file
type=AVC msg=audit(1382574999.718:364): avc:  denied  { unlink } for  pid=27288 comm="dhcpd" name="dhcpd.leases" dev=dm-0 ino=136421 scontext=unconfined_u:system_r:dhcpd_t:s0 tcontext=unconfined_u:object_r:cluster_var_lib_t:s0 tclass=file

Comment 89 Fabio Massimo Di Nitto 2013-10-24 03:53:10 UTC
(In reply to Andrew Beekhof from comment #87)
> I tracked down -228 and now I get:
> 
> type=AVC msg=audit(1382571476.454:284): avc:  denied  { open } for  pid=4302
> comm="my_print_defaul" name="my.cnf" dev=0:15 ino=67263
> scontext=unconfined_u:system_r:mysqld_safe_t:s0
> tcontext=system_u:object_r:nfs_t:s0 tclass=file
> type=SYSCALL msg=audit(1382571476.454:284): arch=c000003e syscall=2
> success=yes exit=3 a0=7fffc85bd940 a1=0 a2=1b6 a3=0 items=0 ppid=4273
> pid=4302 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0
> tty=(none) ses=3 comm="my_print_defaul" exe="/usr/bin/my_print_defaults"
> subj=unconfined_u:system_r:mysqld_safe_t:s0 key=(null)

Looks like you have /var/lib/mysql mounted via NFS? If so, you probably didn't run restorecon on the mount point. I saw some of those too when using an ext4 mount point. This is an admin task to perform, or eventually we need to fix the Filesystem RA to call restorecon on the mount.

I didn't see any AVCs with mysql when NOT mounting an external /var/lib/mysql.

Comment 90 Andrew Beekhof 2013-10-24 04:43:32 UTC
MailTo agent produces:

type=AVC msg=audit(1382589665.894:389): avc:  denied  { write } for  pid=8885 comm="postdrop" path="pipe:[543839]" dev=pipefs ino=543839 scontext=unconfined_u:system_r:postfix_postdrop_t:s0 tcontext=unconfined_u:system_r:cluster_t:s0 tclass=fifo_file
type=AVC msg=audit(1382589665.952:390): avc:  denied  { getattr } for  pid=8885 comm="postdrop" path="pipe:[543839]" dev=pipefs ino=543839 scontext=unconfined_u:system_r:postfix_postdrop_t:s0 tcontext=unconfined_u:system_r:cluster_t:s0 tclass=fifo_file

Comment 91 Andrew Beekhof 2013-10-24 04:44:28 UTC
(In reply to Fabio Massimo Di Nitto from comment #89)
> (In reply to Andrew Beekhof from comment #87)
> > I tracked down -228 and now I get:
> > 
> > type=AVC msg=audit(1382571476.454:284): avc:  denied  { open } for  pid=4302
> > comm="my_print_defaul" name="my.cnf" dev=0:15 ino=67263
> > scontext=unconfined_u:system_r:mysqld_safe_t:s0
> > tcontext=system_u:object_r:nfs_t:s0 tclass=file
> > type=SYSCALL msg=audit(1382571476.454:284): arch=c000003e syscall=2
> > success=yes exit=3 a0=7fffc85bd940 a1=0 a2=1b6 a3=0 items=0 ppid=4273
> > pid=4302 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0
> > tty=(none) ses=3 comm="my_print_defaul" exe="/usr/bin/my_print_defaults"
> > subj=unconfined_u:system_r:mysqld_safe_t:s0 key=(null)
> 
> Looks like you have /var/lib/mysql mounted via nfs? If so you probably
> didn't restorecon on the mount point.

Sounds like something the Filesystem agent should be doing?

Comment 92 Fabio Massimo Di Nitto 2013-10-24 06:49:51 UTC
(In reply to Andrew Beekhof from comment #91)
> (In reply to Fabio Massimo Di Nitto from comment #89)
> > (In reply to Andrew Beekhof from comment #87)
> > > I tracked down -228 and now I get:
> > > 
> > > type=AVC msg=audit(1382571476.454:284): avc:  denied  { open } for  pid=4302
> > > comm="my_print_defaul" name="my.cnf" dev=0:15 ino=67263
> > > scontext=unconfined_u:system_r:mysqld_safe_t:s0
> > > tcontext=system_u:object_r:nfs_t:s0 tclass=file
> > > type=SYSCALL msg=audit(1382571476.454:284): arch=c000003e syscall=2
> > > success=yes exit=3 a0=7fffc85bd940 a1=0 a2=1b6 a3=0 items=0 ppid=4273
> > > pid=4302 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0
> > > tty=(none) ses=3 comm="my_print_defaul" exe="/usr/bin/my_print_defaults"
> > > subj=unconfined_u:system_r:mysqld_safe_t:s0 key=(null)
> > 
> > Looks like you have /var/lib/mysql mounted via nfs? If so you probably
> > didn't restorecon on the mount point.
> 
> Sounds like something the Filesystem agent should be doing?

Yes, I agree, but operationally speaking for 6.5 we can live without it and make sure the user does it.

What we need to understand is whether it's safe to call restorecon on a filesystem that can be mounted by different machines. What if nodeA has policy version X and nodeB has policy version Y (after an upgrade for a policy bugfix/change), then what happens?

I am not entirely sure yet that automation is the right answer to this problem.

One way would be to track the SELinux policy version as a cluster attribute and relabel only on the node with the latest version, then build a constraint that allows the FS to be mounted only on nodes with the same version. IMHO it gets hairy and complicated.

Comment 93 Jaroslav Kortus 2013-10-24 08:11:09 UTC
IMHO this is 100% an admin task. We can't guess the correct contexts or relabel filesystems that are part of the service. Users can have their own policy and/or mount the filesystem with the correct labels directly.

Our guesses at how to relabel some (sub)trees automatically would produce more pain than good. Confined services are documented (in selinux-devel) and reachable via man <service>_selinux (httpd_selinux, for instance).
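One way to "mount the filesystem with correct labels directly" is the context= mount option, which stamps a single context on the whole mount at mount time so no relabelling is needed; a sketch for an NFS-backed MySQL data directory (server path and target type are illustrative):

```sh
# Every file on the mount appears with the given context; nothing on the
# server is relabeled and restorecon is not needed.
mount -t nfs -o context="system_u:object_r:mysqld_db_t:s0" \
    nfsserver:/export/mysql /var/lib/mysql
```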

Comment 94 Andrew Beekhof 2013-10-26 02:42:35 UTC
(In reply to Jaroslav Kortus from comment #93)
> IMHO this is 100% admin task.

I'm told that the relabelling only needs to happen once, not every time we mount the directory.  If this is indeed the case, then I agree.

Although it might be nice if the Filesystem agent had it as an option (off by default).

Is there a new build for the DHCP ones?

Comment 99 Robert Scheck 2013-11-05 11:12:34 UTC
Why is postfix (when using lsb:postfix in pacemaker) started with pacemaker_t
instead of postfix_pickup_t and postfix_qmgr_t like here:

system_u:system_r:pacemaker_t:s0 postfix  3454  0.0  0.0  80944  3272 ?        S    12:08   0:00 pickup -l -t fifo -u
system_u:system_r:pacemaker_t:s0 postfix  3455  0.0  0.0  81116  3320 ?        S    12:08   0:00 qmgr -l -t fifo -u

This is selinux-policy-targeted-3.7.19-195.el6_4.18.noarch.

Comment 100 Robert Scheck 2013-11-05 11:21:21 UTC
From my point of view the new SELinux policy does not solve this at all: every
process that is started via pacemaker still runs under pacemaker_t?! So why is
there no proper transition into their domains? Some of the processes in the
output below normally run as unconfined_t, but Samba, PostgreSQL, Postfix and
Zarafa definitely have their own domains.

system_u:system_r:pacemaker_t:s0 root     3016  0.0  0.0  80404  2960 ?        S    12:08   0:00 pacemakerd
system_u:system_r:pacemaker_t:s0 498      3044  0.2  0.0  89620 10128 ?        Ss   12:08   0:01 /usr/libexec/pacemaker/cib
system_u:system_r:pacemaker_t:s0 root     3045  0.0  0.0  82352  4184 ?        Ss   12:08   0:00 /usr/libexec/pacemaker/stonithd
system_u:system_r:pacemaker_t:s0 root     3046  0.0  0.0  73260  3180 ?        Ss   12:08   0:00 /usr/libexec/pacemaker/lrmd
system_u:system_r:pacemaker_t:s0 498      3047  0.0  0.0  85916  3128 ?        Ss   12:08   0:00 /usr/libexec/pacemaker/attrd
system_u:system_r:pacemaker_t:s0 498      3048  0.0  0.0  80924  2516 ?        Ss   12:08   0:00 /usr/libexec/pacemaker/pengine
system_u:system_r:pacemaker_t:s0 root     3049  0.0  0.0 103612  5700 ?        Ss   12:08   0:00 /usr/libexec/pacemaker/crmd
system_u:system_r:pacemaker_t:s0 root     4460  0.0  0.0 183544  3476 ?        Sl   12:15   0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
system_u:system_r:pacemaker_t:s0 root     4472  1.4  0.0 114768   928 ?        Ss   12:15   0:00 crond
system_u:system_r:pacemaker_t:s0 root     4551  0.0  0.0 108464   968 ?        S    12:15   0:00 /bin/sh /etc/init.d/fsupdate start
system_u:system_r:pacemaker_t:s0 root     4552  0.0  0.0   4052  1844 ?        S    12:15   0:00 /opt/f-secure/fssp/libexec/fsupdated -f
system_u:system_r:pacemaker_t:s0 root     4614  0.0  0.0 108464   996 ?        S    12:15   0:00 /bin/sh /etc/init.d/fspms start
system_u:system_r:pacemaker_t:s0 fspms    4617  108  3.0 4136068 491456 ?      Sl   12:15   0:24 /opt/f-secure/fspms/jre/bin/java -server [...]
system_u:system_r:pacemaker_t:s0 postgres 4649  2.5  0.0 215836  5748 ?        S    12:15   0:00 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
system_u:system_r:pacemaker_t:s0 root     4668  0.0  0.0 108200  1556 ?        S    12:15   0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --socket=/var/lib/mysql/mysql.sock --pid-file=/var/run/mysqld/mysqld.pid --basedir=/usr --user=mysql
system_u:system_r:pacemaker_t:s0 fsaua    4702  0.9  0.0   3396  2020 ?        Ss   12:15   0:00 /opt/f-secure/fsaua/bin/fsaua
system_u:system_r:pacemaker_t:s0 root     4817  0.0  0.0  80864  3316 ?        Ss   12:15   0:00 /usr/libexec/postfix/master
system_u:system_r:pacemaker_t:s0 postfix  4819  0.0  0.0  80944  3280 ?        S    12:15   0:00 pickup -l -t fifo -u
system_u:system_r:pacemaker_t:s0 postfix  4820  0.0  0.0  81116  3328 ?        S    12:15   0:00 qmgr -l -t fifo -u
system_u:system_r:pacemaker_t:s0 root     4828  0.0  0.0 174048  2064 ?        Ss   12:15   0:00 nmbd -D
system_u:system_r:pacemaker_t:s0 root     4829  0.0  0.0 173768  1228 ?        S    12:15   0:00 nmbd -D
system_u:system_r:pacemaker_t:s0 mysql    4855  1.4  1.2 2383968 203924 ?      Sl   12:15   0:00 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
system_u:system_r:pacemaker_t:s0 fspms    4858  0.0  0.0 130244  2780 ?        S    12:15   0:00 /usr/bin/perl /opt/f-secure/fsaus/bin/fsaus -c /etc/opt/f-secure/fsaus/conf/server.cfg
system_u:system_r:pacemaker_t:s0 fspms    4860  0.0  0.0 108196  1356 ?        S    12:15   0:00 sh -c /opt/f-secure/fsaus/bin/bwserver -c /etc/opt/f-secure/fsaus/conf/server.cfg  >/dev/null 2>&1 
system_u:system_r:pacemaker_t:s0 fspms    4862  0.0  0.0 586420  5428 ?        Sl   12:15   0:00 /opt/f-secure/fsaus/bin/bwserver -c /etc/opt/f-secure/fsaus/conf/server.cfg
system_u:system_r:pacemaker_t:s0 root     4873  0.0  0.0 494484  4580 ?        Sl   12:15   0:00 /usr/bin/zarafa-licensed -c /etc/zarafa/licensed.cfg
system_u:system_r:pacemaker_t:s0 root     4876  0.0  0.0 212060  3444 ?        Ss   12:15   0:00 smbd -D
system_u:system_r:pacemaker_t:s0 root     4890  0.0  0.0 212060  1648 ?        S    12:15   0:00 smbd -D
system_u:system_r:pacemaker_t:s0 postgres 4981  0.0  0.0 178848  1264 ?        Ss   12:15   0:00 postgres: logger process                          
system_u:system_r:pacemaker_t:s0 postgres 4983  0.0  0.0 215836  1524 ?        Ss   12:15   0:00 postgres: writer process                          
system_u:system_r:pacemaker_t:s0 postgres 4984  0.0  0.0 215836  1460 ?        Ss   12:15   0:00 postgres: wal writer process                      
system_u:system_r:pacemaker_t:s0 postgres 4985  0.0  0.0 215972  1704 ?        Ss   12:15   0:00 postgres: autovacuum launcher process             
system_u:system_r:pacemaker_t:s0 postgres 4986  0.0  0.0 178980  1444 ?        Ss   12:15   0:00 postgres: stats collector process                 
system_u:system_r:pacemaker_t:s0 firebird 5079  0.0  0.0  25768   788 ?        S    12:15   0:00 /usr/sbin/fbguard -pidfile /var/run/firebird/default.pid -daemon -forever
system_u:system_r:pacemaker_t:s0 firebird 5080  0.0  0.0 102100  5644 ?        Sl   12:15   0:00 /usr/sbin/fbserver
system_u:system_r:pacemaker_t:s0 root     5097  0.1  0.0 380584 12212 ?        Sl   12:15   0:00 /usr/bin/zarafa-server -c /etc/zarafa/server.cfg
system_u:system_r:pacemaker_t:s0 root     5124  0.0  0.0 406116  6488 ?        Sl   12:15   0:00 /usr/bin/zarafa-monitor -c /etc/zarafa/monitor.cfg
system_u:system_r:pacemaker_t:s0 root     5137  0.0  0.0 370908  7400 ?        Sl   12:15   0:00 /usr/bin/zarafa-search -c /etc/zarafa/search.cfg
system_u:system_r:pacemaker_t:s0 root     5157  0.0  0.0 225192  1940 ?        S    12:15   0:00 /usr/bin/zarafa-dagent -d -c /etc/zarafa/dagent.cfg
system_u:system_r:pacemaker_t:s0 root     5160  0.0  0.0 302696  5360 ?        Sl   12:15   0:00 /usr/bin/zarafa-spooler -c /etc/zarafa/spooler.cfg
system_u:system_r:pacemaker_t:s0 root     5164  0.0  0.0 206372  1620 ?        S    12:15   0:00 /usr/bin/zarafa-gateway -c /etc/zarafa/gateway.cfg
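For reference, the transition Robert is asking about would be expressed in the policy with domain-transition rules; an illustrative refpolicy-style sketch for one service (not the actual RHEL 6 policy source):

```
# cluster.te (sketch): when a process in cluster_t (aliased as pacemaker_t)
# executes the mysqld entrypoint, transition it into mysqld_t instead of
# letting it stay in the cluster domain.
optional_policy(`
    mysql_domtrans(cluster_t)
')
```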

Comment 101 Robert Scheck 2013-11-05 11:24:27 UTC
Additionally noticed:

$ yum update
[...]
  Updating   : selinux-policy-3.7.19-195.el6_4.18.noarch            26/92 
  Updating   : selinux-policy-targeted-3.7.19-195.el6_4.18.noarch   27/92 
libsepol.scope_copy_callback: rhcs: Duplicate declaration in module: type/attribute rgmanager_var_lib_t (No such file or directory).
libsemanage.semanage_link_sandbox: Link packages failed (No such file or directory).
semodule:  Failed!
[...]
Complete!
$

Comment 102 Miroslav Grepl 2013-11-05 12:01:39 UTC
It should work, and you should no longer see services running as pacemaker_t.

The problem is likely with your upgrade. Could you try re-installing the policy packages? Which policy version did you upgrade from?

Comment 103 Robert Scheck 2013-11-05 15:02:51 UTC
I upgraded here from selinux-policy-targeted-3.7.19-195.el6_4.12.noarch to
selinux-policy-targeted-3.7.19-195.el6_4.18.noarch. However, even after
running "yum reinstall selinux-policy selinux-policy-targeted" I still see
the same failure as in comment #101. What else can I do? Or did you mean
something else by "try to re-install the policy"?

Comment 104 Miroslav Grepl 2013-11-11 12:09:30 UTC
Robert,
do you have any local policy modifications?

I cannot reproduce the failure here:

Running Transaction
  Updating   : selinux-policy-3.7.19-195.el6_4.18.noarch      1/8 
  Updating   : selinux-policy-mls-3.7.19-195.el6_4.18.noarch    2/8 
  Updating   : selinux-policy-targeted-3.7.19-195.el6_4.18.no   3/8 
  Updating   : selinux-policy-minimum-3.7.19-195.el6_4.18.noa   4/8 
  Cleanup    : selinux-policy-minimum-3.7.19-195.el6_4.12.noa   5/8 
  Cleanup    : selinux-policy-targeted-3.7.19-195.el6_4.12.no   6/8 
  Cleanup    : selinux-policy-mls-3.7.19-195.el6_4.12.noarch    7/8 
  Cleanup    : selinux-policy-3.7.19-195.el6_4.12.noarch        8/8 
Installed products updated.
  Verifying  : selinux-policy-3.7.19-195.el6_4.18.noarch        1/8 
  Verifying  : selinux-policy-mls-3.7.19-195.el6_4.18.noarch    2/8 
  Verifying  : selinux-policy-targeted-3.7.19-195.el6_4.18.no   3/8 
  Verifying  : selinux-policy-minimum-3.7.19-195.el6_4.18.noa   4/8 
  Verifying  : selinux-policy-mls-3.7.19-195.el6_4.12.noarch    5/8 
  Verifying  : selinux-policy-targeted-3.7.19-195.el6_4.12.no   6/8 
  Verifying  : selinux-policy-3.7.19-195.el6_4.12.noarch        7/8 
  Verifying  : selinux-policy-minimum-3.7.19-195.el6_4.12.noa   8/8

Comment 105 Robert Scheck 2013-11-11 12:26:20 UTC
Yes, we have some local modifications. I've attached them to Red Hat portal
ticket 00977044 and sent them to you via e-mail (I am not sure whether you
have access to the customer portal). But I do not see anything relevant in
them. Feel free to correct me.

Comment 106 Miroslav Grepl 2013-11-18 09:16:29 UTC
Robert,
this is caused by your local policy. 

Please try to remove your local policy and re-install the packages.
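
A minimal sketch of the difference, in case others hit the same duplicate-declaration failure (the module name "localrhcs" is hypothetical; list your own modules with `semodule -l`):

```
# Disabling only deactivates the module; its type declarations stay in the
# module store and still collide with the updated base policy on upgrade.
$ semodule -d localrhcs     # NOT sufficient here

# Removing deletes the module from the store, resolving the clash.
$ semodule -r localrhcs
$ yum reinstall selinux-policy selinux-policy-targeted
```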

Comment 107 errata-xmlrpc 2013-11-21 10:16:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1598.html

Comment 108 Robert Scheck 2013-11-22 12:06:06 UTC
Miroslav, you are right. The trick was to remove our own policy module, not
just to disable it (semodule -r vs. semodule -d). For others who find this
RHBZ via this error message: be aware that, for example, pacemaker_t was
renamed to cluster_t and pacemaker_var_lib_t to cluster_var_lib_t. There
may be further such renames affecting individual local policies.
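
A quick way to update such stale type names in a local module source is a sketch like the following (the file name local.te and the rule lines are illustrative, not from this bug; the longer type name must be substituted first so "pacemaker_t" does not clobber "pacemaker_var_lib_t"):

```shell
# Illustrative local.te content; in practice this is your existing module source.
TE=local.te
printf 'allow myapp_t pacemaker_var_lib_t:dir search;\nallow myapp_t pacemaker_t:process signull;\n' > "$TE"

# Substitute the renamed types, longest name first, in place.
sed -i -e 's/pacemaker_var_lib_t/cluster_var_lib_t/g' \
       -e 's/pacemaker_t/cluster_t/g' "$TE"
cat "$TE"
```

After editing, the module would be rebuilt and reloaded with checkmodule/semodule_package/semodule as usual.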
