Bug 692571 - selinux policies do not allow cluster to run
Summary: selinux policies do not allow cluster to run
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: selinux-policy
Version: 6.1
Hardware: All
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Assignee: Miroslav Grepl
QA Contact: Milos Malik
URL:
Whiteboard:
Depends On:
Blocks: 693725
 
Reported: 2011-03-31 15:28 UTC by Corey Marthaler
Modified: 2012-11-23 21:07 UTC
CC List: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 693725 (view as bug list)
Environment:
Last Closed: 2011-05-19 12:27:23 UTC
Target Upstream Version:


Attachments
cman init script patch (284 bytes, patch)
2011-04-05 08:37 UTC, Miroslav Grepl


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2011:0526 0 normal SHIPPED_LIVE selinux-policy bug fix and enhancement update 2011-05-19 09:37:41 UTC

Description Corey Marthaler 2011-03-31 15:28:26 UTC
Description of problem:

[root@grant-01 ~]# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman... corosync died: Could not read cluster configuration Check cluster logs for details
                                                           [FAILED]


Mar 31 10:19:03 grant-01 corosync[2034]: parse error in config: parse error in config: .
Mar 31 10:19:03 grant-01 corosync[2034]:   [MAIN  ] Corosync Cluster Engine ('1.2.3'): started and ready to provide service.
Mar 31 10:19:03 grant-01 corosync[2034]:   [MAIN  ] Corosync built-in features: nss dbus rdma snmp
Mar 31 10:19:03 grant-01 corosync[2034]:   [MAIN  ] Successfully read config from /etc/cluster/cluster.conf
Mar 31 10:19:03 grant-01 corosync[2034]:   [MAIN  ] Successfully parsed cman config
Mar 31 10:19:03 grant-01 corosync[2034]:   [MAIN  ] parse error in config: parse error in config: .
Mar 31 10:19:03 grant-01 corosync[2034]:   [MAIN  ] Corosync Cluster Engine exiting with status 8 at main.c:1651.



[root@grant-01 ~]# xmllint --relaxng /usr/share/cluster/cluster.rng /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="2" name="GRANT">
  <dlm log_debug="1"/>
  <cman>
                </cman>
  <fence_daemon clean_start="0" post_join_delay="30"/>
  <clusternodes>
    <clusternode name="grant-01" nodeid="1">
      <fence>
        <method name="IPMI">
          <device name="grant-01-ipmi"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="grant-02" nodeid="2">
      <fence>
        <method name="IPMI">
          <device name="grant-02-ipmi"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="grant-03" nodeid="3">
      <fence>
        <method name="APCEE">
          <device name="apc1" port="5" switch="1"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" ipaddr="grant-01-ipmi" login="root" name="grant-01-ipmi" passwd="password"/>
    <fencedevice agent="fence_ipmilan" ipaddr="grant-02-ipmi" login="root" name="grant-02-ipmi" passwd="password"/>
    <fencedevice agent="fence_apc" ipaddr="link-apc" login="apc" name="apc1" passwd="apc"/>
  </fencedevices>
</cluster>
/etc/cluster/cluster.conf validates



Version-Release number of selected component (if applicable):
[root@grant-01 ~]# uname -ar
Linux grant-01 2.6.32-125.el6.x86_64 #1 SMP Mon Mar 21 10:06:08 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
[root@grant-01 ~]# rpm -q corosync
corosync-1.2.3-31.el6.x86_64



How reproducible:
Often

Comment 1 Corey Marthaler 2011-03-31 16:10:14 UTC
Looks like it's dying during the initial startup attempt, and the subsequent parse errors may just be symptoms of it already being dead.

Mar 31 11:04:56 grant-01 qarshd[2152]: Running cmdline: service cman start 2>&1
Mar 31 11:04:56 grant-01 corosync[2190]:   [MAIN  ] Corosync Cluster Engine ('1.2.3'): started and ready to provide service.
Mar 31 11:04:56 grant-01 corosync[2190]:   [MAIN  ] Corosync built-in features: nss dbus rdma snmp
Mar 31 11:04:56 grant-01 corosync[2190]:   [MAIN  ] Successfully read config from /etc/cluster/cluster.conf
Mar 31 11:04:56 grant-01 corosync[2190]:   [MAIN  ] Successfully parsed cman config
Mar 31 11:04:56 grant-01 corosync[2190]:   [TOTEM ] Initializing transport (UDP/IP Multicast).
Mar 31 11:04:56 grant-01 corosync[2190]:   [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Mar 31 11:04:56 grant-01 corosync[2190]:   [TOTEM ] The network interface [10.15.89.151] is now up.
Mar 31 11:04:57 grant-01 corosync[2190]:   [QUORUM] Using quorum provider quorum_cman
Mar 31 11:04:57 grant-01 corosync[2190]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Mar 31 11:04:57 grant-01 corosync[2190]:   [CMAN  ] CMAN 3.0.12 (built Mar 22 2011 05:32:49) started
Mar 31 11:04:57 grant-01 corosync[2190]:   [SERV  ] Service engine loaded: corosync CMAN membership service 2.90
Mar 31 11:04:57 grant-01 corosync[2190]:   [SERV  ] Service engine loaded: openais checkpoint service B.01.01
Mar 31 11:04:57 grant-01 corosync[2190]:   [SERV  ] Service engine loaded: corosync extended virtual synchrony service
Mar 31 11:04:57 grant-01 corosync[2190]:   [SERV  ] Service engine loaded: corosync configuration service
Mar 31 11:04:57 grant-01 corosync[2190]:   [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01
Mar 31 11:04:57 grant-01 corosync[2190]:   [SERV  ] Service engine loaded: corosync cluster config database access v1.01
Mar 31 11:04:57 grant-01 corosync[2190]:   [SERV  ] Service engine loaded: corosync profile loading service
Mar 31 11:04:57 grant-01 corosync[2190]:   [QUORUM] Using quorum provider quorum_cman
Mar 31 11:04:57 grant-01 corosync[2190]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Mar 31 11:04:57 grant-01 corosync[2190]:   [MAIN  ] Compatibility mode set to whitetank.  Using V1 and V2 of the synchronization engine.
Mar 31 11:04:57 grant-01 corosync[2190]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Mar 31 11:04:57 grant-01 corosync[2190]:   [QUORUM] Members[1]: 1
Mar 31 11:04:57 grant-01 corosync[2190]:   [QUORUM] Members[1]: 1
Mar 31 11:04:57 grant-01 corosync[2190]:   [CPG   ] downlist received left_list: 0
Mar 31 11:04:57 grant-01 corosync[2190]:   [CPG   ] chosen downlist from node r(0) ip(10.15.89.151)
Mar 31 11:04:57 grant-01 corosync[2190]:   [MAIN  ] Completed service synchronization, ready to provide service.
Mar 31 11:04:57 grant-01 corosync[2190]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Mar 31 11:04:57 grant-01 corosync[2190]:   [CMAN  ] quorum regained, resuming activity
Mar 31 11:04:57 grant-01 corosync[2190]:   [QUORUM] This node is within the primary component and will provide service.
Mar 31 11:04:57 grant-01 corosync[2190]:   [QUORUM] Members[2]: 1 2
Mar 31 11:04:57 grant-01 corosync[2190]:   [QUORUM] Members[2]: 1 2
Mar 31 11:04:57 grant-01 corosync[2190]:   [CPG   ] downlist received left_list: 0
Mar 31 11:04:57 grant-01 corosync[2190]:   [CPG   ] downlist received left_list: 0
Mar 31 11:04:57 grant-01 corosync[2190]:   [CPG   ] chosen downlist from node r(0) ip(10.15.89.151)
Mar 31 11:04:57 grant-01 corosync[2190]:   [MAIN  ] Completed service synchronization, ready to provide service.
Mar 31 11:04:59 grant-01 corosync[2190]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Mar 31 11:04:59 grant-01 corosync[2190]:   [QUORUM] Members[3]: 1 2 3
Mar 31 11:04:59 grant-01 corosync[2190]:   [QUORUM] Members[3]: 1 2 3
Mar 31 11:04:59 grant-01 corosync[2190]:   [TOTEM ] Retransmit List: 16
Mar 31 11:04:59 grant-01 corosync[2190]:   [CPG   ] downlist received left_list: 0
Mar 31 11:04:59 grant-01 corosync[2190]:   [CPG   ] downlist received left_list: 0
Mar 31 11:04:59 grant-01 corosync[2190]:   [CPG   ] downlist received left_list: 0
Mar 31 11:04:59 grant-01 corosync[2190]:   [CPG   ] chosen downlist from node r(0) ip(10.15.89.151)
Mar 31 11:04:59 grant-01 corosync[2190]:   [TOTEM ] Retransmit List: 1f
Mar 31 11:04:59 grant-01 corosync[2190]:   [MAIN  ] Completed service synchronization, ready to provide service.
Mar 31 11:05:00 grant-01 fenced[2246]: fenced 3.0.12 started
Mar 31 11:05:01 grant-01 dlm_controld[2265]: dlm_controld 3.0.12 started
Mar 31 11:05:03 grant-01 gfs_controld[2323]: gfs_controld 3.0.12 started
Mar 31 11:05:03 grant-01 abrt[2342]: saved core dump of pid 2190 (/usr/sbin/corosync) to /var/spool/abrt/ccpp-1301587503-2190.new/coredump (51826688 bytes)
Mar 31 11:05:03 grant-01 abrtd: Directory 'ccpp-1301587503-2190' creation detected
Mar 31 11:05:03 grant-01 fenced[2246]: cluster is down, exiting
Mar 31 11:05:03 grant-01 dlm_controld[2265]: cluster is down, exiting
Mar 31 11:05:03 grant-01 dlm_controld[2265]: daemon cpg_dispatch error 2
Mar 31 11:05:03 grant-01 abrtd: Registered Database plugin 'SQLite3'
Mar 31 11:05:03 grant-01 xinetd[1785]: EXIT: qarsh status=0 pid=2152 duration=7(sec)
Mar 31 11:05:04 grant-01 abrtd: New crash /var/spool/abrt/ccpp-1301587503-2190, processing
Mar 31 11:05:05 grant-01 gfs_controld[2323]: daemon cpg_initialize error 6
Mar 31 11:05:05 grant-01 kernel: dlm: closing connection to node 3
Mar 31 11:05:05 grant-01 kernel: dlm: closing connection to node 2
Mar 31 11:05:05 grant-01 kernel: dlm: closing connection to node 1

Comment 2 Corey Marthaler 2011-03-31 16:14:25 UTC
FYI - comment #1 happened after upgrading to 1.2.3-33. comment #0 occurred with 1.2.3-31.

Comment 3 Fabio Massimo Di Nitto 2011-03-31 17:45:08 UTC
There have been no changes on our side, and disabling SELinux makes the world
work again.

Corey can provide a collection of audit.log files from the nodes.

It appears that the issue is similar to one that was filed recently in
Fedora regarding /dev/dlm, but it needs to be double-checked because I am no
SELinux expert.

Once dlm_controld starts, everything goes south (at least based on the cman init
startup sequence).

Comment 4 Daniel Walsh 2011-04-01 17:13:22 UTC
Could you attach the AVC messages?

I think this is caused by the /dev/dlm labels being wrong.

If you change the context, I think this will work:

# semanage fcontext -a -t dlm_control_device_t '/dev/dlm.*'
# restorecon -R -v /dev/dlm*

Miroslav, this is the default labeling we now have in Fedora; can you put this into RHEL 6.1?
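
A quick way to compare what the labels should be against what is on disk (a sketch, assuming device nodes matching /dev/dlm* already exist on the node):

# ls -lZ /dev/dlm*          # contexts currently on the device nodes
# matchpathcon /dev/dlm*    # contexts the loaded policy expects for those paths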

Comment 5 Daniel Walsh 2011-04-01 17:15:07 UTC
Actually, this fix looks like it is in selinux-policy-3.7.19-80.el6.

Preview is available on http://people.redhat.com/dwalsh/SELinux/RHEL6

Comment 6 Corey Marthaler 2011-04-01 19:12:36 UTC
type=AVC msg=audit(1301684886.261:50088): avc:  denied  { write } for  pid=6538 comm="cman_tool" name="cman_client" dev=dm-0 ino=1178785 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1301684886.261:50088): arch=c000003e syscall=42 success=no exit=-13 a0=3 a1=7fff1eb64a40 a2=6e a3=7fff1eb647c0 items=0 ppid=6518 pid=6538 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=2 comm="cman_tool" exe="/usr/sbin/cman_tool" subj=unconfined_u:system_r:corosync_t:s0 key=(null)
type=AVC msg=audit(1301684886.357:50089): avc:  denied  { write } for  pid=6541 comm="cman_tool" name="cman_admin" dev=dm-0 ino=1178786 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1301684886.357:50089): arch=c000003e syscall=42 success=no exit=-13 a0=3 a1=7fffa310cb00 a2=6e a3=7fffa310c880 items=0 ppid=6518 pid=6541 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=2 comm="cman_tool" exe="/usr/sbin/cman_tool" subj=unconfined_u:system_r:corosync_t:s0 key=(null)
type=AVC msg=audit(1301684886.471:50090): avc:  denied  { read } for  pid=6552 comm="corosync" name="corosync.log" dev=dm-0 ino=1178782 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=SYSCALL msg=audit(1301684886.471:50090): arch=c000003e syscall=2 success=no exit=-13 a0=18b9210 a1=442 a2=1b6 a3=0 items=0 ppid=6541 pid=6552 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="corosync" exe="/usr/sbin/corosync" subj=unconfined_u:system_r:corosync_t:s0 key=(null)
type=AVC msg=audit(1301684889.430:50091): avc:  denied  { write } for  pid=6570 comm="cman_tool" name="cman_client" dev=dm-0 ino=1178785 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1301684889.430:50091): arch=c000003e syscall=42 success=no exit=-13 a0=3 a1=7fff7e39a850 a2=6e a3=7fff7e39a5d0 items=0 ppid=6518 pid=6570 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=2 comm="cman_tool" exe="/usr/sbin/cman_tool" subj=unconfined_u:system_r:corosync_t:s0 key=(null)
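
For reference, one way to pull these denials out of the audit log and see what policy they would imply (a sketch, assuming ausearch and audit2allow are installed):

# ausearch -m avc -ts recent -c cman_tool    # recent denials from cman_tool
# ausearch -m avc -ts recent | audit2allow   # summarize the rules the denials would need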

Comment 7 Corey Marthaler 2011-04-01 19:14:48 UTC
I'm seeing this issue with the latest selinux-policy.

[root@taft-01 ~]# rpm -q selinux-policy
selinux-policy-3.7.19-80.el6.noarch

Comment 8 Corey Marthaler 2011-04-01 19:39:29 UTC
How sure are we that this is SELinux? I'm able to reproduce this even in
permissive mode.

[root@taft-01 ~]# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum... cman_tool: Cannot open connection to cman, is it
running ?
                                                           [FAILED]
[root@taft-01 ~]# getenforce
Permissive
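
If this really were only an SELinux denial, permissive mode would let the operation succeed while still logging the AVC. One way to rule out denials hidden by dontaudit rules is to temporarily rebuild the policy without them (a sketch, assuming semodule from policycoreutils is available):

# semodule -DB    # rebuild the policy with dontaudit rules disabled
  (reproduce the failure and collect the AVCs)
# semodule -B     # rebuild again to re-enable the dontaudit rules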

Comment 9 Miroslav Grepl 2011-04-04 07:27:40 UTC
Corey,
could you turn on full auditing using 

# echo "-w /etc/shadow -p w" >> /etc/audit/audit.rules
# service auditd restart

and then re-test it and attach the AVC messages from permissive mode. I would like to know where the corosync.log, cman_admin, and cman_client objects are located.

Also, what does the following show?

# ls -lZ /var/run/cman*
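
Once full auditing is on, the complete records (the AVC plus the SYSCALL and PATH lines that show where the objects actually live) can be pulled with something like the following — a sketch, assuming ausearch from the audit package is installed:

# ausearch -m avc -ts recent -i    # prints whole events, including the SYSCALL/PATH records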

Comment 10 Corey Marthaler 2011-04-04 23:14:54 UTC
[root@taft-01 ~]# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]

[root@taft-01 ~]# ls -lZ /var/run/cman*
srw-------. root root unconfined_u:object_r:corosync_var_run_t:s0 /var/run/cman_admin
srw-rw----. root root unconfined_u:object_r:corosync_var_run_t:s0 /var/run/cman_client
-rw-r--r--. root root unconfined_u:object_r:initrc_var_run_t:s0 /var/run/cman.pid

type=AVC msg=audit(1301958789.590:45): avc:  denied  { read } for  pid=2119 comm="fenced" name="fenced.log" dev=dm-0 ino=921008 scontext=unconfined_u:system_r:fenced_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=SYSCALL msg=audit(1301958789.590:45): arch=c000003e syscall=2 success=yes exit=4 a0=34fc002dc0 a1=442 a2=1b6 a3=0 items=1 ppid=1 pid=2119 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="fenced" exe="/usr/sbin/fenced" subj=unconfined_u:system_r:fenced_t:s0 key=(null)
type=CWD msg=audit(1301958789.590:45):  cwd="/"
type=PATH msg=audit(1301958789.590:45): item=0 name="/var/log/cluster/fenced.log" inode=921008 dev=fd:00 mode=0100666 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:var_log_t:s0
type=AVC msg=audit(1301958789.745:46): avc:  denied  { read } for  pid=2139 comm="dlm_controld" name="dlm_controld.log" dev=dm-0 ino=921010 scontext=unconfined_u:system_r:dlm_controld_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=SYSCALL msg=audit(1301958789.745:46): arch=c000003e syscall=2 success=yes exit=4 a0=34fc002dc0 a1=442 a2=1b6 a3=0 items=1 ppid=1 pid=2139 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="dlm_controld" exe="/usr/sbin/dlm_controld" subj=unconfined_u:system_r:dlm_controld_t:s0 key=(null)
type=CWD msg=audit(1301958789.745:46):  cwd="/"
type=PATH msg=audit(1301958789.745:46): item=0 name="/var/log/cluster/dlm_controld.log" inode=921010 dev=fd:00 mode=0100644 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:var_log_t:s0
type=AVC msg=audit(1301958790.838:47): avc:  denied  { read } for  pid=2194 comm="gfs_controld" name="gfs_controld.log" dev=dm-0 ino=921011 scontext=unconfined_u:system_r:gfs_controld_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file
type=SYSCALL msg=audit(1301958790.838:47): arch=c000003e syscall=2 success=yes exit=4 a0=34fc002dc0 a1=442 a2=1b6 a3=0 items=1 ppid=1 pid=2194 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="gfs_controld" exe="/usr/sbin/gfs_controld" subj=unconfined_u:system_r:gfs_controld_t:s0 key=(null)
type=CWD msg=audit(1301958790.838:47):  cwd="/"
type=PATH msg=audit(1301958790.838:47): item=0 name="/var/log/cluster/gfs_controld.log" inode=921011 dev=fd:00 mode=0100644 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:var_log_t:s0

Comment 11 Miroslav Grepl 2011-04-05 08:37:54 UTC
Created attachment 489940 [details]
cman init script patch

Comment 12 Miroslav Grepl 2011-04-05 08:39:24 UTC
Corey,
your cluster log files are mislabeled. Not sure why. Did you run cluster services directly without using the cman service script?

Execute

# restorecon -R -v /var/log/cluster


The next issue is with the cman init script, which contains:

pidof /usr/sbin/corosync > /var/run/cman.pid

This causes "cman.pid" to be labeled as initrc_var_run_t. So I need to add the label to the policy, and the attached patch is needed for the cman init script.

You can test it using:

# semanage fcontext -a -t corosync_var_run_t "/var/run/cman.pid"

and apply the patch. Then try to start cluster services again.
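
To check that the new context is in place before retrying (a sketch; these are generic semanage/restorecon checks, not the contents of the attached patch):

# semanage fcontext -l | grep cman.pid    # confirm the local rule was added
# restorecon -v /var/run/cman.pid         # relabel an existing pid file, if any
# ls -Z /var/run/cman.pid                 # should now show corosync_var_run_t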

Comment 13 Fabio Massimo Di Nitto 2011-04-05 12:50:23 UTC
(In reply to comment #12)
> Corey,
> your cluster log files are mislabeled. Not sure why. Did you run cluster
> services directly without using the cman service script?
> 
> Execute
> 
> # restorecon -R -v /var/log/cluster

I did, to debug and try to understand what was happening.

The old dir is /var/log/cluster.old and it should be labeled correctly.

Comment 14 Miroslav Grepl 2011-04-05 13:21:36 UTC
Corey,
could you try to just relabel your log files:

# restorecon -R -v /var/log/cluster

and I believe it will work for you.

Comment 15 Corey Marthaler 2011-04-05 19:49:03 UTC
That worked.

[root@grant-01 ~]# getenforce
Enforcing

[root@grant-01 ~]# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman... Can't determine address family of nodename
Unable to get the configuration
corosync died: Could not read cluster configuration Check cluster logs for details
                                                           [FAILED]

[root@grant-01 ~]# restorecon -R -v /var/log/cluster
restorecon reset /var/log/cluster/gfs_controld.log context system_u:object_r:var_log_t:s0->system_u:object_r:gfs_controld_var_log_t:s0
restorecon reset /var/log/cluster/fenced.log context system_u:object_r:var_log_t:s0->system_u:object_r:fenced_var_log_t:s0
restorecon reset /var/log/cluster/dlm_controld.log context system_u:object_r:var_log_t:s0->system_u:object_r:dlm_controld_var_log_t:s0
restorecon reset /var/log/cluster/corosync.log context system_u:object_r:var_log_t:s0->system_u:object_r:corosync_var_log_t:s0

[root@grant-01 ~]# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]

Comment 16 Daniel Walsh 2011-04-05 20:02:31 UTC
Is /var/log/cluster owned by any package?

rpm -qf /var/log/cluster

Is some application destroying and recreating this directory?

Comment 17 Nate Straz 2011-04-05 20:24:18 UTC
[root@buzz-05 log]# rpm -qf /var/log/cluster
cman-3.0.12-41.el6.x86_64
corosync-1.2.3-34.el6.x86_64

I doubt the directory is being recreated

Comment 18 Miroslav Grepl 2011-04-05 20:33:15 UTC
(In reply to comment #17)
> [root@buzz-05 log]# rpm -qf /var/log/cluster
> cman-3.0.12-41.el6.x86_64
> corosync-1.2.3-34.el6.x86_64
> 
> I doubt the directory is being recreated

Me too. I believe cluster services were started by hand.

Comment 19 Nate Straz 2011-04-05 20:56:15 UTC
(In reply to comment #18)

> Me too. I believe cluster services were started by hand.

I looked through all of the comments and Corey is always using the service script.

Comment 20 Daniel Walsh 2011-04-05 21:12:21 UTC
Is /var/log/cluster being ghosted, or is it actually owned by the cman or corosync package? I.e., did it get created during the install?

Could someone remove the dir and then do a 

yum reinstall corosync cman

And see if the directory gets created with the right context?

If you remove the directory and do a service corosync start, does it get created with the correct context? What about service cman start?

Comment 22 Nate Straz 2011-04-05 21:36:00 UTC
Looks like the directory is owned by two packages. cman requires corosync, so we could probably drop the double ownership if needed.

[root@buzz-05 ~]# rpm -ql corosync | grep log
/var/log/cluster
[root@buzz-05 ~]# rpm -ql cman | grep log
/etc/logrotate.d/cman
/var/log/cluster

Checking the install process...

[root@dash-01 ~]# ls -lZd /var/log/cluster
ls: cannot access /var/log/cluster: No such file or directory
[root@dash-01 ~]# yum install -y corosync
...
Installed:
  corosync.x86_64 0:1.2.3-33.el6

Dependency Installed:
  corosynclib.x86_64 0:1.2.3-33.el6

Complete!
[root@dash-01 ~]# ls -lZd /var/log/cluster
drwx------. root root system_u:object_r:var_log_t:s0   /var/log/cluster
[root@dash-01 ~]# rmdir /var/log/cluster
[root@dash-01 ~]# yum install -y cman
...
Installed:
  cman.x86_64 0:3.0.12-41.el6

Dependency Installed:
  clusterlib.x86_64 0:3.0.12-41.el6      modcluster.x86_64 0:0.16.2-10.el6
  openais.x86_64 0:1.1.1-7.el6           openaislib.x86_64 0:1.1.1-7.el6

Complete!
[root@dash-01 ~]# ls -lZd /var/log/cluster
drwxr-xr-x. root root system_u:object_r:var_log_t:s0   /var/log/cluster


The directory does not get created on service start; in fact, the service will not start without the directory.

Comment 23 Fabio Massimo Di Nitto 2011-04-06 03:53:54 UTC
(In reply to comment #16)
> Is /var/log/cluster owned by any package?
> 
> rpm -qf /var/log/cluster
> 
> Is some application destroying and recreating this directory?

No, as I wrote in comment #13, I did it manually to move the old logs out of the way and then recreated it (again manually). The old log dir was moved to /var/log/cluster.old and still has the correct labelling.

Comment 24 Fabio Massimo Di Nitto 2011-04-06 03:58:00 UTC
(In reply to comment #22)
> Looks like the directory is owned by two packages. cman requires corosync so we
> could probably drop the double ownership if needed.

We could, but it will have to wait until F14 is EOL before I make the change upstream, and it's not really an issue since rpm allows referencing the same dir more than once. The most important bit is that the dir is not created by any script and that it is owned by the packages that use it.

Comment 25 Daniel Walsh 2011-04-06 13:40:16 UTC
My mistake: /var/log/cluster itself looks like it should be labeled var_log_t; it is the log files underneath that were created with the wrong context.
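
This can be checked against the loaded policy (a sketch, assuming a policy build that carries the per-daemon log types, such as -80 or later):

# matchpathcon /var/log/cluster /var/log/cluster/corosync.log /var/log/cluster/fenced.log

The directory itself should come back as var_log_t, while the log files map to the daemon-specific types (corosync_var_log_t, fenced_var_log_t, and so on), matching what restorecon applied in comment #15.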

Comment 26 Corey Marthaler 2011-04-07 21:26:13 UTC
This still exists in the latest tree/rpm and continues to block cluster testing.

type=AVC msg=audit(1302211468.251:51): avc:  denied  { read } for  pid=1970 comm="corosync" name="corosync.log" dev=dm-0 ino=132258 scontext=unconfined_u:system_r:corosync_t:s0 tcontext=system_u:object_r:var_log_t:s0 tclass=file

[root@taft-01 ~]# uname -ar
Linux taft-01 2.6.32-130.el6.x86_64 #1 SMP Tue Apr 5 19:58:31 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

[root@taft-01 ~]# rpm -q corosync
corosync-1.2.3-34.el6.x86_64
[root@taft-01 ~]# rpm -q selinux-policy
selinux-policy-3.7.19-82.el6.noarch

Comment 27 Miroslav Grepl 2011-04-08 05:04:53 UTC
What exactly are you doing?

Comment 28 Nate Straz 2011-04-08 11:04:08 UTC
Miroslav,

Try these steps to reproduce the issue:
rm -f /var/log/cluster/*
service cman start

Comment 29 Miroslav Grepl 2011-04-08 12:40:33 UTC
It is supposed to be working, and it works for me:

# ls -lZ /var/log/cluster/
-rw-r--r--. root root unconfined_u:object_r:corosync_var_log_t:s0 corosync.log
-rw-r--r--. root root unconfined_u:object_r:dlm_controld_var_log_t:s0 dlm_controld.log
-rw-rw-rw-. root root unconfined_u:object_r:fenced_var_log_t:s0 fenced.log
-rw-r--r--. root root unconfined_u:object_r:gfs_controld_var_log_t:s0 gfs_controld.log

# rm -f /var/log/cluster/*
# service cman start
# ls -lZ /var/log/cluster/
-rw-r--r--. root root unconfined_u:object_r:corosync_var_log_t:s0 corosync.log
-rw-r--r--. root root unconfined_u:object_r:dlm_controld_var_log_t:s0 dlm_controld.log
-rw-rw-rw-. root root unconfined_u:object_r:fenced_var_log_t:s0 fenced.log
-rw-r--r--. root root unconfined_u:object_r:gfs_controld_var_log_t:s0 gfs_controld.log

Comment 30 Milos Malik 2011-04-08 14:04:26 UTC
I don't see any AVCs with the 3.7.19-82.el6 policy. The automated test covers all the cluster services I found and succeeds in enforcing mode.

The following sequence of commands does not produce any AVCs on the machines I'm using now:
rm -f /var/log/cluster/*
service cman start

These machines were provided today by beaker/inventory (RHEL6.1-20110407.n.0).

Comment 31 Nate Straz 2011-04-08 14:15:04 UTC
This may be an issue with our qarshd policy not getting installed correctly. It automatically rebuilds at install time, and with -80 it was choking on ftp_initrc_domtrans and seunshare_domtrans.

make: Entering directory `/usr/share/doc/qarsh-selinux-1.26'
Loading targeted modules: qarshd
libsepol.print_missing_requirements: qarshd's global requirements were not met: type/attribute seunshare_t (No such file or directory).
libsemanage.semanage_link_sandbox: Link packages failed (No such file or directory).
/usr/sbin/semodule:  Failed!
make: *** [tmp/loaded] Error 1
make: Leaving directory `/usr/share/doc/qarsh-selinux-1.26'
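
A quick check of whether the qarshd module actually ended up loaded after a failure like this (a sketch, assuming the module name is qarshd as shown in the make output above):

# semodule -l | grep qarshd    # list loaded policy modules, filter for qarshd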

Comment 32 Miroslav Grepl 2011-04-08 14:22:39 UTC
Nate,
the ftp_initrc_domtrans declaration issue is fixed in -82, and the seunshare_domtrans declaration issue will be fixed in the -83 release. I have been talking about that with Jaroslav.

I am moving this back to ON_QA.

Comment 34 errata-xmlrpc 2011-05-19 12:27:23 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0526.html

