Bug 853970 - RHCS cluster node does not auto-join cluster ring after power fencing due to corosync SELinux AVCs (avc: denied { name_bind } for pid=1516 comm="corosync" src=122[89] scontext=system_u:system_r:corosync_t:s0 tcontext=system_u:object_r:*_port_t:s0...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: selinux-policy
Version: 6.3
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Miroslav Grepl
QA Contact: Milos Malik
URL:
Whiteboard:
Depends On: 867628 891986
Blocks:
 
Reported: 2012-09-03 13:00 UTC by Frantisek Reznicek
Modified: 2015-11-16 01:14 UTC
CC List: 10 users

Fixed In Version: selinux-policy-3.7.19-190.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-02-21 08:28:33 UTC
Target Upstream Version:
Embargoed:


Attachments


Links:
  System ID: Red Hat Product Errata RHBA-2013:0314
  Private: 0
  Priority: normal
  Status: SHIPPED_LIVE
  Summary: selinux-policy bug fix and enhancement update
  Last Updated: 2013-02-20 20:35:01 UTC

Description Frantisek Reznicek 2012-09-03 13:00:38 UTC
Description of problem:

An RHCS cluster node does not auto-join the cluster ring after power fencing, due to the following corosync SELinux AVC denials triggered during 'service cman start' at boot:

type=AVC msg=audit(1346675105.899:4): avc:  denied  { name_bind } for  pid=1516 comm="corosync" src=1229 scontext=system_u:system_r:corosync_t:s0 tcontext=system_u:object_r:zented_port_t:s0 tclass=udp_socket
type=AVC msg=audit(1346675105.899:5): avc:  denied  { name_bind } for  pid=1516 comm="corosync" src=1228 scontext=system_u:system_r:corosync_t:s0 tcontext=system_u:object_r:port_t:s0 tclass=udp_socket
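
For reference, a hedged diagnostic sketch for collecting and interpreting such denials (assuming auditd is running and the audit and policycoreutils-python packages are installed; nothing here is specific to this report):

   # list the corosync AVC denials recorded by auditd
   ausearch -m avc -c corosync

   # explain whether each denial is caused by a missing allow rule or a boolean
   ausearch -m avc -c corosync | audit2why

   # show how the denied UDP ports are currently labeled
   semanage port -l | grep -w -E '1228|1229'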

RHCS is configured with fence_xvm fencing, using 3 nodes with the configuration specified in bug 853927 comment 0.



Version-Release number of selected component (if applicable):
ccs-0.16.2-55.el6.x86_64
cluster-glue-libs-1.0.5-6.el6.x86_64
clusterlib-3.0.12.1-32.el6_3.1.x86_64
cman-3.0.12.1-32.el6_3.1.x86_64
libselinux-2.0.94-5.3.el6.x86_64
libselinux-utils-2.0.94-5.3.el6.x86_64
modcluster-0.16.2-18.el6.x86_64
qpid-cpp-server-cluster-0.14-21.el6_3.x86_64
rgmanager-3.0.12.1-12.el6.x86_64
selinux-policy-3.7.19-155.el6_3.noarch
selinux-policy-targeted-3.7.19-155.el6_3.noarch
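
For reference, a hedged sketch of how such a version list can be gathered on each node (the package-name pattern is an assumption based on the list above):

   # report the cluster- and SELinux-related package versions
   rpm -qa | grep -E 'selinux-policy|libselinux|corosync|cman|rgmanager|ccs|cluster|modcluster' | sort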


How reproducible:
100%

Steps to Reproduce:
1. Set up the RHCS cluster following the bug 853927 comment 0 description (RHCS 3-node cluster with fence_xvm+fence_virtd)
2. chkconfig iptables off
   chkconfig rgmanager off
   chkconfig corosync off
   chkconfig cman on
   setenforce 1
   :> /var/log/audit/audit.log
   service auditd restart
3. Fence the machine:
   fence_node <node>
4. Wait for the machine to reboot; it should re-join the cluster (see the verification sketch below).
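
A hedged verification sketch for step 4 (cman_tool and clustat are the standard RHEL 6 cluster status tools; they are not taken from this report):

   # run on any surviving cluster node after the fenced node has rebooted
   cman_tool nodes   # all three nodes should show status "M" (member)
   clustat           # the rebooted node should be reported as Online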

Actual results:
The machine does not re-join the cluster after fencing and reboot have completed.

Expected results:
The machine should re-join the cluster after fencing and reboot have completed.

Additional info:

Comment 2 Frantisek Reznicek 2012-09-03 13:38:51 UTC
(In reply to comment #0)
> 
> 
> Version-Release number of selected component (if applicable):
> ccs-0.16.2-55.el6.x86_64
> cluster-glue-libs-1.0.5-6.el6.x86_64
> clusterlib-3.0.12.1-32.el6_3.1.x86_64
> cman-3.0.12.1-32.el6_3.1.x86_64
> libselinux-2.0.94-5.3.el6.x86_64
> libselinux-utils-2.0.94-5.3.el6.x86_64
> modcluster-0.16.2-18.el6.x86_64
> qpid-cpp-server-cluster-0.14-21.el6_3.x86_64
> rgmanager-3.0.12.1-12.el6.x86_64
> selinux-policy-3.7.19-155.el6_3.noarch
> selinux-policy-targeted-3.7.19-155.el6_3.noarch
corosync-1.4.1-7.el6.x86_64

Comment 3 Miroslav Grepl 2012-09-03 17:40:11 UTC
So it uses random ports?

Comment 4 Frantisek Reznicek 2012-09-06 10:22:08 UTC
No. In the RHCS configuration with fence-virt fencing, cman/corosync operate on ports 1228 and 1229.

My cluster.conf (bug 853927 comment 0) has the definition:
        <cman port="1229">
                <multicast addr="225.0.0.12"/>

Those settings correspond to the fence-virtd daemon and fence_xvm fence agent defaults (see fence_xvm -h for details).

I treat this RHCS + fence_virt configuration as the default configuration, as the documentation does not instruct the user to configure SELinux in any special way.

From the documentation it is evident that the default cman/corosync ports are 5404 and 5405.
The defaults for fence-virt are 1228 and 1229 (fence-virtd, fence-virt, fence_xvm).

There are probably multiple ways to solve this issue:
1] allow cman/corosync to bind to 1228, 1229 (see the sketch after this list)
2] change fence-virtd, fence-virt, fence_xvm defaults to 5404, 5405
3] keep everything as is and document what the user should do if they want to use the default fence-virt ports
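
A minimal local-policy sketch for option 1], assuming an administrator wants to allow the binds before an updated selinux-policy package ships (an interim workaround, not the fix that was eventually delivered in selinux-policy itself):

   # build and load a local policy module from the recorded corosync denials
   grep corosync /var/log/audit/audit.log | audit2allow -M corosync_local
   semodule -i corosync_local.pp

   # remove the local module once a fixed selinux-policy package is installed
   semodule -r corosync_local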

Comment 5 Frantisek Reznicek 2012-09-06 11:06:55 UTC
I confirm that when cman/corosync run on the default ports 5404/5405 and fence-virt is configured to use such a port (5405), fencing works and no SELinux denials are triggered.

I thus tend toward resolution 2] (changing the default port of fence-virtd, fence-virt, fence_xvm and, EL5 only, fence_xvmd from 1229 to 5405).


Current configs:
#cluster.conf
<?xml version="1.0"?>
<cluster config_version="15" name="mycluster_el6vm">
        <clusternodes>
                <clusternode name="192.168.10.11" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="fence_1"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="192.168.10.12" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="fence_2"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="192.168.10.13" nodeid="3" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="fence_3"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman port="5405">
                <multicast addr="225.0.0.12"/>
        </cman>
        <rm log_level="7">
                <failoverdomains>
                        <failoverdomain name="domain_qpidd_1" restricted="1">
                                <failoverdomainnode name="192.168.10.11" priority="1"/>
                        </failoverdomain>
                        <failoverdomain name="domain_qpidd_2" restricted="1">
                                <failoverdomainnode name="192.168.10.12" priority="1"/>
                        </failoverdomain>
                        <failoverdomain name="domain_qpidd_3" restricted="1">
                                <failoverdomainnode name="192.168.10.13" priority="1"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <script file="/etc/init.d/qpidd" name="qpidd"/>
                </resources>
                <service domain="domain_qpidd_1" name="qpidd_1">
                        <script ref="qpidd"/>
                </service>
                <service domain="domain_qpidd_2" name="qpidd_2">
                        <script ref="qpidd"/>
                </service>
                <service domain="domain_qpidd_3" name="qpidd_3">
                        <script ref="qpidd"/>
                </service>
        </rm>
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="30"/>
        <fencedevices>
                <fencedevice action="reboot" agent="fence_xvm" domain="cluster-rhel6i0" ipport="5405" key_file="/etc/cluster/fence_xvm.key" name="fence_1"/>
                <fencedevice action="reboot" agent="fence_xvm" domain="cluster-rhel6x0" ipport="5405" key_file="/etc/cluster/fence_xvm.key" name="fence_2"/>
                <fencedevice action="reboot" agent="fence_xvm" domain="cluster-rhel6x1" ipport="5405" key_file="/etc/cluster/fence_xvm.key" name="fence_3"/>
        </fencedevices>
</cluster>

# fence_virt.conf
fence_virtd {
        listener = "multicast";
        backend = "libvirt";
        module_path = "/usr/lib64/fence-virt";
}

listeners {
        multicast {
                key_file = "/etc/cluster/fence_xvm.key";
                address = "225.0.0.12";
                port = "5405";
                family = "ipv4";
                interface = "virbr4";
        }
}

backends {
        libvirt { 
                uri = "qemu:///system";
        }
}
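
A hedged sketch of sanity checks for these configs (command names are the standard RHEL 6 cluster tools; the key file path and non-default port come from the configs above):

   # validate cluster.conf and push the new config_version to all nodes
   ccs_config_validate
   cman_tool version -r

   # from a guest cluster node, confirm fence_virtd answers on the multicast channel
   # (with the non-default port 5405 configured above, the matching port option
   #  must also be given; see fence_xvm -h)
   fence_xvm -o list -k /etc/cluster/fence_xvm.key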

Comment 6 Miroslav Grepl 2012-10-09 12:54:37 UTC
Fixed.

Comment 14 Frantisek Reznicek 2013-01-04 11:32:30 UTC
I've retested the scenario based on your request, with the following results:
 alpha1p] 6.3 + 6.3 selinux (permissive) -> PASS (ix both fail reproduced)
 alpha2p] 6.3 + 6.4 selinux (permissive) -> FAIL  (iXx all fail)
 alpha2e] 6.3 + 6.4 selinux (enforcing) -> FAIL  (iXx all fail)
 beta1p] RHEL6.4-Snapshot-2 + RHEL6.4-Snapshot-2 selinux (permissive) -> FAIL  (iXx all fail)
 beta1e] RHEL6.4-Snapshot-2 + RHEL6.4-Snapshot-2 selinux (enforcing) -> SKIP

-> ASSIGNED

Comment 16 Miroslav Grepl 2013-01-04 13:21:14 UTC
Ok, these are about bind.

Comment 17 Frantisek Reznicek 2013-01-04 18:19:19 UTC
Retested using selinux-policy-3.7.19-190.el6 packages.

 beta1p] RHEL6.4-Snapshot-2+selinux-policy-3.7.19-190.el6 (permissive) -> PASS
 beta1e] RHEL6.4-Snapshot-2+selinux-policy-3.7.19-190.el6 (enforcing) -> PASS

I consider the issue fixed by selinux-policy-3.7.19-190.el6.

This defect is blocked by bug 891986, and therefore bug 891986 has to be resolved first (selinux-policy has to be rebuilt with TPS and RPMDIFF checks passing).
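
A hedged sketch of how the updated policy could be checked (sesearch comes from the setools packages; the queries are assumptions, not taken from this report):

   # list the UDP port types corosync_t is now allowed to name_bind to
   sesearch --allow -s corosync_t -c udp_socket -p name_bind

   # confirm how ports 1228/1229 are labeled in the loaded policy
   semanage port -l | grep -w -E '1228|1229'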

Comment 19 errata-xmlrpc 2013-02-21 08:28:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0314.html

