Bug 1347514 - Enhance corosync policy to include two new daemons
Summary: Enhance corosync policy to include two new daemons
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.2
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Lukas Vrabec
QA Contact: Milos Malik
URL:
Whiteboard:
Depends On: 614122 1185000
Blocks:
 
Reported: 2016-06-17 07:13 UTC by Jan Friesse
Modified: 2016-11-04 02:32 UTC
CC List: 8 users

Fixed In Version: selinux-policy-3.13.1-95.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-04 02:32:12 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:2283 0 normal SHIPPED_LIVE selinux-policy bug fix and enhancement update 2016-11-03 13:36:25 UTC

Description Jan Friesse 2016-06-17 07:13:29 UTC
Description of problem:
The corosync package is being extended with two new subpackages related to qdevice: corosync-qnetd and corosync-qdevice. Both are daemons, so each needs a proper SELinux policy.

Actual results:
There is no SELinux policy for corosync-qnetd or corosync-qdevice.

Expected results:
Both corosync-qnetd and corosync-qdevice have a proper SELinux policy.

Additional info:

Expected functionality of corosync-qnetd:
- Ability to bind (default port 5403), listen, send, and receive over both IPv4 and IPv6 using NSS (as a server)
- Ability to bind/listen/send/receive on the unix socket /var/run/corosync-qnetd/corosync-qnetd.sock
- Ability to create the lock file /var/run/corosync-qnetd/corosync-qnetd.pid
- Ability to read the NSS database at /etc/corosync/qnetd/nssdb
- Runs as a newly created user, coroqnetd, with a dynamically allocated UID
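For illustration only, a minimal local policy module covering the corosync-qnetd points above might look like the sketch below. The module and type names (qnetd_local, qnetd_t, qnetd_exec_t, qnetd_var_run_t) are hypothetical placeholders, not the names eventually shipped in selinux-policy, and it assumes selinux-policy-devel is installed:

# Hypothetical sketch only; all type names are illustrative.
cat > qnetd_local.te <<'EOF'
policy_module(qnetd_local, 1.0)

type qnetd_t;
type qnetd_exec_t;
init_daemon_domain(qnetd_t, qnetd_exec_t)

# Runtime directory /var/run/corosync-qnetd (lock file + unix socket)
type qnetd_var_run_t;
files_pid_file(qnetd_var_run_t)
manage_files_pattern(qnetd_t, qnetd_var_run_t, qnetd_var_run_t)
manage_sock_files_pattern(qnetd_t, qnetd_var_run_t, qnetd_var_run_t)

# TCP server socket for the default port 5403, IPv4 and IPv6
allow qnetd_t self:tcp_socket create_stream_socket_perms;
corenet_tcp_bind_generic_node(qnetd_t)
EOF

# Build and load the module (requires selinux-policy-devel)
make -f /usr/share/selinux/devel/Makefile qnetd_local.pp
semodule -i qnetd_local.pp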

Expected functionality of corosync-qdevice:
- Ability to connect, send, and receive over both IPv4 and IPv6 using NSS (as a client) to corosync-qnetd
- Ability to bind/listen/send/receive on the unix socket /var/run/corosync-qdevice/corosync-qdevice.sock
- Ability to create the lock file /var/run/corosync-qdevice/corosync-qdevice.pid
- Ability to read the NSS database at /etc/corosync/qdevice/net/nssdb
- Uses corosync IPC to the votequorum and cmap services (similar to corosync-notifyd/pacemaker)
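corosync-qdevice needs an analogous policy, plus access to corosync's IPC. Rather than guessing every rule up front, one common workflow (sketched below; the qnetd_t domain name is again only a placeholder) is to mark the new domains permissive, exercise the daemons, and translate the logged AVC denials into candidate rules:

# Mark only the new domain permissive so denials are logged but not enforced
semanage permissive -a qnetd_t

# ...run the functional test below, then inspect the denials...
ausearch -m AVC -ts recent

# Translate the denials into candidate allow rules / refpolicy interfaces
ausearch -m AVC -ts recent | audit2allow -R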

Simple test of functionality:
- Install corosync-qnetd and corosync-qdevice on one node (this pulls in the rest of the corosync packages); let's say its resolvable name is node1 (add a record to /etc/hosts)
- /usr/sbin/corosync-qdevice-net-certutil -Q -n Cluster node1 node1
- Edit/create /etc/corosync/corosync.conf with the following content:
totem {
        version: 2

        crypto_cipher: none
        crypto_hash: none

        transport: udpu
        cluster_name: Cluster
}

logging {
        to_stderr: yes
        to_logfile: no
        logfile: /var/log/cluster/corosync.log
        to_syslog: on
        timestamp: on
        logger_subsys {
                subsys: QDEVICE
                debug: on
        }
}

quorum {
        provider: corosync_votequorum
        device {
            model: net
            votes: 1
            net {
                tls: on
                host: node1
                algorithm: ffsplit
            }
        }
}

nodelist {
        node {
                ring0_addr: node1
                nodeid: 1
        }
}

- service corosync start
- service corosync-qnetd start
- service corosync-qdevice start
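
With a working policy in place, a quick sanity check (these commands are a suggested sketch, not part of the original test) is that both daemons start in their own confined domains and produce no AVC denials:

ps -eZ | grep -E 'corosync(-qnetd|-qdevice)?'   # daemons should run in confined domains, not initrc_t
ss -tlnp | grep 5403                            # corosync-qnetd listening on its default port
ausearch -m AVC -ts recent                      # should show no denials for the new daemons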

Result in /var/log/messages (or wherever syslog messages go):
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Initializing votequorum
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: shm size:1048589; real_size:1052672; rb->word_size:263168
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: shm size:1048589; real_size:1052672; rb->word_size:263168
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: shm size:1048589; real_size:1052672; rb->word_size:263168
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Initializing local socket
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Registering qdevice models
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Configuring qdevice
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Configuring master_wins
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Getting configuration node list
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Initializing qdevice model
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Initializing qdevice_net_instance
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Registering algorithms
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Initializing NSS
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Cast vote timer remains stopped.
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Initializing cmap tracking
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Waiting for ring id
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Votequorum nodelist notify callback:
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   Ring_id = (1.a00000000021ac8)
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   Node list (size = 1):
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:     0 nodeid = 1
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Algorithm decided to not send list and result vote is No change
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Votequorum quorum notify callback:
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   Quorate = 0
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   Node list (size = 2):
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:     0 nodeid = 1, state = 1
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:     1 nodeid = 0, state = 0
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Algorithm decided to not send list and result vote is No change
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Running qdevice model
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Executing qdevice-net
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Trying connect to qnetd server node1:5403 (timeout = 8000ms)
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Sending preinit msg to qnetd
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Received preinit reply msg
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Sending client auth data.
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Received init reply msg
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Scheduling send of heartbeat every 8000ms
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Algorithm decided to send config node list, send membership node list, send quorum node list and result vote is Wait for reply
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Sending config node list seq = 4
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   Node list:
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:     0 node_id = 1, data_center_id = 0, node_state = not set
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Sending membership node list seq = 5, ringid = (1.a00000000021ac8).
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   Node list:
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:     0 node_id = 1, data_center_id = 0, node_state = not set
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Sending quorum node list seq = 6, quorate = 0
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   Node list:
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:     0 node_id = 1, data_center_id = 0, node_state = member
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Cast vote timer remains stopped.
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Received initial config node list reply
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   seq = 4
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   vote = Ask later
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   ring id = (1.a00000000021ac8)
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Algorithm result vote is Ask later
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Cast vote timer remains stopped.
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Received vote info
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   seq = 1
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   vote = ACK
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   ring id = (1.a00000000021ac8)
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Algorithm result vote is ACK
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Cast vote timer is now scheduled every 5000ms voting ACK.
Jun 17 09:11:11 node-06 corosync[15243]:   [QUORUM] This node is within the primary component and will provide service.
Jun 17 09:11:11 node-06 corosync[15243]:   [QUORUM] Members[1]: 1
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Votequorum quorum notify callback:
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   Quorate = 1
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   Node list (size = 2):
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:     0 nodeid = 1, state = 1
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:     1 nodeid = 0, state = 0
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Algorithm decided to send list and result vote is No change
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Sending quorum node list seq = 7, quorate = 1
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   Node list:
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:     0 node_id = 1, data_center_id = 0, node_state = member
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Received membership node list reply
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   seq = 5
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   vote = No change
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   ring id = (1.a00000000021ac8)
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Algorithm result vote is No change
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Received quorum node list reply
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   seq = 6
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   vote = No change
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   ring id = (1.a00000000021ac8)
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Algorithm result vote is No change
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Received quorum node list reply
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   seq = 7
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   vote = No change
Jun 17 09:11:11 node-06 corosync-qdevice[16473]:   ring id = (1.a00000000021ac8)
Jun 17 09:11:11 node-06 corosync-qdevice[16473]: Algorithm result vote is No change

Comment 19 errata-xmlrpc 2016-11-04 02:32:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2283.html

