Bug 1134426 - pcs needs a better parser for corosync.conf
Summary: pcs needs a better parser for corosync.conf
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1142126
 
Reported: 2014-08-27 13:42 UTC by Radek Steiger
Modified: 2015-11-19 09:32 UTC
CC List: 4 users

Fixed In Version: pcs-0.9.140-1.el7
Doc Type: Bug Fix
Doc Text:
Cause: The user edits the corosync.conf configuration file manually. Consequence: pcs misbehaves because it cannot read the file properly. Fix: Implement a full-featured parser for the corosync.conf file. Result: pcs is able to read a manually edited corosync.conf file properly.
Clone Of:
Environment:
Last Closed: 2015-11-19 09:32:49 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
corosync.conf (377 bytes, text/plain): 2014-08-27 13:42 UTC, Radek Steiger
Example patch (see the comment) (1.01 KB, patch): 2014-11-24 20:37 UTC, Jan Pokorný [poki]
proposed fix 1/3 (30.52 KB, patch): 2015-02-19 15:44 UTC, Tomas Jelinek
proposed fix 2/3 (17.32 KB, patch): 2015-02-19 15:45 UTC, Tomas Jelinek
proposed fix 3/3 (33.76 KB, patch): 2015-02-19 15:45 UTC, Tomas Jelinek


Links
Red Hat Product Errata RHSA-2015:2290 (SHIPPED_LIVE): Moderate: pcs security, bug fix, and enhancement update. Last updated 2015-11-19 09:43:53 UTC.

Description Radek Steiger 2014-08-27 13:42:53 UTC
Created attachment 931475 [details]
corosync.conf

Description of problem:

(credits go to Honza Friesse and Tomas Jelinek)

The internal parser that extracts node names from corosync.conf uses the following code to grep for node names:

    preg = re.compile(r'.*ring0_addr: (.*)')
    for line in lines:
        match = preg.match(line)
        if match:
            nodes.append(match.group(1))

This basically matches _any_ line containing "ring0_addr:" and can lead to some crazy results. As an example, I created a cluster with "ring0_addr:" in the cluster name:


[root@virt-041 ~]# pcs cluster setup --name "ring0_addr: blabla" virt-041.cluster-qe.lab.eng.brq.redhat.com virt-042.cluster-qe.lab.eng.brq.redhat.com --start --enable
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
virt-041.cluster-qe.lab.eng.brq.redhat.com: Succeeded
virt-041.cluster-qe.lab.eng.brq.redhat.com: Starting Cluster...
virt-042.cluster-qe.lab.eng.brq.redhat.com: Succeeded
virt-042.cluster-qe.lab.eng.brq.redhat.com: Starting Cluster...
virt-041.cluster-qe.lab.eng.brq.redhat.com: Cluster Enabled
virt-042.cluster-qe.lab.eng.brq.redhat.com: Cluster Enabled


The getNodesFromCorosyncConf() function then produces the following results:

[root@virt-041 ~]# pcs status nodes corosync
Corosync Nodes:
 Online: virt-041.cluster-qe.lab.eng.brq.redhat.com virt-042.cluster-qe.lab.eng.brq.redhat.com 
 Offline: blabla 

[root@virt-041 ~]# pcs status pcsd
  blabla: Offline
  virt-041.cluster-qe.lab.eng.brq.redhat.com: Online
  virt-042.cluster-qe.lab.eng.brq.redhat.com: Online


The related sections from corosync.conf:

[root@virt-041 ~]# grep ring0 /etc/corosync/corosync.conf
cluster_name: ring0_addr: blabla
        ring0_addr: virt-041.cluster-qe.lab.eng.brq.redhat.com
        ring0_addr: virt-042.cluster-qe.lab.eng.brq.redhat.com
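
The over-matching can be reproduced in isolation against those three lines. A minimal sketch (the `lines` list below is hand-built from the grep output above, not read from a live cluster):

```python
import re

# The three "ring0_addr"-containing lines from /etc/corosync/corosync.conf,
# including the poisoned cluster_name line
lines = [
    'cluster_name: ring0_addr: blabla',
    '        ring0_addr: virt-041.cluster-qe.lab.eng.brq.redhat.com',
    '        ring0_addr: virt-042.cluster-qe.lab.eng.brq.redhat.com',
]

# The same regex-based extraction the internal parser uses
preg = re.compile(r'.*ring0_addr: (.*)')
nodes = []
for line in lines:
    match = preg.match(line)
    if match:
        nodes.append(match.group(1))

# "blabla" from the cluster_name line is wrongly collected as a node
print(nodes)
```

Because the regex has no notion of which section a line belongs to, the cluster_name value ends up in the node list, which is exactly the phantom "blabla" node shown in the status output above.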



Version-Release number of selected component (if applicable):

pcs-0.9.115-32.el7

Comment 6 Jan Pokorný [poki] 2014-11-24 20:37:45 UTC
Created attachment 960932 [details]
Example patch (see the comment)

...but in fact this is just a drop in the ocean; a proper config-format parser is needed to fix the underlying issue once and for all.
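
To illustrate the direction such a parser would take, here is a simplified sketch (illustrative only, not the actual pcs implementation; the function names are made up): parse corosync.conf's brace-delimited sections into a tree, then read ring0_addr only from node sections inside nodelist, so a cluster_name containing the string "ring0_addr:" can no longer leak into the node list.

```python
def parse_sections(lines):
    """Parse corosync.conf's brace-delimited format into a tree.

    Each section is a pair (attributes, children) where attributes maps
    key -> value and children maps section name -> list of subsections.
    Error handling for unbalanced braces is omitted for brevity.
    """
    root = ({}, {})
    stack = [root]
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith('#'):
            continue
        if line.endswith('{'):                      # section opening
            name = line[:-1].strip()
            child = ({}, {})
            stack[-1][1].setdefault(name, []).append(child)
            stack.append(child)
        elif line == '}':                           # section closing
            stack.pop()
        else:                                       # "key: value" attribute
            key, _, value = line.partition(':')
            stack[-1][0][key.strip()] = value.strip()
    return root

def nodes_from_conf(lines):
    """Return ring0_addr values, but only from nodelist/node sections."""
    root = parse_sections(lines)
    names = []
    for nodelist in root[1].get('nodelist', []):
        for node in nodelist[1].get('node', []):
            addr = node[0].get('ring0_addr')
            if addr:
                names.append(addr)
    return names

conf = """\
totem {
    version: 2
    cluster_name: ring0_addr: blabla
}
nodelist {
    node {
        ring0_addr: virt-041.cluster-qe.lab.eng.brq.redhat.com
        nodeid: 1
    }
    node {
        ring0_addr: virt-042.cluster-qe.lab.eng.brq.redhat.com
        nodeid: 2
    }
}
"""
print(nodes_from_conf(conf.splitlines()))
```

The cluster_name attribute is stored under the totem section and never consulted, so only the two real node addresses are returned.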

Comment 7 Jan Pokorný [poki] 2014-11-24 22:50:03 UTC
FWIW, this is also the reason I added a tunable as a possible
workaround/preventive measure in the pcs-clufter interaction:

https://github.com/jnpkrn/clufter/commit/84d0e6b8bab10abd3f06db8b6f13967f5a809366

Comment 8 Tomas Jelinek 2015-02-19 15:44:56 UTC
Created attachment 993680 [details]
proposed fix 1/3

Comment 9 Tomas Jelinek 2015-02-19 15:45:15 UTC
Created attachment 993681 [details]
proposed fix 2/3

Comment 10 Tomas Jelinek 2015-02-19 15:45:35 UTC
Created attachment 993683 [details]
proposed fix 3/3

Comment 14 Tomas Jelinek 2015-06-04 14:21:55 UTC
Before Fix:
[root@rh71-node1 ~]# rpm -q pcs
pcs-0.9.137-13.el7_1.2.x86_64
[root@rh71-node1:~]# pcs cluster setup --name 'ring0_addr: blabla' rh71-node1 rh71-node2 --start --enable
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
rh71-node1: Succeeded
rh71-node2: Succeeded
Starting cluster on nodes: rh71-node1, rh71-node2...
rh71-node1: Starting Cluster...
rh71-node2: Starting Cluster...
rh71-node1: Cluster Enabled
rh71-node2: Cluster Enabled
[root@rh71-node1:~]# pcs status nodes corosync
Corosync Nodes:
 Online: rh71-node1 rh71-node2
 Offline: blabla 
[root@rh71-node1:~]# pcs status pcsd
  blabla: Offline
  rh71-node1: Online
  rh71-node2: Online



After Fix:
[root@rh71-node1:~]# rpm -q pcs
pcs-0.9.140-1.el6.x86_64
[root@rh71-node1:~]# pcs cluster setup --name 'ring0_addr: blabla' rh71-node1 rh71-node2 --start --enable
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
rh71-node1: Succeeded
rh71-node2: Succeeded
Starting cluster on nodes: rh71-node1, rh71-node2...
rh71-node1: Starting Cluster...
rh71-node2: Starting Cluster...
rh71-node1: Cluster Enabled
rh71-node2: Cluster Enabled
Synchronizing pcsd certificates on nodes rh71-node1, rh71-node2. pcsd needs to be restarted on the nodes in order to reload the certificates.
[root@rh71-node1:~]# pcs status nodes corosync
Corosync Nodes:
 Online: rh71-node1 rh71-node2 
 Offline: 
[root@rh71-node1:~]# pcs status pcsd
  rh71-node1: Online
  rh71-node2: Online

Comment 18 errata-xmlrpc 2015-11-19 09:32:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-2290.html

