Bug 1749263 - Problem handling links when their link numbers are not consecutive
Summary: Problem handling links when their link numbers are not consecutive
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: corosync
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Jan Friesse
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-09-05 09:08 UTC by Nina Hostakova
Modified: 2020-04-28 15:56 UTC
CC List: 3 users

Fixed In Version: corosync-3.0.2-4.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-28 15:56:45 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
totem: fix check if all nodes have same number of links (1.52 KB, patch), 2019-09-05 11:50 UTC, Jan Friesse


Links
Red Hat Product Errata RHBA-2020:1674, last updated 2020-04-28 15:56:55 UTC

Comment 1 Jan Friesse 2019-09-05 11:50:30 UTC
Created attachment 1611929 [details]
totem: fix check if all nodes have same number of links

totem: fix check if all nodes have same number of links

Configured links may not come in order in the interfaces array, which
holds an entry for _all_ possible links, not just the configured ones.

So iterate through all interfaces, but skip those which are not
configured. This allows corosync to start with a configuration where
link 0 is currently not mentioned, as otherwise that link was checked
but had member_count = 0 from its default initialization, which then
made this code report a false positive for the "Not all nodes have the
same number of links" check even on a correct config.

Signed-off-by: Thomas Lamprecht <t.lamprecht>
Reviewed-by: Christine Caulfield <ccaulfie>
Reviewed-by: Jan Friesse <jfriesse>
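
For illustration, a minimal sketch of the corrected check described above, using hypothetical struct and field names rather than the actual corosync source (the real change is in the attached patch):

/* Hypothetical sketch of the fixed check: walk every slot in the
 * interfaces array, but only consider links that are actually
 * configured in corosync.conf. */
struct iface {
	int configured;    /* set when the link appears in corosync.conf */
	int member_count;  /* number of nodes seen on this link */
};

/* Return 1 if some configured link is missing nodes, 0 otherwise. */
int links_inconsistent(const struct iface *interfaces, int num_slots,
                       int expected_members)
{
	for (int i = 0; i < num_slots; i++) {
		if (!interfaces[i].configured) {
			/* Pre-fix code checked this slot too; its default
			 * member_count of 0 caused the false positive. */
			continue;
		}
		if (interfaces[i].member_count != expected_members) {
			return 1;
		}
	}
	return 0;
}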

Comment 3 Patrik Hagara 2020-01-29 13:17:49 UTC
before
======

see comment#0


after
=====

> [root@virt-089 ~]# rpm -q corosync
> corosync-3.0.3-2.el8.x86_64

ring config:
> [root@virt-089 ~]# cat /etc/corosync/corosync.conf
> totem {
>     version: 2
>     cluster_name: STSRHTS1520
>     transport: knet
>     crypto_cipher: aes256
>     crypto_hash: sha256
> }
> 
> nodelist {
>     node {
>         name: virt-089
>         nodeid: 1
>         ring0_addr: virt-089.ipv4.cluster-qe.lab.eng.brq.redhat.com
>         ring1_addr: virt-089.ipv6.cluster-qe.lab.eng.brq.redhat.com
>         ring2_addr: 192.168.1.89
>     }
> 
>     node {
>         name: virt-090
>         nodeid: 2
>         ring0_addr: virt-090.ipv4.cluster-qe.lab.eng.brq.redhat.com
>         ring1_addr: virt-090.ipv6.cluster-qe.lab.eng.brq.redhat.com
>         ring2_addr: 192.168.1.90
>     }
> }
> 
> quorum {
>     provider: corosync_votequorum
>     two_node: 1
> }
> 
> logging {
>     to_logfile: yes
>     logfile: /var/log/cluster/corosync.log
>     to_syslog: yes
>     timestamp: on
> }


initial link status on running cluster:
> [root@virt-089 ~]# corosync-cfgtool -s
> Printing link status.
> Local node ID 1
> LINK ID 0
> 	addr	= 10.37.166.216
> 	status:
> 		nodeid  1:	link enabled:1	link connected:1
> 		nodeid  2:	link enabled:1	link connected:1
> LINK ID 1
> 	addr	= 2620:52:0:25a4:1800:ff:fe00:59
> 	status:
> 		nodeid  1:	link enabled:0	link connected:1
> 		nodeid  2:	link enabled:1	link connected:1
> LINK ID 2
> 	addr	= 192.168.1.89
> 	status:
> 		nodeid  1:	link enabled:0	link connected:1
> 		nodeid  2:	link enabled:1	link connected:1


remove ring1:
> [root@virt-089 ~]# pcs cluster link remove 1
> Sending updated corosync.conf to nodes...
> virt-089: Succeeded
> virt-090: Succeeded
> virt-089: Corosync configuration reloaded
> [root@virt-089 ~]# cat /etc/corosync/corosync.conf
> totem {
>     version: 2
>     cluster_name: STSRHTS1520
>     transport: knet
>     crypto_cipher: aes256
>     crypto_hash: sha256
> }
> 
> nodelist {
>     node {
>         name: virt-089
>         nodeid: 1
>         ring0_addr: virt-089.ipv4.cluster-qe.lab.eng.brq.redhat.com
>         ring2_addr: 192.168.1.89
>     }
> 
>     node {
>         name: virt-090
>         nodeid: 2
>         ring0_addr: virt-090.ipv4.cluster-qe.lab.eng.brq.redhat.com
>         ring2_addr: 192.168.1.90
>     }
> }
> 
> quorum {
>     provider: corosync_votequorum
>     two_node: 1
> }
> 
> logging {
>     to_logfile: yes
>     logfile: /var/log/cluster/corosync.log
>     to_syslog: yes
>     timestamp: on
> }

-> ring IDs are now non-consecutive (only links 0 and 2 remain configured)

restart cluster, verify it works with non-consecutive rings:
> [root@virt-089 ~]# pcs cluster stop --all --wait
> virt-089: Stopping Cluster (pacemaker)...
> virt-090: Stopping Cluster (pacemaker)...
> virt-089: Stopping Cluster (corosync)...
> virt-090: Stopping Cluster (corosync)...
> [root@virt-089 ~]# pcs cluster start --all --wait
> virt-089: Starting Cluster...
> virt-090: Starting Cluster...
> Waiting for node(s) to start...
> virt-090: Started
> virt-089: Started
> [root@virt-089 ~]# corosync-cfgtool -s
> Printing link status.
> Local node ID 1
> LINK ID 0
> 	addr	= 10.37.166.216
> 	status:
> 		nodeid  1:	link enabled:1	link connected:1
> 		nodeid  2:	link enabled:1	link connected:1
> LINK ID 2
> 	addr	= 192.168.1.89
> 	status:
> 		nodeid  1:	link enabled:0	link connected:1
> 		nodeid  2:	link enabled:1	link connected:1
> [root@virt-089 ~]# systemctl status corosync
> ● corosync.service - Corosync Cluster Engine
>    Loaded: loaded (/usr/lib/systemd/system/corosync.service; disabled; vendor preset: disabled)
>    Active: active (running) since Wed 2020-01-29 14:09:47 CET; 55s ago
>      Docs: man:corosync
>            man:corosync.conf
>            man:corosync_overview
>  Main PID: 19350 (corosync)
>     Tasks: 9
>    Memory: 195.1M
>    CGroup: /system.slice/corosync.service
>            └─19350 /usr/sbin/corosync -f
> 
> Jan 29 14:09:48 virt-089.cluster-qe.lab.eng.brq.redhat.com corosync[19350]:   [TOTEM ] A new membership (1.1b) was formed. Members joined: 2
> Jan 29 14:09:48 virt-089.cluster-qe.lab.eng.brq.redhat.com corosync[19350]:   [CPG   ] downlist left_list: 0 received
> Jan 29 14:09:48 virt-089.cluster-qe.lab.eng.brq.redhat.com corosync[19350]:   [CPG   ] downlist left_list: 0 received
> Jan 29 14:09:48 virt-089.cluster-qe.lab.eng.brq.redhat.com corosync[19350]:   [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
> Jan 29 14:09:48 virt-089.cluster-qe.lab.eng.brq.redhat.com corosync[19350]:   [QUORUM] This node is within the primary component and will provide service.
> Jan 29 14:09:48 virt-089.cluster-qe.lab.eng.brq.redhat.com corosync[19350]:   [QUORUM] Members[2]: 1 2
> Jan 29 14:09:48 virt-089.cluster-qe.lab.eng.brq.redhat.com corosync[19350]:   [MAIN  ] Completed service synchronization, ready to provide service.
> Jan 29 14:09:48 virt-089.cluster-qe.lab.eng.brq.redhat.com corosync[19350]:   [KNET  ] pmtud: PMTUD link change for host: 2 link: 0 from 469 to 1397
> Jan 29 14:09:48 virt-089.cluster-qe.lab.eng.brq.redhat.com corosync[19350]:   [KNET  ] pmtud: PMTUD link change for host: 2 link: 2 from 469 to 1397
> Jan 29 14:09:48 virt-089.cluster-qe.lab.eng.brq.redhat.com corosync[19350]:   [KNET  ] pmtud: Global data MTU changed to: 1397

Another possible corner case, removing ring0, was checked in the same way and confirmed working:
> [root@virt-089 ~]# corosync-cfgtool -s
> Printing link status.
> Local node ID 1
> LINK ID 2
> 	addr	= 192.168.1.89
> 	status:
> 		nodeid  1:	link enabled:0	link connected:1
> 		nodeid  2:	link enabled:1	link connected:1


marking verified in 3.0.3-2.el8

Comment 5 errata-xmlrpc 2020-04-28 15:56:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1674

