Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Cause:
Input and output packets are blocked (by the netfilter firewall).
Consequence:
Corosync gets stuck and never creates a membership, so the cluster cannot be used.
Fix:
The main problem was that corosync relies on multicast loopback (packets sent to the multicast group are returned to the sender). Unfortunately, these packets are filtered by netfilter, and if the policy is to block, they are dropped and never arrive back at corosync. The solution is to use a unix datagram socket created by socketpair, used only for local loopback. Packets are sent to the multicast group AND to this unix datagram socket. Multicast group loopback is disabled, but packets are always delivered to localhost through the unix socket.
Result:
In the given scenario, a single-node cluster is created.
Description - Jaroslav Kortus
2012-02-06 18:22:42 UTC
Description of problem:
When a normally running cluster is "broken" using iptables rules that drop everything coming to or from the cluster interface, the cluster never recovers correctly.
If the packets are dropped outside of the OS (on virt host's virbr) then the cluster recovers correctly (forms 1-member islands).
It may be caused by the fact that drop causes EPERM:
sendmsg(12, {msg_name(16)={sa_family=AF_INET, sin_port=htons(5405), sin_addr=inet_addr("239.192.42.86")}, msg_iov(1)=[{"\203\241\r\236\203\20\264\240\221|u\372\213\214C\6\270\342\2421\23\3425\310\375a\370\203\22\320\356\n"..., 325}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = -1 EPERM (Operation not permitted)
Version-Release number of selected component (if applicable):
corosync-1.4.1-4.el6.x86_64
How reproducible:
100%
Steps to Reproduce:
1. setup a running cluster
2. on all nodes: iptables -I INPUT -i eth1 -j DROP; iptables -I OUTPUT -o eth1 -j DROP
3. see the message appearing in logs "Totem is unable to form a cluster because of an operating system or network fault. The most common cause of this message is that the local firewall is configured improperly."
4. after this nothing more happens
Actual results:
cluster stack is stuck
Expected results:
the same as in case of virbr (i.e. form 1-member islands)
Additional info:
Comment 5 - RHEL Program Management
2012-07-10 08:33:15 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.
Comment 6 - RHEL Program Management
2012-07-10 23:10:19 UTC
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development. This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.
Created attachment 621513
Proposed patch
Use unix socket for local multicast loop
Instead of relying on the kernel's multicast loop functionality, we now use
a unix socket created by socketpair to deliver multicast messages to the
local node. This handles problems with an improperly configured local
firewall: even if output/input to/from the ethernet interface is blocked,
the node is still able to create a single-node membership.
The dark side of the patch is that a membership is now always created, so
"Totem is unable to form a cluster..." will never appear (the same applies
to the continuous_gather key).
Created attachment 623952
Return back "Totem is unable to form..." message
This patch restores the functionality named in the subject line. It relies
on the fact that sendmsg will return an error, and if such errors keep
being returned for a long time, the cause is probably the firewall.
On a firewalled cluster (no on-node iptables), it produces one-node islands as expected. Syslog says:
corosync[9681]: [TOTEM ] A processor failed, forming new configuration.
corosync[9681]: [QUORUM] Members[2]: 2 3
corosync[9681]: [CMAN ] quorum lost, blocking activity
[QUORUM] This node is within the non-primary component and will NOT provide any services.
[QUORUM] Members[1]: 2
[TOTEM ] A processor joined or left the membership and a new membership was formed.
[CPG ] chosen downlist: sender r(0) ip(192.168.101.2) ; members(old:3 left:2)
[MAIN ] Completed service synchronization, ready to provide service.
Clustat reports one node online and the rest offline, with the cluster inquorate. This happens in approximately the token-timeout interval.
If on-node iptables are in place as in comment 0, the following message is added to syslog every 1-2 seconds:
corosync[27900]: [MAIN ] Totem is unable to form a cluster because of an operating system or network fault. The most common cause of this message is that the local firewall is configured improperly.
Cluster is re-formed as soon as the rules are removed.
Marking as verified with corosync-1.4.1-12.el6.x86_64.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHBA-2013-0497.html