Bug 787789 - cman+corosync get stuck if iptables drop is in place
Summary: cman+corosync get stuck if iptables drop is in place
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: corosync
Version: 6.2
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Assignee: Jan Friesse
QA Contact: Cluster QE
Depends On:
Reported: 2012-02-06 18:22 UTC by Jaroslav Kortus
Modified: 2013-02-21 07:50 UTC

Fixed In Version: corosync-1.4.1-11.el6
Doc Type: Bug Fix
Doc Text:
Cause: Inbound and outbound packets are blocked by the netfilter firewall. Consequence: Corosync gets stuck and never creates a membership, so the cluster cannot be used. Fix: The root of the problem was that corosync relied on kernel multicast loopback (packets sent to the multicast group being returned to the sender). These looped-back packets are also filtered by netfilter, so with a DROP policy they never arrive back at corosync. The solution is a pair of connected unix datagram sockets, created with socketpair() and used only for local loopback: packets are sent to the multicast group AND to this unix socket. Kernel multicast loopback is disabled, and packets are always delivered to the local node through the unix socket. Result: In the given scenario, a single-node cluster is created.
Clone Of:
Last Closed: 2013-02-21 07:50:03 UTC
Target Upstream Version:

Attachments
Proposed patch (9.33 KB, patch)
2012-10-04 09:58 UTC, Jan Friesse
Move "Totem is unable to form..." message to main (1.74 KB, patch)
2012-10-09 07:21 UTC, Jan Friesse
Return back "Totem is unable to form..." message (9.58 KB, patch)
2012-10-09 07:22 UTC, Jan Friesse

System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:0497 0 normal SHIPPED_LIVE corosync bug fix and enhancement update 2013-02-20 21:18:24 UTC

Description Jaroslav Kortus 2012-02-06 18:22:42 UTC
Description of problem:
When a normally running cluster is "broken" by iptables rules that drop everything coming to or from the cluster interface, the cluster never recovers correctly.

If the packets are dropped outside of the OS (on the virt host's virbr bridge), the cluster recovers correctly (forms 1-member islands).

It may be caused by the fact that the DROP rule makes sendmsg() fail with EPERM:
sendmsg(12, {msg_name(16)={sa_family=AF_INET, sin_port=htons(5405), sin_addr=inet_addr("")}, msg_iov(1)=[{"\203\241\r\236\203\20\264\240\221|u\372\213\214C\6\270\342\2421\23\3425\310\375a\370\203\22\320\356\n"..., 325}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = -1 EPERM (Operation not permitted)
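The strace line above shows the key symptom: with an iptables OUTPUT DROP rule in place, sendmsg() does not silently lose the packet but fails with EPERM. A caller can treat this (and a few other transient errnos) as "keep retrying" rather than a fatal socket error. A minimal sketch; send_should_retry() is a hypothetical helper for illustration, not a corosync API:

```c
/* Classify sendmsg() errno values: transient conditions (including the
 * EPERM produced by a netfilter DROP rule on OUTPUT) mean "retry later",
 * anything else is a genuine socket error. Hypothetical helper. */
#include <errno.h>
#include <stdbool.h>

static bool send_should_retry(int err)
{
    switch (err) {
    case EPERM:   /* packet rejected by netfilter DROP */
    case EAGAIN:  /* socket send buffer full */
    case EINTR:   /* interrupted by a signal */
    case ENOBUFS: /* transient kernel memory pressure */
        return true;
    default:
        return false; /* e.g. EBADF, ENOTCONN: real failure */
    }
}
```

With such a classification, a DROP-induced EPERM no longer aborts the send path; the daemon keeps retrying until the firewall rule is removed.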

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. setup a running cluster
2. on all nodes: iptables -I INPUT -i eth1 -j DROP; iptables -I OUTPUT -o eth1 -j DROP
3. see the message appearing in logs "Totem is unable to form a cluster because of an operating system or network fault. The most common cause of this message is that the local firewall is configured improperly."
4. after this nothing more happens
Actual results:
cluster stack is stuck

Expected results:
the same as in case of virbr (i.e. form 1-member islands)

Additional info:

Comment 5 RHEL Program Management 2012-07-10 08:33:15 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 6 RHEL Program Management 2012-07-10 23:10:19 UTC
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development.  This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

Comment 10 Jan Friesse 2012-10-04 09:58:43 UTC
Created attachment 621513 [details]
Proposed patch

Use unix socket for local multicast loop

Instead of relying on the kernel's multicast loop functionality, we now
use a unix socket created by socketpair() to deliver multicast messages
to the local node. This handles problems with an improperly configured
local firewall: even if input/output on the ethernet interface is
blocked, the node is still able to create a single-node membership.

The dark side of the patch is that a membership is now always created,
so "Totem is unable to form a cluster..." will never appear (the same
applies to the continuous_gather key).
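The core mechanism of the patch can be sketched as follows. This is an illustrative reduction, not the actual corosync code: a socketpair() of AF_UNIX datagram sockets serves as the local loopback path, which netfilter cannot drop, while kernel multicast loopback (IP_MULTICAST_LOOP) is left disabled on the real multicast socket:

```c
/* Sketch of the local-loopback technique: deliver every outgoing
 * multicast message to the local node through a unix datagram
 * socketpair instead of relying on kernel multicast loopback. */
#include <sys/socket.h>
#include <sys/types.h>

/* Create the loopback pair; fds[0] is the send end, fds[1] the
 * receive end. Returns 0 on success, -1 on error. */
static int create_local_loop(int fds[2])
{
    return socketpair(AF_UNIX, SOCK_DGRAM, 0, fds);
}

/* Called next to the real multicast sendmsg(): the unix-socket copy
 * always reaches the local node, even if all IP traffic is dropped. */
static ssize_t send_local_copy(int loop_fd, const void *buf, size_t len)
{
    return send(loop_fd, buf, len, 0);
}
```

The receive end (fds[1]) would simply be added to the daemon's poll loop alongside the multicast socket, so locally originated messages are processed through the same code path as network traffic.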

Comment 12 Jan Friesse 2012-10-09 07:21:50 UTC
Created attachment 623951 [details]
Move "Totem is unable to form..." message to main

Comment 13 Jan Friesse 2012-10-09 07:22:43 UTC
Created attachment 623952 [details]
Return back "Totem is unable to form..." message

This patch restores the functionality named in the subject. It relies on
the fact that sendmsg() returns an error, and if such errors keep being
returned for a long time, the cause is probably the firewall.
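The detection scheme described above can be sketched as a counter of consecutive send failures: once failures persist past a threshold, the "Totem is unable to form a cluster..." warning is emitted, and any successful send resets the counter. The threshold value and function name here are illustrative assumptions, not corosync's actual ones:

```c
/* Sketch: report a probable firewall problem only when sendmsg()
 * failures persist, since isolated failures can be transient. */
#include <stdbool.h>

#define FIREWALL_SUSPECT_THRESHOLD 30 /* consecutive failed sends; illustrative */

static unsigned consecutive_send_failures;

/* Call after every send attempt; returns true when the "Totem is
 * unable to form a cluster..." warning should be logged. */
static bool record_send_result(bool send_ok)
{
    if (send_ok) {
        consecutive_send_failures = 0;
        return false;
    }
    return ++consecutive_send_failures >= FIREWALL_SUSPECT_THRESHOLD;
}
```

This matches the behavior verified below: the message repeats while the iptables rules are in place and stops as soon as a send succeeds again.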

Comment 15 Jaroslav Kortus 2012-11-28 16:09:05 UTC
On a firewalled cluster (no on-node iptables) it produces one-node islands as expected. Syslog says:
corosync[9681]:   [TOTEM ] A processor failed, forming new configuration.
corosync[9681]:   [QUORUM] Members[2]: 2 3
corosync[9681]:   [CMAN  ] quorum lost, blocking activity
[QUORUM] This node is within the non-primary component and will NOT provide any services.
[QUORUM] Members[1]: 2
[TOTEM ] A processor joined or left the membership and a new membership was formed.
[CPG   ] chosen downlist: sender r(0) ip( ; members(old:3 left:2)
[MAIN  ] Completed service synchronization, ready to provide service.

Clustat reports one node Online, the rest Offline, and the cluster inquorate. This happens within approximately the token-timeout time.

If on-node iptables are in place as in comment 0, the following message is added to syslog every 1-2 seconds:
corosync[27900]:   [MAIN  ] Totem is unable to form a cluster because of an operating system or network fault. The most common cause of this message is that the local firewall is configured improperly.

Cluster is re-formed as soon as the rules are removed.
Marking as verified with corosync-1.4.1-12.el6.x86_64.

Comment 17 errata-xmlrpc 2013-02-21 07:50:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

