Bug 1410110 - cluster recovery takes too long after network failure
Summary: cluster recovery takes too long after network failure
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: pacemaker
Version: 6.9
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Target Release: 6.9
Assignee: Ken Gaillot
QA Contact: cluster-qe@redhat.com
Depends On:
Reported: 2017-01-04 13:39 UTC by michal novacek
Modified: 2017-03-21 09:51 UTC (History)
CC List: 6 users

Fixed In Version: pacemaker-1.1.15-5.el6
Doc Type: No Doc Update
Doc Text:
Clone Of:
Last Closed: 2017-03-21 09:51:09 UTC
Target Upstream Version:

Attachments (Terms of Use)
'pcs cluster report' output (5.80 MB, application/x-bzip)
2017-01-04 13:39 UTC, michal novacek
virt-056:/var/log/cluster/corosync.log for Jan 4 (575.76 KB, application/x-gzip)
2017-01-17 11:38 UTC, michal novacek

System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:0629 normal SHIPPED_LIVE pacemaker bug fix update 2017-03-21 12:29:32 UTC

Description michal novacek 2017-01-04 13:39:34 UTC
Created attachment 1237172 [details]
'pcs cluster report' output

Description of problem:
On a 16-node pacemaker cluster I use iptables to cut off communication of each
node with all the other nodes, simulating a switch failure. This causes all
nodes to become inquorate and to see all other nodes as offline. After removing
the iptables rules (at the same time on all nodes) no reboot occurs. The
problem is that cluster recovery back to full functionality takes too long
(about fifteen minutes).

Version-Release number of selected component (if applicable):

How reproducible: always

Steps to Reproduce:
1. have a running, configured and settled cluster
2. create an iptables rules script on each node that, once run, blocks
traffic to and from all other cluster nodes over both IPv4 and IPv6
3. run the created script at the same time on all nodes
4. wait for the cluster to settle (all nodes inquorate, no reboot)
5. remove all rules on all nodes at the same time
6. wait for the cluster to be fully functional again
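Steps 2 and 3 above can be sketched roughly as follows. This is only an
illustration: the node list is truncated to three names and the rules-file
path is an assumption, not taken from this report.

```shell
#!/bin/sh
# Sketch of steps 2-3: generate a script of iptables/ip6tables rules that
# drop all traffic to and from every other cluster node, then run it
# simultaneously everywhere. Node list (truncated) and path are examples.
RULES=/tmp/block-cluster.sh
SELF=$(hostname -s)
: > "$RULES"
for node in virt-006 virt-007 virt-008; do      # ...all 16 nodes in practice
    [ "$node" = "$SELF" ] && continue           # never block ourselves
    for fw in iptables ip6tables; do            # step 2: both IPv4 and IPv6
        echo "$fw -A INPUT -s $node -j DROP"  >> "$RULES"
        echo "$fw -A OUTPUT -d $node -j DROP" >> "$RULES"
    done
done
chmod +x "$RULES"
# Step 3: run $RULES at the same moment on every node (e.g. a clock-
# synchronised 'at' job); step 5 undoes it by re-running with -D for -A.
```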

Actual results: about 15 minutes recovery

Expected results: less than 5 minutes recovery
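The actual/expected timings can be measured with a small polling loop such as
the one below; the default health-check command and the five-minute bound are
illustrative assumptions, not part of this report.

```shell
#!/bin/sh
# wait_recovered: poll a health-check command until it succeeds or a
# timeout expires, printing the elapsed time. The default check (cluster
# reports an Online node list via 'pcs status') and 300s bound are examples.
wait_recovered() {
    check=${1:-"pcs status | grep -q '^Online:'"}
    timeout=${2:-300}                    # 300 s = the 5-minute expectation
    start=$(date +%s)
    while [ $(( $(date +%s) - start )) -lt "$timeout" ]; do
        if eval "$check"; then
            echo "recovered in $(( $(date +%s) - start ))s"
            return 0
        fi
        sleep 1
    done
    echo "no recovery within ${timeout}s" >&2
    return 1
}
# Example: wait_recovered "pcs status | grep -q 'Online: \[ virt-006'" 300
```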

Additional info:
Cluster configuration:

> [root@virt-006 ~]# pcs status
Cluster name: STSRHTS23364
Stack: cman
Current DC: virt-056 (version 1.1.15-3.el6-e174ec8) - partition with quorum
Last updated: Wed Jan  4 14:36:53 2017          Last change: Wed Jan  4 11:38:13 2017 by root via cibadmin on virt-062

16 nodes and 17 resources configured

Online: [ virt-006 virt-007 virt-008 virt-009 virt-013 virt-014 virt-016 virt-018 virt-056 virt-057 virt-058 virt-059 virt-060 virt-061 virt-062 virt-067 ]

Full list of resources:

 fence-virt-006 (stonith:fence_xvm):    Started virt-006
 fence-virt-007 (stonith:fence_xvm):    Started virt-007
 fence-virt-008 (stonith:fence_xvm):    Started virt-008
 fence-virt-009 (stonith:fence_xvm):    Started virt-009
 fence-virt-013 (stonith:fence_xvm):    Started virt-013
 fence-virt-014 (stonith:fence_xvm):    Started virt-014
 fence-virt-016 (stonith:fence_xvm):    Started virt-016
 fence-virt-018 (stonith:fence_xvm):    Started virt-018
 fence-virt-056 (stonith:fence_xvm):    Started virt-056
 fence-virt-057 (stonith:fence_xvm):    Started virt-057
 fence-virt-058 (stonith:fence_xvm):    Started virt-058
 fence-virt-059 (stonith:fence_xvm):    Started virt-059
 fence-virt-060 (stonith:fence_xvm):    Started virt-060
 fence-virt-061 (stonith:fence_xvm):    Started virt-061
 fence-virt-062 (stonith:fence_xvm):    Started virt-062
 fence-virt-067 (stonith:fence_xvm):    Started virt-067
 atd    (lsb:atd):      Started virt-006

Failed Actions:
* fence-virt-060_monitor_60000 on virt-060 'not running' (7): call=107, status=complete, exitreason='none',
    last-rc-change='Wed Jan  4 12:03:43 2017', queued=0ms, exec=6ms
* fence-virt-008_monitor_60000 on virt-008 'not running' (7): call=87, status=complete, exitreason='none',
    last-rc-change='Wed Jan  4 12:03:36 2017', queued=0ms, exec=10ms
* fence-virt-009_monitor_60000 on virt-009 'not running' (7): call=87, status=complete, exitreason='none',
    last-rc-change='Wed Jan  4 12:03:34 2017', queued=0ms, exec=1ms

Daemon Status:
  cman: active/disabled
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: inactive/enabled

> [root@virt-006 ~]# pcs config
Cluster Name: STSRHTS23364
Corosync Nodes:
 virt-006 virt-007 virt-008 virt-009 virt-013 virt-014 virt-016 virt-018 virt-056 virt-057 virt-058 virt-059 virt-060 virt-061 virt-062 virt-067
Pacemaker Nodes:
 virt-006 virt-007 virt-008 virt-009 virt-013 virt-014 virt-016 virt-018 virt-056 virt-057 virt-058 virt-059 virt-060 virt-061 virt-062 virt-067

Resources:
 Resource: atd (class=lsb type=atd)
  Operations: start interval=0s timeout=15 (atd-start-interval-0s)
              stop interval=0s timeout=15 (atd-stop-interval-0s)
              monitor interval=60s start-delay=10s (atd-monitor-interval-60s)

Stonith Devices:
 Resource: fence-virt-006 (class=stonith type=fence_xvm)
  Attributes: delay=5 pcmk_host_check=static-list pcmk_host_list=virt-006 pcmk_host_map=virt-006:virt-006.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-006-monitor-interval-60s)
 Resource: fence-virt-007 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-007 pcmk_host_map=virt-007:virt-007.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-007-monitor-interval-60s)
 Resource: fence-virt-008 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-008 pcmk_host_map=virt-008:virt-008.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-008-monitor-interval-60s)
 Resource: fence-virt-009 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-009 pcmk_host_map=virt-009:virt-009.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-009-monitor-interval-60s)
 Resource: fence-virt-013 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-013 pcmk_host_map=virt-013:virt-013.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-013-monitor-interval-60s)
 Resource: fence-virt-014 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-014 pcmk_host_map=virt-014:virt-014.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-014-monitor-interval-60s)
 Resource: fence-virt-016 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-016 pcmk_host_map=virt-016:virt-016.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-016-monitor-interval-60s)
 Resource: fence-virt-018 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-018 pcmk_host_map=virt-018:virt-018.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-018-monitor-interval-60s)
 Resource: fence-virt-056 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-056 pcmk_host_map=virt-056:virt-056.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-056-monitor-interval-60s)
 Resource: fence-virt-057 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-057 pcmk_host_map=virt-057:virt-057.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-057-monitor-interval-60s)
 Resource: fence-virt-058 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-058 pcmk_host_map=virt-058:virt-058.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-058-monitor-interval-60s)
 Resource: fence-virt-059 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-059 pcmk_host_map=virt-059:virt-059.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-059-monitor-interval-60s)
 Resource: fence-virt-060 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-060 pcmk_host_map=virt-060:virt-060.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-060-monitor-interval-60s)
 Resource: fence-virt-061 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-061 pcmk_host_map=virt-061:virt-061.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-061-monitor-interval-60s)
 Resource: fence-virt-062 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-062 pcmk_host_map=virt-062:virt-062.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-062-monitor-interval-60s)
 Resource: fence-virt-067 (class=stonith type=fence_xvm)
  Attributes: pcmk_host_check=static-list pcmk_host_list=virt-067 pcmk_host_map=virt-067:virt-067.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-067-monitor-interval-60s)
Fencing Levels:

Location Constraints:
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:

Alerts:
 No alerts defined

Resources Defaults:
 No defaults set
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: cman
 dc-version: 1.1.15-3.el6-e174ec8
 have-watchdog: false

Comment 7 Andrew Beekhof 2017-01-12 21:30:21 UTC
If it is always reproducible, very.

Comment 9 michal novacek 2017-01-17 11:38:39 UTC
Created attachment 1241679 [details]
virt-056:/var/log/cluster/corosync.log for Jan 4

Comment 12 Ken Gaillot 2017-01-30 17:22:06 UTC
I have backported upstream commits 31db95be, df497ff, de5c6c73, 64c77a7, and 3a94d53c, which at least partially resolve this issue. I am not certain this is a full solution, but given the current deadlines for 6.9, I think it is important to get these in the release. We will consider this the fix for this bz; if the problem recurs with the new packages, please open a new bz.

Documentation: Since this has not been reported by a customer, and only affects recovery time rather than data integrity, I do not think we need a release note for this.

Comment 14 michal novacek 2017-01-31 14:49:41 UTC
I have verified that recovery of the cluster in the switch-failure scenario takes less than five minutes with pacemaker-1.1.15-5.el6.

Comment 17 errata-xmlrpc 2017-03-21 09:51:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

