Bug 1118188 - galera resource-agent
Summary: galera resource-agent
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: resource-agents
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Assignee: David Vossel
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On: 1116166
Blocks: 1139277
 
Reported: 2014-07-10 07:38 UTC by Jan Kurik
Modified: 2016-04-26 15:23 UTC (History)
13 users

Fixed In Version: resource-agents-3.9.5-26.el7_0.6
Doc Type: Bug Fix
Doc Text:
This update introduces the galera resource agent for managing multi-master MySQL instances with Pacemaker.
Clone Of: 1116166
Environment:
Last Closed: 2014-12-04 08:57:02 UTC




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:1957 normal SHIPPED_LIVE resource-agents enhancement update 2014-12-04 13:56:50 UTC

Description Jan Kurik 2014-07-10 07:38:37 UTC
This bug has been copied from bug #1116166 and has been proposed
to be backported to 7.0 z-stream (EUS).

Comment 10 David Vossel 2014-07-30 19:16:50 UTC
There is a failure scenario we need to account for that has been brought to my attention. If the SST sync takes longer than the promote operation's timeout value, the promote will fail and Pacemaker will in turn stop the mysqld daemon in an attempt to recover the resource. Users with large databases could experience this with the current default timeout settings.

We don't want this to happen. If the SST is still in progress, having pacemaker stop the mysqld instance could corrupt the database on the local node.

There's a simple fix for this:

1. Set the on-fail=block option on the galera resource's promote operation.
2. Be very generous when setting the promote timeout. By default, promote gets 2 minutes; we should consider something much larger, like 5 minutes.
3. Be certain we are promoting the instances in series, not in parallel, by using the ordered=true option. This prevents galera instances from potentially waiting on each other to sync with the same donor node.

Example: 
pcs resource create db galera enable_creation=true wsrep_cluster_address=gcomm://node1,node2,node3 meta master-max=3 ordered=true op promote timeout=300s on-fail=block --master

Now, if the galera instance does fail during the promote, Pacemaker will not attempt to recover the resource (which is what we want). If this failure occurs, the user should first verify that the failed node's database is no longer in the process of syncing, then double the galera promote timeout value and execute a resource cleanup to remove the failure, which will allow Pacemaker to manage the database again.
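The recovery procedure above can be sketched with pcs commands. This is a minimal sketch, assuming the resource is named db as in the example; the 600s value is simply the example's 300s timeout doubled, not a recommendation from this bug.

```shell
# Double the promote timeout on the existing galera resource
# (600s here assumes the 300s value from the example above;
# size it to your actual SST duration).
pcs resource update db op promote timeout=600s on-fail=block

# After confirming the failed node's SST is no longer running,
# clear the recorded promote failure so Pacemaker resumes
# managing the database on that node.
pcs resource cleanup db
```

Because on-fail=block leaves the resource unmanaged after the failure, the cleanup step is what returns control to Pacemaker.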

-- Vossel

Comment 20 errata-xmlrpc 2014-12-04 08:57:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2014-1957.html

