Bug 1344225 - garbd resource-agent
Summary: garbd resource-agent
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: resource-agents
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Oyvind Albrigtsen
QA Contact: Asaf Hirshberg
URL:
Whiteboard:
Depends On: 1328018
Blocks:
 
Reported: 2016-06-09 08:32 UTC by Marcel Kolaja
Modified: 2016-08-02 18:23 UTC
CC List: 9 users

Fixed In Version: resource-agents-3.9.5-54.el7_2.12
Doc Type: Enhancement
Doc Text:
This update adds the Galera arbitrator (garbd) as a new resource agent. It can be used in Galera clusters with an even number of nodes, where it can act as an odd node and a quorum arbitrator, preventing split-brain situations. Use pacemaker to set up garbd on a node.
Clone Of: 1328018
Environment:
Last Closed: 2016-08-02 18:23:41 UTC
Target Upstream Version:
Embargoed:


Attachments


Links:
System ID: Red Hat Product Errata RHBA-2016:1535
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: resource-agents bug fix and enhancement update
Last Updated: 2016-08-02 22:19:47 UTC

Description Marcel Kolaja 2016-06-09 08:32:09 UTC
This bug has been copied from bug #1328018 and has been proposed
to be backported to 7.2 z-stream (EUS).

Comment 4 Damien Ciabrini 2016-06-09 09:47:06 UTC
Instructions for testing (from bug #1328018):

1. create a 3-node pacemaker cluster

pcs cluster setup --name garbd node1 node2 node3
pcs property set stonith-enabled=false

2. We'll run a 2-node galera cluster plus 1 garbd node. Choose a name for the
galera cluster and set it in /etc/my.cnf.d/galera.cnf:

# grep wsrep_cluster_name /etc/my.cnf.d/galera.cnf
wsrep_cluster_name="galeracluster"

3. The galera and garbd resources need coordination: they must start on
specific nodes and one after the other. Prepare the commands in a file:

pcs cluster cib garbd.xml

3.a. Create the galera and garbd resources. The port number in
wsrep_cluster_address is mandatory for garbd, and so is the galera cluster name:

pcs -f garbd.xml resource create galera galera enable_creation=true wsrep_cluster_address='gcomm://node1,node2' meta master-max=2 ordered=true --master --disabled
pcs -f garbd.xml resource create garbd garbd wsrep_cluster_address='gcomm://node1:4567,node2:4567' wsrep_cluster_name="galeracluster"

3.b. Prevent galera from running on node3; that node is reserved for garbd:

pcs -f garbd.xml property set --node node3 arbitrator=1
pcs -f garbd.xml constraint location galera-master rule resource-discovery=exclusive score=0 not_defined arbitrator

3.c. Make sure garbd starts after galera and does not run where galera runs:

pcs -f garbd.xml constraint order promote galera-master then start garbd
pcs -f garbd.xml constraint colocation add garbd with galera-master -INFINITY

5. Import the commands and start galera; garbd will start in sequence:

pcs cluster cib-push garbd.xml
pcs resource enable galera-master

Once started, you can verify that the galera cluster includes the arbitrator

# mysql -e "show status like 'wsrep_cluster_size';"
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
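
As an additional check (a sketch assuming the node1/node2/node3 names used
above), pacemaker's own view of the placement and constraints can be inspected:

# pcs status resources
# pcs constraint --full

The galera-master set should report node1 and node2 as masters, and garbd
should be started on node3 only.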

Comment 5 Asaf Hirshberg 2016-06-22 04:48:53 UTC
Code verified using resource-agents-3.9.5-72.el7.x86_64:

[heat-admin@overcloud-controller-0 ~]$ cat /usr/lib/ocf/resource.d/heartbeat/garbd 
#!/bin/sh
#
# Copyright (c) 2015 Damien Ciabrini <dciabrin>
#                    All Rights Reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of version 2 of the GNU General Public License as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it would be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Further, this software is distributed without any warranty that it is
# free of the rightful claim of any third person regarding infringement
# or the like.  Any license provided herein, whether implied or
# otherwise, applies only to this software file.  Patent licenses, if
# any, provided herein do not apply to combinations of this program with
# other software, or any other product whatsoever.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
#

##
# README.
#
# Resource agent for garbd, the Galera arbitrator
#
# You can use this agent if you run an even number of galera nodes,
# and you want an additional node to avoid split-brain situations.
#
# garbd requires that a Galera cluster is running, so make sure to
# add a proper ordering constraint to the cluster, e.g.:
#
#   pcs constraint order galera-master then garbd
#
# If you add garbd to the cluster while Galera is not running, you
# might want to disable it before setting up ordering constraint, e.g.:
#
#   pcs resource create garbd garbd \
#      wsrep_cluster_address=gcomm://node1:4567,node2:4567 \
#      meta target-role=stopped
#
# Use location constraints to avoid running galera and garbd on
# the same node, e.g.:
#
#   pcs constraint colocation add garbd with galera-master -INFINITY
#   pcs constraint location garbd prefers node3=INFINITY
#
##

#######################################################################
# Initialization:

: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs

#######################################################################
# Set default parameter values

OCF_RESKEY_binary_default="/usr/sbin/garbd"
OCF_RESKEY_log_default="/var/log/garbd.log"
OCF_RESKEY_pid_default="/var/run/garbd.pid"
OCF_RESKEY_user_default="mysql"
if [ "X${HOSTOS}" = "XOpenBSD" ];then
    OCF_RESKEY_group_default="_mysql"
else
    OCF_RESKEY_group_default="mysql"
fi

: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
: ${OCF_RESKEY_log=${OCF_RESKEY_log_default}}
: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
: ${OCF_RESKEY_group=${OCF_RESKEY_group_default}}

usage() {
  cat <<UEND
usage: $0 (start|stop|validate-all|meta-data|status|monitor)

$0 manages a Galera arbitrator.
...
...
# What kind of method was invoked?
case "$1" in
  start)    garbd_start;;
  stop)     garbd_stop;;
  status)   garbd_status err;;
  monitor)  garbd_monitor err;;
  promote)  garbd_promote;;
  demote)   garbd_demote;;
  validate-all) exit $OCF_SUCCESS;;

 *)     usage
        exit $OCF_ERR_UNIMPLEMENTED;;
esac
[heat-admin@overcloud-controller-0 ~]$ 
[heat-admin@overcloud-controller-0 ~]$ rpm -qa |grep resource-agent
resource-agents-3.9.5-72.el7.x86_64
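
Beyond inspecting the shipped agent file, the agent's actions can also be
exercised directly with ocf-tester from resource-agents. The parameter values
below are illustrative only (they reuse the node1/node2 hosts and cluster name
from comment 4), and a running galera cluster is required for the test to pass:

# ocf-tester -n garbd \
    -o wsrep_cluster_address='gcomm://node1:4567,node2:4567' \
    -o wsrep_cluster_name='galeracluster' \
    /usr/lib/ocf/resource.d/heartbeat/garbd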

Comment 7 errata-xmlrpc 2016-08-02 18:23:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1535.html

