Bug 1738206 - Improve galera resource agent to detect eventual multiple bootstrap'd clusters
Summary: Improve galera resource agent to detect eventual multiple bootstrap'd clusters
Keywords:
Status: NEW
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: resource-agents
Version: 8.0
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Damien Ciabrini
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-08-06 13:59 UTC by Luca Miccini
Modified: 2023-08-10 15:40 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Type: Bug
Target Upstream Version:
Embargoed:




Links
Red Hat Knowledge Base (Solution) 4657881 (last updated 2020-07-29 17:41:55 UTC)

Description Luca Miccini 2019-08-06 13:59:54 UTC
Description of problem:

Coming from https://bugzilla.redhat.com/show_bug.cgi?id=1733392.

From comment #9:

~~~
What I *suspect* happened, or at least what I would hope caused this, is that the node that was brought back online was started manually on the command line with a flag like --wsrep-new-cluster, which tells the node to form a new cluster; in other words, the problem was operator error. This condition should not be possible when the cluster is started through pacemaker, because pacemaker always ensures there is a single master and, if one already exists, ensures that the node being started joins that cluster. If the restarted node really was started with --wsrep-new-cluster, the result would be two separate Galera clusters running instead of one. Even in that case, I would still consider it a bug that the pacemaker resource agent did not detect the situation: this is a serious issue, and the RA should have the capability to detect it.

On the PIDONE side, we of course want to confirm exactly how the galera cluster ended up operating in split-brain: that there were in fact two distinct clusters started, that the pacemaker resource agent was not the cause, and, hopefully, that this was an unfortunate case of operator error. If so, we will loop back and see whether the resource agent can be enhanced to detect this condition.
~~~

Bogdan (https://bugzilla.redhat.com/show_bug.cgi?id=1733392#c24) suggested having a look at https://review.opendev.org/#/c/318162/2/files/fuel-ha-utils/ocf/mysql-wss
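
For reference, the detection in that patch boils down to scanning the process listing for the bootstrap flag (as Michael describes below). A minimal sketch of that style of check, in the OCF shell idiom the galera agent uses, and not the exact upstream code:

~~~
# Sketch only, not the upstream fuel-ha-utils code. Assumes the usual
# OCF resource-agent context (ocf_log and OCF_ERR_GENERIC come from
# ocf-shellfuncs, and this runs inside an agent function).
# Flag any mysqld that was started with --wsrep-new-cluster, i.e. that
# bootstrapped a new cluster instead of joining the existing one. The
# [m] trick keeps grep from matching its own process.
if ps -ef | grep '[m]ysqld' | grep -q 'wsrep-new-cluster'; then
    ocf_log err "mysqld was started with --wsrep-new-cluster on this node"
    return "$OCF_ERR_GENERIC"
fi
~~~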

Michael's take:

~~~
We should look into integrating something like that patch here. I read through it, and it seems to rely on looking for the "wsrep-new-cluster" key in the ps listing. Would it be more resilient if we instead ran 'mysql -e "SHOW STATUS LIKE 'wsrep_cluster_state_uuid'"' to see whether this node is part of a different cluster? Theoretically, multiple galera clusters can be running while none of the nodes shows that particular ps listing, if the original bootstrap nodes for those clusters were stopped or restarted.
~~~
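
The query Michael mentions returns the state UUID that all members of a single Galera cluster share; two independently bootstrapped clusters report different UUIDs. A hedged sketch of extracting it:

~~~
# Print the UUID of the Galera cluster this node currently belongs to.
# -s: silent, -N: skip column headers; the client then prints only
# "wsrep_cluster_state_uuid<TAB><uuid>", so awk keeps the value column.
mysql -sN -e "SHOW STATUS LIKE 'wsrep_cluster_state_uuid';" | awk '{print $2}'
~~~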

Damien's take:

~~~
Agreed: we currently only verify that a galera server is connected to a galera cluster,
but not that the cluster is the one we were expecting to connect to.

Also, I'm not sure we ever root-caused why we ended up running two clusters.
In particular, I wonder whether this was the result of a bad bootstrap by the
resource agent, or of a combination of manual steps that ended up starting two
different galera clusters without the resource agent catching that condition.

Let's see how we can track that efficiently in the resource agent.
~~~
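
One conceivable shape for that tracking, purely as a sketch and not the eventual fix (the function and attribute names below are invented for illustration): have the agent record the UUID of the cluster it bootstrapped in the CIB, and have every monitor operation compare the local node's UUID against it.

~~~
# Hypothetical sketch, not the shipped fix; function and attribute
# names are invented for illustration. Assumes the agent's usual OCF
# shell context (ocf_log, OCF_SUCCESS, OCF_ERR_GENERIC).

# UUID of the Galera cluster this node currently belongs to.
galera_get_cluster_uuid() {
    mysql -sN -e "SHOW STATUS LIKE 'wsrep_cluster_state_uuid';" | awk '{print $2}'
}

# Run once after the agent bootstraps a new cluster: remember which
# cluster is the legitimate one by storing its UUID in the CIB.
galera_record_cluster_uuid() {
    crm_attribute --type crm_config --name galera-cluster-uuid \
        --update "$(galera_get_cluster_uuid)"
}

# Run from monitor on every node: fail if this node ended up in a
# cluster other than the one the agent bootstrapped.
galera_check_cluster_uuid() {
    local expected local_uuid
    expected=$(crm_attribute --type crm_config --name galera-cluster-uuid \
        --query --quiet 2>/dev/null)
    local_uuid=$(galera_get_cluster_uuid)
    if [ -n "$expected" ] && [ "$local_uuid" != "$expected" ]; then
        ocf_log err "node is in cluster $local_uuid, expected $expected"
        return "$OCF_ERR_GENERIC"
    fi
    return "$OCF_SUCCESS"
}
~~~

A check along these lines would also cover Michael's scenario above: even after the original bootstrap node is restarted and no process still carries --wsrep-new-cluster in its ps listing, the divergent cluster UUID remains visible.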

Comment 1 John Ruemker 2020-03-18 13:47:28 UTC
Stripping [RFE] from Summary to reflect that the current agent exhibits problematic and incorrect behavior that should be corrected to avoid customer impact.

