Bug 1738206

Summary: Improve galera resource agent to detect eventual multiple bootstrap'd clusters
Product: Red Hat Enterprise Linux 8
Reporter: Luca Miccini <lmiccini>
Component: resource-agents
Assignee: Damien Ciabrini <dciabrin>
Status: NEW
QA Contact: cluster-qe <cluster-qe>
Severity: high
Priority: high
Version: 8.0
CC: agk, bdobreli, cluster-maint, dciabrin, dhill, fdinitto, mbayer, sbradley
Target Milestone: rc
Keywords: Triaged
Target Release: ---
Hardware: All
OS: Linux
Type: Bug

Description Luca Miccini 2019-08-06 13:59:54 UTC
Description of problem:

Coming from https://bugzilla.redhat.com/show_bug.cgi?id=1733392.

From comment #9:

~~~
What I would *suspect* happened, or at least what I would hope was the cause, is that the node that was brought back online was started manually on the command line with a flag like --wsrep-new-cluster, which tells that node to form a new cluster; in other words, the problem was caused by operator error. This condition should not be possible when the cluster is started through pacemaker, because pacemaker always ensures there is a single master and, if one already exists, ensures that the node being started joins that cluster. If the restarted node was indeed started with --wsrep-new-cluster, the result would be two separate Galera clusters running instead of one. Even if that were the case, I would still consider it a bug that the pacemaker resource agent did not detect this, because it is a serious issue and the RA should have the capability to detect it.

For the PIDONE side of this, we of course want to confirm exactly what happened to make the galera cluster appear to be operating in split-brain: that two distinct clusters were in fact started, that the pacemaker resource agent was not the cause, and hopefully that this was an unfortunate case of operator error. If so, we will loop back and see whether there is a way the resource agent can be enhanced to detect this condition.
~~~
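
For context, a minimal sketch of the suspected failure mode, assuming shell access on the manually restarted node; commands are generic and any output values would be illustrative only:

~~~
# Suspected operator error: bootstrapping a node by hand while pacemaker
# already manages a running galera cluster. --wsrep-new-cluster tells the
# node to form a brand new cluster instead of joining the existing one.
mysqld_safe --wsrep-new-cluster &

# The manually started node now reports a cluster of size 1 with its own
# state UUID, distinct from the UUID reported by the pacemaker-managed nodes.
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_state_uuid'"
~~~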

Bogdan (https://bugzilla.redhat.com/show_bug.cgi?id=1733392#c24) suggested having a look at https://review.opendev.org/#/c/318162/2/files/fuel-ha-utils/ocf/mysql-wss

Michael's take:

~~~
We should look into integrating something like that patch here. I read through it, and it seems to rely on looking for the "wsrep-new-cluster" key in the ps listing. Would it be more resilient if we instead ran 'mysql -e "SHOW STATUS LIKE 'wsrep_cluster_state_uuid'"' to see whether this node is part of a different cluster? Theoretically, multiple galera clusters can be running while none of the nodes shows that particular ps listing, if the original bootstrap nodes for those clusters were stopped or restarted.
~~~
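
A minimal sketch of what such a check could look like in the resource agent, assuming the agent can query each peer's mysql server; the peer list, credentials, and function name are illustrative, not the agent's actual code:

~~~
# Hypothetical check: compare the locally reported cluster state UUID against
# every peer. A mismatch means more than one galera cluster is running.
detect_foreign_cluster()
{
    local local_uuid peer_uuid node
    local_uuid=$(mysql -nNE -e "SHOW STATUS LIKE 'wsrep_cluster_state_uuid';" | tail -1)

    # $peer_nodes: illustrative list of peers, e.g. derived from wsrep_cluster_address
    for node in $peer_nodes; do
        peer_uuid=$(mysql -h "$node" -nNE -e "SHOW STATUS LIKE 'wsrep_cluster_state_uuid';" | tail -1)
        if [ -n "$peer_uuid" ] && [ "$peer_uuid" != "$local_uuid" ]; then
            ocf_log err "galera: $node reports cluster UUID $peer_uuid but local UUID is $local_uuid: multiple clusters appear to be running"
            return $OCF_ERR_GENERIC
        fi
    done
    return $OCF_SUCCESS
}
~~~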

Damien's take:

~~~
Agreed, we currently only verify that a galera server is connected to a galera cluster,
but not that the cluster is the one we expected to connect to.

Also, I'm not sure we root-caused the original reason why we ended up running
two clusters. In particular, I wonder whether this is the result of a bad bootstrap from
the resource agent, or of a combination of manual steps that ended up
starting two different galera clusters, a condition our resource agent didn't catch.

Let's see how we can efficiently track that in the resource agent.
~~~
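
One possible way to track this in the agent, sketched under the assumption that the UUID observed at bootstrap can be stored as a cluster-wide CIB attribute; the attribute name and helper functions below are hypothetical, not part of the current agent:

~~~
# Record the cluster UUID once, when the agent bootstraps the first node.
record_bootstrap_uuid()
{
    local uuid
    uuid=$(mysql -nNE -e "SHOW STATUS LIKE 'wsrep_cluster_state_uuid';" | tail -1)
    crm_attribute --type crm_config --name galera-cluster-uuid --update "$uuid"
}

# During monitor, fail if the locally reported UUID no longer matches the
# UUID recorded at bootstrap, i.e. this node formed or joined another cluster.
check_cluster_uuid()
{
    local expected current
    expected=$(crm_attribute --type crm_config --name galera-cluster-uuid --query -q 2>/dev/null)
    current=$(mysql -nNE -e "SHOW STATUS LIKE 'wsrep_cluster_state_uuid';" | tail -1)
    if [ -n "$expected" ] && [ "$current" != "$expected" ]; then
        ocf_log err "galera: cluster UUID $current does not match bootstrapped UUID $expected"
        return $OCF_ERR_GENERIC
    fi
    return $OCF_SUCCESS
}
~~~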

Comment 1 John Ruemker 2020-03-18 13:47:28 UTC
Stripping [RFE] from Summary to reflect that the current agent exhibits problematic and incorrect behavior that should be corrected to avoid customer impact.