Bug 1685222
Summary: Stopping a cluster with sbd enabled sometimes leads to fencing of the cluster nodes (re-check the order of stopping cluster daemons)

Product:           Red Hat Enterprise Linux 8
Version:           8.0
Component:         sbd
Status:            CLOSED ERRATA
Reporter:          Miroslav Lisik <mlisik>
Assignee:          Klaus Wenninger <kwenning>
QA Contact:        cluster-qe <cluster-qe>
Severity:          unspecified
Priority:          unspecified
CC:                aherr, cfeist, cluster-maint, jfriesse, kgaillot, nhostako, tojeline, toneata
Target Milestone:  rc
Target Release:    8.0
Keywords:          ZStream
Hardware:          Unspecified
OS:                Unspecified
Fixed In Version:  sbd-1.4.0-10.el8
Clones:            1691484 (view as bug list)
Bug Depends On:    1691484
Last Closed:       2019-11-05 20:46:42 UTC
Type:              Bug
Attachments:       stop logs (attachment 1540678)
Hmm ... that may be a non-trivial issue. We can probably assume that the shutdown of all pacemaker instances will start synchronously enough, but of course we can't assume that all pacemaker instances will terminate simultaneously. Actually it is not about the pacemaker instances but rather about the corosync instances, as they will make the residual cluster non-quorate at some point, which after the configured timeout will lead to self-fencing.

One way out might be to somehow disable sbd observation in such a shutdown scenario, but imagining what happens if that cluster shutdown coincides with a cluster split, I would vote against this approach.

I guess atm pcs tries to bring down the cluster stack individually on all of the nodes!? If pcs first brought down pacemaker on all nodes, the persisting corosync would preserve quorum. (This approach might have to be rethought for a corosync-1 based cluster stack, where quorum is implemented inside pacemaker. Cluster stacks with cman are of course different again, but keeping cman running until all pacemaker instances are gone will probably do the trick there.)

If there are still issues then, we would have to introduce detection of graceful local pacemaker shutdown into sbd (which atm, at least as far as I see and remember, isn't there) to bring sbd back into the state it had prior to the first pacemaker detection.

@Miro:
I remember having seen similar issues in the past randomly. Do you observe the issue more frequently recently, so that we could expect some kind of regression here? Maybe just indirectly because of some timing changes due to corosync-3 or pacemaker-2 ...

(In reply to Klaus Wenninger from comment #1)
> I guess atm pcs tries to bring down the cluster-stack individually on all of
> the nodes!?

No, it does not.

> If pcs would first bring down pacemaker on all nodes persisting corosync
> would preserve quorum.

This is what pcs is already doing, see bz1180506.

(In reply to Tomas Jelinek from comment #2)
> > If pcs would first bring down pacemaker on all nodes persisting corosync
> > would preserve quorum.
>
> This is what pcs is already doing, see bz1180506.

OK, that matches what we are seeing in the logs around 17:35:55. Sorry for not having looked closely enough.

virt-142 & virt-141 bring down pacemaker more or less immediately, while there is no noticeable action on the DC (virt-143) between 17:35:55 and 17:36:04 (when corosync detects that the 2 peers have rebooted). The question is what it is doing in between: transition 27 completes at 17:35:55, and the next transition calculation (28) is only triggered by the 2 nodes disappearing at 17:36:08 and is considered complete right after. But what is that node doing in between? There is no pending transition ...

But anyway, virt-142 & virt-141 should probably detect the graceful shutdown of their local pacemaker instance and properly reinitialize sbd to wait for a pacemaker instance to come up without a timeout, instead of timing out while waiting for a connection to the local pacemaker instance. I guess there shouldn't be any danger in introducing that behaviour.
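
For reference, the ordering discussed here (pacemaker down on all nodes first, corosync only afterwards, so that quorum is preserved until the end) corresponds roughly to the following manual sequence. The node names are the ones from this report; driving it via ssh and systemctl is only an illustration of the ordering, not what pcs literally executes:

# Stop pacemaker on every node first; corosync keeps running and preserves quorum.
for node in virt-141 virt-142 virt-143; do
    ssh "$node" systemctl stop pacemaker &
done
wait
# Only then take down corosync everywhere.
for node in virt-141 virt-142 virt-143; do
    ssh "$node" systemctl stop corosync
done
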
> @Miro:
> I remember having seen similar issues in the past randomly.
> Do you observe the issue more frequently recently, so that we could expect
> some kind of regression here? Maybe just indirectly because of some
> timing changes due to corosync-3 or pacemaker-2 ...
I observe this issue more frequently on RHEL 8.0 than on RHEL 7.6. On RHEL 7.6 with SBD_WATCHDOG_TIMEOUT=2, I only get messages like this:
warning: inquisitor_child: pcmk health check: UNHEALTH
warning: inquisitor_child: Servant pcmk is outdated (age: 356680)
Fencing of a node is still reproducible on less powerful VMs (1 CPU, 2 GB RAM) with RHEL 7.6 and SBD_WATCHDOG_TIMEOUT=1.
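
For reference, the timeout being varied here is configured in /etc/sysconfig/sbd. A minimal excerpt matching this reproduction would look like the following (SBD_PACEMAKER is the packaged default, listed only for context):

# /etc/sysconfig/sbd (excerpt; SBD_WATCHDOG_DEV is written by "pcs stonith sbd enable")
SBD_WATCHDOG_DEV=/dev/null    # watchdog=/dev/null from the reproducer
SBD_WATCHDOG_TIMEOUT=2        # lowered from the 5 s default for testing
SBD_PACEMAKER=yes             # enables the pcmk servant whose age is logged above

The effective values can also be shown with "pcs stonith sbd config".
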
(In reply to Miroslav Lisik from comment #4)
> I observe this issue more frequently on RHEL 8.0 than on RHEL 7.6. On
> RHEL 7.6 with SBD_WATCHDOG_TIMEOUT=2, I only get messages like this:
>
> warning: inquisitor_child: pcmk health check: UNHEALTH
> warning: inquisitor_child: Servant pcmk is outdated (age: 356680)

On the VM I just tested with, I sometimes get "warning: inquisitor_child: Servant pcmk is outdated (age: 4)". So I guess 5 s is quite on the edge. I have to take a deeper look into how that disarming actually works and what could take that long.

Most of the time I also get messages with 'age: 4', but sometimes this number is suspiciously high.

I agree with the idea of sbd having intelligence about a graceful pacemaker shutdown.

Speaking generally, there are many reasons shutdown could take very different times on different nodes, the most obvious being that some resources take longer to stop than others. The DC will always wait for all the other nodes to shut down (at least if there are no problems) before shutting down itself.

(In reply to Ken Gaillot from comment #7)
> I agree with the idea of sbd having intelligence about a graceful pacemaker
> shutdown.

That is kind of working already, but it seems to take a few seconds under certain circumstances. If we want low watchdog timeouts, some research is required and the detection has to be made more robust.

> Speaking generally, there are many reasons shutdown could take very
> different times on different nodes, the most obvious being that some
> resources take longer to stop than others. The DC will always wait for all
> the other nodes to shut down (at least if there are no problems) before
> shutting down itself.

But in this case the nodes are long gone, and even the log on the DC shows that the DC has detected that; still, it takes until corosync detects them to be gone (due to sbd rebooting them) for the DC to shut down. Anyway, there is nothing to rely on that all pacemaker instances shut down within the watchdog timeout ... I was just curious, as I didn't see a reason ...

(In reply to Klaus Wenninger from comment #8)
> But in this case the nodes are long gone, and even the log on the DC shows
> that the DC has detected that; still, it takes until corosync detects them
> to be gone (due to sbd rebooting them) for the DC to shut down.

Shutdown isn't considered complete until a node leaves both the crmd membership and the corosync membership.
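
For correlating the logs with the two memberships mentioned above, the standard tools can be used while reproducing (shown here only as a hint; they are not taken from this report):

# corosync's membership and quorum view
corosync-quorumtool -s

# pacemaker's view of node states (roughly, the crmd membership)
crm_mon -1
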
(In reply to Ken Gaillot from comment #9)
> Shutdown isn't considered complete until a node leaves both the crmd
> membership and the corosync membership.

But isn't what we see here rather the nodes disappearing completely from corosync as they are rebooted, and not their signing off from cpg?

(In reply to Klaus Wenninger from comment #10)
> But isn't what we see here rather the nodes disappearing completely from
> corosync as they are rebooted, and not their signing off from cpg?

I forgot: shutdown would be considered complete after *either* loss before e7d9622, which I just checked and now realize is in RHEL 8 only.

https://github.com/ClusterLabs/sbd/pull/72 implements proper detection and evaluation of connection drop.

This bug has been copied as the 8.0.0 z-stream bug #1734061 and now must be resolved in the current update release; the blocker flag has been set.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3344
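
Per the Fixed In Version above and the errata, the change is contained in sbd-1.4.0-10.el8 and later, so a quick check on an installed system is:

# rpm -q sbd
sbd-1.4.0-10.el8.x86_64

(The output line is only an example; any NVR at or above sbd-1.4.0-10.el8, as shipped via RHBA-2019:3344, includes the fix.)
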
Created attachment 1540678 [details]
stop logs

Description of problem:
Stopping a cluster with sbd enabled sometimes leads to fencing of the cluster nodes.

Version-Release number of selected component (if applicable):
# rpm -q libqb libknet1 sbd corosync pacemaker pcs systemd
libqb-1.0.3-7.el8.x86_64
libknet1-1.4-3.el8.x86_64
sbd-1.3.1-18.el8.x86_64
corosync-3.0.0-2.el8.x86_64
pacemaker-2.0.1-4.el8.x86_64
pcs-0.10.1-4.el8.x86_64
systemd-239-13.el8.x86_64

How reproducible:
difficult

Steps to Reproduce:
1. Create a 3-node cluster:
# pcs host auth -u hacluster -p $PASSWORD virt-14{1..3}
...
# pcs cluster setup HAcluster virt-14{1..3}
...
2. Enable sbd with watchdog=/dev/null (optionally lower SBD_WATCHDOG_TIMEOUT below 5 s):
# pcs stonith sbd enable watchdog=/dev/null --no-watchdog-validation
...
3. Start the cluster:
# pcs cluster start --all --wait
...
4. Stop the cluster and watch the logs:
# pcs cluster stop --all
...

Actual results:
One or more nodes are rebooted.

Expected results:
No nodes are rebooted during cluster stop.

Additional info:
See the attached logs for more information.

$ grep sbd *stop-03.log
virt-141_stop-03.log:Mar 4 17:35:59 virt-141 sbd[7610]: warning: inquisitor_child: Servant pcmk is outdated (age: 4)
virt-141_stop-03.log:Mar 4 17:36:02 virt-141 sbd[7610]: warning: inquisitor_child: Latency: No liveness for 4 s exceeds threshold of 3 s (healthy servants: 0)
virt-141_stop-03.log:Mar 4 17:36:02 virt-141 sbd[7610]: warning: inquisitor_child: Latency: No liveness for 4 s exceeds threshold of 3 s (healthy servants: 0)
virt-141_stop-03.log:Mar 4 17:36:03 virt-141 sbd[7610]: warning: inquisitor_child: Latency: No liveness for 5 s exceeds threshold of 3 s (healthy servants: 0)
virt-142_stop-03.log:Mar 4 17:35:59 virt-142 sbd[15742]: warning: inquisitor_child: Servant pcmk is outdated (age: 4)
virt-142_stop-03.log:Mar 4 17:36:02 virt-142 sbd[15742]: warning: inquisitor_child: Latency: No liveness for 4 s exceeds threshold of 3 s (healthy servants: 0)
virt-142_stop-03.log:Mar 4 17:36:03 virt-142 sbd[15742]: warning: inquisitor_child: Latency: No liveness for 5 s exceeds threshold of 3 s (healthy servants: 0)
virt-142_stop-03.log:Mar 4 17:36:03 virt-142 sbd[15742]: warning: inquisitor_child: Latency: No liveness for 5 s exceeds threshold of 3 s (healthy servants: 0)
virt-143_stop-03.log:Mar 4 17:36:10 virt-143 sbd[15109]: warning: inquisitor_child: Servant pcmk is outdated (age: 4)
virt-143_stop-03.log:Mar 4 17:36:13 virt-143 sbd[15109]: warning: inquisitor_child: Latency: No liveness for 4 s exceeds threshold of 3 s (healthy servants: 0)
virt-143_stop-03.log:Mar 4 17:36:13 virt-143 sbd[15109]: warning: inquisitor_child: Latency: No liveness for 4 s exceeds threshold of 3 s (healthy servants: 0)
virt-143_stop-03.log:Mar 4 17:36:14 virt-143 sbd[15109]: warning: inquisitor_child: Latency: No liveness for 5 s exceeds threshold of 3 s (healthy servants: 0)
virt-143_stop-03.log:Mar 4 17:36:14 virt-143 sbd[15109]: warning: inquisitor_child: Latency: No liveness for 5 s exceeds threshold of 3 s (healthy servants: 0)
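
To see how close the servant ages get to the watchdog timeout across a run, the reported ages can be extracted from the collected logs (file names as used above; the grep pattern simply matches the log lines shown):

$ grep -ho 'Servant pcmk is outdated (age: [0-9]*)' *stop-03.log | sort | uniq -c

Outliers like the suspiciously high ages mentioned in the discussion then stand out immediately.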