Bug 1413951

Summary: Add support for shared storage with sbd
Product: Red Hat Enterprise Linux 7
Reporter: Klaus Wenninger <kwenning>
Component: sbd
Assignee: Klaus Wenninger <kwenning>
Status: CLOSED ERRATA
QA Contact: cluster-qe <cluster-qe>
Severity: unspecified
Docs Contact: Steven J. Levine <slevine>
Priority: unspecified
Version: 7.3
CC: cfeist, kwenning, mlisik, mmazoure, mnovacek, royoung, sbradley, slevine, tlavigne
Target Milestone: rc
Keywords: Rebase
Target Release: 7.4
Hardware: Unspecified
OS: Unspecified
Fixed In Version: sbd-1.3.0-2.el7
Doc Type: Release Note
Doc Text:
Support added for using shared storage with the SBD daemon

Red Hat Enterprise Linux 7.4 provides support for using the SBD (Storage-Based Death) daemon with a shared block device. This allows you to enable fencing by means of a shared block device in addition to fencing by means of a watchdog device, which was previously supported. The `fence-agents` package now provides the `fence_sbd` fence agent, which is needed to trigger and control the actual fencing by means of an RHCS-style fence agent. SBD is not supported on Pacemaker remote nodes.
Story Points: ---
Clone Of:
Clones: 1413958, 1414053 (view as bug list)
Environment:
Last Closed: 2017-08-01 16:22:40 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On: 1337236, 1455631
Bug Blocks: 1410192, 1413958, 1414053

Description Klaus Wenninger 2017-01-17 12:26:16 UTC
Description of problem:
SBD as provided with RHEL doesn't support usage of shared storage - just a watchdog.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results:

Expected results:

Additional info:

Comment 1 Klaus Wenninger 2017-01-17 12:36:19 UTC
Since we don't support sbd on remote nodes, we won't support shared block devices
with sbd there either.
As with sbd support in general on remote nodes, use of shared block devices there
is not explicitly disabled, and in fact it seems to work as expected if the
parameter '-n {remote_node_name}' is added to the sbd configuration.
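For reference, the '-n' option would be passed through SBD_OPTS in the sbd sysconfig file. A minimal sketch, assuming /dev/vdb as the shared device and "remote1" as the remote node's name (both placeholder values for an unsupported configuration):

```shell
# Hypothetical excerpt of /etc/sysconfig/sbd on a Pacemaker remote node
# (unsupported setup, shown only to illustrate the comment above).
SBD_DEVICE="/dev/vdb"              # shared block device (placeholder path)
SBD_WATCHDOG_DEV="/dev/watchdog"   # watchdog device
SBD_OPTS="-n remote1"              # hand sbd the remote node's name explicitly
```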

Comment 2 Klaus Wenninger 2017-01-17 15:37:32 UTC
I guess the documentation has to state, quite unambiguously, that we don't
support shared block devices on pacemaker-remote nodes.

Comment 4 Steven J. Levine 2017-04-19 15:22:13 UTC
Adding myself as docs contact for release note edit.

Comment 5 Steven J. Levine 2017-04-19 19:42:14 UTC

I updated the description for the release note by adding a title and then adding a sentence to set the context -- that we have added this support. I also included the caveat about support on Pacemaker remote nodes.

Does this look ok to you?


Comment 6 Klaus Wenninger 2017-04-19 21:03:56 UTC
When we are going into that degree of detail in the release note, we might as well add the information that the fence-agents package now provides fence_sbd, which is needed to trigger/control the actual fencing by means of an RHCS-style fence agent.

'Poison-pill fencing' is a commonly used and very descriptive term for this kind of fencing, so we might add something like '... enable poison-pill fencing by means ...'.

And I guess s/means a shared/means of a shared/.
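To make the fence_sbd remark concrete: the agent takes the shared device via its "devices" parameter. A sketch of how it might be configured as a stonith resource, with the resource id and the device path as placeholder values:

```shell
# Sketch only: register the new fence_sbd agent with the cluster.
# "fence-sbd" is an arbitrary resource id, /dev/vdb a placeholder device.
pcs stonith create fence-sbd fence_sbd devices=/dev/vdb
```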


Comment 7 Klaus Wenninger 2017-05-18 09:48:09 UTC
(In reply to Klaus Wenninger from comment #0)
> Additional info:

When testing with a KVM VM you can detach the block device.
This leads to the following messages:

May 18 11:32:19 node2 sbd[13165]:   /dev/vdb:  warning: open_device: Opening device /dev/vdb failed.
May 18 11:32:21 node2 sbd[2328]:  warning: inquisitor_child: Majority of devices lost - surviving on pacemaker

So as long as there is still a cluster visible to sbd, it won't suicide.
After stopping the other cluster nodes, the suicide will be triggered.
Be aware that you shouldn't have no-quorum-policy set to suicide, because then
you won't know whether the suicide was triggered via sbd or via pacemaker.
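The detach step above can be done at runtime with virsh. A sketch, assuming "node2" is the VM's libvirt domain name and "vdb" the target of the shared disk (both placeholders):

```shell
# Detach the shared block device from the running guest and watch sbd react.
virsh detach-disk node2 vdb --live   # hot-unplug the shared disk
journalctl -u sbd -f                 # follow sbd logging the lost device
```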

Comment 19 errata-xmlrpc 2017-08-01 16:22:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.