Bug 1164402 - Support for sbd configuration is needed in pcs
Summary: Support for sbd configuration is needed in pcs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Ondrej Mular
QA Contact: cluster-qe@redhat.com
Docs Contact: Milan Navratil
URL:
Whiteboard:
Depends On: 1135153
Blocks: 1376496 1380352
 
Reported: 2014-11-14 23:50 UTC by Chris Feist
Modified: 2016-11-03 20:53 UTC
CC List: 9 users

Fixed In Version: pcs-0.9.152-8.el7
Doc Type: Release Note
Doc Text:
*Pacemaker* now supports *SBD* fencing configuration. The *SBD* daemon integrates *Pacemaker* with a watchdog device to arrange for nodes to reliably self-terminate when fencing is required. This update adds the "pcs stonith sbd" command to configure *SBD* in *Pacemaker*, and it is now also possible to configure *SBD* from the web UI. *SBD* fencing can be particularly useful in environments where traditional fencing mechanisms are not possible. For information on using *SBD* with *Pacemaker*, see the following Red Hat Knowledgebase article: https://access.redhat.com/articles/2212861.
Clone Of:
: 1380352 (view as bug list)
Environment:
Last Closed: 2016-11-03 20:53:44 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2016:2596 0 normal SHIPPED_LIVE Moderate: pcs security, bug fix, and enhancement update 2016-11-03 12:11:34 UTC

Description Chris Feist 2014-11-14 23:50:00 UTC
Support for sbd configuration is needed in pcs.

Comment 10 Frank Danapfel 2016-05-04 12:34:17 UTC
When you add support for sbd to pcs, please also eliminate the warning "WARNING: no stonith devices and stonith-enabled is not false" that "pcs status" prints when sbd is used as the only STONITH device:

[root@node ~]# systemctl status sbd
● sbd.service - Shared-storage based fencing daemon
   Loaded: loaded (/usr/lib/systemd/system/sbd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mi 2016-05-04 12:34:44 CEST; 1min 10s ago
  Process: 11219 ExecStart=/usr/sbin/sbd $SBD_OPTS -p /var/run/sbd.pid watch (code=exited, status=0/SUCCESS)
 Main PID: 11220 (sbd)
   CGroup: /system.slice/sbd.service
           ├─11220 sbd: inquisitor
           └─11221 sbd: watcher: Pacemaker

Mai 04 12:34:43 lv9089 systemd[1]: Starting Shared-storage based fencing daemon...
Mai 04 12:34:44 lv9089 systemd[1]: Started Shared-storage based fencing daemon.

[root@node ~]# pcs property
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: cluster1
 dc-version: 1.1.13-10.el7_2.2-44eb2dd
 have-watchdog: true
 stonith-watchdog-timeout: 10s

[root@lv9089 ~]# pcs status
Cluster name: cluster1
WARNING: no stonith devices and stonith-enabled is not false <=============
Last updated: Wed May  4 14:27:46 2016          Last change: Wed May  4 12:35:08 2016 by hacluster via crmd on node1hb
Stack: corosync
Current DC: node1hb (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
2 nodes and 0 resources configured

Online: [ node1hb node2hb ]

Full list of resources:


PCSD Status:
  node1hb: Online
  node2hb: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

It would be helpful if "pcs status" showed when sbd is used as the STONITH device.
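A minimal sketch of the kind of check that could suppress this warning, assuming the have-watchdog and stonith-watchdog-timeout properties from the "pcs property" output above are available (the shell variables are illustrative, not pcs internals):

```shell
# Illustrative values copied from the "pcs property" output above.
have_watchdog=true
stonith_watchdog_timeout=10s

# Treat the cluster as fenced when SBD watchdog fencing is configured,
# instead of unconditionally warning about missing stonith devices.
if [ "$have_watchdog" = true ] && [ -n "$stonith_watchdog_timeout" ]; then
    echo "sbd fencing active"
else
    echo "WARNING: no stonith devices and stonith-enabled is not false"
fi
```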

Since the cluster node name is not necessarily identical to the hostname, the pcs commands for configuring sbd should also provide an option to set
SBD_OPTS="-n <nodename>", as documented in the Troubleshooting section of
http://blog.clusterlabs.org/blog/2015/sbd-fun-and-profit/
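The node name override described above can be sketched as a fragment of the sbd sysconfig file (the path follows the sbd.service unit shown earlier; the node name "node1hb" is an illustrative value, not taken from a real configuration):

```shell
# /etc/sysconfig/sbd (hypothetical fragment)
# Pass the cluster node name to sbd when it differs from the hostname,
# so sbd can find its slot on the shared device under the correct name.
SBD_OPTS="-n node1hb"
```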

Comment 11 Ondrej Mular 2016-06-09 12:30:04 UTC
upstream patches:
https://github.com/ClusterLabs/pcs/commit/35a2e7e76bd2962f941b9d6ebb15db803b4b4a1c
https://github.com/ClusterLabs/pcs/commit/4f46e38d4ee3fa92fbb12780270a2b32927cc1f1
https://github.com/ClusterLabs/pcs/commit/52d7d8981aca0917c5ceeab08da8db5163b745a9
https://github.com/ClusterLabs/pcs/commit/c45a2e7e952d0938b259dfa2a908bfdb5313ded8
https://github.com/ClusterLabs/pcs/commit/8e18c9c2718418eb31924d89f97228b43b6a258c
https://github.com/ClusterLabs/pcs/commit/d6316bc23b5c32c4d36c656c819390bdc53665b8

TEST:

CLI
--------------------------------------------------------
new pcs subcommand: 'pcs stonith sbd'

cluster nodes: rhel72-node4, rhel72-node5, rhel72-node6
requirements:
 * HW watchdog on all nodes
 * sbd installed on all nodes

Show status of SBD (SBD is disabled):
[root@rhel72-node4 ~]# pcs stonith sbd status 
SBD STATUS
<node name>: <installed> | <enabled> | <running>
rhel72-node5: YES |  NO |  NO
rhel72-node4: YES |  NO |  NO
rhel72-node6: YES |  NO |  NO

Enabling SBD:
[root@rhel72-node4 ~]# pcs stonith sbd enable
Running SBD pre-enabling checks...
rhel72-node6: SBD pre-enabling checks done
rhel72-node4: SBD pre-enabling checks done
rhel72-node5: SBD pre-enabling checks done
Distributing SBD config...
rhel72-node4: SBD config saved
rhel72-node5: SBD config saved
rhel72-node6: SBD config saved
Enabling SBD service...
rhel72-node4: sbd enabled
rhel72-node5: sbd enabled
rhel72-node6: sbd enabled
Warning: Cluster restart is required in order to apply these changes.

after restarting cluster:
[root@rhel72-node4 ~]# pcs stonith sbd status 
SBD STATUS
<node name>: <installed> | <enabled> | <running>
rhel72-node5: YES | YES | YES
rhel72-node6: YES | YES | YES
rhel72-node4: YES | YES | YES

Show SBD configuration:
[root@rhel72-node4 ~]# pcs stonith sbd config 
SBD_WATCHDOG_TIMEOUT=5
SBD_PACEMAKER=yes
SBD_STARTMODE=clean
SBD_DELAY_START=no

Watchdogs:
  rhel72-node5: /dev/watchdog
  rhel72-node4: /dev/watchdog
  rhel72-node6: /dev/watchdog


Disabling SBD:
[root@rhel72-node4 ~]# pcs stonith sbd disable 
Disabling SBD service...
rhel72-node4: sbd disabled
rhel72-node5: sbd disabled
rhel72-node6: sbd disabled
Warning: Cluster restart is required in order to apply these changes.


WEB UI
--------------------------------------------------------
In cluster management, under the fence devices tab, there is a new SBD link next
to the add and remove links. Clicking this link displays an alert if the managed
cluster does not support SBD. Otherwise, a dialog with information about the SBD
service on all nodes appears. If SBD is enabled or running, the dialog also shows
the current SBD configuration and watchdogs.

Comment 12 Ivan Devat 2016-06-22 12:04:03 UTC
Before fix:
[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.143-15.el7.x86_64

Sbd not supported.

After Fix:
requirements:
 * HW watchdog on all nodes
 * sbd installed on all nodes

[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.152-1.el7.x86_64

[vm-rhel72-1 ~] $ pcs stonith sbd status
SBD STATUS
<node name>: <installed> | <enabled> | <running>
vm-rhel72-1: YES |  NO |  NO
vm-rhel72-3: YES |  NO |  NO
[vm-rhel72-1 ~] $ pcs stonith sbd enable
Running SBD pre-enabling checks...
vm-rhel72-1: SBD pre-enabling checks done
vm-rhel72-3: SBD pre-enabling checks done
Distributing SBD config...
vm-rhel72-1: SBD config saved
vm-rhel72-3: SBD config saved
Enabling SBD service...
vm-rhel72-1: sbd enabled
vm-rhel72-3: sbd enabled
Warning: Cluster restart is required in order to apply these changes.
[vm-rhel72-1 ~] $ pcs cluster stop --all && pcs cluster start --all
vm-rhel72-1: Stopping Cluster (pacemaker)...
vm-rhel72-3: Stopping Cluster (pacemaker)...
vm-rhel72-1: Stopping Cluster (corosync)...
vm-rhel72-3: Stopping Cluster (corosync)...
vm-rhel72-1: Starting Cluster...
vm-rhel72-3: Starting Cluster...
[vm-rhel72-1 ~] $ pcs stonith sbd status
SBD STATUS
<node name>: <installed> | <enabled> | <running>
vm-rhel72-1: YES | YES | YES
vm-rhel72-3: YES | YES | YES
[vm-rhel72-1 ~] $ pcs stonith sbd config
SBD_WATCHDOG_TIMEOUT=5
SBD_PACEMAKER=yes
SBD_STARTMODE=clean
SBD_DELAY_START=no

Watchdogs:
  vm-rhel72-1: /dev/watchdog
  vm-rhel72-3: /dev/watchdog
[vm-rhel72-1 ~] $ pcs stonith sbd disable
Disabling SBD service...
vm-rhel72-3: sbd disabled
vm-rhel72-1: sbd disabled
Warning: Cluster restart is required in order to apply these changes.

In cluster management, under the fence devices tab, there is a new SBD link next
to the add and remove links. Clicking this link displays an alert if the managed
cluster does not support SBD. Otherwise, a dialog with information about the SBD
service on all nodes appears. If SBD is enabled or running, the dialog also shows
the current SBD configuration and watchdogs.

Comment 14 Ondrej Mular 2016-06-30 11:55:26 UTC
Additional fix:
https://github.com/ClusterLabs/pcs/commit/a74f7075d5dedc9f45973e36ba2f7d2988d

Fix of communication between nodes.

Comment 22 Tomas Jelinek 2016-08-24 08:45:05 UTC
Fixed enabling of auto_tie_breaker:
https://github.com/ClusterLabs/pcs/commit/1a2ea2cb88144acf9d6bf9650e2245c54de5a962

Comment 28 errata-xmlrpc 2016-11-03 20:53:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2596.html

