Bug 1164402
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | Support for sbd configuration is needed in pcs | | |
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Chris Feist <cfeist> |
| Component: | pcs | Assignee: | Ondrej Mular <omular> |
| Status: | CLOSED ERRATA | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | unspecified | Docs Contact: | Milan Navratil <mnavrati> |
| Priority: | high | | |
| Version: | 7.1 | CC: | cluster-maint, fdanapfe, idevat, jpokorny, lmiksik, mlisik, mnavrati, rsteiger, tojeline |
| Target Milestone: | rc | Keywords: | FutureFeature |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | pcs-0.9.152-8.el7 | Doc Type: | Release Note |
| Story Points: | --- | | |
| Clone Of: | | | |
| | 1380352 (view as bug list) | Environment: | |
| Last Closed: | 2016-11-03 20:53:44 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1135153 | | |
| Bug Blocks: | 1376496, 1380352 | | |

Doc Text:

> *Pacemaker* now supports *SBD* fencing configuration
>
> The *SBD* daemon integrates with *Pacemaker* and a watchdog device to arrange for nodes to reliably self-terminate when fencing is required. This update adds the "pcs stonith sbd" command to configure *SBD* in *Pacemaker*, and it is now also possible to configure *SBD* from the web UI. *SBD* fencing can be particularly useful in environments where traditional fencing mechanisms are not possible. For information on using *SBD* with *Pacemaker*, see the following Red Hat Knowledgebase article: https://access.redhat.com/articles/2212861.
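The Doc Text above mentions the new "pcs stonith sbd" command family. For orientation, here is a sketch of the subcommands as exercised in the test transcripts later in this report (reconstructed from those transcripts, not the full pcs help output):

```
# subcommands exercised in the comments below (sketch, not exhaustive)
pcs stonith sbd status    # show installed/enabled/running state per node
pcs stonith sbd enable    # distribute the SBD config and enable the sbd service
pcs stonith sbd config    # show the distributed SBD configuration and watchdogs
pcs stonith sbd disable   # disable the sbd service on all nodes
```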
Description Chris Feist 2014-11-14 23:50:00 UTC
When you add support for SBD to pcs, please also eliminate the "WARNING: no stonith devices and stonith-enabled is not false" message that "pcs status" prints when SBD is used as the only STONITH device:
```
[root@node ~]# systemctl status sbd
● sbd.service - Shared-storage based fencing daemon
   Loaded: loaded (/usr/lib/systemd/system/sbd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-05-04 12:34:44 CEST; 1min 10s ago
  Process: 11219 ExecStart=/usr/sbin/sbd $SBD_OPTS -p /var/run/sbd.pid watch (code=exited, status=0/SUCCESS)
 Main PID: 11220 (sbd)
   CGroup: /system.slice/sbd.service
           ├─11220 sbd: inquisitor
           └─11221 sbd: watcher: Pacemaker

May 04 12:34:43 lv9089 systemd[1]: Starting Shared-storage based fencing daemon...
May 04 12:34:44 lv9089 systemd[1]: Started Shared-storage based fencing daemon.
```
```
[root@node ~]# pcs property
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: cluster1
 dc-version: 1.1.13-10.el7_2.2-44eb2dd
 have-watchdog: true
 stonith-watchdog-timeout: 10s
```
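(For context: Pacemaker sets the have-watchdog property automatically when it detects a running sbd, while stonith-watchdog-timeout is set by the administrator. A minimal sketch of how that was done before the pcs support requested here existed; the 10s value simply mirrors the output above:)

```
# sketch: watchdog self-fencing was configured by setting the cluster
# property directly (value mirrors the 'pcs property' output above)
pcs property set stonith-watchdog-timeout=10s
```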
```
[root@lv9089 ~]# pcs status
Cluster name: cluster1
WARNING: no stonith devices and stonith-enabled is not false   <=============
Last updated: Wed May  4 14:27:46 2016    Last change: Wed May  4 12:35:08 2016 by hacluster via crmd on node1hb
Stack: corosync
Current DC: node1hb (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
2 nodes and 0 resources configured

Online: [ node1hb node2hb ]

Full list of resources:

PCSD Status:
  node1hb: Online
  node2hb: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
```
It would be helpful if "pcs status" showed when SBD is used as the STONITH device.

Since the cluster node name is not necessarily identical to the hostname, the pcs commands for configuring SBD should also offer an option to set SBD_OPTS="-n <nodename>", as documented in the Troubleshooting section of http://blog.clusterlabs.org/blog/2015/sbd-fun-and-profit/
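(For reference, $SBD_OPTS is read from the sbd sysconfig file, as seen in the ExecStart line of the systemctl output above. A minimal sketch of the requested override, assuming the RHEL path /etc/sysconfig/sbd; node1hb is just the node name from the output above:)

```
# /etc/sysconfig/sbd (sketch; only the relevant lines shown)
SBD_PACEMAKER=yes
SBD_STARTMODE=clean
# pass the cluster node name explicitly when it differs from the hostname
SBD_OPTS="-n node1hb"
```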
upstream patches:

* https://github.com/ClusterLabs/pcs/commit/35a2e7e76bd2962f941b9d6ebb15db803b4b4a1c
* https://github.com/ClusterLabs/pcs/commit/4f46e38d4ee3fa92fbb12780270a2b32927cc1f1
* https://github.com/ClusterLabs/pcs/commit/52d7d8981aca0917c5ceeab08da8db5163b745a9
* https://github.com/ClusterLabs/pcs/commit/c45a2e7e952d0938b259dfa2a908bfdb5313ded8
* https://github.com/ClusterLabs/pcs/commit/8e18c9c2718418eb31924d89f97228b43b6a258c
* https://github.com/ClusterLabs/pcs/commit/d6316bc23b5c32c4d36c656c819390bdc53665b8

TEST: CLI
--------------------------------------------------------
new pcs subcommand: 'pcs stonith sbd'
cluster nodes: rhel72-node4, rhel72-node5, rhel72-node6

requirements:
* HW watchdog on all nodes
* sbd installed on all nodes

Show status of SBD (SBD is disabled):
```
[root@rhel72-node4 ~]# pcs stonith sbd status
SBD STATUS
<node name>: <installed> | <enabled> | <running>
rhel72-node5: YES | NO | NO
rhel72-node4: YES | NO | NO
rhel72-node6: YES | NO | NO
```

Enabling SBD:
```
[root@rhel72-node4 ~]# pcs stonith sbd enable
Running SBD pre-enabling checks...
rhel72-node6: SBD pre-enabling checks done
rhel72-node4: SBD pre-enabling checks done
rhel72-node5: SBD pre-enabling checks done
Distributing SBD config...
rhel72-node4: SBD config saved
rhel72-node5: SBD config saved
rhel72-node6: SBD config saved
Enabling SBD service...
rhel72-node4: sbd enabled
rhel72-node5: sbd enabled
rhel72-node6: sbd enabled
Warning: Cluster restart is required in order to apply these changes.
```

after restarting cluster:
```
[root@rhel72-node4 ~]# pcs stonith sbd status
SBD STATUS
<node name>: <installed> | <enabled> | <running>
rhel72-node5: YES | YES | YES
rhel72-node6: YES | YES | YES
rhel72-node4: YES | YES | YES
```

Show SBD configuration:
```
[root@rhel72-node4 ~]# pcs stonith sbd config
SBD_WATCHDOG_TIMEOUT=5
SBD_PACEMAKER=yes
SBD_STARTMODE=clean
SBD_DELAY_START=no

Watchdogs:
  rhel72-node5: /dev/watchdog
  rhel72-node4: /dev/watchdog
  rhel72-node6: /dev/watchdog
```
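The defaults shown by 'pcs stonith sbd config' can be overridden at enable time. A minimal sketch, assuming the option syntax of this pcs version's 'pcs stonith sbd enable'; the watchdog path and values are illustrative, not taken from the test run above:

```
# sketch: enable SBD with an explicit watchdog device and non-default options;
# the option names match the keys shown by 'pcs stonith sbd config'
pcs stonith sbd enable --watchdog=/dev/watchdog SBD_WATCHDOG_TIMEOUT=10 SBD_STARTMODE=always
```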
Disabling SBD:
```
[root@rhel72-node4 ~]# pcs stonith sbd disable
Disabling SBD service...
rhel72-node4: sbd disabled
rhel72-node5: sbd disabled
rhel72-node6: sbd disabled
Warning: Cluster restart is required in order to apply these changes.
```

WEB UI
--------------------------------------------------------
In cluster management, under the fence devices tab, there is a new SBD link next to the add and remove links. After clicking this link, an alert is displayed if the managed cluster does not support SBD. Otherwise, a dialog with information about the SBD service on all nodes shows up. If SBD is enabled or running, the dialog also includes information about the current SBD configuration and watchdogs.

Before fix:
```
[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.143-15.el7.x86_64
```
SBD not supported.

After fix:

requirements:
* HW watchdog on all nodes
* sbd installed on all nodes

```
[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.152-1.el7.x86_64

[vm-rhel72-1 ~] $ pcs stonith sbd status
SBD STATUS
<node name>: <installed> | <enabled> | <running>
vm-rhel72-1: YES | NO | NO
vm-rhel72-3: YES | NO | NO

[vm-rhel72-1 ~] $ pcs stonith sbd enable
Running SBD pre-enabling checks...
vm-rhel72-1: SBD pre-enabling checks done
vm-rhel72-3: SBD pre-enabling checks done
Distributing SBD config...
vm-rhel72-1: SBD config saved
vm-rhel72-3: SBD config saved
Enabling SBD service...
vm-rhel72-1: sbd enabled
vm-rhel72-3: sbd enabled
Warning: Cluster restart is required in order to apply these changes.

[vm-rhel72-1 ~] $ pcs cluster stop --all && pcs cluster start --all
vm-rhel72-1: Stopping Cluster (pacemaker)...
vm-rhel72-3: Stopping Cluster (pacemaker)...
vm-rhel72-1: Stopping Cluster (corosync)...
vm-rhel72-3: Stopping Cluster (corosync)...
vm-rhel72-1: Starting Cluster...
vm-rhel72-3: Starting Cluster...

[vm-rhel72-1 ~] $ pcs stonith sbd status
SBD STATUS
<node name>: <installed> | <enabled> | <running>
vm-rhel72-1: YES | YES | YES
vm-rhel72-3: YES | YES | YES

[vm-rhel72-1 ~] $ pcs stonith sbd config
SBD_WATCHDOG_TIMEOUT=5
SBD_PACEMAKER=yes
SBD_STARTMODE=clean
SBD_DELAY_START=no

Watchdogs:
  vm-rhel72-1: /dev/watchdog
  vm-rhel72-3: /dev/watchdog

[vm-rhel72-1 ~] $ pcs stonith sbd disable
Disabling SBD service...
vm-rhel72-3: sbd disabled
vm-rhel72-1: sbd disabled
Warning: Cluster restart is required in order to apply these changes.
```

additional fix: https://github.com/ClusterLabs/pcs/commit/a74f7075d5dedc9f45973e36ba2f7d2988d
Fix of communication between nodes.

upstream patches:

* https://github.com/ClusterLabs/pcs/commit/f2da8ad476c31b466ca73095aac81a5a81c0bac3
* https://github.com/ClusterLabs/pcs/commit/9367e7162b7bf7efad7f34dbffef92a40a7075b7

additional fixes:

* https://github.com/ClusterLabs/pcs/commit/57b618777d14d11e49f429c701eb8cc0312a
* https://github.com/ClusterLabs/pcs/commit/f7b9fc15072cd3efc827c19beadf2c358142
* https://github.com/ClusterLabs/pcs/commit/0dbdff4628d527fb745fcefe9f945620dc9f
* https://github.com/ClusterLabs/pcs/commit/1c5ccd3be588ecd9c18784586b5c6350067a
* https://github.com/ClusterLabs/pcs/commit/733e2833758964fb94f8b65f2946cecc965a
* https://github.com/ClusterLabs/pcs/commit/9951f3262ef176cc776e117dc9cfd907871a
* https://github.com/ClusterLabs/pcs/commit/d79592e05158b226cb74012d57bc28df4719
* https://github.com/ClusterLabs/pcs/commit/17e4c58388421f283ebb633c5c9232d6236a
* https://github.com/ClusterLabs/pcs/commit/1ed4c2e3bc38137c1c669c04fd23aac1200f

fixed enabling auto_tie_breaker:

* https://github.com/ClusterLabs/pcs/commit/1a2ea2cb88144acf9d6bf9650e2245c54de5a962

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2596.html