Bug 1174244

Summary: pcs stonith show / pcs stonith should show the node a stonith resource is running on
Product: Red Hat Enterprise Linux 6
Reporter: Frank Danapfel <fdanapfe>
Component: pcs
Assignee: Tomas Jelinek <tojeline>
Status: CLOSED ERRATA
QA Contact: cluster-qe <cluster-qe>
Severity: medium
Priority: medium
Version: 6.6
CC: cluster-maint, lmiccini, rsteiger, tojeline
Target Milestone: rc
Target Release: 6.7
Hardware: Unspecified
OS: Unspecified
Fixed In Version: pcs-0.9.138-1.el6
Doc Type: Bug Fix
Doc Text:
* After the user displayed the list of STONITH devices or resources, their locations were not included. Now, the list also contains the locations of the devices and resources. (BZ#1174244)
Last Closed: 2015-07-22 06:15:52 UTC
Type: Bug
Attachments:
proposed fix

Description Frank Danapfel 2014-12-15 13:39:53 UTC
Description of problem:
Currently "pcs stonith show" only shows if a stonith resource is started or stoppped, but not on which node it is running. since in most cases it is important that the stonith resource does not run on the node which it is supposed to fence it would be usefull if "pcs stonith show" would aslo show the node the stonith resource is currently running on.

Version-Release number of selected component (if applicable):
pcs-0.9.123-9.el6

How reproducible:
always

Steps to Reproduce:
1. configure some stonith resources
2. run "pcs stonith" or pcs stonith show"
3.

Actual results:
[root]# pcs stonith show
 st-ipmi-node1 (stonith:fence_ipmilan):        Started 
 st-ipmi-node2 (stonith:fence_ipmilan):        Started

Expected results:
[root]# pcs stonith show
 st-ipmi-node1 (stonith:fence_ipmilan):        Started  node2
 st-ipmi-node2 (stonith:fence_ipmilan):        Started  node1

Additional info:
"pcs status already provides the information omn which nodes the stonith resources are running so it should be easy to integrate this info in the output of "pcs stonith":

[root@ls3244 ~]# pcs status
Cluster name: testcluster
Last updated: Mon Dec 15 14:37:28 2014
Last change: Mon Dec 15 14:36:32 2014
Stack: cman
Current DC: node1 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured
6 Resources configured


Online: [ node1 node2 ]

Full list of resources:

 st-ipmi-node1 (stonith:fence_ipmilan):        Started node2 
 st-ipmi-node2 (stonith:fence_ipmilan):        Started node1
...
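
As a workaround until then, the node running a given stonith resource can also be queried directly with pacemaker's crm_resource (shown here with the resource names from the example above; the exact output wording may vary between pacemaker versions):

[root]# crm_resource --resource st-ipmi-node1 --locate
resource st-ipmi-node1 is running on: node2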

Comment 2 Tomas Jelinek 2015-01-13 16:30:02 UTC
Created attachment 979670 [details]
proposed fix

Test:

Before fix:
[root@rh66-node1:~]# pcs resource 
 dummy  (ocf::heartbeat:Dummy): Started 
 delay  (ocf::heartbeat:Delay): Started 

After fix:
[root@rh66-node1:~]# pcs resource 
 dummy  (ocf::heartbeat:Dummy): Started rh66-node1 
 delay  (ocf::heartbeat:Delay): Started rh66-node2
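
For reference, a minimal sketch (not the attached patch) of one way the locations can be obtained, assuming pcs shells out to crm_mon and parses the XML status output of pacemaker 1.1; the function name is illustrative:

# Minimal sketch, not the attached patch: read resource locations
# from pacemaker's one-shot status XML ("crm_mon --as-xml").
import subprocess
import xml.etree.ElementTree as ET

def resource_locations():
    # Run crm_mon once in XML mode and capture its output.
    proc = subprocess.Popen(["crm_mon", "--as-xml"], stdout=subprocess.PIPE)
    xml_out, _ = proc.communicate()
    root = ET.fromstring(xml_out)
    locations = {}
    for res in root.iter("resource"):
        # A running resource element carries <node name="..."/> children.
        locations[res.get("id")] = [n.get("name") for n in res.findall("node")]
    return locations

On the test cluster above this would yield {'dummy': ['rh66-node1'], 'delay': ['rh66-node2']}, i.e. the node names appended after "Started" in the fixed output.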

Comment 3 Tomas Jelinek 2015-01-27 13:52:08 UTC
Before Fix:
[root@rh66-node1 ~]# rpm -q pcs
pcs-0.9.123-9.el6.x86_64

[root@rh66-node1:~]# pcs resource
 dummy  (ocf::heartbeat:Dummy): Started
 delay  (ocf::heartbeat:Delay): Started
[root@rh66-node1:~]# pcs stonith
 xvmNode1       (stonith:fence_xvm):    Started
 xvmNode2       (stonith:fence_xvm):    Started



After Fix:
[root@rh66-node1:~]# rpm -q pcs
pcs-0.9.138-1.el6.x86_64

[root@rh66-node1:~]# pcs resource
 dummy  (ocf::heartbeat:Dummy): Started rh66-node3 
 delay  (ocf::heartbeat:Delay): Started rh66-node1 
[root@rh66-node1:~]# pcs stonith
 xvmNode1       (stonith:fence_xvm):    Started rh66-node1 
 xvmNode2       (stonith:fence_xvm):    Started rh66-node2

Comment 7 errata-xmlrpc 2015-07-22 06:15:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1446.html