Bug 1290830

Summary: [RFE] pcs command is missing a way to retrieve the status of a single resource
Product: Red Hat Enterprise Linux 8
Reporter: Raoul Scarazzini <rscarazz>
Component: pcs
Assignee: Miroslav Lisik <mlisik>
Status: CLOSED ERRATA
QA Contact: cluster-qe <cluster-qe>
Severity: medium
Docs Contact: Steven J. Levine <slevine>
Priority: medium
Version: 8.0
CC: cfeist, clumens, cluster-maint, fdinitto, idevat, kgaillot, michele, mlisik, mmazoure, nhostako, omular, sbradley, slevine, tojeline
Target Milestone: rc
Keywords: FutureFeature, Triaged
Target Release: 8.5
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: pcs-0.10.8-2.el8
Doc Type: Enhancement
.New pcs resource status display commands

The `pcs resource status` and `pcs stonith status` commands now support the following options:

* You can display the status of resources configured on a specific node with the `pcs resource status node=_node_id_` and `pcs stonith status node=_node_id_` commands. You can use these commands to display the status of resources on both cluster and remote nodes.
* You can display the status of a single resource with the `pcs resource status _resource_id_` and `pcs stonith status _resource_id_` commands.
* You can display the status of all resources with a specified tag with the `pcs resource status _tag_id_` and `pcs stonith status _tag_id_` commands.
Story Points: ---
Clone Of:
: 1300597 (view as bug list)
Environment:
Last Closed: 2021-11-09 17:33:12 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1300597, 1682129
Bug Blocks: 1477664
Attachments:
  proposed fix + tests (flags: none)

Description Raoul Scarazzini 2015-12-11 15:05:47 UTC
Description of problem:

The pcs command is missing a way to retrieve the status of a single resource. At the moment you can run "pcs status" to obtain the status of all resources, which produces output like this:

[heat-admin@overcloud-controller-0 ~]$ sudo pcs status
...
...
 openstack-cinder-volume        (systemd:openstack-cinder-volume):      Started
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
...
...

You can work around this with grep, but extracting the status of one resource from the output is difficult, especially for cloned resources, since the status may be spread across one, two, or three lines.
So it would be great to have something like:

[heat-admin@overcloud-controller-0 ~]$ sudo pcs status openstack-core-clone

showing an output like:

 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

or in case of failures/stopped resources:

 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-1 ]
     Stopped: [ overcloud-controller-2 ]

It would also be great to have an exit status based on the status of the resource: Started == 0, Stopped == 1, FAILED == 2.
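Until such a command exists, the grep workaround plus the proposed exit-code mapping can be sketched as a small shell function. This is a hedged illustration, not part of pcs: the resource_state name is made up, it parses "pcs status" text from stdin, and (as a design choice) it reports a partially started clone set as Stopped:

```shell
# resource_state NAME
# Reads `pcs status` output on stdin and maps NAME's state to an exit
# code: 0 = Started, 1 = Stopped, 2 = FAILED, 3 = resource not found.
# A clone set's Started/Stopped lists follow on separate lines, so two
# lines of trailing context are included in the match.
resource_state() {
    _block=$(grep -A 2 -- "$1" -) || return 3
    case $_block in
        *FAILED*)  return 2 ;;
        *Stopped*) return 1 ;;
        *Started*) return 0 ;;
        *)         return 3 ;;
    esac
}

# Intended use (sketch):
#   sudo pcs status | resource_state openstack-core-clone
```

Because FAILED is checked before Stopped and Stopped before Started, a clone set that is failed anywhere reports 2, and one that is stopped anywhere reports 1.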

Comment 3 Lars Kellogg-Stedman 2016-01-14 12:48:20 UTC
I would like to piggyback on this bug to request a small number of additional pcs commands. While working on the RHEL-OSP upgrade processes between 5->6 and 6->7, I often wanted a way to *programmatically* query the state of a Pacemaker-managed service. Specifically, I would like:

- To reliably test whether or not a resource exists.  This could be as simple as an appropriate return code from 'pcs resource status <resourcename>', so that I could do something like:

  if ! pcs resource status --quiet sshd; then
    pcs resource create sshd ...
  fi

  The above example suggests a '--quiet' flag to inhibit command output when one is only interested in existence.

- Test if a resource is started:

  pcs resource is-started sshd && echo "Resource is started"

- Test if a resource is stopped:

  pcs resource is-stopped sshd && echo "Resource is stopped"

- Test if a resource is failed:

  pcs resource is-failed sshd && echo "Resource is failed"

- Test if a resource is enabled:

  pcs resource is-enabled sshd && echo "Resource is enabled"

- Test if a resource is managed:

  pcs resource is-managed sshd && echo "Resource is managed"

For cloned resources or other multi-instance resources, maybe there should be --all and --any flags:

    pcs resource is-active --any sshd && echo "At least one sshd is active"
    pcs resource is-active --all sshd && echo "All sshd resources are active"
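The proposed --any/--all semantics can be sketched as plain shell predicates. This is a hedged illustration with made-up names (any_active, all_active); each argument stands for the state of one clone instance, which with real pcs would have to be parsed from status output:

```shell
# any_active STATE...  -> 0 if at least one argument is "Started"
any_active() {
    for _state in "$@"; do
        if [ "$_state" = "Started" ]; then
            return 0
        fi
    done
    return 1
}

# all_active STATE...  -> 0 only if every argument is "Started"
all_active() {
    if [ $# -eq 0 ]; then
        return 1
    fi
    for _state in "$@"; do
        if [ "$_state" != "Started" ]; then
            return 1
        fi
    done
    return 0
}
```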

Comment 4 Chris Feist 2016-01-14 13:10:12 UTC
We also want to add the ability to wait for the resource to reach the desired state.  So something like this:

pcs resource is-active sshd --wait=30

This would not return until sshd is active or until 30 seconds have passed.
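The proposed --wait behavior amounts to polling. A hedged sketch, with wait_for as a made-up wrapper (not a pcs feature): retry a check command once per second until it succeeds or the timeout elapses:

```shell
# wait_for TIMEOUT CMD [ARG...]
# Retries CMD once per second until it succeeds or TIMEOUT seconds
# have elapsed. Exit status: 0 if CMD succeeded, 1 on timeout.
wait_for() {
    _timeout=$1
    shift
    while :; do
        if "$@"; then
            return 0
        fi
        _timeout=$((_timeout - 1))
        if [ "$_timeout" -le 0 ]; then
            return 1
        fi
        sleep 1
    done
}

# Intended use (hypothetical is-active subcommand from comment 3):
#   wait_for 30 pcs resource is-active sshd
```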

Comment 6 Tomas Jelinek 2016-06-01 15:50:01 UTC
Pacemaker feature which we depend on will not be ready in the 7.3 timeframe.

Comment 10 Tomas Jelinek 2017-03-16 10:52:52 UTC
The pacemaker feature we depend on got postponed to 7.5.

Comment 30 Miroslav Lisik 2021-05-07 11:15:31 UTC
Created attachment 1780673 [details]
proposed fix + tests

Updated commands:
pcs resource [status [<resource id | tag id>] [node=<node>] [--hide-inactive]]
pcs stonith [status [<resource id | tag id>] [node=<node>] [--hide-inactive]]
pcs status resources [<resource id | tag id>] [node=<node>] [--hide-inactive]

Test:
pcs resource status <resource id>

Comment 34 Miroslav Lisik 2021-06-14 13:37:45 UTC
Test:

[root@r8-node-01 ~]# rpm -q pcs
pcs-0.10.8-2.el8.x86_64

[root@r8-node-01 ~]# pcs resource
  * d-01        (ocf::pacemaker:Dummy):  Stopped (disabled)
  * d-03        (ocf::pacemaker:Dummy):  Started r8-node-03
  * Clone Set: d-02-clone [d-02]:
    * Started: [ r8-node-01 r8-node-02 r8-node-03 ]
[root@r8-node-01 ~]# pcs tag
tagged
  d-01
  d-02
  d-03

[root@r8-node-01 ~]# pcs resource status d-01
  * d-01        (ocf::pacemaker:Dummy):  Stopped (disabled)
[root@r8-node-01 ~]# pcs status resources d-02
  * Clone Set: d-02-clone [d-02]:
    * Started: [ r8-node-01 r8-node-02 r8-node-03 ]
[root@r8-node-01 ~]# pcs resource status tagged
  * d-01        (ocf::pacemaker:Dummy):  Stopped (disabled)
  * d-03        (ocf::pacemaker:Dummy):  Started r8-node-03
  * Clone Set: d-02-clone [d-02]:
    * Started: [ r8-node-01 r8-node-02 r8-node-03 ]

Comment 54 errata-xmlrpc 2021-11-09 17:33:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Low: pcs security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:4142