Bug 1290830 - pcs command is missing a way to retrieve the status of a single resource
Status: NEW
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Assigned To: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
Keywords: FutureFeature
Depends On: 1300597
Reported: 2015-12-11 10:05 EST by Raoul Scarazzini
Modified: 2017-08-30 06:26 EDT (History)
CC: 9 users

Clones: 1300597
Type: Bug


Attachments: None
Description Raoul Scarazzini 2015-12-11 10:05:47 EST
Description of problem:

The pcs command is missing a way to retrieve the status of a single resource. At the moment you can run pcs status, which reports the status of all resources, producing output like this:

[heat-admin@overcloud-controller-0 ~]$ sudo pcs status
...
...
 openstack-cinder-volume        (systemd:openstack-cinder-volume):      Started
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
...
...

You can work around this with grep, but with cloned resources in particular it is very difficult to extract the status of a resource from the output, since it may be spread across one, two, or three lines.
So it would be great to have something like:

[heat-admin@overcloud-controller-0 ~]$ sudo pcs status openstack-core-clone

showing output like:

 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

or in case of failures/stopped resources:

 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-1 ]
     Stopped: [ overcloud-controller-2 ]
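Until such a subcommand exists, the clone's block can be pulled out of pcs status output by keying on indentation. A fragile sketch (extract_clone is a hypothetical helper name; it assumes the exact layout shown above, where member lines are indented deeper than the Clone Set line):

```shell
# Print the "Clone Set:" line for the named clone plus its indented
# member lines (the Started/Stopped node lists), reading `pcs status`
# output on stdin. Fragile: it relies purely on the indentation layout
# shown above.
extract_clone() {
  awk -v clone="$1" '
    $1 == "Clone" && $2 == "Set:" && $3 == clone { show = 1; print; next }
    show && /^     / { print; next }   # deeper-indented member lines
    show { exit }                      # block ended
  '
}

# e.g.: sudo pcs status | extract_clone openstack-core-clone
```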

It would also be great to have an exit status based on the status of the resource: Started == 0, Stopped == 1, FAILED == 2.
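That exit-status mapping can be approximated today with a small shell helper. A sketch only (pcs_resource_state is a hypothetical name; it handles just the single-line resource form shown above, not the multi-line clone blocks this report is about):

```shell
# Map a resource's state in `pcs status` output to the suggested exit
# codes: Started == 0, Stopped == 1, FAILED == 2 (and 3 when the
# resource is not found). Reads `pcs status` output on stdin and parses
# only the single-line "<name> (<agent>): <state>" form.
pcs_resource_state() {
  _state=$(awk -v r="$1" '$1 == r { print $NF; exit }')
  case $_state in
    Started) return 0 ;;
    Stopped) return 1 ;;
    FAILED)  return 2 ;;
    *)       return 3 ;;
  esac
}

# e.g.: sudo pcs status | pcs_resource_state openstack-cinder-volume
```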
Comment 3 Lars Kellogg-Stedman 2016-01-14 07:48:20 EST
I would like to piggyback on this bug to request a small number of additional pcs commands.  While working on the RHEL-OSP upgrade processes between 5->6 and 6->7, I often wanted a way to *programmatically* query the state of a Pacemaker-managed service.  Specifically, I would like:

- To reliably test whether or not a resource exists.  This could be as simple as an appropriate return code from 'pcs resource status <resourcename>', so that I could do something like:

  if ! pcs resource status --quiet sshd; then
    pcs resource create sshd ...
  fi

  The above example suggests a '--quiet' flag to inhibit command output when one is only interested in existence.

- Test if a resource is started:

  pcs resource is-started sshd && echo "Resource is started"

- Test if a resource is stopped:

  pcs resource is-stopped sshd && echo "Resource is stopped"

- Test if a resource is failed:

  pcs resource is-failed sshd && echo "Resource is failed"

- Test if a resource is enabled:

  pcs resource is-enabled sshd && echo "Resource is enabled"

- Test if a resource is managed:

  pcs resource is-managed sshd && echo "Resource is managed"

For cloned resources or other multi-instance resources, maybe there should be --all and --any flags:

    pcs resource is-active --any sshd && echo "At least one sshd is active"
    pcs resource is-active --all sshd && echo "All sshd resources are active"
Comment 4 Chris Feist 2016-01-14 08:10:12 EST
We also want to add the ability to wait for the resource to reach the desired state.  So something like this:

pcs resource is-active sshd --wait=30

will not return until sshd is active or until 30 seconds have passed.
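That wait behaviour can be approximated in shell today by polling. A sketch (wait_for is a hypothetical helper; the status check it polls is whatever command you have available, since the is-active subcommand does not exist yet):

```shell
# Poll a status check until it succeeds or the timeout (in seconds)
# expires; exit status 0 on success, 1 on timeout. Approximates the
# proposed `pcs resource is-active sshd --wait=30` behaviour.
wait_for() {
  _timeout=$1; shift
  _deadline=$(( $(date +%s) + _timeout ))
  until "$@"; do
    [ "$(date +%s)" -lt "$_deadline" ] || return 1
    sleep 1
  done
}

# e.g.: wait_for 30 pcs resource is-active sshd   (once such a command exists)
```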
Comment 6 Tomas Jelinek 2016-06-01 11:50:01 EDT
The Pacemaker feature we depend on will not be ready in the 7.3 timeframe.
Comment 10 Tomas Jelinek 2017-03-16 06:52:52 EDT
The pacemaker feature we depend on got postponed to 7.5.
