.New pcs resource status display commands
The `pcs resource status` and the `pcs stonith status` commands now support the following options:
* You can display the status of resources configured on a specific node with the `pcs resource status node=_node_id_` command and the `pcs stonith status node=_node_id_` command. You can use these commands to display the status of resources on both cluster and remote nodes.
* You can display the status of a single resource with the `pcs resource status _resource_id_` and the `pcs stonith status _resource_id_` commands.
* You can display the status of all resources with a specified tag with the `pcs resource status _tag_id_` and the `pcs stonith status _tag_id_` commands.
Description (Raoul Scarazzini, 2015-12-11 15:05:47 UTC)
Description of problem:
The pcs command is missing a way to retrieve the status of a single resource. At the moment you can run pcs status to obtain the status of all resources, which produces something like this:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs status
...
...
openstack-cinder-volume (systemd:openstack-cinder-volume): Started
Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-core-clone [openstack-core]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
...
...
You can work around this with grep, but for cloned resources in particular it is very difficult to extract the status of a single resource from the output, since it may be spread across one, two, or three lines.
So it would be great to have something like:
[heat-admin@overcloud-controller-0 ~]$ sudo pcs status openstack-core-clone
showing an output like:
Clone Set: openstack-core-clone [openstack-core]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
or in case of failures/stopped resources:
Clone Set: openstack-core-clone [openstack-core]
Started: [ overcloud-controller-0 overcloud-controller-1 ]
Stopped: [ overcloud-controller-2 ]
It would also be great to have an exit status based on the status of the resource: Started == 0, Stopped == 1, FAILED == 2.
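Until such an exit status exists, the grep workaround mentioned above can be approximated by parsing the pcs status output. The sketch below is illustrative only: clone_status is a hypothetical helper, not a pcs command, and the here-document stands in for real pcs status output.

```shell
# Extract one clone set's status block from `pcs status` output and map it
# to the proposed exit codes (Started == 0, Stopped == 1, FAILED == 2).
clone_status() {
  # $1 = clone set id; stdin = `pcs status` output
  awk -v id="$1" '
    $0 ~ "Clone Set: " id { found = 1; print; next }
    found && /^[[:space:]]*(Started|Stopped|FAILED):/ { print; next }
    found { exit }          # block ended: stop scanning
  '
}

status=$(clone_status openstack-core-clone <<'EOF'
 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-1 ]
     Stopped: [ overcloud-controller-2 ]
EOF
)
echo "$status"
# Check FAILED/Stopped before Started, so a partially stopped clone
# is reported as non-zero.
case "$status" in
  *FAILED*)  exit_code=2 ;;
  *Stopped*) exit_code=1 ;;
  *Started*) exit_code=0 ;;
  *)         exit_code=2 ;;   # resource not found: treat as failed
esac
echo "exit code would be: $exit_code"
```

With the sample output above (one controller stopped), the mapping yields exit code 1.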
Comment 3 (Lars Kellogg-Stedman, 2016-01-14 12:48:20 UTC)
I would like to piggyback on this bug to request a small number of additional pcs commands. While working on the RHEL-OSP upgrade processes between 5->6 and 6->7, I often wanted a way to *programmatically* query the state of a Pacemaker-managed service. Specifically, I would like:
- To reliably test whether or not a resource exists. This could be as simple as an appropriate return code from 'pcs resource status <resourcename>', so that I could do something like:
if ! pcs resource status --quiet sshd; then
pcs resource create sshd ...
fi
The above example suggests a '--quiet' flag to inhibit command output when one is only interested in existence.
- Test if a resource is started:
pcs resource is-started sshd && echo "Resource is started"
- Test if a resource is stopped:
pcs resource is-stopped sshd && echo "Resource is stopped"
- Test if a resource is failed:
pcs resource is-failed sshd && echo "Resource is failed"
- Test if a resource is enabled:
pcs resource is-enabled sshd && echo "Resource is enabled"
- Test if a resource is managed:
pcs resource is-managed sshd && echo "Resource is managed"
For cloned resources or other multi-instance resources, maybe there should be --all and --any flags:
pcs resource is-active --any sshd && echo "At least one sshd is active"
pcs resource is-active --all sshd && echo "All sshd resources are active"
We would also like the ability to wait for the resource to reach the desired state. Something like this:
pcs resource is-active sshd --wait=30
would not return until sshd is active or 30 seconds have passed.
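Until a --wait option exists, the same effect can be approximated with a polling loop. In the sketch below, wait_for_active and check_active are hypothetical names, not pcs commands, and the stub stands in for whatever status check is actually available (e.g. parsing the output of pcs resource status).

```shell
# Hedged sketch of the proposed --wait behavior as a polling loop.
wait_for_active() {
  resource=$1; timeout=$2; elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if check_active "$resource"; then
      return 0                 # resource became active in time
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1                     # timed out
}

# Stub check: succeeds on the third poll. A real script would instead
# inspect the output of `pcs resource status "$1"`.
attempts=0
check_active() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

wait_for_active sshd 30 && echo "sshd active after $attempts polls"
```

The loop polls once per second, so the timeout argument is an approximate upper bound rather than an exact deadline.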
Created attachment 1780673: proposed fix + tests
Updated commands:
pcs resource [status [<resource id | tag id>] [node=<node>] [--hide-inactive]]
pcs stonith [status [<resource id | tag id>] [node=<node>] [--hide-inactive]]
pcs status resources [<resource id | tag id>] [node=<node>] [--hide-inactive]
Test:
pcs resource status <resource id>
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Low: pcs security, bug fix, and enhancement update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2021:4142