Bug 1290830
Summary: [RFE] pcs command is missing a way to retrieve the status of a single resource
Product: Red Hat Enterprise Linux 8
Component: pcs
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Version: 8.0
Target Milestone: rc
Target Release: 8.5
Hardware: Unspecified
OS: Unspecified
Fixed In Version: pcs-0.10.8-2.el8
Reporter: Raoul Scarazzini <rscarazz>
Assignee: Miroslav Lisik <mlisik>
QA Contact: cluster-qe <cluster-qe>
Docs Contact: Steven J. Levine <slevine>
CC: cfeist, clumens, cluster-maint, fdinitto, idevat, kgaillot, michele, mlisik, mmazoure, nhostako, omular, sbradley, slevine, tojeline
Keywords: FutureFeature, Triaged
Doc Type: Enhancement
Doc Text:
.New pcs resource status display commands
The `pcs resource status` and the `pcs stonith status` commands now support the following options:
* You can display the status of resources configured on a specific node with the `pcs resource status node=_node_id_` command and the `pcs stonith status node=_node_id_` command. You can use these commands to display the status of resources on both cluster and remote nodes.
* You can display the status of a single resource with the `pcs resource status _resource_id_` and the `pcs stonith status _resource_id_` commands.
* You can display the status of all resources with a specified tag with the `pcs resource status _tag_id_` and the `pcs stonith status _tag_id_` commands.
: 1300597 (view as bug list)
Last Closed: 2021-11-09 17:33:12 UTC
Type: Bug
Bug Depends On: 1300597, 1682129
Bug Blocks: 1477664
Description    Raoul Scarazzini    2015-12-11 15:05:47 UTC
I would like to piggy back on this bug to request a small number of additional pcs commands. While working on the RHEL-OSP upgrade processes between 5->6 and 6->7, I often wanted a way to *programmatically* query the state of a Pacemaker-managed service. Specifically, I would like:

- To reliably test whether or not a resource exists. This could be as simple as an appropriate return code from 'pcs resource status <resourcename>', so that I could do something like:

      if ! pcs resource status --quiet sshd; then
          pcs resource create sshd ...
      fi

  The above example suggests a '--quiet' flag to inhibit command output when one is only interested in existence.

- Test if a resource is started:
      pcs resource is-started sshd && echo "Resource is started"
- Test if a resource is stopped:
      pcs resource is-stopped sshd && echo "Resource is stopped"
- Test if a resource is failed:
      pcs resource is-failed sshd && echo "Resource is failed"
- Test if a resource is enabled:
      pcs resource is-enabled sshd && echo "Resource is enabled"
- Test if a resource is managed:
      pcs resource is-managed sshd && echo "Resource is managed"

For cloned resources or other multi-instance resources, maybe there should be --all and --any flags:

    pcs resource is-active --any sshd && echo "At least one sshd is active"
    pcs resource is-active --all sshd && echo "All sshd resources are active"

We also want to add the ability to wait for the resource to reach the desired state. So something like this:

    pcs resource is-active sshd --wait=30

will not return until sshd is active or 30 seconds have passed.

The Pacemaker feature we depend on will not be ready in the 7.3 timeframe.

The Pacemaker feature we depend on got postponed to 7.5.

Created attachment 1780673 [details]
proposed fix + tests
Updated commands:
pcs resource [status [<resource id | tag id>] [node=<node>] [--hide-inactive]]
pcs stonith [status [<resource id | tag id>] [node=<node>] [--hide-inactive]]
pcs status resources [<resource id | tag id>] [node=<node>] [--hide-inactive]
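With the updated commands above, the existence check requested in the original comment can be approximated through the command's exit status. A minimal sketch follows; it assumes `pcs resource status <id>` exits nonzero when the id is not found, and the `pcs` shell function below is a hypothetical stand-in stub (not the real binary) so the pattern can run outside a cluster:

```shell
#!/bin/sh
# Hypothetical stub standing in for the real pcs binary, so this sketch
# runs anywhere; on a real cluster node, remove this function.
pcs() {
  case "$3" in
    d-01|d-02|d-03) echo "* $3 (ocf::pacemaker:Dummy): Started" ;;
    *) echo "Error: resource or tag id '$3' not found" >&2; return 1 ;;
  esac
}

# Existence check via exit status: only create the resource if the
# status query fails (i.e., the resource does not exist yet).
if ! pcs resource status sshd >/dev/null 2>&1; then
  echo "sshd not found; a 'pcs resource create sshd ...' would go here"
fi
```

The same exit-status pattern works for tag ids, since the updated syntax accepts either a resource id or a tag id.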
Test:
pcs resource status <resource id>
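The wait-until-active behavior proposed in the original comment (`pcs resource is-active sshd --wait=30`) was not implemented as such, but it can be approximated by polling the status output. In this sketch, `wait_started` is a hypothetical helper name of ours, and the `pcs` shell function is a stand-in stub so the loop is runnable outside a cluster:

```shell
#!/bin/sh
# Stand-in stub for pcs so the sketch runs anywhere; the real command
# would query the live cluster instead.
pcs() {
  if [ "$3" = "d-03" ]; then
    echo "* d-03 (ocf::pacemaker:Dummy): Started r8-node-03"
  else
    return 1
  fi
}

# Hypothetical helper: poll 'pcs resource status <id>' until the output
# reports "Started" or the timeout (seconds, default 30) expires.
wait_started() {
  res=$1
  deadline=$(( $(date +%s) + ${2:-30} ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    if pcs resource status "$res" 2>/dev/null | grep -q 'Started'; then
      return 0
    fi
    sleep 1
  done
  return 1
}

wait_started d-03 5 && echo "d-03 is active"
```

One-second polling is a deliberate trade-off here: cheap enough for scripts, responsive enough for upgrade tooling of the kind described in the comment.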
Test:

    [root@r8-node-01 ~]# rpm -q pcs
    pcs-0.10.8-2.el8.x86_64
    [root@r8-node-01 ~]# pcs resource
      * d-01    (ocf::pacemaker:Dummy):    Stopped (disabled)
      * d-03    (ocf::pacemaker:Dummy):    Started r8-node-03
      * Clone Set: d-02-clone [d-02]:
        * Started: [ r8-node-01 r8-node-02 r8-node-03 ]
    [root@r8-node-01 ~]# pcs tag
    tagged
      d-01
      d-02
      d-03
    [root@r8-node-01 ~]# pcs resource status d-01
      * d-01    (ocf::pacemaker:Dummy):    Stopped (disabled)
    [root@r8-node-01 ~]# pcs status resources d-02
      * Clone Set: d-02-clone [d-02]:
        * Started: [ r8-node-01 r8-node-02 r8-node-03 ]
    [root@r8-node-01 ~]# pcs resource status tagged
      * d-01    (ocf::pacemaker:Dummy):    Stopped (disabled)
      * d-03    (ocf::pacemaker:Dummy):    Started r8-node-03
      * Clone Set: d-02-clone [d-02]:
        * Started: [ r8-node-01 r8-node-02 r8-node-03 ]

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Low: pcs security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:4142