Bug 2109852
| Summary: | No output from pcs resource disable --simulate --brief | | |
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | Andrew Price <anprice> |
| Component: | pcs | Assignee: | Tomas Jelinek <tojeline> |
| Status: | CLOSED ERRATA | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 37 | CC: | anprice, cfeist, idevat, mlisik, omular, tojeline |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | pcs-0.11.6-1.fc39 pcs-0.11.6-1.fc37 pcs-0.11.6-1.fc38 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-06-22 13:33:05 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
This bug appears to have been reported against 'rawhide' during the Fedora Linux 37 development cycle. Changing version to 37.

Upstream patch: https://github.com/ClusterLabs/pcs/commit/3e479bdb68dc900523a743e7dcb759b501385555

This is actually expected behavior. See original bz1833114 for details. The only issue here is that the documentation is not clear, which the patch addresses.

FEDORA-2023-e4cb7a5bda has been submitted as an update to Fedora 39. https://bodhi.fedoraproject.org/updates/FEDORA-2023-e4cb7a5bda

FEDORA-2023-e4cb7a5bda has been pushed to the Fedora 39 stable repository. If problem still persists, please make note of it in this bug report.

FEDORA-2023-b86fd9ad80 has been submitted as an update to Fedora 38. https://bodhi.fedoraproject.org/updates/FEDORA-2023-b86fd9ad80

FEDORA-2023-ae96dd6105 has been submitted as an update to Fedora 37. https://bodhi.fedoraproject.org/updates/FEDORA-2023-ae96dd6105

FEDORA-2023-b86fd9ad80 has been pushed to the Fedora 38 testing repository. Soon you'll be able to install the update with the following command: `sudo dnf upgrade --enablerepo=updates-testing --refresh --advisory=FEDORA-2023-b86fd9ad80` You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2023-b86fd9ad80 See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.

FEDORA-2023-ae96dd6105 has been pushed to the Fedora 37 testing repository. Soon you'll be able to install the update with the following command: `sudo dnf upgrade --enablerepo=updates-testing --refresh --advisory=FEDORA-2023-ae96dd6105` You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2023-ae96dd6105 See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.

FEDORA-2023-ae96dd6105 has been pushed to the Fedora 37 stable repository. If problem still persists, please make note of it in this bug report.

FEDORA-2023-b86fd9ad80 has been pushed to the Fedora 38 stable repository. If problem still persists, please make note of it in this bug report.
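A shell sketch of how the behavior described above can be read, per the comment and the linked bz1833114 (the commands are the ones from this report; the comments state expected outcomes, not captured output):

```
# Per bz1833114, --brief is expected to list only resources affected *in
# addition to* the ones named on the command line, so disabling a resource
# that nothing else depends on prints an empty list.
pcs resource disable --simulate --brief gfs2-fs1

# To see what would happen to gfs2-fs1 itself, drop --brief and read the
# full transition summary and revised cluster status instead.
pcs resource disable --simulate gfs2-fs1
```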
```
[root@rawhide1 ~]# pcs resource enable --wait gfs2-fs1
Waiting for the cluster to apply configuration changes...
resource 'gfs2-fs1' is running on nodes 'rawhide1', 'rawhide2', 'rawhide3'
[root@rawhide1 ~]# pcs resource disable --simulate --brief gfs2-fs1
[root@rawhide1 ~]#
```

The docs say "If --brief is also specified, only a list of affected resources will be printed." so I expected the gfs2-fs1 resource to be listed.

Removing --brief shows the gfs2-fs1 resource as stopped in the full output:

```
[root@rawhide1 ~]# pcs resource disable --simulate gfs2-fs1
3 of 19 resource instances DISABLED and 0 BLOCKED from further action due to failure

Current cluster status:
  * Node List:
    * Online: [ rawhide1 rawhide2 rawhide3 ]
  * Full List of Resources:
    * xvm-fencing (stonith:fence_xvm): Started rawhide1
    * Clone Set: locking-clone [locking]:
      * Started: [ rawhide1 rawhide2 rawhide3 ]
    * Clone Set: fs1-group-clone [fs1-group]:
      * Started: [ rawhide1 rawhide2 rawhide3 ]
    * Clone Set: fs2-group-clone [fs2-group]:
      * Started: [ rawhide1 rawhide2 rawhide3 ]

Transition Summary:
  * Stop gfs2-fs1:0 ( rawhide1 ) due to node availability
  * Stop gfs2-fs1:1 ( rawhide3 ) due to node availability
  * Stop gfs2-fs1:2 ( rawhide2 ) due to node availability

Executing Cluster Transition:
  * Pseudo action: fs1-group-clone_stop_0
  * Pseudo action: fs1-group:0_stop_0
  * Resource action: gfs2-fs1 stop on rawhide1
  * Pseudo action: fs1-group:1_stop_0
  * Resource action: gfs2-fs1 stop on rawhide3
  * Pseudo action: fs1-group:2_stop_0
  * Resource action: gfs2-fs1 stop on rawhide2
  * Pseudo action: fs1-group:0_stopped_0
  * Pseudo action: fs1-group:1_stopped_0
  * Pseudo action: fs1-group:2_stopped_0
  * Pseudo action: fs1-group-clone_stopped_0

Revised Cluster Status:
  * Node List:
    * Online: [ rawhide1 rawhide2 rawhide3 ]
  * Full List of Resources:
    * xvm-fencing (stonith:fence_xvm): Started rawhide1
    * Clone Set: locking-clone [locking]:
      * Started: [ rawhide1 rawhide2 rawhide3 ]
    * Clone Set: fs1-group-clone [fs1-group]:
      * Resource Group: fs1-group:0:
        * vg_gfs2_1-lv_gfs2_1 (ocf::heartbeat:LVM-activate): Started rawhide1
        * gfs2-fs1 (ocf::heartbeat:Filesystem): Stopped (disabled)
      * Resource Group: fs1-group:1:
        * vg_gfs2_1-lv_gfs2_1 (ocf::heartbeat:LVM-activate): Started rawhide3
        * gfs2-fs1 (ocf::heartbeat:Filesystem): Stopped (disabled)
      * Resource Group: fs1-group:2:
        * vg_gfs2_1-lv_gfs2_1 (ocf::heartbeat:LVM-activate): Started rawhide2
        * gfs2-fs1 (ocf::heartbeat:Filesystem): Stopped (disabled)
    * Clone Set: fs2-group-clone [fs2-group]:
      * Started: [ rawhide1 rawhide2 rawhide3 ]
```

(Edited to add:)

```
[root@rawhide1 ~]# rpm -q pcs
pcs-0.11.3-1.fc37.x86_64
```
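A possible way to verify the documentation fix once the update lands (a hedged sketch: the advisory ID is the Fedora 37 one quoted in the comments above, and the grep is just one way to find the updated wording):

```
# Pull the fixed build from updates-testing (Fedora 37 advisory quoted above).
sudo dnf upgrade --enablerepo=updates-testing --refresh --advisory=FEDORA-2023-ae96dd6105

# Confirm the fixed version is installed ("Fixed In Version" lists pcs-0.11.6-1.fc37).
rpm -q pcs

# Locate the clarified description of --simulate / --brief in the man page.
man pcs | grep -A 5 -- '--simulate'
```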