Bug 1369029
| Summary: | Bad node(s) in error message(s) when executing maintenance/unmaintenance commands with nodes | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Miroslav Lisik <mlisik> |
| Component: | pcs | Assignee: | Ondrej Mular <omular> |
| Status: | CLOSED ERRATA | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | medium | | |
| Version: | 6.8 | CC: | cfeist, cluster-maint, idevat, omular, tojeline |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | pcs-0.9.154-1.el6 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-03-21 11:04:12 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Fixed in upstream: https://github.com/ClusterLabs/pcs/commit/1a95e216d0d6f5a71e2c8111c2b8323efa3d9fe4

See bz1247088 comment 14 for details.

Before fix:

```
[vm-rhel67-1 ~] $ rpm -q pcs
pcs-0.9.148-7.el6_8.1.x86_64

[vm-rhel67-1 ~] $ pcs status nodes pacemaker
Pacemaker Nodes:
 Online: vm-rhel67-1 vm-rhel67-2
 Standby:
 Maintenance:
 Offline:
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline:

[vm-rhel67-1 ~] $ pcs node maintenance vm-rhel67-2 bad1 bad2
Error: Node 'vm-rhel67-2' does not appear to exist in configuration
Error: Node 'vm-rhel67-2' does not appear to exist in configuration

[vm-rhel67-1 ~] $ pcs status nodes pacemaker
Pacemaker Nodes:
 Online: vm-rhel67-1
 Standby:
 Maintenance: vm-rhel67-2
 Offline:
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline:
```

After fix:

```
[vm-rhel67-1 ~] $ rpm -q pcs
pcs-0.9.154-1.el6.x86_64

[vm-rhel67-1 ~] $ pcs status nodes pacemaker
Pacemaker Nodes:
 Online: vm-rhel67-1 vm-rhel67-2
 Standby:
 Maintenance:
 Offline:
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline:

[vm-rhel67-1 ~] $ pcs node maintenance vm-rhel67-2 bad1 bad2
Error: Node 'bad1' does not appear to exist in configuration
Error: Node 'bad2' does not appear to exist in configuration

[vm-rhel67-1 ~] $ pcs status nodes pacemaker
Pacemaker Nodes:
 Online: vm-rhel67-1 vm-rhel67-2
 Standby:
 Maintenance:
 Offline:
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline:
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0707.html
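In the "Before fix" output, both errors name 'vm-rhel67-2' even though the unknown nodes are 'bad1' and 'bad2', and 'vm-rhel67-2' is placed in maintenance mode despite the non-zero exit. That pattern, where the first argument shows up in every error, is characteristic of an error path that reuses the first element of the argument list instead of the current loop variable. Below is a minimal sketch of that class of bug and of the validate-first behavior visible after the fix; the function and variable names are hypothetical illustrations, not pcs's actual code:

```python
def apply_maintenance(node):
    """Placeholder for the real per-node maintenance action."""
    print("Setting maintenance mode on '%s'" % node)

def set_maintenance(requested_nodes, configured_nodes):
    """Report every unknown node by name; apply nothing unless all are valid."""
    unknown = [n for n in requested_nodes if n not in configured_nodes]
    for node in unknown:
        # The buggy variant effectively printed requested_nodes[0] here,
        # which made the first argument appear in every error message.
        print("Error: Node '%s' does not appear to exist in configuration" % node)
    if unknown:
        return 1  # fail without changing cluster state
    for node in requested_nodes:
        apply_maintenance(node)
    return 0

# Reproduces the fixed behavior from the transcript above:
set_maintenance(["vm-rhel67-2", "bad1", "bad2"], ["vm-rhel67-1", "vm-rhel67-2"])
```

Validating the whole argument list before applying any change also matches the "After fix" transcript, where 'vm-rhel67-2' is not put into maintenance when the command fails on 'bad1' and 'bad2'.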
Description of problem:
Bad node(s) in error message(s) when executing maintenance/unmaintenance commands with multiple nodes that are not in the cluster.

Version-Release number of selected component (if applicable):
pcs-0.9.148-7.el6.x86_64

How reproducible:
always

Steps to Reproduce:

Cluster nodes:

```
[root@virt-242 ~]# pcs status nodes config
Corosync Nodes:
 virt-242 virt-243 virt-262
Pacemaker Nodes:
 virt-242 virt-243 virt-262
```

1. Try these commands:

a)

```
[root@virt-242 ~]# pcs node maintenance virt-00{1..3}
Error: Node 'virt-001' does not appear to exist in configuration
Error: Node 'virt-001' does not appear to exist in configuration
Error: Node 'virt-001' does not appear to exist in configuration
```

b)

```
[root@virt-242 ~]# pcs node unmaintenance virt-00{1..3}
Error: Node 'virt-001' does not appear to exist in configuration
Error: Node 'virt-001' does not appear to exist in configuration
Error: Node 'virt-001' does not appear to exist in configuration
```

c)

```
[root@virt-242 ~]# pcs node maintenance virt-242 virt-00{1..3}
Error: Node 'virt-242' does not appear to exist in configuration
Error: Node 'virt-242' does not appear to exist in configuration
Error: Node 'virt-242' does not appear to exist in configuration
[root@virt-242 ~]# echo $?
1
[root@virt-242 ~]# pcs cluster cib | grep maintenance
<nvpair id="nodes-virt-242-maintenance" name="maintenance" value="on"/>
```

NOTE: Maintenance mode was set on 'virt-242' despite the error exit code.

Actual results:
The wrong node name appears in the error messages: the first node given on the command line is repeated in every error.

Expected results:
Each error message names the node it actually refers to.

Additional info:
If a cluster node and other hosts are mixed on the command line, maintenance mode is set on the cluster nodes while error messages are printed for the others, yet the command still exits with an error code; this behavior is inconsistent.
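A hedged sketch of how the expected behavior could be checked from a script, assuming a host with pcs installed and the cluster configured as in the reproduction steps above; the `run` helper and the explicit node names (bash brace expansion spelled out) are written here purely for illustration:

```python
import subprocess

def run(args):
    """Run a command and return (exit code, combined stdout+stderr)."""
    proc = subprocess.run(
        args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    return proc.returncode, proc.stdout

# After the fix, each unknown node should be named in its own error line
# and the command should exit non-zero.
code, output = run(["pcs", "node", "maintenance", "virt-001", "virt-002", "virt-003"])
assert code != 0
for node in ("virt-001", "virt-002", "virt-003"):
    assert "Node '%s' does not appear to exist in configuration" % node in output
```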