Description of problem:
Wrong node name(s) in error message(s) when the maintenance/unmaintenance commands are run with multiple nodes that are not part of the cluster.

Version-Release number of selected component (if applicable):
pcs-0.9.148-7.el6.x86_64

How reproducible:
always

Steps to Reproduce:
Cluster nodes:
[root@virt-242 ~]# pcs status nodes config
Corosync Nodes:
 virt-242 virt-243 virt-262
Pacemaker Nodes:
 virt-242 virt-243 virt-262

1. Try these commands:

a) [root@virt-242 ~]# pcs node maintenance virt-00{1..3}
Error: Node 'virt-001' does not appear to exist in configuration
Error: Node 'virt-001' does not appear to exist in configuration
Error: Node 'virt-001' does not appear to exist in configuration

b) [root@virt-242 ~]# pcs node unmaintenance virt-00{1..3}
Error: Node 'virt-001' does not appear to exist in configuration
Error: Node 'virt-001' does not appear to exist in configuration
Error: Node 'virt-001' does not appear to exist in configuration

c) [root@virt-242 ~]# pcs node maintenance virt-242 virt-00{1..3}
Error: Node 'virt-242' does not appear to exist in configuration
Error: Node 'virt-242' does not appear to exist in configuration
Error: Node 'virt-242' does not appear to exist in configuration
[root@virt-242 ~]# echo $?
1
[root@virt-242 ~]# pcs cluster cib | grep maintenance
 <nvpair id="nodes-virt-242-maintenance" name="maintenance" value="on"/>

NOTE: Maintenance mode was set on virt-242 despite the non-zero exit code.

Actual results:
Wrong node name in the error messages: the first node given on the command line appears in every error message.

Expected results:
Correct node names in the error messages.

Additional info:
If cluster nodes and other hosts are mixed, maintenance mode is set on the cluster nodes while error messages are printed for the others, yet the command exits with an error code, which is inconsistent.
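As a rough illustration only (not taken from the pcs sources), the output above is consistent with a validation loop that builds every error message from the first requested node instead of the node currently being checked. A minimal hypothetical Python sketch of that failure pattern, with made-up names:

    # Hypothetical sketch; node set and function name are illustrative,
    # not the actual pcs implementation.
    configured_nodes = {"virt-242", "virt-243", "virt-262"}

    def report_unknown_nodes_buggy(requested_nodes):
        errors = []
        for node in requested_nodes:
            if node not in configured_nodes:
                # Bug: the message is built from requested_nodes[0], so every
                # error names the first node given on the command line.
                errors.append(
                    "Error: Node '%s' does not appear to exist in configuration"
                    % requested_nodes[0]
                )
        return errors

    # Mirrors case c) above: three unknown nodes, three errors, all naming virt-242.
    print("\n".join(report_unknown_nodes_buggy(
        ["virt-242", "virt-001", "virt-002", "virt-003"]
    )))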
Fixed in upstream:
https://github.com/ClusterLabs/pcs/commit/1a95e216d0d6f5a71e2c8111c2b8323efa3d9fe4
See bz1247088 comment 14 for details.
Before Fix:

[vm-rhel67-1 ~] $ rpm -q pcs
pcs-0.9.148-7.el6_8.1.x86_64

[vm-rhel67-1 ~] $ pcs status nodes pacemaker
Pacemaker Nodes:
 Online: vm-rhel67-1 vm-rhel67-2
 Standby:
 Maintenance:
 Offline:
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline:

[vm-rhel67-1 ~] $ pcs node maintenance vm-rhel67-2 bad1 bad2
Error: Node 'vm-rhel67-2' does not appear to exist in configuration
Error: Node 'vm-rhel67-2' does not appear to exist in configuration

[vm-rhel67-1 ~] $ pcs status nodes pacemaker
Pacemaker Nodes:
 Online: vm-rhel67-1
 Standby:
 Maintenance: vm-rhel67-2
 Offline:
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline:

After Fix:

[vm-rhel67-1 ~] $ rpm -q pcs
pcs-0.9.154-1.el6.x86_64

[vm-rhel67-1 ~] $ pcs status nodes pacemaker
Pacemaker Nodes:
 Online: vm-rhel67-1 vm-rhel67-2
 Standby:
 Maintenance:
 Offline:
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline:

[vm-rhel67-1 ~] $ pcs node maintenance vm-rhel67-2 bad1 bad2
Error: Node 'bad1' does not appear to exist in configuration
Error: Node 'bad2' does not appear to exist in configuration

[vm-rhel67-1 ~] $ pcs status nodes pacemaker
Pacemaker Nodes:
 Online: vm-rhel67-1 vm-rhel67-2
 Standby:
 Maintenance:
 Offline:
Pacemaker Remote Nodes:
 Online:
 Standby:
 Maintenance:
 Offline:
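A hedged sketch (again, not the actual upstream change) of the behaviour visible in the After Fix transcript: every requested node is checked first, each error names the offending node, and when any node is unknown nothing is put into maintenance:

    # Illustrative validate-then-apply sketch; names are made up.
    configured_nodes = {"vm-rhel67-1", "vm-rhel67-2"}

    def set_maintenance(requested_nodes):
        # Validate every requested node up front; apply nothing on any failure.
        errors = [
            "Error: Node '%s' does not appear to exist in configuration" % node
            for node in requested_nodes
            if node not in configured_nodes
        ]
        if errors:
            return errors, []             # no node is modified
        return [], list(requested_nodes)  # all nodes would enter maintenance

    errors, applied = set_maintenance(["vm-rhel67-2", "bad1", "bad2"])
    print("\n".join(errors))  # names bad1 and bad2; vm-rhel67-2 stays untouched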
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0707.html