Bug 1245721 - Add option to put node into maintenance mode
Summary: Add option to put node into maintenance mode
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: pcs
Version: 6.6
Hardware: Unspecified
OS: Linux
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: 6.8
Assignee: Ondrej Mular
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1315366
 
Reported: 2015-07-22 15:13 UTC by Frank Danapfel
Modified: 2016-05-10 19:27 UTC
CC List: 8 users

Fixed In Version: pcs-0.9.145-1.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1247088 1315366
Environment:
Last Closed: 2016-05-10 19:27:56 UTC
Target Upstream Version:
Embargoed:


Attachments
proposed fix (25.27 KB, patch), 2015-10-21 13:59 UTC, Tomas Jelinek


Links
System                  ID              Private  Priority  Status        Summary             Last Updated
Red Hat Bugzilla        1247088         1        None      None          None                2021-01-20 06:05:38 UTC
Red Hat Product Errata  RHBA-2016:0739  0        normal    SHIPPED_LIVE  pcs bug fix update  2016-05-10 22:29:32 UTC

Internal Links: 1247088

Description Frank Danapfel 2015-07-22 15:13:14 UTC
Description of problem:
Currently it is only possible to put a node into or out of standby mode using "pcs cluster [standby|unstandby]", which causes all resources currently running on the node to be moved to other nodes.

But sometimes it is desirable to put the node into maintenance mode instead, where all resources keep running on the node but the cluster does not react to any changes in their state. It is possible to achieve this today by running "pcs resource (un)manage" for each resource running on the node, but when there are many resources on the node this becomes a tedious job.
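For example (a minimal sketch; the resource names here are hypothetical and not taken from this bug), every resource on the node has to be handled individually:

pcs resource unmanage webserver-ip
pcs resource unmanage webserver
pcs resource unmanage shared-fs

and the same has to be repeated with "pcs resource manage" for each resource once the maintenance work is finished.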

Version-Release number of selected component (if applicable):
pcs-0.9.123-9.el6_6.2.x86_64

How reproducible:
always

Steps to Reproduce:
1.
2.
3.

Actual results:
no option to put node into maintenance mode

Expected results:
pcs cluster [maintenance|unmaintenance] commands (or something similar) to put a node into/out of maintenance mode.

Additional info:
It is possible to achieve this currently by running
crm_attribute -t nodes -N <nodename> -n maintenance -v on
and
crm_attribute -t nodes -N <nodename> -n maintenance -v off
but since customers shouldn't use the crm_* commands directly, it would be nice to be able to do this via pcs as well.
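For reference, and assuming the standard crm_attribute options, the current value of that per-node attribute can also be inspected or removed in the same way:

crm_attribute -t nodes -N <nodename> -n maintenance --query
crm_attribute -t nodes -N <nodename> -n maintenance --delete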

Comment 4 Chris Feist 2015-08-06 01:28:04 UTC
This can be enabled/disabled with the following crm_attribute commands:

crm_attribute -t nodes -N <nodename> -n maintenance -v on
crm_attribute -t nodes -N <nodename> -n maintenance -v off

We will add a pcs command to simplify setting that attribute.
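Judging from the commands demonstrated in comment 11 below, the interface that was eventually added takes roughly this shape (an optional node name, or --all for every node):

pcs node maintenance [<node>] | --all
pcs node unmaintenance [<node>] | --all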

Comment 5 Jan Pokorný [poki] 2015-08-28 17:01:07 UTC
I don't think there is any precedent for asking the user interactively in pcs
(putting auth aside), but when any resource is running (or just
defined, or perhaps simply always) it would be good to ask the user whether she
really wants "pcs cluster standby --all" or would rather enable maintenance
mode.  Or just apply this logic without asking, since that would definitely
be the saner thing to do.

Comment 6 Jan Pokorný [poki] 2015-08-28 17:13:21 UTC
In case you don't like the "standby --all" -> maintenance mode trigger
proposal, you may at least consider translating that command into
"pcs property set stop_all_resources=true". That would likely avoid the
inevitable problem with toggling the standby switch node by node: resources
get evacuated to the remaining live nodes (until those are put into standby
as well), causing unnecessary and unhelpful utilization peaks there
(with some risks implied).
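A rough sketch of that alternative, assuming the cluster property spelling stop-all-resources (the spelling is an assumption here, not something confirmed in this bug):

pcs property set stop-all-resources=true
pcs property set stop-all-resources=false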

Comment 7 Frank Danapfel 2015-08-31 13:23:25 UTC
Jan, I didn't suggest that there should be a dialog or something asking whether somebody wants to put a node into standby mode or maintenance mode. All I'm asking for is to be able to enable/disable maintenance mode via pcs, without having to fall back to using "crm_attribute".

Comment 8 Jan Pokorný [poki] 2015-09-01 15:11:08 UTC
Frank, I was talking about the possibly bad behavior of "pcs cluster
standby --all", which could not be translated into enabling maintenance
mode because pcs hasn't been able to do that so far.  With this bug resolved
it will be, hence the proposal to translate that command into enabling
maintenance mode (implicitly or explicitly).  Is it clearer now?

Comment 10 Tomas Jelinek 2015-10-21 13:59:35 UTC
Created attachment 1085139 [details]
proposed fix

Comment 11 Tomas Jelinek 2015-11-04 11:28:23 UTC
Before Fix:
[root@rh67-node1 ~]# rpm -q pcs
pcs-0.9.139-9.el6_7.1.x86_64
pcs does not have the ability to put a node into maintenance mode.



After Fix:
[root@rh67-node1:~]# rpm -q pcs
pcs-0.9.145-1.el6.x86_64
[root@rh67-node1:~]# pcs status nodes
Pacemaker Nodes:
 Online: rh67-node1 rh67-node2 
 Standby: 
 Maintenance: 
 Offline: 
Pacemaker Remote Nodes:
 Online: 
 Standby: 
 Maintenance: 
 Offline: 
[root@rh67-node1:~]# pcs node maintenance
[root@rh67-node1:~]# pcs status nodes
Pacemaker Nodes:
 Online: rh67-node2 
 Standby: 
 Maintenance: rh67-node1 
 Offline: 
Pacemaker Remote Nodes:
 Online: 
 Standby: 
 Maintenance: 
 Offline: 
[root@rh67-node1:~]# pcs node unmaintenance
[root@rh67-node1:~]# pcs status nodes
Pacemaker Nodes:
 Online: rh67-node1 rh67-node2 
 Standby: 
 Maintenance: 
 Offline: 
Pacemaker Remote Nodes:
 Online: 
 Standby: 
 Maintenance: 
 Offline: 
[root@rh67-node1:~]# pcs node maintenance rh67-node2
[root@rh67-node1:~]# pcs status nodes
Pacemaker Nodes:
 Online: rh67-node1 
 Standby: 
 Maintenance: rh67-node2 
 Offline: 
Pacemaker Remote Nodes:
 Online: 
 Standby: 
 Maintenance: 
 Offline: 
[root@rh67-node1:~]# pcs node unmaintenance rh67-node2
[root@rh67-node1:~]# pcs status nodes
Pacemaker Nodes:
 Online: rh67-node1 rh67-node2 
 Standby: 
 Maintenance: 
 Offline: 
Pacemaker Remote Nodes:
 Online: 
 Standby: 
 Maintenance: 
 Offline: 
[root@rh67-node1:~]# pcs node maintenance --all
[root@rh67-node1:~]# pcs status nodes
Pacemaker Nodes:
 Online: 
 Standby: 
 Maintenance: rh67-node1 rh67-node2 
 Offline: 
Pacemaker Remote Nodes:
 Online: 
 Standby: 
 Maintenance: 
 Offline: 
[root@rh67-node1:~]# pcs node unmaintenance --all
[root@rh67-node1:~]# pcs status nodes
Pacemaker Nodes:
 Online: rh67-node1 rh67-node2 
 Standby: 
 Maintenance: 
 Offline: 
Pacemaker Remote Nodes:
 Online: 
 Standby: 
 Maintenance: 
 Offline:

Comment 16 errata-xmlrpc 2016-05-10 19:27:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0739.html

