Bug 1563673 - [RFE] Add timeout when draining a node for update
Summary: [RFE] Add timeout when draining a node for update
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cluster Version Operator
Version: 3.6.1
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
: 3.9.0
Assignee: Scott Dodson
QA Contact: Weihua Meng
Depends On:
Blocks: 1724792 1573478
Reported: 2018-04-04 12:23 UTC by Thom Carlin
Modified: 2019-06-28 16:04 UTC (History)
6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 1573478 (view as bug list)
Last Closed: 2018-06-27 18:01:34 UTC
Target Upstream Version:

Attachments

System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1562961 0 medium CLOSED Unable to force delete of zombie resources (including projects) 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHSA-2018:2013 0 None None None 2018-06-27 18:02:09 UTC

Internal Links: 1562961

Description Thom Carlin 2018-04-04 12:23:51 UTC
Description of problem:

During an upgrade, Ansible may hang indefinitely while waiting for a node to drain

Version-Release number of the following components:
rpm -q openshift-ansible
rpm -q ansible
ansible --version
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, May  3 2017, 07:55:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-14)]

How reproducible:

100% for particular configuration

Steps to Reproduce:
1. ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml

Actual results:

Please include the entire output from the last TASK line through the end of output if an error is generated

No longer available

Hangs at "Drain node" step
* I believe this is the "Drain and upgrade nodes" task in /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml, but the RFE should cover all cases (e.g. docker_upgrade, upgrade_control_plane, and upgrade_nodes)
Expected results:

Node upgrade completes without manual intervention

Additional info:
Please attach logs from ansible-playbook with the -vvv flag

Had to use https://bugzilla.redhat.com/show_bug.cgi?id=1562961#c2 twice (once for node2 and again for node4)

The node2 case was due to a logging pod stuck in Terminating.
The node4 case was due to a CNS pod (gluster-s3) stuck in Terminating.
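For reference, the workaround applied per bug 1562961#c2 amounts to force-deleting the pod stuck in Terminating so the drain can proceed. A hedged sketch (pod and namespace names are placeholders, not values from this report):

```shell
# Force-delete a pod stuck in Terminating; --grace-period=0 with --force
# skips the graceful shutdown period. Placeholders: <pod>, <namespace>.
oc delete pod <pod> -n <namespace> --grace-period=0 --force
```

This was needed once for node2 and once for node4 before the drain could finish.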

Comment 1 Juan Vallejo 2018-04-04 15:09:13 UTC
Per the help output of `drain`[1], `--force` must be used in order to allow deletion to proceed for a few cases involving managed pods.

@Scott, the playbook could be updated to run drain with --force and a --timeout.

1. https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/drain.go#L166-L167
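The suggested change would make the drain invocation look roughly like the following. This is a sketch, not the shipped playbook command; the 300s timeout is an illustrative value, and the extra flags are taken from `oc adm drain --help`:

```shell
# Hypothetical drain invocation with a bounded wait; <node> is a placeholder.
# --force allows deletion of unmanaged pods; --timeout caps how long drain blocks.
oc adm drain <node> --force --ignore-daemonsets --delete-local-data --timeout=300s
```

With a timeout in place, the playbook can fail (or retry) instead of hanging indefinitely on a pod stuck in Terminating.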

Comment 4 Weihua Meng 2018-04-10 01:27:47 UTC

This is already shipped.

Docs about this:

The openshift_upgrade_nodes_drain_timeout variable allows you to specify the length of time to wait before giving up.
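A minimal inventory sketch showing where the variable would be set; the value of 600 is illustrative, not a recommended default:

```ini
# Hedged example: cap how long the upgrade playbook waits for a node drain.
[OSEv3:vars]
openshift_upgrade_nodes_drain_timeout=600
```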

Comment 7 errata-xmlrpc 2018-06-27 18:01:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

