Bug 1563673 - [RFE] Add timeout when draining a node for update
Summary: [RFE] Add timeout when draining a node for update
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cluster Version Operator
Version: 3.6.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.9.0
Assignee: Scott Dodson
QA Contact: Weihua Meng
URL:
Whiteboard:
Depends On:
Blocks: 1724792 1573478
 
Reported: 2018-04-04 12:23 UTC by Thom Carlin
Modified: 2021-12-10 15:54 UTC
CC: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1573478
Environment:
Last Closed: 2018-06-27 18:01:34 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
Red Hat Bugzilla 1562961 (Priority: medium, Status: CLOSED): Unable to force delete of zombie resources (including projects). Last updated: 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHSA-2018:2013. Last updated: 2018-06-27 18:02:09 UTC

Internal Links: 1562961

Description Thom Carlin 2018-04-04 12:23:51 UTC
Description of problem:

During an update, Ansible may hang while waiting for a node to drain.

Version-Release number of the following components:
rpm -q openshift-ansible
openshift-ansible-3.6.173.0.96-1.git.0.2954b4a.el7.noarch
rpm -q ansible
ansible-2.4.2.0-2.el7.noarch
ansible --version
ansible 2.4.2.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, May  3 2017, 07:55:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-14)]

How reproducible:

100% for a particular configuration

Steps to Reproduce:
1. ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml

Actual results:

Please include the entire output from the last TASK line through the end of output if an error is generated

No longer available

Hangs at the "Drain node" step.
* I believe this is the "Drain and upgrade nodes" task in /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml, but the RFE should cover all cases (e.g. docker_upgrade, upgrade_control_plane, and upgrade_nodes).

Expected results:

Node upgrade completes without manual intervention

Additional info:
Please attach logs from ansible-playbook with the -vvv flag

Had to use the workaround from https://bugzilla.redhat.com/show_bug.cgi?id=1562961#c2 twice (once for node2 and again for node4).

The node2 case was due to a logging pod hanging in Terminating.
The node4 case was due to a CNS pod (gluster-s3) hanging in Terminating.
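
For context, the workaround referenced above amounts to force-deleting the pods stuck in Terminating so the drain can proceed. A minimal sketch (pod name and namespace are hypothetical; see the linked comment for the exact procedure):

# Find pods stuck in Terminating across the cluster
oc get pods --all-namespaces | grep Terminating

# Force-delete a stuck pod so the drain can continue
# (hypothetical pod/namespace; this bypasses graceful termination)
oc delete pod logging-fluentd-x1234 -n logging --grace-period=0 --force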

Comment 1 Juan Vallejo 2018-04-04 15:09:13 UTC
Per the help output of `drain` [1], `--force` must be used to allow deletion to proceed in a few cases involving unmanaged pods.

@Scott, the playbook could be updated to run drain with --force and a --timeout.

1. https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/drain.go#L166-L167
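
For illustration, a drain invocation combining both flags might look like the following (hypothetical node name; --timeout support depends on the oc client version):

# Evict pods, continue past unmanaged pods, and give up after 5 minutes
oc adm drain node2.example.com --force --ignore-daemonsets --timeout=300s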

Comment 4 Weihua Meng 2018-04-10 01:27:47 UTC
Fixed.
openshift-ansible-3.9.14-1.git.3.c62bc34.el7.noarch.rpm

This is already shipped.

Docs about this:
https://docs.openshift.com/container-platform/3.9/upgrading/automated_upgrades.html#customizing-node-upgrades

The openshift_upgrade_nodes_drain_timeout variable allows you to specify how long to wait for a node to drain before giving up.
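
For example, the timeout can be passed as an extra variable when running the upgrade playbook (a sketch; the value is assumed to be in seconds, and the v3_9 playbook path is assumed to mirror the v3_6 path shown earlier):

# Give each node up to 10 minutes to drain before the play fails
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade_nodes.yml \
  -e openshift_upgrade_nodes_drain_timeout=600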

Comment 7 errata-xmlrpc 2018-06-27 18:01:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2013

