Bug 1563673 - [RFE] Add timeout when draining a node for update
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Upgrade
Version: 3.6.1
Hardware/OS: Unspecified / Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.9.0
Assigned To: Scott Dodson
QA Contact: Weihua Meng
Depends On:
Blocks: 1542093, 1573478
Reported: 2018-04-04 08:23 EDT by Thom Carlin
Modified: 2018-10-31 06:15 EDT
CC: 6 users

Doc Type: No Doc Update
Clones: 1573478
Last Closed: 2018-06-27 14:01:34 EDT
Type: Bug


External Trackers:
Red Hat Product Errata RHSA-2018:2013 (Last Updated: 2018-06-27 14:02 EDT)

Description Thom Carlin 2018-04-04 08:23:51 EDT
Description of problem:

During an upgrade, Ansible may hang indefinitely while waiting for a node to finish draining

Version-Release number of the following components:
rpm -q openshift-ansible
openshift-ansible-3.6.173.0.96-1.git.0.2954b4a.el7.noarch
rpm -q ansible
ansible-2.4.2.0-2.el7.noarch
ansible --version
ansible 2.4.2.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, May  3 2017, 07:55:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-14)]

How reproducible:

100% for this particular configuration

Steps to Reproduce:
1. ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_6/upgrade_nodes.yml

Actual results:

Please include the entire output from the last TASK line through the end of output if an error is generated

No longer available

Hangs at "Drain node" step
* I believe this is the "Drain and upgrade nodes" task in /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/upgrade_nodes.yml, but the RFE should cover all cases (e.g. docker_upgrade, upgrade_control_plane, and upgrade_nodes); a rough sketch of such a task follows.
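For illustration only, a drain task with a bounded wait could look roughly like the sketch below. This is not the shipped openshift-ansible code; the node name expression, the oo_first_master group, and the default of 0 are assumptions.

# Hypothetical sketch; variable and group names are assumptions,
# not the actual openshift-ansible task.
- name: Drain node in preparation for upgrade
  command: >
    oc adm drain {{ openshift.node.nodename }}
    --force --delete-local-data --ignore-daemonsets
    --timeout={{ openshift_upgrade_nodes_drain_timeout | default(0) }}s
  delegate_to: "{{ groups.oo_first_master.0 }}"

Since oc adm drain treats --timeout=0s as "wait forever", defaulting the variable to 0 would preserve the current behavior unless a user opts in to a bound.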
Expected results:

Node upgrade without intervention

Additional info:
Please attach logs from ansible-playbook with the -vvv flag

Had to use the workaround from https://bugzilla.redhat.com/show_bug.cgi?id=1562961#c2 twice (once for node2 and again for node4)

The node2 case was due to a logging pod hanging in Terminating
The node4 case was due to a CNS pod (gluster-s3) hanging in Terminating
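The referenced comment isn't quoted here; assuming it describes the usual forced-deletion workaround for a pod stuck in Terminating, it would look something like this (pod name and namespace are placeholders):

oc delete pod <pod-name> -n <namespace> --grace-period=0 --force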
Comment 1 Juan Vallejo 2018-04-04 11:09:13 EDT
Per the help output of `drain`[1], `--force` must be used to allow deletion to proceed when some pods are not managed by a controller.

@Scott, the playbook could be updated to run drain with --force and a --timeout

1. https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/drain.go#L166-L167
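For illustration, a drain invocation combining both flags might look like this (the node name is a placeholder, and the 5-minute timeout is an arbitrary choice):

# Evict unmanaged pods too, and give up after 5 minutes instead of hanging
oc adm drain node2.example.com --force --timeout=300s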
Comment 4 Weihua Meng 2018-04-09 21:27:47 EDT
Fixed in openshift-ansible-3.9.14-1.git.3.c62bc34.el7.noarch.rpm, which has already shipped.

Docs about this:
https://docs.openshift.com/container-platform/3.9/upgrading/automated_upgrades.html#customizing-node-upgrades

The openshift_upgrade_nodes_drain_timeout variable allows you to specify how long to wait for a node to drain before giving up.
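For example, the variable can be set in the inventory or passed with -e; the 600 below is an arbitrary illustration, assuming the value is interpreted as seconds as the docs' examples suggest:

# Inventory: bound node drains to 10 minutes during upgrades
[OSEv3:vars]
openshift_upgrade_nodes_drain_timeout=600

# Or per-run on the command line:
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade_nodes.yml \
  -e openshift_upgrade_nodes_drain_timeout=600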
Comment 7 errata-xmlrpc 2018-06-27 14:01:34 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2013
