Bug 1729243 - machine-controller does not wait for nodes to drain
Summary: machine-controller does not wait for nodes to drain
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.1.z
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.1.z
Assignee: Michael Gugino
QA Contact: Jianwei Hou
URL:
Whiteboard: 4.1.8
Depends On: 1729512
Blocks:
 
Reported: 2019-07-11 16:41 UTC by Michael Gugino
Modified: 2019-07-31 02:45 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1729510 1729512
Environment:
Last Closed: 2019-07-31 02:44:55 UTC
Target Upstream Version:


Links
Red Hat Product Errata RHBA-2019:1866 (last updated 2019-07-31 02:45:02 UTC)
GitHub: openshift/cluster-api-provider-aws pull 238 (last updated 2019-07-12 18:38:06 UTC)

Description Michael Gugino 2019-07-11 16:41:35 UTC
Description of problem:
When deleting a machine object, the machine-controller first attempts to cordon and drain the node. Unfortunately, a bug in the library github.com/openshift/kubernetes-drain prevents the machine-controller from waiting for a successful drain: the library believes a pod has been successfully evicted or deleted before the eviction/deletion has actually taken place.
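For illustration, a minimal Go sketch of the kind of wait the library needs: after issuing an eviction or deletion, poll the API server until the pod object is actually gone (or has been replaced under the same name) before reporting it as evicted. This is a hedged sketch, not the actual patch; the function name, intervals, and client-go call shapes (Get took no context argument in client-go versions of that era) are illustrative.

package drain

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDelete polls until each pod is really gone, instead of
// returning as soon as the eviction/deletion request is accepted.
func waitForDelete(client kubernetes.Interface, pods []corev1.Pod, interval, timeout time.Duration) error {
	for i := range pods {
		pod := pods[i]
		err := wait.PollImmediate(interval, timeout, func() (bool, error) {
			p, err := client.CoreV1().Pods(pod.Namespace).Get(pod.Name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				// The pod object has been removed from the API server.
				return true, nil
			}
			if err != nil {
				return false, err
			}
			if p.ObjectMeta.UID != pod.ObjectMeta.UID {
				// Same name but a new UID: a controller recreated the
				// pod, so the original one is gone.
				return true, nil
			}
			// Still present (e.g. terminating); keep waiting.
			return false, nil
		})
		if err != nil {
			return err
		}
	}
	return nil
}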

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1.  Delete worker machine object on 4.1 cluster on IPI.
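For example (the machine name below is taken from the verification logs later in this bug; substitute one from `oc get machines`):

$ oc get machines -n openshift-machine-api
$ oc delete machine jhou1-6jvjf-worker-ap-northeast-1c-fsq2l -n openshift-machine-api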

Actual results:
Drain reports complete before pods are actually evicted or deleted.

Expected results:
We should wait to ensure services aren't interrupted.

Additional info:
Logs from modified machine controller: https://gist.github.com/michaelgugino/bb8b4129094c683681d87cb63a4e5875

Modified machine-controller code: https://github.com/openshift/cluster-api-provider-aws/pull/234

Comment 1 Michael Gugino 2019-07-11 16:51:38 UTC
PR for kubernetes-drain: https://github.com/openshift/kubernetes-drain/pull/1

Comment 2 Michael Gugino 2019-07-12 12:52:25 UTC
PR for kubernetes-drain merged: https://github.com/openshift/kubernetes-drain/pull/1

Need to distribute the fix to the machine-api provider libraries next.
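As a sketch of what that distribution involves, assuming the provider repositories vendored the library with dep at the time (the tooling and the revision placeholder are assumptions, not taken from this bug), each provider would pin github.com/openshift/kubernetes-drain to a revision containing the merged fix in its Gopkg.toml:

[[override]]
  name = "github.com/openshift/kubernetes-drain"
  revision = "<commit containing the fix>"

and then run `dep ensure` to refresh the vendored copy.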

Comment 3 Michael Gugino 2019-07-12 18:37:21 UTC
PR for aws created: https://github.com/openshift/cluster-api-provider-aws/pull/238

Comment 4 Michael Gugino 2019-07-12 18:39:02 UTC
Waiting on QE for 4.2: https://bugzilla.redhat.com/show_bug.cgi?id=1729512

Comment 5 Wei Sun 2019-07-24 07:06:22 UTC
The fix is merged into 4.1.0-0.nightly-2019-07-24-051320. Please check whether it can be verified.

Comment 7 Jianwei Hou 2019-07-25 02:31:38 UTC
Verified in 4.1.0-0.nightly-2019-07-24-213555 on AWS IPI.

After deleting a machine, the machine-controller showed that the node it was linked to was drained and that all pods were successfully evicted.


I0725 02:23:17.315018       1 info.go:20] cordoned node "ip-10-0-151-178.ap-northeast-1.compute.internal"
I0725 02:23:17.382934       1 info.go:16] ignoring DaemonSet-managed pods: tuned-dzdk2, dns-default-zlmx5, node-ca-z48ck, machine-config-daemon-tdwkq, node-exporter-wm42s, multus-kqlfr, ovs-6br46, sdn-mmptb; deleting pods with local storage: alertmanager-main-0, prometheus-adapter-5bf57f848d-gs7q9, prometheus-k8s-1
I0725 02:23:17.468188       1 info.go:20] pod "alertmanager-main-0" removed (evicted)
I0725 02:23:25.492696       1 info.go:20] pod "router-default-5485b67db6-9hcc9" removed (evicted)
I0725 02:23:25.503699       1 info.go:20] pod "prometheus-k8s-1" removed (evicted)
I0725 02:23:25.508283       1 info.go:20] pod "prometheus-adapter-5bf57f848d-gs7q9" removed (evicted)
I0725 02:23:25.508330       1 info.go:20] drained node "ip-10-0-151-178.ap-northeast-1.compute.internal"
I0725 02:23:25.508346       1 controller.go:284] drain successful for machine "jhou1-6jvjf-worker-ap-northeast-1c-fsq2l"
I0725 02:23:25.508387       1 actuator.go:245] deleting machine
I0725 02:23:25.753564       1 utils.go:151] Cleaning up extraneous instance for machine: i-03ede39a95edaeb77, state: running, launchTime: 2019-07-25 01:57:38 +0000 UTC
I0725 02:23:25.753602       1 utils.go:155] Terminating i-03ede39a95edaeb77 instance
I0725 02:23:25.892025       1 controller.go:212] Deleting node "ip-10-0-151-178.ap-northeast-1.compute.internal" for machine "jhou1-6jvjf-worker-ap-northeast-1c-fsq2l"
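
For reference, one way to watch these drain messages while deleting a machine is to tail the machine-controller container logs (the deployment and container names below are the usual machine-api ones on 4.x clusters, stated as an assumption rather than taken from this bug):

$ oc logs -f deployment/machine-api-controllers -c machine-controller -n openshift-machine-api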

Comment 9 errata-xmlrpc 2019-07-31 02:44:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1866

