Description of problem:
When deleting a machine object, the machine-controller first attempts to cordon and drain the node. Unfortunately, a bug in the library github.com/openshift/kubernetes-drain prevents the machine-controller from waiting for a successful drain: the library believes a pod has been successfully evicted or deleted before the eviction/deletion has actually taken place.

Version-Release number of selected component (if applicable):

How reproducible:
100%

Steps to Reproduce:
1. Delete a worker machine object on a 4.1 IPI cluster.

Actual results:
Drain reports complete before pods are actually evicted or deleted.

Expected results:
Drain should wait until pods are actually evicted or deleted to ensure services aren't interrupted.

Additional info:
Logs from the modified machine-controller: https://gist.github.com/michaelgugino/bb8b4129094c683681d87cb63a4e5875
Modified machine-controller code: https://github.com/openshift/cluster-api-provider-aws/pull/234
PR for kubernetes-drain: https://github.com/openshift/kubernetes-drain/pull/1
PR for kubernetes-drain (https://github.com/openshift/kubernetes-drain/pull/1) has merged. The fix now needs to be distributed to the machine-api libraries.
PR for aws created: https://github.com/openshift/cluster-api-provider-aws/pull/238
Waiting on QE for 4.2: https://bugzilla.redhat.com/show_bug.cgi?id=1729512
The fix is merged into 4.1.0-0.nightly-2019-07-24-051320; please check whether it can be verified.
Verified in 4.1.0-0.nightly-2019-07-24-213555 on AWS IPI. After deleting a machine, the machine-controller showed that the linked node was drained and all pods were successfully evicted:

I0725 02:23:17.315018       1 info.go:20] cordoned node "ip-10-0-151-178.ap-northeast-1.compute.internal"
I0725 02:23:17.382934       1 info.go:16] ignoring DaemonSet-managed pods: tuned-dzdk2, dns-default-zlmx5, node-ca-z48ck, machine-config-daemon-tdwkq, node-exporter-wm42s, multus-kqlfr, ovs-6br46, sdn-mmptb; deleting pods with local storage: alertmanager-main-0, prometheus-adapter-5bf57f848d-gs7q9, prometheus-k8s-1
I0725 02:23:17.468188       1 info.go:20] pod "alertmanager-main-0" removed (evicted)
I0725 02:23:25.492696       1 info.go:20] pod "router-default-5485b67db6-9hcc9" removed (evicted)
I0725 02:23:25.503699       1 info.go:20] pod "prometheus-k8s-1" removed (evicted)
I0725 02:23:25.508283       1 info.go:20] pod "prometheus-adapter-5bf57f848d-gs7q9" removed (evicted)
I0725 02:23:25.508330       1 info.go:20] drained node "ip-10-0-151-178.ap-northeast-1.compute.internal"
I0725 02:23:25.508346       1 controller.go:284] drain successful for machine "jhou1-6jvjf-worker-ap-northeast-1c-fsq2l"
I0725 02:23:25.508387       1 actuator.go:245] deleting machine
I0725 02:23:25.753564       1 utils.go:151] Cleaning up extraneous instance for machine: i-03ede39a95edaeb77, state: running, launchTime: 2019-07-25 01:57:38 +0000 UTC
I0725 02:23:25.753602       1 utils.go:155] Terminating i-03ede39a95edaeb77 instance
I0725 02:23:25.892025       1 controller.go:212] Deleting node "ip-10-0-151-178.ap-northeast-1.compute.internal" for machine "jhou1-6jvjf-worker-ap-northeast-1c-fsq2l"
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:1866