Bug 1729243
Summary: | machine-controller does not wait for nodes to drain
---|---
Product: | OpenShift Container Platform
Component: | Cloud Compute
Reporter: | Michael Gugino <mgugino>
Assignee: | Michael Gugino <mgugino>
QA Contact: | Jianwei Hou <jhou>
Docs Contact: |
Status: | CLOSED ERRATA
Severity: | unspecified
Priority: | unspecified
Version: | 4.1.z
CC: | agarcial, eparis, sponnaga, wsun
Target Milestone: | ---
Target Release: | 4.1.z
Hardware: | Unspecified
OS: | Unspecified
Whiteboard: | 4.1.8
Fixed In Version: |
Doc Type: | If docs needed, set a value
Doc Text: |
Story Points: | ---
Clone Of: |
: | 1729510 1729512 (view as bug list)
Environment: |
Last Closed: | 2019-07-31 02:44:55 UTC
Type: | Bug
Regression: | ---
Mount Type: | ---
Documentation: | ---
CRM: |
Verified Versions: |
Category: | ---
oVirt Team: | ---
RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | ---
Target Upstream Version: |
Embargoed: |
Bug Depends On: | 1729512
Bug Blocks: |
Description
Michael Gugino 2019-07-11 16:41:35 UTC
PR for kubernetes-drain: https://github.com/openshift/kubernetes-drain/pull/1

PR for kubernetes-drain merged. Need to distribute the fix to the machine-api libraries next.

PR for aws created: https://github.com/openshift/cluster-api-provider-aws/pull/238

Waiting on QE for 4.2: https://bugzilla.redhat.com/show_bug.cgi?id=1729512

The fix is merged into 4.1.0-0.nightly-2019-07-24-051320; please check whether we can verify it.

Verified in 4.1.0-0.nightly-2019-07-24-213555 on AWS IPI. After deleting a machine, the machine-controller showed that the node it was linked to was drained and all pods were successfully evicted:

```
I0725 02:23:17.315018 1 info.go:20] cordoned node "ip-10-0-151-178.ap-northeast-1.compute.internal"
I0725 02:23:17.382934 1 info.go:16] ignoring DaemonSet-managed pods: tuned-dzdk2, dns-default-zlmx5, node-ca-z48ck, machine-config-daemon-tdwkq, node-exporter-wm42s, multus-kqlfr, ovs-6br46, sdn-mmptb; deleting pods with local storage: alertmanager-main-0, prometheus-adapter-5bf57f848d-gs7q9, prometheus-k8s-1
I0725 02:23:17.468188 1 info.go:20] pod "alertmanager-main-0" removed (evicted)
I0725 02:23:25.492696 1 info.go:20] pod "router-default-5485b67db6-9hcc9" removed (evicted)
I0725 02:23:25.503699 1 info.go:20] pod "prometheus-k8s-1" removed (evicted)
I0725 02:23:25.508283 1 info.go:20] pod "prometheus-adapter-5bf57f848d-gs7q9" removed (evicted)
I0725 02:23:25.508330 1 info.go:20] drained node "ip-10-0-151-178.ap-northeast-1.compute.internal"
I0725 02:23:25.508346 1 controller.go:284] drain successful for machine "jhou1-6jvjf-worker-ap-northeast-1c-fsq2l"
I0725 02:23:25.508387 1 actuator.go:245] deleting machine
I0725 02:23:25.753564 1 utils.go:151] Cleaning up extraneous instance for machine: i-03ede39a95edaeb77, state: running, launchTime: 2019-07-25 01:57:38 +0000 UTC
I0725 02:23:25.753602 1 utils.go:155] Terminating i-03ede39a95edaeb77 instance
I0725 02:23:25.892025 1 controller.go:212] Deleting node "ip-10-0-151-178.ap-northeast-1.compute.internal" for machine "jhou1-6jvjf-worker-ap-northeast-1c-fsq2l"
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1866
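The drain log above shows the standard pod-filtering behavior during a node drain: DaemonSet-managed pods are skipped (they would be recreated on the node immediately), and pods using local storage are evicted only because the drain opts into deleting local data. The sketch below is a hypothetical illustration of that filtering step, not the actual openshift/kubernetes-drain code; the `partition_pods` function and the dict-based pod shape are invented for the example.

```python
# Hypothetical sketch of the pod-filtering step visible in the drain log:
# DaemonSet-managed pods are ignored, and pods with local (emptyDir) storage
# are evicted only when the caller opts in, analogous to
# `oc adm drain --ignore-daemonsets --delete-local-data`.
# `partition_pods` and the pod dict shape are illustrative, not a real API.

def partition_pods(pods, delete_local_data=True):
    """Split a node's pods into (to_evict, ignored_daemonset, local_storage).

    Each pod is a dict like:
      {"name": str, "owner_kind": str, "has_local_storage": bool}
    """
    ignored, local, evict = [], [], []
    for pod in pods:
        if pod.get("owner_kind") == "DaemonSet":
            # The DaemonSet controller would recreate the pod right away,
            # so drain skips it rather than fighting the controller.
            ignored.append(pod["name"])
            continue
        if pod.get("has_local_storage"):
            if not delete_local_data:
                # Without the opt-in, draining would silently destroy
                # the pod's emptyDir data, so refuse instead.
                raise ValueError(f"pod {pod['name']} has local storage; "
                                 "refusing to drain without delete_local_data")
            local.append(pod["name"])
        evict.append(pod["name"])
    return evict, ignored, local


# A few pods named after entries in the log above, with invented metadata.
pods = [
    {"name": "tuned-dzdk2", "owner_kind": "DaemonSet", "has_local_storage": False},
    {"name": "alertmanager-main-0", "owner_kind": "StatefulSet", "has_local_storage": True},
    {"name": "router-default-5485b67db6-9hcc9", "owner_kind": "ReplicaSet", "has_local_storage": False},
]
evict, ignored, local = partition_pods(pods)
print("ignoring DaemonSet-managed pods:", ", ".join(ignored))
print("deleting pods with local storage:", ", ".join(local))
print("evicting:", ", ".join(evict))
```

In the real controller, "evicting" means creating an Eviction for each selected pod and waiting for the pods to terminate before the machine actuator deletes the instance, which is exactly the ordering the fix in this bug enforces.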