Bug 1828003

Summary: Problem with deleting a node without draining it
Product: OpenShift Container Platform
Reporter: Daniel <dmaizel>
Component: Cloud Compute
Assignee: Alberto <agarcial>
Cloud Compute sub component: BareMetal Provider
QA Contact: Daniel <dmaizel>
Status: CLOSED ERRATA
Docs Contact:
Severity: medium
Priority: medium
CC: aos-bugs, beth.white, jokerman, lshilin, mgugino, minmli, mpatel, prabinov, rbartal, schoudha, scuppett, stbenjam, vsibirsk
Version: 4.4
Keywords: Triaged
Target Milestone: ---
Target Release: 4.5.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-07-13 17:31:39 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: ---
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host: ---
Cloudforms Team: ---
Target Upstream Version: ---
Embargoed:
Attachments:
worker description (Flags: none)

Description Daniel 2020-04-26 08:55:06 UTC
Description of problem:
After creating a pod on a worker and then deleting the worker without draining it, the pod is rescheduled to another worker but remains stuck in 'Terminating' state on the deleted worker. The deleted worker is not actually removed: it is still listed when checking the nodes, but it is unreachable.

If the node is then drained while the pod is in this state, the pod is moved to another worker but again gets stuck in a 'Terminating' state on the current node. Hence, the current node is not deleted properly.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a pod on a worker.

2. Delete the worker without draining it first (i.e., skipping `$ oc adm drain`):
$ oc annotate machine <worker-CONSUMER-name> machine.openshift.io/cluster-api-delete-machine=yes -n openshift-machine-api
$ oc delete bmh <worker-name> -n openshift-machine-api

3. The pod status is stuck on 'Terminating', so the worker node cannot be deleted. When trying to delete the worker node, its status becomes NotReady,SchedulingDisabled.

Actual results:
$ oc get nodes:
Status of deleted worker node: NotReady, SchedulingDisabled.  
$ oc get machine -o wide -n openshift-machine-api:
The node is still listed.

Expected results:
Node is successfully deleted and is no longer listed by any of:
$ oc get nodes
$ oc get bmh -n openshift-machine-api
$ oc get machine -o wide -n openshift-machine-api

Additional info:

Comment 1 Mrunal Patel 2020-04-26 15:09:46 UTC
Can you provide information on the crio and kubelet versions in use?

Comment 2 Mrunal Patel 2020-04-26 17:11:36 UTC
Also, we need the kubelet and crio logs as well if you have them.

Comment 6 Lubov 2020-04-27 06:58:51 UTC
Created attachment 1682043 [details]
worker description

Comment 7 MinLi 2020-04-27 07:28:03 UTC
I cannot reproduce the above steps; in step 2:

$ oc delete bmh ip-10-0-168-12.us-east-2.compute.internal -n openshift-machine-api
Error from server (NotFound): baremetalhosts.metal3.io "ip-10-0-168-12.us-east-2.compute.internal" not found

It seems that baremetalhosts don't exist!

Comment 8 Stephen Cuppett 2020-04-27 12:38:14 UTC
Setting target release to current development version (4.5) for investigation. Where fixes (if any) are required/requested for prior versions, cloned BZs will be created when appropriate.

Comment 10 Michael Gugino 2020-05-19 01:58:05 UTC
The issue here is that the cluster-api-provider-baremetal revendored github.com/openshift/cluster-api.  In 4.4 and newer, the relevant machine-api code has moved to github.com/openshift/machine-api-operator and should be vendored instead.  github.com/openshift/cluster-api is no longer in development.
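The fix described above amounts to swapping the vendored dependency in the provider's `go.mod`. A minimal sketch of what that change looks like, assuming module paths as named in the comment (the version strings are illustrative placeholders, not taken from the actual repository):

```
// go.mod for cluster-api-provider-baremetal (illustrative sketch)
module github.com/openshift/cluster-api-provider-baremetal

go 1.13

require (
    // Before (4.3 and earlier): machine-api types were vendored from
    // the now-unmaintained repo:
    //   github.com/openshift/cluster-api v0.0.0-<placeholder>

    // After (4.4+): vendor the machine-api code from
    // machine-api-operator instead:
    github.com/openshift/machine-api-operator v0.2.0 // version is a placeholder
)
```

After updating `go.mod`, the corresponding import paths in the provider source would change from `github.com/openshift/cluster-api/...` to `github.com/openshift/machine-api-operator/...`, followed by `go mod tidy` and re-vendoring with `go mod vendor`.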

Comment 11 Honza Pokorny 2020-05-26 16:38:41 UTC
*** Bug 1837505 has been marked as a duplicate of this bug. ***

Comment 12 Stephen Benjamin 2020-05-28 13:00:45 UTC
*** Bug 1840577 has been marked as a duplicate of this bug. ***

Comment 16 errata-xmlrpc 2020-07-13 17:31:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409