Bug 1878880 - Misleading log message when deleting Node
Summary: Misleading log message when deleting Node
Keywords:
Status: POST
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 4.6.0
Assignee: Zane Bitter
QA Contact: sunzhaohua
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-14 18:31 UTC by Zane Bitter
Modified: 2020-09-22 16:59 UTC (History)
1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Github openshift cluster-api-provider-aws pull 353 None open Bug 1878880: re-vendor machine-api-operator at e0db6b65 2020-09-22 16:48:48 UTC
Github openshift cluster-api-provider-azure pull 167 None open Bug 1878880: re-vendor machine-api-operator at e0db6b65 2020-09-22 16:53:59 UTC
Github openshift cluster-api-provider-baremetal pull 116 None open Bug 1878880: re-vendor machine-api-operator at e0db6b65 2020-09-22 16:55:39 UTC
Github openshift cluster-api-provider-gcp pull 121 None open Bug 1878880: re-vendor machine-api-operator at e0db6b65 2020-09-22 16:57:15 UTC
Github openshift cluster-api-provider-openstack pull 124 None open Bug 1878880: re-vendor machine-api-operator at e0db6b65 2020-09-22 16:59:02 UTC
Github openshift machine-api-operator pull 681 None closed Bug 1878880: Fix confusing log messages about deleting Node 2020-09-22 02:43:18 UTC

Description Zane Bitter 2020-09-14 18:31:18 UTC
When the Machine controller deletes a Node, the log message is confusing because the format arguments are passed in the wrong order. Log messages should be in the form:


  <Machine name>: ...

but when deleting a Node you see one in the form:

  <Node name>: deleting node <Machine name> for machine

This is independent of the platform in use.
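The swapped-argument pattern described above can be sketched as follows. This is a minimal illustration, not the actual controller code; the function names and the exact format string are assumptions for demonstration purposes only:

```go
package main

import "fmt"

// buggyLog reproduces the reported behavior: the format string expects the
// Machine name in the leading "<name>:" slot, but the Node name is passed
// there instead, so the two names come out swapped.
func buggyLog(machineName, nodeName string) string {
	return fmt.Sprintf("%s: deleting node %q for machine", nodeName, machineName)
}

// fixedLog passes the arguments in the intended order: the Machine name
// leads, matching the "<Machine name>: ..." convention used elsewhere.
func fixedLog(machineName, nodeName string) string {
	return fmt.Sprintf("%s: deleting node %q for machine", machineName, nodeName)
}

func main() {
	fmt.Println(buggyLog("worker-c-pnmw7", "worker-c-pnmw7.internal"))
	fmt.Println(fixedLog("worker-c-pnmw7", "worker-c-pnmw7.internal"))
}
```

With the buggy ordering the Node's (longer, fully qualified) name appears where the Machine name belongs, which is exactly the confusing output quoted in Comment 2.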

Comment 2 sunzhaohua 2020-09-18 09:31:18 UTC
Verification failed; the message is still in the form <Node name>: deleting node <Machine name> for machine
clusterversion: 4.6.0-0.nightly-2020-09-18-002612

$ oc get node
NAME                                                       STATUS   ROLES    AGE    VERSION
zhsungcp918-cd2rn-master-0.c.openshift-qe.internal         Ready    master   137m   v1.19.0+b4ffb45
zhsungcp918-cd2rn-master-1.c.openshift-qe.internal         Ready    master   137m   v1.19.0+b4ffb45
zhsungcp918-cd2rn-master-2.c.openshift-qe.internal         Ready    master   138m   v1.19.0+b4ffb45
zhsungcp918-cd2rn-worker-a-zkgff.c.openshift-qe.internal   Ready    worker   128m   v1.19.0+b4ffb45
zhsungcp918-cd2rn-worker-b-r64rd.c.openshift-qe.internal   Ready    worker   128m   v1.19.0+b4ffb45
zhsungcp918-cd2rn-worker-c-pnmw7.c.openshift-qe.internal   Ready    worker   128m   v1.19.0+b4ffb45

$ oc get machine
NAME                               PHASE     TYPE            REGION        ZONE            AGE
zhsungcp918-cd2rn-master-0         Running   n1-standard-4   us-central1   us-central1-a   142m
zhsungcp918-cd2rn-master-1         Running   n1-standard-4   us-central1   us-central1-b   142m
zhsungcp918-cd2rn-master-2         Running   n1-standard-4   us-central1   us-central1-c   142m
zhsungcp918-cd2rn-worker-a-zkgff   Running   n1-standard-4   us-central1   us-central1-a   131m
zhsungcp918-cd2rn-worker-b-r64rd   Running   n1-standard-4   us-central1   us-central1-b   131m
zhsungcp918-cd2rn-worker-c-pnmw7   Running   n1-standard-4   us-central1   us-central1-c   131m

$ oc delete machine zhsungcp918-cd2rn-worker-c-pnmw7
machine.machine.openshift.io "zhsungcp918-cd2rn-worker-c-pnmw7" deleted

I0918 09:04:00.936404       1 controller.go:247] zhsungcp918-cd2rn-worker-c-pnmw7.c.openshift-qe.internal: deleting node "zhsungcp918-cd2rn-worker-c-pnmw7" for machine

Comment 3 Zane Bitter 2020-09-18 14:34:42 UTC
The Machine controller is vendored into each cluster-api-provider's tree, so this is currently only fixed for vSphere (which lives in-tree in the machine-api-operator). The other providers will pick up the fix when they update their vendored packages.

Comment 4 Michael McCune 2020-09-22 16:43:45 UTC
I am working on proposing pull requests for the other providers now. I should have these posted today.

