Description of problem:

kubeletconfig's description shows multiple lines for finalizers after upgrading from 4.4.8 -> 4.5.

Version-Release number of selected component (if applicable):

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-06-17-234944   True        False         36m     Cluster version is 4.5.0-0.nightly-2020-06-17-234944

How reproducible:

Always

Steps to Reproduce:
1. Create a kubeletconfig
2. Upgrade the cluster from 4.4.8 -> 4.5
3. $ oc get kubeletconfig -o yaml

Actual results:

2. The upgrade succeeds.
3. The kubeletconfig carries two UID-shaped finalizers:

$ oc get kubeletconfig -o yaml
apiVersion: v1
items:
- apiVersion: machineconfiguration.openshift.io/v1
  kind: KubeletConfig
  metadata:
    creationTimestamp: "2020-06-18T06:03:39Z"
    finalizers:
    - 92753447-3ee6-4db0-9cf4-5e2a65ad9b7b
    - fa95d191-8e90-4e2f-994b-8e9277c5d6d4
    generation: 1
    name: custom-kubelet
    resourceVersion: "150767"
    selfLink: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/custom-kubelet
    uid: a6d4d4d0-1645-469a-bd35-9c2a64a0f3f5
  spec:
    kubeletConfig:
      evictionHard:
        imagefs.available: 10%
        imagefs.inodesFree: 5%
        memory.available: 200Mi
        nodefs.available: 5%
        nodefs.inodesFree: 4%
      evictionPressureTransitionPeriod: 0s
      evictionSoft:
        imagefs.available: 15%
        imagefs.inodesFree: 10%
        memory.available: 500Mi
        nodefs.available: 10%
        nodefs.inodesFree: 5%
      evictionSoftGracePeriod:
        imagefs.available: 1m30s
        imagefs.inodesFree: 1m30s
        memory.available: 1m30s
        nodefs.available: 1m30s
        nodefs.inodesFree: 1m30s
      imageGCHighThresholdPercent: 80
      imageGCLowThresholdPercent: 75
      imageMinimumGCAge: 5m
      maxPods: 240
      podsPerCore: 80
    machineConfigPoolSelector:
      matchLabels:
        custom-kubelet: small-pods
  status:
    conditions:
    - lastTransitionTime: "2020-06-18T06:03:39Z"
      message: Success
      status: "True"
      type: Success
    - lastTransitionTime: "2020-06-18T06:14:34Z"
      message: Success
      status: "True"

Expected results:

After the upgrade, the kubeletconfig's description should not show multiple lines for finalizers.

Additional info:
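The stale entries in the output above are bare object UIDs rather than named references. A quick way to spot a KubeletConfig carrying UID-shaped finalizers after an upgrade is a minimal sketch like the following; the metadata dict mirrors the trimmed output above, and the helper name is made up for illustration:

```python
import re

# Matches UUID-shaped strings, e.g. "92753447-3ee6-4db0-9cf4-5e2a65ad9b7b"
UID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"
)

def stale_uid_finalizers(metadata: dict) -> list:
    """Return the finalizers that look like raw object UIDs."""
    return [f for f in metadata.get("finalizers", []) if UID_RE.match(f)]

# Metadata as reported on the upgraded 4.5 cluster (trimmed)
broken = {
    "name": "custom-kubelet",
    "finalizers": [
        "92753447-3ee6-4db0-9cf4-5e2a65ad9b7b",
        "fa95d191-8e90-4e2f-994b-8e9277c5d6d4",
    ],
}
print(stale_uid_finalizers(broken))  # both entries are UID-shaped
```

On a healthy object the list comes back empty, since a named finalizer such as 99-worker-generated-kubelet does not match the UUID pattern.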
Will look into it in the upcoming sprint.
Verified in version: 4.7.0-0.nightly-2020-10-17-034503

$ oc get kubeletconfig -o yaml
apiVersion: v1
items:
- apiVersion: machineconfiguration.openshift.io/v1
  kind: KubeletConfig
  metadata:
    creationTimestamp: "2020-10-21T09:35:54Z"
    finalizers:
    - 99-worker-generated-kubelet
    generation: 1
    ...
  spec:
    kubeletConfig:
      maxPods: 220
    machineConfigPoolSelector:
      matchLabels:
        custom-kubelet: max-pods
  status:
    conditions:
    - lastTransitionTime: "2020-10-21T09:35:54Z"
      message: Success
      status: "True"
      type: Success

$ oc get mc | grep "99-worker-generated-kubelet"
99-worker-generated-kubelet   638d40a6ab4757de236afb935ae1db7832f19e70   3.1.0   4m28s

$ oc get mc 99-worker-generated-kubelet -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  annotations:
    machineconfiguration.openshift.io/generated-by-controller-version: 638d40a6ab4757de236afb935ae1db7832f19e70
  creationTimestamp: "2020-10-21T09:35:54Z"
  generation: 1
  labels:
    machineconfiguration.openshift.io/role: worker
  ...
  name: 99-worker-generated-kubelet
  ownerReferences:
  - apiVersion: machineconfiguration.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: KubeletConfig
    name: custom-kubelet-performance
    uid: 504578ff-89c6-4bf8-8aed-dea2d330a625
  resourceVersion: "70728"
  selfLink: /apis/machineconfiguration.openshift.io/v1/machineconfigs/99-worker-generated-kubelet
  uid: eaabf244-3b2a-45a1-8be6-077b67782027
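In the fixed behaviour shown above, the KubeletConfig's single finalizer names the generated MachineConfig, and that MachineConfig's ownerReferences point back at a KubeletConfig controller. A hedged sketch of that cross-check (the helper is hypothetical; the sample dicts mirror the trimmed 4.7 output above):

```python
def finalizer_matches_generated_mc(kc_meta: dict, mc_meta: dict) -> bool:
    """True when the KubeletConfig's finalizer names the MachineConfig
    and the MachineConfig is controller-owned by a KubeletConfig."""
    names_mc = mc_meta.get("name") in kc_meta.get("finalizers", [])
    owned = any(
        ref.get("kind") == "KubeletConfig" and ref.get("controller")
        for ref in mc_meta.get("ownerReferences", [])
    )
    return names_mc and owned

# Trimmed metadata from the verification output
kc_meta = {"finalizers": ["99-worker-generated-kubelet"]}
mc_meta = {
    "name": "99-worker-generated-kubelet",
    "ownerReferences": [
        {"kind": "KubeletConfig", "controller": True,
         "name": "custom-kubelet-performance"},
    ],
}
print(finalizer_matches_generated_mc(kc_meta, mc_meta))  # True
```

The pre-fix 4.5 metadata fails this check, since its UID-shaped finalizers do not name any MachineConfig.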