Description of problem:
When checking a KubeletConfig's description, it shows duplicate lines for finalizers.

Version-Release number of selected component (if applicable):
4.3.0-0.nightly-2019-11-18-062034

How reproducible:
Always

Steps to Reproduce:
1. $ oc label mcp worker custom-kubelet=max-pods
2. $ oc create -f custom-kubelet-maxpods.yaml

The yaml file is:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: custom-kubelet
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: max-pods
  kubeletConfig:
    maxPods: 220

3. $ oc get kubeletconfig custom-kubelet -o yaml

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  creationTimestamp: "2019-11-19T08:02:07Z"
  finalizers:
  - 99-worker-935d0eac-d152-4f2a-a754-7f8086c591a1-kubelet
  - 99-worker-935d0eac-d152-4f2a-a754-7f8086c591a1-kubelet
  - 99-worker-935d0eac-d152-4f2a-a754-7f8086c591a1-kubelet
  - 99-worker-935d0eac-d152-4f2a-a754-7f8086c591a1-kubelet
  - 99-worker-935d0eac-d152-4f2a-a754-7f8086c591a1-kubelet
  generation: 1
  name: custom-kubelet
  resourceVersion: "127181"
  selfLink: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/custom-kubelet
  uid: a8fbbcbd-3d88-46be-a9bd-0b2965725865
spec:
  kubeletConfig:
    maxPods: 220
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: max-pods
status:
  conditions:
  - lastTransitionTime: "2019-11-19T08:02:07Z"
    message: Success
    status: "True"
    type: Success
  - lastTransitionTime: "2019-11-19T08:04:58Z"
    message: Success
    status: "True"
    type: Success
  - lastTransitionTime: "2019-11-19T08:25:59Z"
    message: Success
    status: "True"
    type: Success
  - lastTransitionTime: "2019-11-19T08:28:06Z"
    message: Success
    status: "True"
    type: Success
  - lastTransitionTime: "2019-11-19T08:32:36Z"
    message: Success
    status: "True"
    type: Success

Actual results:
3. Duplicate entries are shown for finalizers and for the status conditions.

Expected results:
3. No duplicate entries are shown.

Additional info:
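A quick way to spot the duplication without dumping the whole object is to look at the finalizers list only, for example with jsonpath or jq (a sketch only; jq is assumed to be installed, and custom-kubelet is the example object name from the steps above):

# Print just the finalizers array from the KubeletConfig
$ oc get kubeletconfig custom-kubelet -o jsonpath='{.metadata.finalizers}'

# Compare total vs. unique entries; on a healthy object the two numbers match
$ oc get kubeletconfig custom-kubelet -o json | jq '.metadata.finalizers | length'
$ oc get kubeletconfig custom-kubelet -o json | jq '.metadata.finalizers | unique | length'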
Verified! Version: 4.3.0-0.nightly-2019-12-05-001549
I think this bug has not been fixed. When I created a kubeletconfig and waited long enough (e.g. several hours, during which the cluster may go through an upgrade or temporary unavailability), the bug reproduced. Version: 4.3.0-0.nightly-2019-12-09-005356
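For anyone retesting over a longer window, one possible way to catch the duplication when it appears is a simple polling loop like the sketch below (the 10-minute interval, the use of jq, and the object name custom-kubelet are my own assumptions, not part of the original report):

# Poll the KubeletConfig and report whenever duplicate finalizers show up
while true; do
  count=$(oc get kubeletconfig custom-kubelet -o json | jq '.metadata.finalizers | length')
  uniq=$(oc get kubeletconfig custom-kubelet -o json | jq '.metadata.finalizers | unique | length')
  if [ "$count" != "$uniq" ]; then
    echo "$(date -u) duplicate finalizers detected: $count entries, $uniq unique"
  fi
  sleep 600   # wait 10 minutes between checks; the interval is arbitrary
done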
You will need to test against 4.4 or master, and then we will need to backport this to 4.3.
This bug didn't reproduce after observing the cluster for more than 48 hours. Verified! Version: 4.4.0-0.nightly-2019-12-13-005728
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0062