Description of problem:
Once you create a KubeletConfig with a long enough name, the kubelet-config controller fails to add a finalizer to it, with the following error messages:

I0105 11:51:13.216535 1 kubelet_config_controller.go:285] Error syncing kubeletconfig worker-performance: KubeletConfig.machineconfiguration.openshift.io "worker-performance" is invalid: metadata.finalizers: Invalid value: "99-worker-performance-d45e4219-ca42-486f-a829-55d979867ff3-kubelet": name part must be no more than 63 characters
E0105 11:52:35.187060 1 kubelet_config_controller.go:290] KubeletConfig.machineconfiguration.openshift.io "worker-performance" is invalid: metadata.finalizers: Invalid value: "99-worker-performance-d45e4219-ca42-486f-a829-55d979867ff3-kubelet": name part must be no more than 63 characters
I0105 11:52:35.187087 1 kubelet_config_controller.go:291] Dropping kubeletconfig "worker-performance" out of the queue: KubeletConfig.machineconfiguration.openshift.io "worker-performance" is invalid: metadata.finalizers: Invalid value: "99-worker-performance-d45e4219-ca42-486f-a829-55d979867ff3-kubelet": name part must be no more than 63 characters

Version-Release number of selected component (if applicable):
# oc version
Client Version: 4.4.0-0.ci-2020-01-02-031352
Server Version: 4.4.0-0.ci-2020-01-02-031352
Kubernetes Version: v1.17.0

How reproducible:
Always

Steps to Reproduce:
1. Create a custom KubeletConfig with a long enough name (in our case it was worker-performance)

Actual results:
The kubelet-config controller fails to add a finalizer to the KubeletConfig, and as a result it cannot reconcile the object.

Expected results:
The controller should succeed in adding the finalizer and reconcile the object.

Additional info:
Workaround: use a shorter name for the KubeletConfig object.
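The rejected finalizer in the logs above is built from the generated MachineConfig name ("99-<kubeletconfig name>-<uid>-kubelet"); counting its characters shows why the apiserver's 63-character name-part validation rejects it:

```shell
# Finalizer value copied from the error message above; the controller
# appends a 36-character UID plus the "-kubelet" suffix to the name,
# so any moderately long KubeletConfig name pushes it past 63 characters.
finalizer="99-worker-performance-d45e4219-ca42-486f-a829-55d979867ff3-kubelet"
echo "${#finalizer}"   # prints 66, over the 63-character limit
```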
Failed test: the finalizer still fails to get added, even with a short name like "custom-kubelet".

Error:
  message: 'could not add finalizers to KubeletConfig: could not add finalizers

ENV:
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.4.0-0.nightly-2020-02-03-005212   True        False         82m     Cluster version is 4.4.0-0.nightly-2020-02-03-005212

$ oc version
Client Version: unknown
Server Version: 4.4.0-0.nightly-2020-02-03-005212
Kubernetes Version: v1.17.1

$ rpm -qa | grep openshift-client
openshift-clients-4.4.0-202001310654.git.1.e04481f.el7.x86_64

Steps:
1. oc label machineconfigpool worker custom-kubelet=max-pods
2. oc create -f custom-kubelet-maxpods.yaml [1]
3. $ oc get kubeletconfig custom-kubelet -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  creationTimestamp: "2020-02-03T07:38:36Z"
  finalizers:
  - 74306e70-142c-4385-8ce0-b52303fc231f
  generation: 1
  name: custom-kubelet
  resourceVersion: "46742"
  selfLink: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/custom-kubelet
  uid: 397e9ff3-f8df-44a6-b6ae-6ec32d58872f
spec:
  kubeletConfig:
    maxPods: 220
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: max-pods
status:
  conditions:
  - lastTransitionTime: "2020-02-03T07:38:36Z"
    message: 'could not add finalizers to KubeletConfig: could not add finalizers
      to KubeletConfig: %v'
    status: "False"
    type: Failure
  - lastTransitionTime: "2020-02-03T07:38:36Z"
    message: Success
    status: "True"
    type: Success

[1]
$ cat custom-kubelet-maxpods.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: custom-kubelet
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: max-pods
  kubeletConfig:
    maxPods: 220
Verified with version 4.4.0-0.nightly-2020-02-23-191320.

Create a custom KubeletConfig with a long enough name (e.g. custom-kubelet-performance):

$ oc get kubeletconfig custom-kubelet-performance -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  creationTimestamp: "2020-02-26T02:44:15Z"
  finalizers:
  - 710a9c09-0cf6-4954-acc0-e721454f8fa4
  generation: 1
  name: custom-kubelet-performance
  resourceVersion: "777908"
  selfLink: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/custom-kubelet-performance
  uid: d27b9fd3-6dbb-4114-9d87-41af46737cf0
spec:
  kubeletConfig:
    maxPods: 220
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: max-pods
status:
  conditions:
  - lastTransitionTime: "2020-02-26T02:44:15Z"
    message: Success
    status: "True"
    type: Success
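The finalizer in the verified output above is now a bare UUID rather than the generated "99-<name>-<uid>-kubelet" MachineConfig name. A UUID is always 36 characters, so the finalizer fits the 63-character name-part limit no matter how long the KubeletConfig name is:

```shell
# Finalizer copied from the verified `oc get kubeletconfig` output above;
# a UUID is fixed at 36 characters (32 hex digits plus 4 hyphens),
# well under the 63-character limit that the long generated name exceeded.
finalizer="710a9c09-0cf6-4954-acc0-e721454f8fa4"
echo "${#finalizer}"   # prints 36
```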
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0581