Description of problem:
Deleting a KubeletConfig takes too long to return (nearly 10 minutes).

Version-Release number of selected component (if applicable):
4.0.0-0.nightly-2019-03-04-234414
oc v4.0.0-0.182.0

How reproducible:
Always

Steps to Reproduce:
1. Edit the machineconfigpool worker and add the label "custom-kubelet: max-pods-worker":
# oc edit machineconfigpool worker
For example:
metadata:
  creationTimestamp: 2019-03-07T07:10:04Z
  generation: 1
  labels:
    custom-kubelet: max-pods-worker   # add this line
...
2. Create a KubeletConfig CR:
# oc create -f worker-kube-config.yaml
The yaml file is:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods-worker
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: max-pods-worker
  kubeletConfig:
    maxPods: 249
3. Run "oc get machineconfig"; a new machineconfig named "99-worker-74b4fa0b-4091-11e9-96ec-067bf345463e-kubelet" is generated.
4. Log in to every worker node and check that /etc/kubernetes/kubelet.conf contains the kubelet configuration "maxPods: 249".
5. Run "oc delete kubeletconfig set-max-pods-worker".

Actual results:
5. The message 'kubeletconfig.machineconfiguration.openshift.io "set-max-pods-worker" deleted' is shown, but the command takes nearly 10 minutes to return.

Expected results:
The message 'kubeletconfig.machineconfiguration.openshift.io "set-max-pods-worker" deleted' is shown and the command returns immediately.

Additional info:
While step 5 is running, checking the machineconfigs from another terminal shows that 99-worker-74b4fa0b-4091-11e9-96ec-067bf345463e-kubelet disappears immediately.

# oc logs machine-config-controller-8cc654f5d-rlckl -n openshift-machine-config-operator | grep set-max-pods-worker

Error log:
I0307 09:45:50.893105 1 kubelet_config_controller.go:412] Applied KubeletConfig set-max-pods-worker on MachineConfigPool worker
I0307 09:45:51.048254 1 kubelet_config_controller.go:245] Error syncing kubeletconfig set-max-pods-worker: Operation cannot be fulfilled on kubeletconfigs.machineconfiguration.openshift.io "set-max-pods-worker": the object has been modified; please apply your changes to the latest version and try again
I0307 09:45:51.698293 1 kubelet_config_controller.go:412] Applied KubeletConfig set-max-pods-worker on MachineConfigPool worker
E0307 09:45:51.859637 1 kubelet_config_controller.go:250] Operation cannot be fulfilled on kubeletconfigs.machineconfiguration.openshift.io "set-max-pods-worker": the object has been modified; please apply your changes to the latest version and try again
I0307 09:45:51.859670 1 kubelet_config_controller.go:251] Dropping kubeletconfig "set-max-pods-worker" out of the queue: Operation cannot be fulfilled on kubeletconfigs.machineconfiguration.openshift.io "set-max-pods-worker": the object has been modified; please apply your changes to the latest version and try again
I0307 09:45:52.500112 1 kubelet_config_controller.go:412] Applied KubeletConfig set-max-pods-worker on MachineConfigPool worker
I0307 09:45:52.648116 1 kubelet_config_controller.go:245] Error syncing kubeletconfig set-max-pods-worker: Operation cannot be fulfilled on kubeletconfigs.machineconfiguration.openshift.io "set-max-pods-worker": the object has been modified; please apply your changes to the latest version and try again
I0307 09:45:53.292696 1 kubelet_config_controller.go:412] Applied KubeletConfig set-max-pods-worker on MachineConfigPool worker
I0307 09:45:53.455527 1 kubelet_config_controller.go:245] Error syncing kubeletconfig set-max-pods-worker: Operation cannot be fulfilled on kubeletconfigs.machineconfiguration.openshift.io "set-max-pods-worker": the object has been modified; please apply your changes to the latest version and try again
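
Note: the "object has been modified" messages above are ordinary Kubernetes optimistic-concurrency (HTTP 409) conflicts: the controller wrote against a stale resourceVersion and eventually dropped the kubeletconfig out of its work queue. As a rough illustration only (the type and client names below are placeholders, not the MCO's real API), a controller would normally re-fetch the latest copy and retry the write, e.g. with client-go's retry.RetryOnConflict:

package main

import (
	"k8s.io/client-go/util/retry"
)

// KubeletConfig stands in for the real CRD type.
type KubeletConfig struct {
	Generation         int64
	ObservedGeneration int64
}

// kubeletConfigClient stands in for the generated clientset.
type kubeletConfigClient interface {
	Get(name string) (*KubeletConfig, error)
	UpdateStatus(kc *KubeletConfig) (*KubeletConfig, error)
}

func syncStatus(c kubeletConfigClient, name string) error {
	// Re-fetch the freshest copy and retry on 409 conflicts instead of
	// dropping the kubeletconfig key from the queue.
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		kc, err := c.Get(name)
		if err != nil {
			return err
		}
		kc.ObservedGeneration = kc.Generation
		_, err = c.UpdateStatus(kc)
		return err
	})
}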
MinLi: Did you apply this configuration to the master nodes? The master nodes run the Machine Config Controller and would likely take some time to come back and reconcile the deletion.
PR merged. https://github.com/openshift/machine-config-operator/pull/536
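For context, the symptom (the generated MachineConfig disappears immediately while "oc delete" blocks for ~10 minutes) is what you would see if a finalizer on the KubeletConfig is only removed after a slow reconcile: the API server cannot finish the delete until the controller strips the finalizer. Below is a generic sketch of prompt finalizer removal, with placeholder names and no claim that this is what the merged PR actually changes:

package main

// KubeletConfig stands in for the real CRD object.
type KubeletConfig struct {
	DeletionTimestamp *string
	Finalizers        []string
}

// Placeholder finalizer name, not necessarily what the MCO uses.
const kubeletConfigFinalizer = "machineconfiguration.openshift.io/kubelet-config"

// handleDeletion cleans up the generated MachineConfig and then removes the
// finalizer so the pending "oc delete" can return right away.
func handleDeletion(kc *KubeletConfig, deleteGeneratedMC func() error, update func(*KubeletConfig) error) error {
	if kc.DeletionTimestamp == nil {
		return nil // not being deleted
	}
	// Remove the 99-worker-...-kubelet MachineConfig first.
	if err := deleteGeneratedMC(); err != nil {
		return err
	}
	// Drop our finalizer so the API server can actually remove the object.
	kept := kc.Finalizers[:0]
	for _, f := range kc.Finalizers {
		if f != kubeletConfigFinalizer {
			kept = append(kept, f)
		}
	}
	kc.Finalizers = kept
	return update(kc)
}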
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758