Description of problem:
Not able to allow unsafe sysctls on a node via KubeletConfig.

Version-Release number of selected component (if applicable):
$ oc version --short
Client Version: v4.0.22
Server Version: v1.12.4+befe71b
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-03-19-004004   True        False         20h     Cluster version is 4.0.0-0.nightly-2019-03-19-004004

How reproducible:
Always

Steps to Reproduce:
Following doc https://github.com/openshift/openshift-docs/blob/enterprise-4.0/modules/create-a-kubeletconfig-crd-to-edit-kubelet-parameters.adoc

$ cat set-sysctl-worker.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-sysctl-worker
spec:
  machineConfigSelector: 01-worker-kubelet
  kubeletConfig:
    allowed-unsafe-sysctls:
    - "kernel.msg*,net.ipv4.route.min_pmtu"

$ oc apply -f set-sysctl-worker.yaml
$ oc get kubeletconfig
NAME                AGE
set-sysctl-worker   21m

Actual results:
Checking on the node, the unsafe-sysctl argument is not set, and no machineconfig is created for the custom kubelet config.
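For context on why the allow-list matters: once a sysctl pattern is allowed by the kubelet, a pod requests it through `securityContext.sysctls`. A minimal sketch (pod name and image are illustrative, not from this report):

```yaml
# Illustrative pod spec; the sysctl names must fall under patterns the
# kubelet has been told to allow, otherwise the pod is rejected.
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example            # hypothetical name
spec:
  securityContext:
    sysctls:
    - name: kernel.msgmax               # matches the kernel.msg* pattern
      value: "65536"
    - name: net.ipv4.route.min_pmtu     # unsafe sysctl, needs the allow-list
      value: "552"
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
```

Without the kubelet allow-list in place, the kubelet refuses to start such a pod, which is why the KubeletConfig above needs to take effect first.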
$ oc get machineconfig
NAME                                                        GENERATEDBYCONTROLLER       IGNITIONVERSION   CREATED
00-master                                                   4.0.22-201903181722-dirty   2.2.0             20h
00-master-ssh                                               4.0.22-201903181722-dirty   2.2.0             20h
00-worker                                                   4.0.22-201903181722-dirty   2.2.0             20h
00-worker-ssh                                               4.0.22-201903181722-dirty   2.2.0             20h
01-master-container-runtime                                 4.0.22-201903181722-dirty   2.2.0             20h
01-master-kubelet                                           4.0.22-201903181722-dirty   2.2.0             20h
01-worker-container-runtime                                 4.0.22-201903181722-dirty   2.2.0             20h
01-worker-kubelet                                           4.0.22-201903181722-dirty   2.2.0             20h
99-master-36c9fa32-4a34-11e9-9943-02c9a2a649ac-registries   4.0.22-201903181722-dirty   2.2.0             20h
99-worker-36d2f096-4a34-11e9-9943-02c9a2a649ac-registries   4.0.22-201903181722-dirty   2.2.0             20h
master-bb7e23c9982c911eb7be482741d983fd                     4.0.22-201903181722-dirty   2.2.0             20h
worker-65e80488f420def832f1903ef9efc0c4                     4.0.22-201903181722-dirty   2.2.0             20h

$ cat /etc/kubernetes/kubelet.conf
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
clusterDNS:
- 172.30.0.10
clusterDomain: cluster.local
maxPods: 250
rotateCertificates: true
runtimeRequestTimeout: 10m
serializeImagePulls: false
staticPodPath: /etc/kubernetes/manifests
systemReserved:
  cpu: 500m
  memory: 500Mi
featureGates:
  RotateKubeletServerCertificate: true
serverTLSBootstrap: true

Expected results:
Should be able to enable unsafe sysctls.

Additional info:
FYI, I did not see anything about allowed-unsafe-sysctls in https://github.com/kubernetes/kubelet/blob/release-1.12/config/v1beta1/types.go. But you could try "experimental-allowed-unsafe-sysctls" according to https://v1-12.docs.kubernetes.io/docs/reference/command-line-tools-reference/kubelet/, since the command line still supports this flag.
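For reference, the command-line form from the v1.12 kubelet reference takes a comma-separated list of patterns. A sketch (shown for illustration only; on OpenShift the kubelet arguments come from the machine-config-rendered systemd unit, not from hand-edited invocations):

```shell
# Flag form of the allow-list; --experimental-allowed-unsafe-sysctls was the
# older spelling of this flag before it was renamed.
kubelet --allowed-unsafe-sysctls='kernel.msg*,net.ipv4.route.min_pmtu' ...
```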
I think the current problem is that dynamic kubelet config does not take effect. I tried the kubelet config item "maxPods"; reconfiguration did not work either.
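For comparison, a minimal KubeletConfig exercising only maxPods can be used to test whether the reconfiguration path works at all. A sketch; the pool-selector label (custom-kubelet: max-pods) is an assumption and must first be applied to the target machineconfigpool:

```yaml
# Hypothetical KubeletConfig to test dynamic kubelet reconfiguration with a
# single, simple field. The selector label is illustrative.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: max-pods
  kubeletConfig:
    maxPods: 300
```

If this config produces no new rendered machineconfig either, the failure is in the KubeletConfig controller path rather than in the sysctl field specifically.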
Picked to origin: https://github.com/openshift/origin/pull/23538
Just ran into a major issue here in that the vendored kube for the MCO, which will need this change to the kubelet API, is pointing to upstream and not to our fork. Thus we are currently not able to get this change into the MCO. Discussions on next steps next week (8/5).
MCO PR: https://github.com/openshift/machine-config-operator/pull/1036
PRs have been merged. The code will be in the next 4.2 release.
BTW, the "worker" machineconfigpool should be labeled before creating the kubeletconfig:
# oc label machineconfigpool worker custom-kubelet=sysctl
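Putting the label step together with the corrected field names (machineConfigPoolSelector instead of machineConfigSelector, and the camelCase allowedUnsafeSysctls, which is the KubeletConfiguration spelling), a sketch of the working shape after the fix:

```yaml
# Sketch assuming the post-fix API; the selector label must match the one
# applied to the "worker" pool (custom-kubelet=sysctl, per the comment above).
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-sysctl-worker
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: sysctl
  kubeletConfig:
    allowedUnsafeSysctls:
    - "kernel.msg*"
    - "net.ipv4.route.min_pmtu"
```

Note that the original report passed both sysctls as a single comma-joined string; each pattern should be its own list entry.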
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922