Description of problem:
The content of a KubeletConfig, more precisely spec.kubeletConfig, is silently dropped when posted.

Version-Release number of selected component (if applicable):
$ oc version
Client Version: 4.4.0-0.ci-2020-01-24-140035
Server Version: 4.4.0-0.nightly-2020-03-02-131231
Kubernetes Version: v1.17.1

How reproducible:
Always

Steps to Reproduce:
1. Create a KubeletConfig CR, e.g.:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: test
spec:
  kubeletConfig: {"foo": "bar2"}
  machineConfigPoolSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker-none

2. Apply it with kubectl apply -f test.yaml
3. Check the content of that CR in the cluster:

$ kubectl get kubeletconfigs test -o=yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"machineconfiguration.openshift.io/v1","kind":"KubeletConfig","metadata":{"annotations":{},"name":"test"},"spec":{"kubeletConfig":{"foo":"bar2"},"machineConfigPoolSelector":{"matchLabels":{"machineconfiguration.openshift.io/role":"worker-none"}}}}
  creationTimestamp: "2020-03-02T18:16:38Z"
  generation: 1
  name: test
  resourceVersion: "57920"
  selfLink: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/test
  uid: 4e7c1c2a-6fed-416b-ac19-99a971f8ff0f
spec:
  kubeletConfig: {}
  machineConfigPoolSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker-none

Actual results:
spec.kubeletConfig is empty, in contrast to the posted spec (see last-applied-configuration).

Expected results:
spec.kubeletConfig should contain the posted config.

Additional info:
The MCO recently added openAPIV3Schema validation for all its CRDs.
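For context (my reconstruction, not taken from the actual MCO manifest): with structural schemas, the API server prunes any field that is not declared in openAPIV3Schema. If the CRD declares spec.kubeletConfig as a bare object with no properties and no x-kubernetes-preserve-unknown-fields, every key posted inside it is pruned on write, which would explain the silently emptied field. A hypothetical sketch of such a broken schema fragment:

```yaml
# Hypothetical fragment of the KubeletConfig CRD's openAPIV3Schema.
# The field names under `spec` match the CR above; the surrounding
# layout is an assumption, not copied from the MCO repository.
spec:
  type: object
  properties:
    kubeletConfig:
      # A bare object with no declared properties and no
      # x-kubernetes-preserve-unknown-fields: structural-schema
      # pruning silently removes every key posted under it
      # ("foo" included) before the object is persisted.
      type: object
    machineConfigPoolSelector:
      type: object
```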
The schema for the KubeletConfig is missing this:

  x-kubernetes-embedded-resource: true
  x-kubernetes-preserve-unknown-fields: true

See https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#rawextension

I posted a copy of the KubeletConfig CRD to the cluster, with fixed validation, and it worked as expected. Relevant part of the CRD:

  kubeletConfig:
    type: object
    x-kubernetes-embedded-resource: true
    x-kubernetes-preserve-unknown-fields: true

After this, the example above is rejected with an error because it is not a valid k8s resource, which makes sense. And a valid one is created as expected:

$ kubectl get kubeletconfigs2 test -o=yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig2
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"machineconfiguration.openshift.io/v1","kind":"KubeletConfig2","metadata":{"annotations":{},"name":"test"},"spec":{"kubeletConfig":{"apiVersion":"v2","kind":"test","metadata":{}},"machineConfigPoolSelector":{"matchLabels":{"machineconfiguration.openshift.io/role":"worker-none"}}}}
  creationTimestamp: "2020-03-02T18:18:54Z"
  generation: 1
  name: test
  resourceVersion: "58743"
  selfLink: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs2/test
  uid: 96e4f5f4-74bc-4751-b3f9-046a965c10d6
spec:
  kubeletConfig:
    apiVersion: v2
    kind: test
    metadata: {}
  machineConfigPoolSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker-none
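For anyone who wants to reproduce the verification above, here is a minimal sketch of such a test CRD. The kubeletconfigs2/KubeletConfig2 names mirror the copy used above; everything else (group layout, scope, version block) is an assumption about what a working copy could look like, not the actual MCO manifest:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kubeletconfigs2.machineconfiguration.openshift.io
spec:
  group: machineconfiguration.openshift.io
  names:
    kind: KubeletConfig2
    listKind: KubeletConfig2List
    plural: kubeletconfigs2
    singular: kubeletconfig2
  scope: Cluster
  version: v1
  validation:
    openAPIV3Schema:
      type: object
      properties:
        spec:
          type: object
          properties:
            kubeletConfig:
              type: object
              # The two extensions from the comment above: keep unknown
              # fields instead of pruning them, and validate the value
              # as an embedded resource (requires apiVersion and kind).
              x-kubernetes-embedded-resource: true
              x-kubernetes-preserve-unknown-fields: true
            machineConfigPoolSelector:
              type: object
```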
Yu, is anyone looking into also adding unit tests (and/or functional tests) to prevent this from happening again?
I've opened a jira card against our board: https://issues.redhat.com/browse/GRPA-1717 to add some tests in the near future.
$ cat << EOF > foo-bar.yaml
> apiVersion: machineconfiguration.openshift.io/v1
> kind: KubeletConfig
> metadata:
>   name: test
> spec:
>   kubeletConfig: {"foo": "bar2"}
>   machineConfigPoolSelector:
>     matchLabels:
>       machineconfiguration.openshift.io/role: worker-none
> EOF

$ oc apply -f foo-bar.yaml
kubeletconfig.machineconfiguration.openshift.io/test created

[mnguyen@pet30 4.5]$ oc get kubeletconfig/test
NAME   AGE
test   11s

$ oc get kubeletconfig/test -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"machineconfiguration.openshift.io/v1","kind":"KubeletConfig","metadata":{"annotations":{},"name":"test"},"spec":{"kubeletConfig":{"foo":"bar2"},"machineConfigPoolSelector":{"matchLabels":{"machineconfiguration.openshift.io/role":"worker-none"}}}}
  creationTimestamp: "2020-03-12T13:25:48Z"
  generation: 1
  name: test
  resourceVersion: "196736"
  selfLink: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/test
  uid: fd7acc48-e98d-4124-9360-13054a0e8d95
spec:
  kubeletConfig:
    foo: bar2
  machineConfigPoolSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker-none
status:
  conditions:
  - lastTransitionTime: "2020-03-12T13:25:48Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig'
    status: "False"
    type: Failure

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-03-12-003015   True        False         9h      Cluster version is 4.5.0-0.nightly-2020-03-12-003015
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409