Bug 1809274 - KubeletConfig content is dropped silently
Summary: KubeletConfig content is dropped silently
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Machine Config Operator
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 4.5.0
Assignee: Yu Qi Zhang
QA Contact: Michael Nguyen
URL:
Whiteboard:
Depends On:
Blocks: 1771572 1809334
 
Reported: 2020-03-02 18:42 UTC by Marc Sluiter
Modified: 2020-07-13 17:17 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned To: 1809334
Environment:
Last Closed: 2020-07-13 17:17:25 UTC
Target Upstream Version:
Embargoed:


Links:
- GitHub: openshift/machine-config-operator pull 1524 (closed): "Bug 1809274: crd/kubelet: do not prune kubelet rawExtension fields", last updated 2021-01-10 19:06:36 UTC
- Red Hat Product Errata: RHBA-2020:2409, last updated 2020-07-13 17:17:44 UTC

Description Marc Sluiter 2020-03-02 18:42:38 UTC
Description of problem:
The content of a KubeletConfig, more precisely its spec.kubeletConfig field, is silently dropped when the resource is posted to the API server.

Version-Release number of selected component (if applicable):

$ oc version
Client Version: 4.4.0-0.ci-2020-01-24-140035
Server Version: 4.4.0-0.nightly-2020-03-02-131231
Kubernetes Version: v1.17.1

How reproducible:
always

Steps to Reproduce:
1. Create a KubeletConfig CR, e.g.:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: test
spec:
  kubeletConfig: {"foo": "bar2"}
  machineConfigPoolSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker-none

2. Apply it with kubectl apply -f test.yaml

3. Check content of that CR in the cluster:

$ kubectl get kubeletconfigs test -o=yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"machineconfiguration.openshift.io/v1","kind":"KubeletConfig","metadata":{"annotations":{},"name":"test"},"spec":{"kubeletConfig":{"foo":"bar2"},"machineConfigPoolSelector":{"matchLabels":{"machineconfiguration.openshift.io/role":"worker-none"}}}}
  creationTimestamp: "2020-03-02T18:16:38Z"
  generation: 1
  name: test
  resourceVersion: "57920"
  selfLink: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/test
  uid: 4e7c1c2a-6fed-416b-ac19-99a971f8ff0f
spec:
  kubeletConfig: {}
  machineConfigPoolSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker-none


Actual results:
Note the empty spec.kubeletConfig, in contrast to the posted spec (see the last-applied-configuration annotation above).

Expected results:
spec.kubeletConfig should contain the posted config

Additional info:

The MCO recently added an openAPIV3Schema to all of its CRDs. The schema for the KubeletConfig CRD is missing the following extensions on the kubeletConfig field. Without x-kubernetes-preserve-unknown-fields, the structural schema silently prunes every field it does not declare; x-kubernetes-embedded-resource additionally marks the field as a complete Kubernetes object, so the API server validates that it carries apiVersion and kind:

    x-kubernetes-embedded-resource: true
    x-kubernetes-preserve-unknown-fields: true

See https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#rawextension

I posted a copy of the KubeletConfig CRD to the cluster (as a KubeletConfig2 kind) with the fixed validation, and it worked as expected.

The relevant part of the CRD:

kubeletConfig:
  type: object
  x-kubernetes-embedded-resource: true
  x-kubernetes-preserve-unknown-fields: true
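
For context, here is a rough sketch of where this snippet sits in the full CRD. The surrounding structure is assumed from the apiextensions.k8s.io/v1beta1 API in use at the time, not copied from the MCO's actual manifest:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: kubeletconfigs.machineconfiguration.openshift.io
spec:
  group: machineconfiguration.openshift.io
  names:
    kind: KubeletConfig
    plural: kubeletconfigs
  scope: Cluster
  version: v1
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            # without the two extensions below, the structural schema
            # prunes every user-supplied kubelet setting
            kubeletConfig:
              type: object
              x-kubernetes-embedded-resource: true
              x-kubernetes-preserve-unknown-fields: true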

With this in place, the example above is rejected with an error, because {"foo": "bar2"} is not a valid embedded Kubernetes resource (it lacks apiVersion and kind), which makes sense. A valid one is created as expected:

$ kubectl get kubeletconfigs2 test -o=yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig2
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"machineconfiguration.openshift.io/v1","kind":"KubeletConfig2","metadata":{"annotations":{},"name":"test"},"spec":{"kubeletConfig":{"apiVersion":"v2","kind":"test","metadata":{}},"machineConfigPoolSelector":{"matchLabels":{"machineconfiguration.openshift.io/role":"worker-none"}}}}
  creationTimestamp: "2020-03-02T18:18:54Z"
  generation: 1
  name: test
  resourceVersion: "58743"
  selfLink: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs2/test
  uid: 96e4f5f4-74bc-4751-b3f9-046a965c10d6
spec:
  kubeletConfig:
    apiVersion: v2
    kind: test
    metadata: {}
  machineConfigPoolSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker-none

Comment 2 Federico Simoncelli 2020-03-03 08:33:12 UTC
Yu, is anyone also looking into adding unit tests (and/or functional tests) to prevent this from happening again?

Comment 3 Yu Qi Zhang 2020-03-03 15:11:19 UTC
I've opened a Jira card on our board, https://issues.redhat.com/browse/GRPA-1717, to add some tests in the near future.
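
A regression check could be as simple as a round trip through the API server: post a KubeletConfig carrying a field the schema does not declare and assert that it survives a GET. A hypothetical sketch (the resource name and the maxPods setting are invented for illustration):

$ cat << EOF | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: prune-check
spec:
  kubeletConfig:
    maxPods: 100
  machineConfigPoolSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker
EOF
$ oc get kubeletconfig prune-check -o jsonpath='{.spec.kubeletConfig}'
# expect the posted content to come back; an empty object here would
# mean the API server pruned the field again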

Comment 7 Michael Nguyen 2020-03-12 13:28:23 UTC
$ cat << EOF > foo-bar.yaml
> 
> apiVersion: machineconfiguration.openshift.io/v1
> kind: KubeletConfig
> metadata:
>   name: test
> spec:
>   kubeletConfig: {"foo": "bar2"}
>   machineConfigPoolSelector:
>     matchLabels:
>       machineconfiguration.openshift.io/role: worker-none
> EOF
$ oc apply -f foo-bar.yaml 
kubeletconfig.machineconfiguration.openshift.io/test created
$ oc get kubeletconfig/test
NAME   AGE
test   11s
$ oc get kubeletconfig/test -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"machineconfiguration.openshift.io/v1","kind":"KubeletConfig","metadata":{"annotations":{},"name":"test"},"spec":{"kubeletConfig":{"foo":"bar2"},"machineConfigPoolSelector":{"matchLabels":{"machineconfiguration.openshift.io/role":"worker-none"}}}}
  creationTimestamp: "2020-03-12T13:25:48Z"
  generation: 1
  name: test
  resourceVersion: "196736"
  selfLink: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/test
  uid: fd7acc48-e98d-4124-9360-13054a0e8d95
spec:
  kubeletConfig:
    foo: bar2
  machineConfigPoolSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker-none
status:
  conditions:
  - lastTransitionTime: "2020-03-12T13:25:48Z"
    message: 'Error: could not find any MachineConfigPool set for KubeletConfig'
    status: "False"
    type: Failure
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-03-12-003015   True        False         9h      Cluster version is 4.5.0-0.nightly-2020-03-12-003015

Comment 9 errata-xmlrpc 2020-07-13 17:17:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

