Bug 1811211 - KubeletConfigs are invalid
Summary: KubeletConfigs are invalid
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Machine Config Operator
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: unspecified
Target Milestone: ---
Target Release: 4.5.0
Assignee: Kirsten Garrison
QA Contact: Michael Nguyen
URL:
Whiteboard:
Duplicates: 1811493 (view as bug list)
Depends On:
Blocks: 1811212
Reported: 2020-03-06 20:52 UTC by Kirsten Garrison
Modified: 2020-07-13 17:19 UTC (History)
2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1811212 (view as bug list)
Environment:
Last Closed: 2020-07-13 17:18:56 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Github openshift machine-config-operator pull 1541 0 None closed Bug 1811211: remove validation of kubeletcfg which is breaking our kubeletcfg 2020-08-18 12:45:35 UTC
Red Hat Product Errata RHBA-2020:2409 0 None None None 2020-07-13 17:19:27 UTC

Description Kirsten Garrison 2020-03-06 20:52:30 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Kirsten Garrison 2020-03-06 20:53:32 UTC
Somehow submitted without description:
Recent changes to kubeletcfg manifests seem to have broken our kubeletcfgs. This is on a cluster using master.

When I create a new kubelet config from https://github.com/openshift/machine-config-operator/blob/master/docs/KubeletConfigDesign.md#spec

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
  kubeletConfig:
    maxPods: 100

I see errors:

The KubeletConfig "set-max-pods" is invalid: 
* spec.kubeletConfig.apiVersion: Required value: must not be empty
* spec.kubeletConfig.kind: Required value: must not be empty

Comment 2 Joel Smith 2020-03-06 21:00:23 UTC
I just ran into this today. I think it's due to the spec.kubeletConfig type changing from kubeletconfigv1beta1.KubeletConfiguration to runtime.RawExtension, which requires a kind and apiVersion.

So the above KubeletConfig needs to change to something like this:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
  kubeletConfig:
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    maxPods: 100

I'm not sure whether the fix is to update our docs to reflect this change, or whether something is missing that would make it work without specifying apiVersion and kind. This bug is probably related to bug 1809274 and possibly a duplicate of it.
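The RawExtension behavior described above can be sketched in a few lines of Python (illustrative only; the actual check happens in the Go codebase when decoding the embedded object, and validate_embedded_object is a hypothetical helper, not an MCO function):

```python
def validate_embedded_object(obj):
    """Mimic the RawExtension-style requirement: an embedded object must
    carry its own apiVersion and kind so it can be decoded generically.
    Returns a list of error strings shaped like the ones in comment 1."""
    errors = []
    for field in ("apiVersion", "kind"):
        if not obj.get(field):
            errors.append(
                f"spec.kubeletConfig.{field}: Required value: must not be empty"
            )
    return errors


# The original (failing) kubeletConfig stanza: no apiVersion/kind.
old = {"maxPods": 100}

# The updated stanza from the YAML above.
new = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "maxPods": 100,
}

print(validate_embedded_object(old))  # two "Required value" errors
print(validate_embedded_object(new))  # no errors
```

This is why the first manifest produces exactly the two "Required value: must not be empty" errors: with RawExtension, nothing tells the decoder what type the embedded blob is unless the blob says so itself.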

Comment 3 Ryan Phillips 2020-03-09 13:34:36 UTC
*** Bug 1811493 has been marked as a duplicate of this bug. ***

Comment 6 Kirsten Garrison 2020-03-11 22:04:01 UTC
To verify fix:

Without the fix, follow the steps in https://bugzilla.redhat.com/show_bug.cgi?id=1811211#c1 and observe the validation errors.

With the fix, do the same thing and see no error.

Comment 7 Michael Nguyen 2020-03-12 03:51:30 UTC
Verified on 4.5.0-0.nightly-2020-03-12-003015

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-03-12-003015   True        False         5m57s   Cluster version is 4.5.0-0.nightly-2020-03-12-003015
$ oc get mcp/worker
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-f8d09e1f704c4a0082dcc797c967d9c5   True      False      False      3              3                   3                     0                      22m
$ oc label mcp/worker custom-kubelet=small-pods
machineconfigpool.machineconfiguration.openshift.io/worker labeled
$ oc get mcp/worker -o yaml | head -20
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: "2020-03-12T03:25:33Z"
  generation: 2
  labels:
    custom-kubelet: small-pods
    machineconfiguration.openshift.io/mco-built-in: ""
  name: worker
  resourceVersion: "23391"
  selfLink: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker
  uid: 242e164b-1166-4ba8-9b7a-f32421138985
spec:
  configuration:
    name: rendered-worker-f8d09e1f704c4a0082dcc797c967d9c5
    source:
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 00-worker
    - apiVersion: machineconfiguration.openshift.io/v1
$ cat << EOF > small-pods.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
  kubeletConfig:
    maxPods: 100
EOF
$ oc apply -f small-pods.yaml 
kubeletconfig.machineconfiguration.openshift.io/set-max-pods created
$ oc get kubeletconfig
NAME           AGE
set-max-pods   11s
$ oc get kubeletconfig/set-max-pods -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"machineconfiguration.openshift.io/v1","kind":"KubeletConfig","metadata":{"annotations":{},"name":"set-max-pods"},"spec":{"kubeletConfig":{"maxPods":100},"machineConfigPoolSelector":{"matchLabels":{"custom-kubelet":"small-pods"}}}}
  creationTimestamp: "2020-03-12T03:49:16Z"
  finalizers:
  - f94bd0cd-63e1-40ff-8a98-cd5c1a8b4ab4
  generation: 1
  name: set-max-pods
  resourceVersion: "23539"
  selfLink: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/set-max-pods
  uid: 4532f227-d248-4007-92c7-0ad7d858544e
spec:
  kubeletConfig:
    maxPods: 100
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
status:
  conditions:
  - lastTransitionTime: "2020-03-12T03:49:16Z"
    message: Success
    status: "True"
    type: Success

Comment 9 errata-xmlrpc 2020-07-13 17:18:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409
