Bug 1811212 - KubeletConfigs are invalid
Summary: KubeletConfigs are invalid
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Machine Config Operator
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 4.4.0
Assignee: Kirsten Garrison
QA Contact: Michael Nguyen
URL:
Whiteboard:
Depends On: 1811211
Blocks:
 
Reported: 2020-03-06 20:56 UTC by Kirsten Garrison
Modified: 2020-05-04 11:46 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1811211
Environment:
Last Closed: 2020-05-04 11:45:36 UTC
Target Upstream Version:
Embargoed:




Links
- GitHub: openshift/machine-config-operator pull 1542 (closed): "[4.4] Bug 1811212: remove upstream validation which is breaking our kubeletcfg" (last updated 2020-07-03 07:27:04 UTC)
- Red Hat Product Errata: RHBA-2020:0581 (last updated 2020-05-04 11:46:11 UTC)

Description Kirsten Garrison 2020-03-06 20:56:31 UTC
+++ This bug was initially created as a clone of Bug #1811211 +++


Somehow submitted without description:
Recent changes to the kubeletcfg manifests appear to have broken our kubeletcfgs. This is on a cluster built from master.

When I create a new kubelet config from https://github.com/openshift/machine-config-operator/blob/master/docs/KubeletConfigDesign.md#spec

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
  kubeletConfig:
    maxPods: 100

I see errors:

The KubeletConfig "set-max-pods" is invalid: 
* spec.kubeletConfig.apiVersion: Required value: must not be empty
* spec.kubeletConfig.kind: Required value: must not be empty
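
For context, the two errors indicate that a recently added upstream validation expects the embedded spec.kubeletConfig to carry its own apiVersion and kind, even though the MCO treats that field as a bare KubeletConfiguration fragment. Purely to illustrate what the broken check was demanding (this is not the fix; the linked PR 1542 removes the validation instead), a manifest that would have satisfied it might look like the following, with the TypeMeta values taken from the rendered /etc/kubernetes/kubelet.conf shown in comment 5:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
  kubeletConfig:
    # hypothetical: the TypeMeta fields the erroneous validation asked for
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    maxPods: 100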

Comment 1 Ryan Phillips 2020-03-09 16:16:28 UTC
*** Bug 1805019 has been marked as a duplicate of this bug. ***

Comment 2 Kirsten Garrison 2020-03-11 22:04:49 UTC
To verify the fix:

Without the fix, do what I did here: https://bugzilla.redhat.com/show_bug.cgi?id=1811212#c0

With the fix, do the same thing and see no error.
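
A condensed sketch of those steps, using the same pool label and manifest file name as the verification transcript below:

$ oc label mcp/worker custom-kubelet=small-pods
$ oc apply -f kc.yaml   # with the fix this succeeds; without it, the "is invalid" errors from comment 0 appear
$ oc get mcp/worker     # then wait for the pool to roll out the new rendered config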

Comment 5 Michael Nguyen 2020-03-14 00:31:42 UTC
Verified on 4.4.0-0.nightly-2020-03-13-073111. Applying a KubeletConfig with maxPods set does not cause any errors, and the configuration is propagated down to the nodes in /etc/kubernetes/kubelet.conf.


$ oc get node
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-131-108.us-west-2.compute.internal   Ready    worker   20m   v1.17.1
ip-10-0-142-212.us-west-2.compute.internal   Ready    master   29m   v1.17.1
ip-10-0-148-87.us-west-2.compute.internal    Ready    worker   21m   v1.17.1
ip-10-0-157-37.us-west-2.compute.internal    Ready    master   29m   v1.17.1
ip-10-0-171-15.us-west-2.compute.internal    Ready    master   29m   v1.17.1
ip-10-0-174-133.us-west-2.compute.internal   Ready    worker   19m   v1.17.1
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.4.0-0.nightly-2020-03-13-073111   True        False         12m     Cluster version is 4.4.0-0.nightly-2020-03-13-073111
$ oc get mcp/worker -o yaml | head -15
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: "2020-03-13T21:57:02Z"
  generation: 2
  labels:
    machineconfiguration.openshift.io/mco-built-in: ""
  name: worker
  resourceVersion: "16496"
  selfLink: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker
  uid: 716f48e0-1ac4-472c-9381-75fc39e400e4
spec:
  configuration:
    name: rendered-worker-dc42dfcb9d0c32f4a7e77f653173d0e5
    source:
$ cat kc.yaml 
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
  kubeletConfig:
    maxPods: 100
$ oc label mcp/worker custom-kubelet=small-pods
machineconfigpool.machineconfiguration.openshift.io/worker labeled
$ oc get mcp/worker -o yaml | head -15
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: "2020-03-13T21:57:02Z"
  generation: 2
  labels:
    custom-kubelet: small-pods
    machineconfiguration.openshift.io/mco-built-in: ""
  name: worker
  resourceVersion: "24301"
  selfLink: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker
  uid: 716f48e0-1ac4-472c-9381-75fc39e400e4
spec:
  configuration:
    name: rendered-worker-dc42dfcb9d0c32f4a7e77f653173d0e5
$ oc apply -f kc.yaml 
kubeletconfig.machineconfiguration.openshift.io/set-max-pods created
$ oc get mcp/worker -o yaml | head -15
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: "2020-03-13T21:57:02Z"
  generation: 3
  labels:
    custom-kubelet: small-pods
    machineconfiguration.openshift.io/mco-built-in: ""
  name: worker
  resourceVersion: "42187"
  selfLink: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker
  uid: 716f48e0-1ac4-472c-9381-75fc39e400e4
spec:
  configuration:
    name: rendered-worker-98786d271734e7556339fff3f9b3878c
$ oc get mcp/worker
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-dc42dfcb9d0c32f4a7e77f653173d0e5   False     True       False      3              0                   0                     0                      90m
$ watch oc get nodes
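# Note: UPDATED=False / UPDATING=True above means the pool is still rolling the
# new rendered config out to its three workers (drain and reboot, one node at a
# time). A hedged alternative to watching manually, assuming the pool exposes
# the usual Updated status condition, would be:
#   $ oc wait mcp/worker --for=condition=Updated --timeout=30m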
$ oc get node
NAME                                         STATUS   ROLES    AGE    VERSION
ip-10-0-131-108.us-west-2.compute.internal   Ready    worker   141m   v1.17.1
ip-10-0-142-212.us-west-2.compute.internal   Ready    master   149m   v1.17.1
ip-10-0-148-87.us-west-2.compute.internal    Ready    worker   141m   v1.17.1
ip-10-0-157-37.us-west-2.compute.internal    Ready    master   149m   v1.17.1
ip-10-0-171-15.us-west-2.compute.internal    Ready    master   150m   v1.17.1
ip-10-0-174-133.us-west-2.compute.internal   Ready    worker   140m   v1.17.1
$ oc debug node/ip-10-0-131-108.us-west-2.compute.internal -- chroot /host cat /etc/kubernetes/kubelet.conf
Starting pod/ip-10-0-131-108us-west-2computeinternal-debug ...
To use host binaries, run `chroot /host`
{"kind":"KubeletConfiguration","apiVersion":"kubelet.config.k8s.io/v1beta1","staticPodPath":"/etc/kubernetes/manifests","syncFrequency":"0s","fileCheckFrequency":"0s","httpCheckFrequency":"0s","rotateCertificates":true,"serverTLSBootstrap":true,"authentication":{"x509":{"clientCAFile":"/etc/kubernetes/kubelet-ca.crt"},"webhook":{"cacheTTL":"0s"},"anonymous":{"enabled":false}},"authorization":{"webhook":{"cacheAuthorizedTTL":"0s","cacheUnauthorizedTTL":"0s"}},"clusterDomain":"cluster.local","clusterDNS":["172.30.0.10"],"streamingConnectionIdleTimeout":"0s","nodeStatusUpdateFrequency":"0s","nodeStatusReportFrequency":"0s","imageMinimumGCAge":"0s","volumeStatsAggPeriod":"0s","systemCgroups":"/system.slice","cgroupRoot":"/","cgroupDriver":"systemd","cpuManagerReconcilePeriod":"0s","runtimeRequestTimeout":"0s","maxPods":100,"kubeAPIQPS":50,"kubeAPIBurst":100,"serializeImagePulls":false,"evictionPressureTransitionPeriod":"0s","featureGates":{"LegacyNodeRoleBehavior":false,"NodeDisruptionExclusion":true,"RotateKubeletServerCertificate":true,"SCTPSupport":true,"ServiceNodeExclusion":true,"SupportPodPidsLimit":true},"containerLogMaxSize":"50Mi","systemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"}}

Comment 7 errata-xmlrpc 2020-05-04 11:45:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581

