Bug 1811211
| Summary: | KubeletConfigs are invalid | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Kirsten Garrison <kgarriso> |
| Component: | Machine Config Operator | Assignee: | Kirsten Garrison <kgarriso> |
| Status: | CLOSED ERRATA | QA Contact: | Michael Nguyen <mnguyen> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 4.4 | CC: | jiazha, joelsmith |
| Target Milestone: | --- | | |
| Target Release: | 4.5.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1811212 (view as bug list) | Environment: | |
| Last Closed: | 2020-07-13 17:18:56 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1811212 | | |
|
Description
Kirsten Garrison 2020-03-06 20:52:30 UTC
Somehow submitted without description: Recent changes to the KubeletConfig manifests seem to have broken our KubeletConfigs. This is on a cluster running master. When I create a new KubeletConfig following https://github.com/openshift/machine-config-operator/blob/master/docs/KubeletConfigDesign.md#spec:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
  kubeletConfig:
    maxPods: 100
```

I see errors:

```
The KubeletConfig "set-max-pods" is invalid:
* spec.kubeletConfig.apiVersion: Required value: must not be empty
* spec.kubeletConfig.kind: Required value: must not be empty
```

I just ran into this today. I think it's due to the spec.kubeletConfig type changing from kubeletconfigv1beta1.KubeletConfiguration to runtime.RawExtension, which requires a kind and apiVersion.
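To illustrate the failure mode, here is a minimal stdlib-only Go sketch of the kind of check that starts rejecting a bare payload once the field becomes a RawExtension: the embedded object must carry its own type header. The struct and function names are illustrative stand-ins, not the MCO's actual code (the real TypeMeta lives in k8s.io/apimachinery).

```go
package main

import (
	"encoding/json"
	"fmt"
)

// typeMeta mirrors the apiVersion/kind header that a RawExtension
// payload is expected to declare. (Hypothetical stand-in for
// apimachinery's TypeMeta.)
type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

// validateRawKubeletConfig sketches the validation that rejects a bare
// kubeletConfig payload: with RawExtension, the embedded object must
// state which type it is, so empty apiVersion/kind are errors.
func validateRawKubeletConfig(raw []byte) []string {
	var tm typeMeta
	var errs []string
	if err := json.Unmarshal(raw, &tm); err != nil {
		return []string{err.Error()}
	}
	if tm.APIVersion == "" {
		errs = append(errs, "spec.kubeletConfig.apiVersion: Required value: must not be empty")
	}
	if tm.Kind == "" {
		errs = append(errs, "spec.kubeletConfig.kind: Required value: must not be empty")
	}
	return errs
}

func main() {
	// The payload from the doc example: no type header, so both checks fail.
	bare := []byte(`{"maxPods":100}`)
	fmt.Println(validateRawKubeletConfig(bare))

	// The same payload with apiVersion/kind supplied passes.
	typed := []byte(`{"apiVersion":"kubelet.config.k8s.io/v1beta1","kind":"KubeletConfiguration","maxPods":100}`)
	fmt.Println(validateRawKubeletConfig(typed))
}
```

Run as-is, the first call reports both "Required value" errors and the second reports none, matching the errors quoted above.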
So the above KubeletConfig needs to change to something like this:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
  kubeletConfig:
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    maxPods: 100
```
I'm not sure if the bug is that we need to change our docs to reflect this change, or if there is something missing that would make it work without specifying apiVersion and kind. This bug is probably related to bug 1809274 and possibly a duplicate of it.
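The second option above, making it work without the user specifying apiVersion and kind, could be done by defaulting the type header before validation. A minimal stdlib-only sketch of that idea, with an illustrative function name that is not the MCO's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// defaultKubeletConfigTypeMeta injects apiVersion/kind into a raw
// kubeletConfig payload when they are absent, so the short form from
// the docs keeps working. Illustrative sketch only.
func defaultKubeletConfigTypeMeta(raw []byte) ([]byte, error) {
	var obj map[string]interface{}
	if err := json.Unmarshal(raw, &obj); err != nil {
		return nil, err
	}
	if _, ok := obj["apiVersion"]; !ok {
		obj["apiVersion"] = "kubelet.config.k8s.io/v1beta1"
	}
	if _, ok := obj["kind"]; !ok {
		obj["kind"] = "KubeletConfiguration"
	}
	// json.Marshal emits map keys in sorted order.
	return json.Marshal(obj)
}

func main() {
	out, err := defaultKubeletConfigTypeMeta([]byte(`{"maxPods":100}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
	// → {"apiVersion":"kubelet.config.k8s.io/v1beta1","kind":"KubeletConfiguration","maxPods":100}
}
```

A payload that already carries apiVersion/kind passes through unchanged, so the long form stays valid too.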
*** Bug 1811493 has been marked as a duplicate of this bug. ***

To verify the fix: without the fix, do what I did in https://bugzilla.redhat.com/show_bug.cgi?id=1811211#c1; with the fix, do the same thing and see no error.

Verified on 4.5.0-0.nightly-2020-03-12-003015:
```console
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-03-12-003015   True        False         5m57s   Cluster version is 4.5.0-0.nightly-2020-03-12-003015
$ oc get mcp/worker
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-f8d09e1f704c4a0082dcc797c967d9c5   True      False      False      3              3                   3                     0                      22m
$ oc label mcp/worker custom-kubelet=small-pods
machineconfigpool.machineconfiguration.openshift.io/worker labeled
$ oc get mcp/worker -o yaml | head -20
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  creationTimestamp: "2020-03-12T03:25:33Z"
  generation: 2
  labels:
    custom-kubelet: small-pods
    machineconfiguration.openshift.io/mco-built-in: ""
  name: worker
  resourceVersion: "23391"
  selfLink: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker
  uid: 242e164b-1166-4ba8-9b7a-f32421138985
spec:
  configuration:
    name: rendered-worker-f8d09e1f704c4a0082dcc797c967d9c5
    source:
    - apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      name: 00-worker
    - apiVersion: machineconfiguration.openshift.io/v1
$ cat << EOF > small-pods.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
  kubeletConfig:
    maxPods: 100
EOF
$ oc apply -f small-pods.yaml
kubeletconfig.machineconfiguration.openshift.io/set-max-pods created
$ oc get kubeletconfig
NAME           AGE
set-max-pods   11s
$ oc get kubeletconfig/set-max-pods -o yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"machineconfiguration.openshift.io/v1","kind":"KubeletConfig","metadata":{"annotations":{},"name":"set-max-pods"},"spec":{"kubeletConfig":{"maxPods":100},"machineConfigPoolSelector":{"matchLabels":{"custom-kubelet":"small-pods"}}}}
  creationTimestamp: "2020-03-12T03:49:16Z"
  finalizers:
  - f94bd0cd-63e1-40ff-8a98-cd5c1a8b4ab4
  generation: 1
  name: set-max-pods
  resourceVersion: "23539"
  selfLink: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/set-max-pods
  uid: 4532f227-d248-4007-92c7-0ad7d858544e
spec:
  kubeletConfig:
    maxPods: 100
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
status:
  conditions:
  - lastTransitionTime: "2020-03-12T03:49:16Z"
    message: Success
    status: "True"
    type: Success
```
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409