Bug 1992016
| Summary: | Expose kubelet configuration parameters | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | browsell |
| Component: | Node | Assignee: | Ryan Phillips <rphillips> |
| Node sub component: | Kubelet | QA Contact: | MinLi <minmli> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | high | | |
| Priority: | unspecified | CC: | aos-bugs, minmli, rphillips |
| Version: | 4.9 | | |
| Target Milestone: | --- | | |
| Target Release: | 4.9.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-10-18 17:45:44 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Comment 1
Ryan Phillips
2021-08-12 15:47:39 UTC
Tested on 4.9.0-0.nightly-2021-09-05-122658 on a single-node (SNO) cluster on GCP, but OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION and OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION do not appear to take effect (that is, these two parameters are not present in the kubelet config file).
@Ryan, can you check if the following steps and results are as expected?
1. $ oc get node
NAME STATUS ROLES AGE VERSION
minmli0906sno01-nbhgp-master-0.c.openshift-qe.internal Ready master,worker 84m v1.22.0-rc.0+75ee307
2. Check the kubelet configuration before adding OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION and OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION. The session below runs inside a node debug shell; see the sketch that follows for one way to open it.
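The transcript does not show how the node shell was opened. One common way to get there (an assumed workflow, using the node name from step 1) is to start a debug pod on the node and then switch to the host filesystem:
$ oc debug node/minmli0906sno01-nbhgp-master-0.c.openshift-qe.internal
The remaining sh-4.4# commands are run inside that debug pod.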
sh-4.4# chroot /host
sh-4.4# cat /etc/kubernetes/kubelet.conf
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  x509:
    clientCAFile: /etc/kubernetes/kubelet-ca.crt
  anonymous:
    enabled: false
cgroupDriver: systemd
cgroupRoot: /
clusterDNS:
  - 172.30.0.10
clusterDomain: cluster.local
containerLogMaxSize: 50Mi
maxPods: 250
kubeAPIQPS: 50
kubeAPIBurst: 100
rotateCertificates: true
serializeImagePulls: false
staticPodPath: /etc/kubernetes/manifests
systemCgroups: /system.slice
systemReserved:
  ephemeral-storage: 1Gi
featureGates:
  APIPriorityAndFairness: true
  LegacyNodeRoleBehavior: false
  NodeDisruptionExclusion: true
  RotateKubeletServerCertificate: true
  ServiceNodeExclusion: true
  SupportPodPidsLimit: true
  DownwardAPIHugePages: true
serverTLSBootstrap: true
tlsMinVersion: VersionTLS12
tlsCipherSuites:
  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
  - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
sh-4.4#
3. Create a KubeletConfig like this:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: custom-kubelet-test
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: test-pods
  kubeletConfig:
    maxPods: 244
    OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION: 5m0s
    OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION: 5m0s
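Note: for this KubeletConfig's machineConfigPoolSelector to match anything on a single-node cluster, the master MachineConfigPool needs the custom-kubelet: test-pods label. One way to add it (this step is assumed; it is not shown in the transcript):
$ oc label machineconfigpool master custom-kubelet=test-pods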
4. After the master MCP finishes rolling out, check the kubelet configuration again:
$ oc get mcp
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-32486e5b16a62b7ae93675dc7a98f957 True False False 1 1 1 0 97m
worker rendered-worker-735e30a64bba03ab4fd6916f9b0fa306 True False False 0 0 0 0 97m
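As a convenience (not part of the original steps), one way to block until the master pool reports Updated before re-checking the node:
$ oc wait machineconfigpool/master --for=condition=Updated=True --timeout=30m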
sh-4.4# chroot /host
sh-4.4# cat /etc/kubernetes/kubelet.conf
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "staticPodPath": "/etc/kubernetes/manifests",
  "syncFrequency": "0s",
  "fileCheckFrequency": "0s",
  "httpCheckFrequency": "0s",
  "tlsCipherSuites": [
    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
    "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
  ],
  "tlsMinVersion": "VersionTLS12",
  "rotateCertificates": true,
  "serverTLSBootstrap": true,
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/kubelet-ca.crt"
    },
    "webhook": {
      "cacheTTL": "0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "webhook": {
      "cacheAuthorizedTTL": "0s",
      "cacheUnauthorizedTTL": "0s"
    }
  },
  "clusterDomain": "cluster.local",
  "clusterDNS": [
    "172.30.0.10"
  ],
  "streamingConnectionIdleTimeout": "0s",
  "nodeStatusUpdateFrequency": "0s",
  "nodeStatusReportFrequency": "0s",
  "imageMinimumGCAge": "0s",
  "volumeStatsAggPeriod": "0s",
  "systemCgroups": "/system.slice",
  "cgroupRoot": "/",
  "cgroupDriver": "systemd",
  "cpuManagerReconcilePeriod": "0s",
  "runtimeRequestTimeout": "0s",
  "maxPods": 244,
  "kubeAPIQPS": 50,
  "kubeAPIBurst": 100,
  "serializeImagePulls": false,
  "evictionPressureTransitionPeriod": "0s",
  "featureGates": {
    "APIPriorityAndFairness": true,
    "DownwardAPIHugePages": true,
    "LegacyNodeRoleBehavior": false,
    "NodeDisruptionExclusion": true,
    "RotateKubeletServerCertificate": true,
    "ServiceNodeExclusion": true,
    "SupportPodPidsLimit": true
  },
  "memorySwap": {},
  "containerLogMaxSize": "50Mi",
  "systemReserved": {
    "ephemeral-storage": "1Gi"
  },
  "logging": {},
  "shutdownGracePeriod": "0s",
  "shutdownGracePeriodCriticalPods": "0s"
}
sh-4.4# exit
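One way to confirm whether the kubelet process actually received the environment variables (this verification step is an assumption; it is not shown in the transcript) is to inspect the kubelet service environment from the same debug shell:
sh-4.4# systemctl show kubelet.service --property=Environment
sh-4.4# tr '\0' '\n' < /proc/$(pidof kubelet)/environ | grep OPENSHIFT_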
Confirmed with @Ryan: this only requires setting the environment variables OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION and OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION. Verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:3759

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days
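The transcript does not show how the two environment variables are ultimately supplied to the kubelet. As a sketch only (the MachineConfig name, dropin name, and the exact mechanism used by the fix are assumptions), environment variables can generally be injected into kubelet.service with a MachineConfig that adds a systemd dropin:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-master-kubelet-housekeeping
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - name: kubelet.service
          dropins:
            - name: 20-housekeeping.conf
              contents: |
                [Service]
                Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=5m0s"
                Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=5m0s"

Applying such a MachineConfig triggers a rollout of the targeted pool, after which the variables would appear in the kubelet service environment rather than in /etc/kubernetes/kubelet.conf.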