Bug 1797908
| Summary: | kubelet service failed to start on RHEL worker | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Gaoyun Pei <gpei> |
| Component: | Machine Config Operator | Assignee: | Antonio Murdaca <amurdaca> |
| Status: | CLOSED ERRATA | QA Contact: | Gaoyun Pei <gpei> |
| Severity: | high | Priority: | high |
| Version: | 4.4 | Target Release: | 4.4.0 |
| Keywords: | Regression, TestBlocker | CC: | jialiu, rteague, wjiang, yanyang |
| Hardware: | Unspecified | OS: | Unspecified |
| Last Closed: | 2020-05-04 11:33:06 UTC | Type: | Bug |
Description (Gaoyun Pei, 2020-02-04 08:10:15 UTC)
`--v="${KUBELET_LOG_LEVEL}"`

Quotes around KUBELET_LOG_LEVEL seem to be crashing this. Removing the quotes allowed the kubelet to start up. Introduced here: https://github.com/openshift/machine-config-operator/pull/1390

Moving to MCO.

Verified this bug on payload 4.4.0-0.nightly-2020-02-07-012035. After launching a cluster using payload 4.4.0-0.nightly-2020-02-07-012035 and adding a RHEL worker, the RHEL worker could be added and the kubelet service is running well.

```
[root@gpei-4-kx6l2-w-a-l-rhel-0 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-default-env.conf
   Active: active (running) since Fri 2020-02-07 08:47:49 UTC; 1min 59s ago
 Main PID: 1928 (kubelet)
   Memory: 65.8M
   CGroup: /system.slice/kubelet.service
           └─1928 kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime=remote --container-...

Feb 07 08:49:46 gpei-4-kx6l2-w-a-l-rhel-0 hyperkube[1928]: I0207 08:49:46.593479    1928 helpers.go:781] eviction manager: observations: signal=imagefs.available, available: 55124216K...7.390078863
Feb 07 08:49:46 gpei-4-kx6l2-w-a-l-rhel-0 hyperkube[1928]: I0207 08:49:46.593524    1928 helpers.go:781] eviction manager: observations: signal=imagefs.inodesFree, available: 31368371...7.390078863
Feb 07 08:49:46 gpei-4-kx6l2-w-a-l-rhel-0 hyperkube[1928]: I0207 08:49:46.593537    1928 helpers.go:781] eviction manager: observations: signal=pid.available, available: 4193911, capa...7.405779327
Feb 07 08:49:46 gpei-4-kx6l2-w-a-l-rhel-0 hyperkube[1928]: I0207 08:49:46.593548    1928 helpers.go:781] eviction manager: observations: signal=memory.available, available: 14378836Ki...7.390078863
Feb 07 08:49:46 gpei-4-kx6l2-w-a-l-rhel-0 hyperkube[1928]: I0207 08:49:46.593557    1928 helpers.go:781] eviction manager: observations: signal=allocatableMemory.available, available:...7.406378244
Feb 07 08:49:46 gpei-4-kx6l2-w-a-l-rhel-0 hyperkube[1928]: I0207 08:49:46.593566    1928 helpers.go:781] eviction manager: observations: signal=nodefs.available, available: 55124216Ki...7.390078863
Feb 07 08:49:46 gpei-4-kx6l2-w-a-l-rhel-0 hyperkube[1928]: I0207 08:49:46.593575    1928 helpers.go:781] eviction manager: observations: signal=nodefs.inodesFree, available: 31368371,...7.390078863
Feb 07 08:49:46 gpei-4-kx6l2-w-a-l-rhel-0 hyperkube[1928]: I0207 08:49:46.593636    1928 eviction_manager.go:320] eviction manager: no resources are starved
Feb 07 08:49:46 gpei-4-kx6l2-w-a-l-rhel-0 hyperkube[1928]: I0207 08:49:46.777180    1928 prober.go:129] Liveness probe for "ovs-4wzcz_openshift-sdn(1d3517e7-a06b-4962-98b0-b9672bf5ff2..." succeeded
Feb 07 08:49:46 gpei-4-kx6l2-w-a-l-rhel-0 hyperkube[1928]: I0207 08:49:46.826080    1928 prober.go:129] Readiness probe for "ovs-4wzcz_openshift-sdn(1d3517e7-a06b-4962-98b0-b9672bf5ff..." succeeded
Hint: Some lines were ellipsized, use -l to show in full.
```
```
[root@gpei-4-kx6l2-w-a-l-rhel-0 ~]# systemctl cat kubelet
# /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Wants=rpc-statd.service network-online.target crio.service
After=network-online.target crio.service

[Service]
Type=notify
ExecStartPre=/bin/mkdir --parents /etc/kubernetes/manifests
ExecStartPre=/bin/rm -f /var/lib/kubelet/cpu_manager_state
Environment="KUBELET_LOG_LEVEL=3"
EnvironmentFile=/etc/os-release
EnvironmentFile=-/etc/kubernetes/kubelet-workaround
EnvironmentFile=-/etc/kubernetes/kubelet-env

ExecStart=/usr/bin/hyperkube \
    kubelet \
      --config=/etc/kubernetes/kubelet.conf \
      --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
      --kubeconfig=/var/lib/kubelet/kubeconfig \
      --container-runtime=remote \
      --container-runtime-endpoint=/var/run/crio/crio.sock \
      --node-labels=node-role.kubernetes.io/worker,node.openshift.io/os_id=${ID} \
      --minimum-container-ttl-duration=6m0s \
      --volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec \
      --cloud-provider=gce \
      \
      --v=${KUBELET_LOG_LEVEL}

Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/kubelet.service.d/10-default-env.conf
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581