Description of problem:
Bump the kubelet log level to 4 to provide more insight.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
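For reference, the log level is passed to the kubelet through an Environment= line in the systemd unit rendered by the Machine Config Operator. A minimal, illustrative sketch of the relevant fragment (exact paths, binary name, and flag set vary by release; this is not the full unit):

```
# kubelet.service (illustrative fragment, not the complete unit)
[Service]
Environment="KUBELET_LOG_LEVEL=4"
ExecStart=/usr/bin/kubelet \
    --config=/etc/kubernetes/kubelet.conf \
    --v=${KUBELET_LOG_LEVEL}
```

With this wiring, bumping the value in the Environment= line changes the verbosity the kubelet starts with, which is what the verification below checks.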
Verified with version: 4.5.0-0.nightly-2020-04-30-002507

$ oc get machineconfig 01-worker-kubelet -o yaml | grep KUBELET_LOG_LEVEL=
        Environment="KUBELET_LOG_LEVEL=4"
$ oc get machineconfig 01-master-kubelet -o yaml | grep KUBELET_LOG_LEVEL=
        Environment="KUBELET_LOG_LEVEL=4"

sh-4.4# ps aux | grep kubelet
root      1395  5.6  1.7 1443408 139752 ?  Ssl  07:12  0:10  kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --runtime-cgroups=/system.slice/crio.service --node-labels=node-role.kubernetes.io/worker,node.openshift.io/os_id=rhcos --minimum-container-ttl-duration=6m0s --volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --cloud-provider=aws --v=4
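The grep output above can also be post-processed to pull out just the numeric level. A small sketch (the sample line is hardcoded here for illustration; in practice it would come from the `oc get machineconfig ... | grep KUBELET_LOG_LEVEL=` pipeline shown above):

```shell
# Extract the numeric log level from an Environment= line.
# Sample input hardcoded for illustration only.
line='Environment="KUBELET_LOG_LEVEL=4"'
level=$(printf '%s\n' "$line" | sed -n 's/.*KUBELET_LOG_LEVEL=\([0-9]*\).*/\1/p')
echo "kubelet log level: $level"
```

This makes it easy to assert the expected value (e.g. `[ "$level" = "4" ]`) in an automated check instead of eyeballing the grep output.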
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409