The readiness of nodes in a cluster is a key measurement for assessing the success of rollouts and upgrades. In general, nodes in a working cluster should always be ready unless they are rebooting, and always be reachable from metrics unless the node is down or the kubelet is being restarted. Tracking readiness lets us estimate a bound on how often our nodes are being restarted or failing unexpectedly, and will give us information in the future to more accurately detect, reflect, and measure service level objectives for node uptime.

Add two recording rules to telemetry that track:

* the average readiness and metrics reachability (to catch kubelet restarts) of schedulable nodes, which is our rough "how often is the kubelet running AND the node ready from an API perspective, excluding maintenance systems" metric
* the average readiness of all nodes as viewed in the API, which is our rough "does a user think the node should be working" metric

This will give us rough fleet-level measurements to begin quantifying sources of impact.
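The two rules could be sketched along the following lines. The rule names match the metrics recorded in the verification below, but the PromQL expressions, label selectors, and job names here are illustrative assumptions, not the exact rules shipped in cluster-monitoring-operator:

```yaml
# Sketch only: rule names are real, expressions are assumptions.
groups:
- name: telemetry.rules
  rules:
  # Schedulable nodes that are Ready in the API AND whose kubelet
  # scrape target is up (catches kubelet restarts). Nodes marked
  # unschedulable (e.g. cordoned for maintenance) are excluded.
  - record: cluster:usage:kube_schedulable_node_ready_reachable:avg5m
    expr: |
      avg_over_time(
        (
          kube_node_status_condition{condition="Ready",status="true"}
          unless on(node) kube_node_spec_unschedulable == 1
          and on(node) up{job="kubelet"} == 1
        )[5m:1m]
      )
  # All nodes as reported Ready by the API, regardless of
  # schedulability or metrics reachability.
  - record: cluster:usage:kube_node_ready:avg5m
    expr: |
      avg_over_time(
        kube_node_status_condition{condition="Ready",status="true"}[5m:1m]
      )
```

The split between the two rules is deliberate: the first approximates actual kubelet/node health, while the second approximates user-visible expectations, so the gap between them helps localize impact.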
Verified on 4.7.0-0.nightly-2021-01-19-033533. I see the cluster:usage:kube_schedulable_node_ready_reachable:avg5m and cluster:usage:kube_node_ready:avg5m metrics being recorded.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-01-19-033533   True        False         89m     Cluster version is 4.7.0-0.nightly-2021-01-19-033533

$ oc get nodes -o wide
NAME                                        STATUS   ROLES    AGE    VERSION           INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                                       KERNEL-VERSION                 CONTAINER-RUNTIME
ip-10-0-56-217.us-east-2.compute.internal   Ready    master   114m   v1.20.0+d9c52cc   10.0.56.217   <none>        Red Hat Enterprise Linux CoreOS 47.83.202101171239-0 (Ootpa)   4.18.0-240.10.1.el8_3.x86_64   cri-o://1.20.0-0.rhaos4.7.gitd9f17c8.el8.42
ip-10-0-59-181.us-east-2.compute.internal   Ready    master   114m   v1.20.0+d9c52cc   10.0.59.181   <none>        Red Hat Enterprise Linux CoreOS 47.83.202101171239-0 (Ootpa)   4.18.0-240.10.1.el8_3.x86_64   cri-o://1.20.0-0.rhaos4.7.gitd9f17c8.el8.42
ip-10-0-63-227.us-east-2.compute.internal   Ready    worker   104m   v1.20.0+d9c52cc   10.0.63.227   <none>        Red Hat Enterprise Linux CoreOS 47.83.202101171239-0 (Ootpa)   4.18.0-240.10.1.el8_3.x86_64   cri-o://1.20.0-0.rhaos4.7.gitd9f17c8.el8.42
ip-10-0-69-79.us-east-2.compute.internal    Ready    master   115m   v1.20.0+d9c52cc   10.0.69.79    <none>        Red Hat Enterprise Linux CoreOS 47.83.202101171239-0 (Ootpa)   4.18.0-240.10.1.el8_3.x86_64   cri-o://1.20.0-0.rhaos4.7.gitd9f17c8.el8.42
ip-10-0-70-235.us-east-2.compute.internal   Ready    worker   104m   v1.20.0+d9c52cc   10.0.70.235   <none>        Red Hat Enterprise Linux CoreOS 47.83.202101171239-0 (Ootpa)   4.18.0-240.10.1.el8_3.x86_64   cri-o://1.20.0-0.rhaos4.7.gitd9f17c8.el8.42
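The recorded series can be spot-checked in the console metrics UI (or any Prometheus query endpoint) with queries like the following. These are just example queries against the recorded metric names; the 1h window is an arbitrary choice for illustration:

```
# Current recorded values per cluster
cluster:usage:kube_node_ready:avg5m
cluster:usage:kube_schedulable_node_ready_reachable:avg5m

# Smoothed over the last hour, e.g. to eyeball dips during an upgrade
avg_over_time(cluster:usage:kube_schedulable_node_ready_reachable:avg5m[1h])
```

A value of 1 means every (schedulable) node was ready (and reachable) over the window; dips below 1 indicate restarts or failures worth investigating.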
Created attachment 1748685: kube_node_ready.png
Created attachment 1748686: kube_schedulable_node_ready_reachable.png
Since the problem described in this bug report should be resolved by a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633