% oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.56    True        False         9m24s   Cluster version is 4.7.56
% oc get node
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-146-75.us-east-2.compute.internal    Ready    master   29m   v1.20.15+98b2293
ip-10-0-147-7.us-east-2.compute.internal     Ready    worker   20m   v1.20.15+98b2293
ip-10-0-166-211.us-east-2.compute.internal   Ready    master   26m   v1.20.15+98b2293
ip-10-0-181-205.us-east-2.compute.internal   Ready    worker   20m   v1.20.15+98b2293
ip-10-0-215-192.us-east-2.compute.internal   Ready    worker   24m   v1.20.15+98b2293
ip-10-0-216-185.us-east-2.compute.internal   Ready    master   26m   v1.20.15+98b2293
% oc debug node/ip-10-0-147-7.us-east-2.compute.internal
...
sh-4.4# chroot /host
sh-4.4# rpm -q runc
runc-1.0.0-96.rhaos4.8.gitcd80260.el8.x86_64
Verified by deleting multiple pods before they became Ready; no error messages appear in the kubelet logs.
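The log check above can be sketched as a small filter over kubelet journal output. The function name, sample log lines, and error pattern below are assumptions for illustration only; on a live cluster the input would come from something like `oc debug node/<node> -- chroot /host journalctl -u kubelet`:

```shell
#!/bin/sh
# Hedged sketch: filter kubelet journal lines for runc-related errors.
# The pattern and the sample lines are hypothetical, not taken from this bug.
kubelet_runc_errors() {
    # Keep only lines that mention an error involving runc; succeed even
    # when nothing matches so callers can simply test for empty output.
    grep -iE 'error.*runc' || true
}

# Hypothetical sample journal: one error line, one normal line.
sample='Jan 01 kubelet[1]: E0101 CreatePodSandbox error: runc did not terminate
Jan 01 kubelet[1]: I0101 Pod sandbox started'

# Prints only the first (error) line; an empty result means a clean log.
printf '%s\n' "$sample" | kubelet_runc_errors
```

An empty result from this filter corresponds to the "no error messages in kubelet logs" observation above.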
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.56 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6053