Most of the pods were stuck in the ContainerCreating state because there were equivalent pods stuck in the Terminating state. Example:

------
busybox-workloads-1   busybox-1-7f8f44d74f-k4nnl   0/1   ContainerCreating   0   3h39m
busybox-workloads-1   busybox-1-7f8f44d74f-r8xbw   1/1   Terminating         0   22h
------

A "kubectl describe" on the pod in the Terminating state shows:

------
Warning  FailedKillPod  54s (x163 over 15h)  kubelet  error killing pod: [failed to "KillContainer" for "busybox" with KillContainerError: "rpc error: code = DeadlineExceeded desc = context deadline exceeded", failed to "KillPodSandbox" for "65a4abda-5f77-4145-a46b-bbb0e2b2242f" with KillPodSandboxError: "rpc error: code = DeadlineExceeded desc = context deadline exceeded"]
------

A "kubectl describe" on the pod in the ContainerCreating state shows:

------
Warning  FailedMount  18m (x21 over 3h55m)  kubelet  Unable to attach or mount volumes: unmounted volumes=[mypvc], unattached volumes=[kube-api-access-bpjmc mypvc]: timed out waiting for the condition
------

Force-killing the pod in the Terminating state lets the one in the ContainerCreating state make progress, but it then gets stuck at the mount step.

The VRs (VolumeReplication resources) are all showing the CURRENT STATE as "Unknown". That may be because the mirror state for all the images is unhealthy: trying to query the mirror status hangs.

Checking the Ceph pods, I see two OSDs down as well as the RGW. That may explain why I am unable to query the mirror status, why the VRs are showing the CURRENT STATE as "Unknown", and why the pods are stuck in the Terminating and/or ContainerCreating states.
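For reference, "force killing" here means deleting the stuck pod with zero grace period; a minimal sketch, using the namespace and pod name from the listing above:

------
kubectl -n busybox-workloads-1 delete pod busybox-1-7f8f44d74f-r8xbw --grace-period=0 --force
------

This only removes the pod object from the API server; it does not fix whatever is blocking the unmount on the node, which is why the replacement pod then hangs at the mount step.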
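The CURRENT STATE mentioned above is the column printed for the VolumeReplication CRs; a sketch of the check, assuming the same workload namespace as the pods above:

------
kubectl -n busybox-workloads-1 get volumereplication
------

A healthy VR would normally report Primary or Secondary there; "Unknown" means the operator cannot determine the replication state of the backing image.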
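OSD/RGW health and the mirror status can be checked via the rook-ceph toolbox; a hedged sketch, assuming the toolbox deployment is enabled in the default openshift-storage namespace and that the images live in the default ocs-storagecluster-cephblockpool pool:

------
kubectl -n openshift-storage get pods -l app=rook-ceph-osd
kubectl -n openshift-storage exec deploy/rook-ceph-tools -- ceph status
kubectl -n openshift-storage exec deploy/rook-ceph-tools -- rbd mirror pool status ocs-storagecluster-cephblockpool --verbose
------

With two OSDs down, "ceph status" should report degraded health, and an rbd mirror query that hangs is consistent with the daemons backing those images being unreachable.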
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Data Foundation 4.10.3 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:5023