@Polina Rabinovich Could you help verify this bug on the 4.11 version as well, since you already verified it on 4.12? Thanks.
Verified in 4.11.0-0.nightly-2022-08-22-195828:
[kni@provisionhost-0-0 ~]$ oc version
Client Version: 4.11.0-0.nightly-2022-08-22-195828
Kustomize Version: v4.5.4
Server Version: 4.11.0-0.nightly-2022-08-22-195828
Kubernetes Version: v1.24.0+b62823b
I ran the remediation process 6 times (using the Node Deletion strategy) and all pods are Running:
[kni@provisionhost-0-0 ~]$ oc get pods -o wide -n openshift-operators
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
node-healthcheck-operator-controller-manager-66c7648d44-xf88m 2/2 Running 0 53m 10.130.0.105 master-0-0 <none> <none>
self-node-remediation-controller-manager-667dfb7f7f-ws626 1/1 Running 1 (52m ago) 53m 10.129.2.16 worker-0-2 <none> <none>
self-node-remediation-ds-9b4qv 1/1 Running 0 52m 10.129.2.17 worker-0-2 <none> <none>
self-node-remediation-ds-ktdtf 1/1 Running 0 52m 10.131.0.26 worker-0-1 <none> <none>
self-node-remediation-ds-lfflf 1/1 Running 0 2m54s 10.128.2.3 worker-0-0 <none> <none>
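For reference, a remediation like the ones verified above can be requested by creating a SelfNodeRemediation CR for the unhealthy node. This is a minimal sketch assuming the operator's self-node-remediation.medik8s.io/v1alpha1 API group and the NodeDeletion strategy value; the exact group/version and field names may differ on your operator build:

```yaml
# Hypothetical example: request remediation of worker-0-0 via Node Deletion.
apiVersion: self-node-remediation.medik8s.io/v1alpha1
kind: SelfNodeRemediation
metadata:
  name: worker-0-0                     # must match the name of the unhealthy node
  namespace: openshift-operators
spec:
  remediationStrategy: NodeDeletion    # assumption: delete the node object so it re-registers
```

After applying it (oc apply -f snr.yaml), the node is expected to be deleted and rejoin the cluster, which is consistent with the young AGE of worker-0-0 and of self-node-remediation-ds-lfflf in the listings above.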
[kni@provisionhost-0-0 ~]$ oc get nodes
NAME STATUS ROLES AGE VERSION
master-0-0 Ready master 4h v1.24.0+b62823b
master-0-1 Ready master 4h v1.24.0+b62823b
master-0-2 Ready master 4h v1.24.0+b62823b
worker-0-0 Ready worker 2m51s v1.24.0+b62823b
worker-0-1 Ready worker 3h38m v1.24.0+b62823b
worker-0-2 Ready worker 3h37m v1.24.0+b62823b
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (OpenShift Container Platform 4.11.2 bug fix update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.