Description of problem (please be as detailed as possible and provide log snippets):

The wrong CIDR gets blocklisted and the pod does not get mounted upon fencing when there is a gateway in between.

Version of all relevant components (if applicable):
OCP 4.14.0-0.nightly-2023-08-11-055332
ODF 4.14.0-115

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
NA

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:
NA

Steps to Reproduce:
1. Install OpenShift Data Foundation and deploy an app pod on the same node as the rook-ceph operator pod.
2. Shut down the node on which the RBD RWO pod is deployed.
3. Once the node is down, add the taint:
```
oc taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
```
Wait for some time (if the application pod and the rook operator are on the same node, wait a bit longer), then check the NetworkFence CR status and make sure its fence state is Fenced and the result is Succeeded:
```
oc get networkfences.csiaddons.openshift.io
NAME           DRIVER                       CIDRS                    FENCESTATE   AGE   RESULT
minikube-m03   rook-ceph.rbd.csi.ceph.com   ["92.168.39.187:0/32"]   Fenced       96s   Succeeded
```
4. Wait for the pod to come up on a new node.

Actual results:
The pod is stuck in ContainerCreating state (waited for more than one hour).

Expected results:
The pod should come up on the new node.

Additional info:
Event log (generated from kubelet on compute-1, 34 times in the last 1 hour):
MountVolume.MountDevice failed for volume "pvc-703328fc-2637-435a-918c-c61baf96ab8b" : rpc error: code = Internal desc = error generating volume 0001-0011-openshift-storage-0000000000000001-a059cc2e-e5f5-4094-ac54-278006bde5a9: rados: ret=-108, Cannot send after transport endpoint shutdown
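For reference, the CIDRS column in the `oc get networkfences` output above holds Ceph blocklist-style entries (`<ip>:<port>/<nonce>`) rather than plain CIDRs. A minimal sketch of how one could extract the fenced IP from that value and compare it against the node IP that should have been fenced. The `fenced_ips` helper and the expected node IP `192.168.39.187` are illustrative assumptions, not values taken from the cluster; the sketch handles only IPv4 entries:

```python
import ipaddress
import json

def fenced_ips(cidrs_json: str):
    """Parse the CIDRS column of a NetworkFence CR (a JSON list of
    Ceph blocklist-style entries, e.g. '["192.168.39.187:0/32"]')
    and return the bare IP addresses that were fenced.

    Hypothetical helper for illustration; IPv4 entries only.
    """
    ips = []
    for entry in json.loads(cidrs_json):
        # Ceph blocklist entries look like "<ip>:<port>/<nonce>";
        # strip everything after the first ':' to recover the address.
        ip_part = entry.split(":")[0]
        ips.append(ipaddress.ip_address(ip_part))
    return ips

# The CIDRS value reported in this bug: note "92.168.39.187".
reported = fenced_ips('["92.168.39.187:0/32"]')

# Assumed expected node IP, for illustration only.
expected_node_ip = ipaddress.ip_address("192.168.39.187")
print(expected_node_ip in reported)  # → False: the fenced IP does not match
```

A mismatch like this would explain the symptom: the blocklist entry does not cover the dead node's actual address, so its watchers are never evicted and the volume cannot be re-mounted elsewhere.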
Already provided the info; removing the needinfo on me.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:6832